repo_name
stringclasses 1
value | pr_number
int64 4.12k
11.2k
| pr_title
stringlengths 9
107
| pr_description
stringlengths 107
5.48k
| author
stringlengths 4
18
| date_created
timestamp[ns, tz=UTC] | date_merged
timestamp[ns, tz=UTC] | previous_commit
stringlengths 40
40
| pr_commit
stringlengths 40
40
| query
stringlengths 118
5.52k
| before_content
stringlengths 0
7.93M
| after_content
stringlengths 0
7.93M
| label
int64 -1
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
TheAlgorithms/Python | 4,949 | Fix typos in Sorts and Bit_manipulation | ### **Describe your change:**
* [x] Fixes the typos in the sorts and bit_manipulation folder
* [x] Fixes the initially messed links in bit_manipulation/README.md

### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). | Manan-Rathi | 2021-10-03T08:24:44Z | 2021-10-20T08:42:32Z | 83cf5786cddd694a2af25827f8861b7dbcbf706c | 50485f7c8e33b0a3bf6e603cdae3505d40b1d97a | Fix typos in Sorts and Bit_manipulation. ### **Describe your change:**
* [x] Fixes the typos in the sorts and bit_manipulation folder
* [x] Fixes the initially messed links in bit_manipulation/README.md

### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). | # Created by sarathkaul on 12/11/19
import requests
_NEWS_API = "https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey="
def fetch_bbc_news(bbc_news_api_key: str) -> None:
# fetching a list of articles in json format
bbc_news_page = requests.get(_NEWS_API + bbc_news_api_key).json()
# each article in the list is a dict
for i, article in enumerate(bbc_news_page["articles"], 1):
print(f"{i}.) {article['title']}")
if __name__ == "__main__":
fetch_bbc_news(bbc_news_api_key="<Your BBC News API key goes here>")
| # Created by sarathkaul on 12/11/19
import requests
_NEWS_API = "https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey="
def fetch_bbc_news(bbc_news_api_key: str) -> None:
# fetching a list of articles in json format
bbc_news_page = requests.get(_NEWS_API + bbc_news_api_key).json()
# each article in the list is a dict
for i, article in enumerate(bbc_news_page["articles"], 1):
print(f"{i}.) {article['title']}")
if __name__ == "__main__":
fetch_bbc_news(bbc_news_api_key="<Your BBC News API key goes here>")
| -1 |
TheAlgorithms/Python | 4,949 | Fix typos in Sorts and Bit_manipulation | ### **Describe your change:**
* [x] Fixes the typos in the sorts and bit_manipulation folder
* [x] Fixes the initially messed links in bit_manipulation/README.md

### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). | Manan-Rathi | 2021-10-03T08:24:44Z | 2021-10-20T08:42:32Z | 83cf5786cddd694a2af25827f8861b7dbcbf706c | 50485f7c8e33b0a3bf6e603cdae3505d40b1d97a | Fix typos in Sorts and Bit_manipulation. ### **Describe your change:**
* [x] Fixes the typos in the sorts and bit_manipulation folder
* [x] Fixes the initially messed links in bit_manipulation/README.md

### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). | """
Author: P Shreyas Shetty
Implementation of Newton-Raphson method for solving equations of kind
f(x) = 0. It is an iterative method where solution is found by the expression
x[n+1] = x[n] + f(x[n])/f'(x[n])
If no solution exists, then either the solution will not be found when iteration
limit is reached or the gradient f'(x[n]) approaches zero. In both cases, exception
is raised. If iteration limit is reached, try increasing maxiter.
"""
import math as m
def calc_derivative(f, a, h=0.001):
"""
Calculates derivative at point a for function f using finite difference
method
"""
return (f(a + h) - f(a - h)) / (2 * h)
def newton_raphson(f, x0=0, maxiter=100, step=0.0001, maxerror=1e-6, logsteps=False):
a = x0 # set the initial guess
steps = [a]
error = abs(f(a))
f1 = lambda x: calc_derivative(f, x, h=step) # noqa: E731 Derivative of f(x)
for _ in range(maxiter):
if f1(a) == 0:
raise ValueError("No converging solution found")
a = a - f(a) / f1(a) # Calculate the next estimate
if logsteps:
steps.append(a)
if error < maxerror:
break
else:
raise ValueError("Iteration limit reached, no converging solution found")
if logsteps:
# If logstep is true, then log intermediate steps
return a, error, steps
return a, error
if __name__ == "__main__":
from matplotlib import pyplot as plt
f = lambda x: m.tanh(x) ** 2 - m.exp(3 * x) # noqa: E731
solution, error, steps = newton_raphson(
f, x0=10, maxiter=1000, step=1e-6, logsteps=True
)
plt.plot([abs(f(x)) for x in steps])
plt.xlabel("step")
plt.ylabel("error")
plt.show()
print(f"solution = {{{solution:f}}}, error = {{{error:f}}}")
| """
Author: P Shreyas Shetty
Implementation of Newton-Raphson method for solving equations of kind
f(x) = 0. It is an iterative method where solution is found by the expression
x[n+1] = x[n] + f(x[n])/f'(x[n])
If no solution exists, then either the solution will not be found when iteration
limit is reached or the gradient f'(x[n]) approaches zero. In both cases, exception
is raised. If iteration limit is reached, try increasing maxiter.
"""
import math as m
def calc_derivative(f, a, h=0.001):
"""
Calculates derivative at point a for function f using finite difference
method
"""
return (f(a + h) - f(a - h)) / (2 * h)
def newton_raphson(f, x0=0, maxiter=100, step=0.0001, maxerror=1e-6, logsteps=False):
a = x0 # set the initial guess
steps = [a]
error = abs(f(a))
f1 = lambda x: calc_derivative(f, x, h=step) # noqa: E731 Derivative of f(x)
for _ in range(maxiter):
if f1(a) == 0:
raise ValueError("No converging solution found")
a = a - f(a) / f1(a) # Calculate the next estimate
if logsteps:
steps.append(a)
if error < maxerror:
break
else:
raise ValueError("Iteration limit reached, no converging solution found")
if logsteps:
# If logstep is true, then log intermediate steps
return a, error, steps
return a, error
if __name__ == "__main__":
from matplotlib import pyplot as plt
f = lambda x: m.tanh(x) ** 2 - m.exp(3 * x) # noqa: E731
solution, error, steps = newton_raphson(
f, x0=10, maxiter=1000, step=1e-6, logsteps=True
)
plt.plot([abs(f(x)) for x in steps])
plt.xlabel("step")
plt.ylabel("error")
plt.show()
print(f"solution = {{{solution:f}}}, error = {{{error:f}}}")
| -1 |
TheAlgorithms/Python | 4,949 | Fix typos in Sorts and Bit_manipulation | ### **Describe your change:**
* [x] Fixes the typos in the sorts and bit_manipulation folder
* [x] Fixes the initially messed links in bit_manipulation/README.md

### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). | Manan-Rathi | 2021-10-03T08:24:44Z | 2021-10-20T08:42:32Z | 83cf5786cddd694a2af25827f8861b7dbcbf706c | 50485f7c8e33b0a3bf6e603cdae3505d40b1d97a | Fix typos in Sorts and Bit_manipulation. ### **Describe your change:**
* [x] Fixes the typos in the sorts and bit_manipulation folder
* [x] Fixes the initially messed links in bit_manipulation/README.md

### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). | def dencrypt(s: str, n: int = 13) -> str:
"""
https://en.wikipedia.org/wiki/ROT13
>>> msg = "My secret bank account number is 173-52946 so don't tell anyone!!"
>>> s = dencrypt(msg)
>>> s
"Zl frperg onax nppbhag ahzore vf 173-52946 fb qba'g gryy nalbar!!"
>>> dencrypt(s) == msg
True
"""
out = ""
for c in s:
if "A" <= c <= "Z":
out += chr(ord("A") + (ord(c) - ord("A") + n) % 26)
elif "a" <= c <= "z":
out += chr(ord("a") + (ord(c) - ord("a") + n) % 26)
else:
out += c
return out
def main() -> None:
s0 = input("Enter message: ")
s1 = dencrypt(s0, 13)
print("Encryption:", s1)
s2 = dencrypt(s1, 13)
print("Decryption: ", s2)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| def dencrypt(s: str, n: int = 13) -> str:
"""
https://en.wikipedia.org/wiki/ROT13
>>> msg = "My secret bank account number is 173-52946 so don't tell anyone!!"
>>> s = dencrypt(msg)
>>> s
"Zl frperg onax nppbhag ahzore vf 173-52946 fb qba'g gryy nalbar!!"
>>> dencrypt(s) == msg
True
"""
out = ""
for c in s:
if "A" <= c <= "Z":
out += chr(ord("A") + (ord(c) - ord("A") + n) % 26)
elif "a" <= c <= "z":
out += chr(ord("a") + (ord(c) - ord("a") + n) % 26)
else:
out += c
return out
def main() -> None:
s0 = input("Enter message: ")
s1 = dencrypt(s0, 13)
print("Encryption:", s1)
s2 = dencrypt(s1, 13)
print("Decryption: ", s2)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Bead sort only works for sequences of nonegative integers.
https://en.wikipedia.org/wiki/Bead_sort
"""
def bead_sort(sequence: list) -> list:
"""
>>> bead_sort([6, 11, 12, 4, 1, 5])
[1, 4, 5, 6, 11, 12]
>>> bead_sort([9, 8, 7, 6, 5, 4 ,3, 2, 1])
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> bead_sort([5, 0, 4, 3])
[0, 3, 4, 5]
>>> bead_sort([8, 2, 1])
[1, 2, 8]
>>> bead_sort([1, .9, 0.0, 0, -1, -.9])
Traceback (most recent call last):
...
TypeError: Sequence must be list of nonnegative integers
>>> bead_sort("Hello world")
Traceback (most recent call last):
...
TypeError: Sequence must be list of nonnegative integers
"""
if any(not isinstance(x, int) or x < 0 for x in sequence):
raise TypeError("Sequence must be list of nonnegative integers")
for _ in range(len(sequence)):
for i, (rod_upper, rod_lower) in enumerate(zip(sequence, sequence[1:])):
if rod_upper > rod_lower:
sequence[i] -= rod_upper - rod_lower
sequence[i + 1] += rod_upper - rod_lower
return sequence
if __name__ == "__main__":
assert bead_sort([5, 4, 3, 2, 1]) == [1, 2, 3, 4, 5]
assert bead_sort([7, 9, 4, 3, 5]) == [3, 4, 5, 7, 9]
| """
Bead sort only works for sequences of non-negative integers.
https://en.wikipedia.org/wiki/Bead_sort
"""
def bead_sort(sequence: list) -> list:
"""
>>> bead_sort([6, 11, 12, 4, 1, 5])
[1, 4, 5, 6, 11, 12]
>>> bead_sort([9, 8, 7, 6, 5, 4 ,3, 2, 1])
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> bead_sort([5, 0, 4, 3])
[0, 3, 4, 5]
>>> bead_sort([8, 2, 1])
[1, 2, 8]
>>> bead_sort([1, .9, 0.0, 0, -1, -.9])
Traceback (most recent call last):
...
TypeError: Sequence must be list of nonnegative integers
>>> bead_sort("Hello world")
Traceback (most recent call last):
...
TypeError: Sequence must be list of nonnegative integers
"""
if any(not isinstance(x, int) or x < 0 for x in sequence):
raise TypeError("Sequence must be list of nonnegative integers")
for _ in range(len(sequence)):
for i, (rod_upper, rod_lower) in enumerate(zip(sequence, sequence[1:])):
if rod_upper > rod_lower:
sequence[i] -= rod_upper - rod_lower
sequence[i + 1] += rod_upper - rod_lower
return sequence
if __name__ == "__main__":
assert bead_sort([5, 4, 3, 2, 1]) == [1, 2, 3, 4, 5]
assert bead_sort([7, 9, 4, 3, 5]) == [3, 4, 5, 7, 9]
| 1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Source: https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
This is a non-parallelized implementation of odd-even transpostiion sort.
Normally the swaps in each set happen simultaneously, without that the algorithm
is no better than bubble sort.
"""
def odd_even_transposition(arr: list) -> list:
"""
>>> odd_even_transposition([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> odd_even_transposition([13, 11, 18, 0, -1])
[-1, 0, 11, 13, 18]
>>> odd_even_transposition([-.1, 1.1, .1, -2.9])
[-2.9, -0.1, 0.1, 1.1]
"""
arr_size = len(arr)
for _ in range(arr_size):
for i in range(_ % 2, arr_size - 1, 2):
if arr[i + 1] < arr[i]:
arr[i], arr[i + 1] = arr[i + 1], arr[i]
return arr
if __name__ == "__main__":
arr = list(range(10, 0, -1))
print(f"Original: {arr}. Sorted: {odd_even_transposition(arr)}")
| """
Source: https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
This is a non-parallelized implementation of odd-even transposition sort.
Normally the swaps in each set happen simultaneously, without that the algorithm
is no better than bubble sort.
"""
def odd_even_transposition(arr: list) -> list:
"""
>>> odd_even_transposition([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> odd_even_transposition([13, 11, 18, 0, -1])
[-1, 0, 11, 13, 18]
>>> odd_even_transposition([-.1, 1.1, .1, -2.9])
[-2.9, -0.1, 0.1, 1.1]
"""
arr_size = len(arr)
for _ in range(arr_size):
for i in range(_ % 2, arr_size - 1, 2):
if arr[i + 1] < arr[i]:
arr[i], arr[i + 1] = arr[i + 1], arr[i]
return arr
if __name__ == "__main__":
arr = list(range(10, 0, -1))
print(f"Original: {arr}. Sorted: {odd_even_transposition(arr)}")
| 1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Topological Sort."""
# a
# / \
# b c
# / \
# d e
edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
vertices = ["a", "b", "c", "d", "e"]
def topological_sort(start, visited, sort):
"""Perform topolical sort on a directed acyclic graph."""
current = start
# add current to visited
visited.append(current)
neighbors = edges[current]
for neighbor in neighbors:
# if neighbor not in visited, visit
if neighbor not in visited:
sort = topological_sort(neighbor, visited, sort)
# if all neighbors visited add current to sort
sort.append(current)
# if all vertices haven't been visited select a new one to visit
if len(visited) != len(vertices):
for vertice in vertices:
if vertice not in visited:
sort = topological_sort(vertice, visited, sort)
# return sort
return sort
if __name__ == "__main__":
sort = topological_sort("a", [], [])
print(sort)
| """Topological Sort."""
# a
# / \
# b c
# / \
# d e
edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
vertices = ["a", "b", "c", "d", "e"]
def topological_sort(start, visited, sort):
"""Perform topological sort on a directed acyclic graph."""
current = start
# add current to visited
visited.append(current)
neighbors = edges[current]
for neighbor in neighbors:
# if neighbor not in visited, visit
if neighbor not in visited:
sort = topological_sort(neighbor, visited, sort)
# if all neighbors visited add current to sort
sort.append(current)
# if all vertices haven't been visited select a new one to visit
if len(visited) != len(vertices):
for vertice in vertices:
if vertice not in visited:
sort = topological_sort(vertice, visited, sort)
# return sort
return sort
if __name__ == "__main__":
sort = topological_sort("a", [], [])
print(sort)
| 1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://cp-algorithms.com/string/prefix-function.html
Prefix function Knuth–Morris–Pratt algorithm
Different algorithm than Knuth-Morris-Pratt pattern finding
E.x. Finding longest prefix which is also suffix
Time Complexity: O(n) - where n is the length of the string
"""
def prefix_function(input_string: str) -> list:
"""
For the given string this function computes value for each index(i),
which represents the longest coincidence of prefix and sufix
for given substring (input_str[0...i])
For the value of the first element the algorithm always returns 0
>>> prefix_function("aabcdaabc")
[0, 1, 0, 0, 0, 1, 2, 3, 4]
>>> prefix_function("asdasdad")
[0, 0, 0, 1, 2, 3, 4, 0]
"""
# list for the result values
prefix_result = [0] * len(input_string)
for i in range(1, len(input_string)):
# use last results for better performance - dynamic programming
j = prefix_result[i - 1]
while j > 0 and input_string[i] != input_string[j]:
j = prefix_result[j - 1]
if input_string[i] == input_string[j]:
j += 1
prefix_result[i] = j
return prefix_result
def longest_prefix(input_str: str) -> int:
"""
Prefix-function use case
Finding longest prefix which is sufix as well
>>> longest_prefix("aabcdaabc")
4
>>> longest_prefix("asdasdad")
4
>>> longest_prefix("abcab")
2
"""
# just returning maximum value of the array gives us answer
return max(prefix_function(input_str))
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
https://cp-algorithms.com/string/prefix-function.html
Prefix function Knuth–Morris–Pratt algorithm
Different algorithm than Knuth-Morris-Pratt pattern finding
E.x. Finding longest prefix which is also suffix
Time Complexity: O(n) - where n is the length of the string
"""
def prefix_function(input_string: str) -> list:
"""
For the given string this function computes value for each index(i),
which represents the longest coincidence of prefix and suffix
for given substring (input_str[0...i])
For the value of the first element the algorithm always returns 0
>>> prefix_function("aabcdaabc")
[0, 1, 0, 0, 0, 1, 2, 3, 4]
>>> prefix_function("asdasdad")
[0, 0, 0, 1, 2, 3, 4, 0]
"""
# list for the result values
prefix_result = [0] * len(input_string)
for i in range(1, len(input_string)):
# use last results for better performance - dynamic programming
j = prefix_result[i - 1]
while j > 0 and input_string[i] != input_string[j]:
j = prefix_result[j - 1]
if input_string[i] == input_string[j]:
j += 1
prefix_result[i] = j
return prefix_result
def longest_prefix(input_str: str) -> int:
"""
Prefix-function use case
Finding longest prefix which is suffix as well
>>> longest_prefix("aabcdaabc")
4
>>> longest_prefix("asdasdad")
4
>>> longest_prefix("abcab")
2
"""
# just returning maximum value of the array gives us answer
return max(prefix_function(input_str))
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Pure Python implementations of a Fixed Priority Queue and an Element Priority Queue
using Python lists.
"""
class OverFlowError(Exception):
pass
class UnderFlowError(Exception):
pass
class FixedPriorityQueue:
"""
Tasks can be added to a Priority Queue at any time and in any order but when Tasks
are removed then the Task with the highest priority is removed in FIFO order. In
code we will use three levels of priority with priority zero Tasks being the most
urgent (high priority) and priority 2 tasks being the least urgent.
Examples
>>> fpq = FixedPriorityQueue()
>>> fpq.enqueue(0, 10)
>>> fpq.enqueue(1, 70)
>>> fpq.enqueue(0, 100)
>>> fpq.enqueue(2, 1)
>>> fpq.enqueue(2, 5)
>>> fpq.enqueue(1, 7)
>>> fpq.enqueue(2, 4)
>>> fpq.enqueue(1, 64)
>>> fpq.enqueue(0, 128)
>>> print(fpq)
Priority 0: [10, 100, 128]
Priority 1: [70, 7, 64]
Priority 2: [1, 5, 4]
>>> fpq.dequeue()
10
>>> fpq.dequeue()
100
>>> fpq.dequeue()
128
>>> fpq.dequeue()
70
>>> fpq.dequeue()
7
>>> print(fpq)
Priority 0: []
Priority 1: [64]
Priority 2: [1, 5, 4]
>>> fpq.dequeue()
64
>>> fpq.dequeue()
1
>>> fpq.dequeue()
5
>>> fpq.dequeue()
4
>>> fpq.dequeue()
Traceback (most recent call last):
...
data_structures.queue.priority_queue_using_list.UnderFlowError: All queues are empty
>>> print(fpq)
Priority 0: []
Priority 1: []
Priority 2: []
"""
def __init__(self):
self.queues = [
[],
[],
[],
]
def enqueue(self, priority: int, data: int) -> None:
"""
Add an element to a queue based on its priority.
If the priority is invalid ValueError is raised.
If the queue is full an OverFlowError is raised.
"""
try:
if len(self.queues[priority]) >= 100:
raise OverflowError("Maximum queue size is 100")
self.queues[priority].append(data)
except IndexError:
raise ValueError("Valid priorities are 0, 1, and 2")
def dequeue(self) -> int:
"""
Return the highest priority element in FIFO order.
If the queue is empty then an under flow exception is raised.
"""
for queue in self.queues:
if queue:
return queue.pop(0)
raise UnderFlowError("All queues are empty")
def __str__(self) -> str:
return "\n".join(f"Priority {i}: {q}" for i, q in enumerate(self.queues))
class ElementPriorityQueue:
"""
Element Priority Queue is the same as Fixed Priority Queue except that the value of
the element itself is the priority. The rules for priorities are the same the as
Fixed Priority Queue.
>>> epq = ElementPriorityQueue()
>>> epq.enqueue(10)
>>> epq.enqueue(70)
>>> epq.enqueue(4)
>>> epq.enqueue(1)
>>> epq.enqueue(5)
>>> epq.enqueue(7)
>>> epq.enqueue(4)
>>> epq.enqueue(64)
>>> epq.enqueue(128)
>>> print(epq)
[10, 70, 4, 1, 5, 7, 4, 64, 128]
>>> epq.dequeue()
1
>>> epq.dequeue()
4
>>> epq.dequeue()
4
>>> epq.dequeue()
5
>>> epq.dequeue()
7
>>> epq.dequeue()
10
>>> print(epq)
[70, 64, 128]
>>> epq.dequeue()
64
>>> epq.dequeue()
70
>>> epq.dequeue()
128
>>> epq.dequeue()
Traceback (most recent call last):
...
data_structures.queue.priority_queue_using_list.UnderFlowError: The queue is empty
>>> print(epq)
[]
"""
def __init__(self):
self.queue = []
def enqueue(self, data: int) -> None:
"""
This function enters the element into the queue
If the queue is full an Exception is raised saying Over Flow!
"""
if len(self.queue) == 100:
raise OverFlowError("Maximum queue size is 100")
self.queue.append(data)
def dequeue(self) -> int:
"""
Return the highest priority element in FIFO order.
If the queue is empty then an under flow exception is raised.
"""
if not self.queue:
raise UnderFlowError("The queue is empty")
else:
data = min(self.queue)
self.queue.remove(data)
return data
def __str__(self) -> str:
"""
Prints all the elements within the Element Priority Queue
"""
return str(self.queue)
def fixed_priority_queue():
fpq = FixedPriorityQueue()
fpq.enqueue(0, 10)
fpq.enqueue(1, 70)
fpq.enqueue(0, 100)
fpq.enqueue(2, 1)
fpq.enqueue(2, 5)
fpq.enqueue(1, 7)
fpq.enqueue(2, 4)
fpq.enqueue(1, 64)
fpq.enqueue(0, 128)
print(fpq)
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq)
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
def element_priority_queue():
epq = ElementPriorityQueue()
epq.enqueue(10)
epq.enqueue(70)
epq.enqueue(100)
epq.enqueue(1)
epq.enqueue(5)
epq.enqueue(7)
epq.enqueue(4)
epq.enqueue(64)
epq.enqueue(128)
print(epq)
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq)
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
if __name__ == "__main__":
fixed_priority_queue()
element_priority_queue()
| """
Pure Python implementations of a Fixed Priority Queue and an Element Priority Queue
using Python lists.
"""
class OverFlowError(Exception):
pass
class UnderFlowError(Exception):
pass
class FixedPriorityQueue:
"""
Tasks can be added to a Priority Queue at any time and in any order but when Tasks
are removed then the Task with the highest priority is removed in FIFO order. In
code we will use three levels of priority with priority zero Tasks being the most
urgent (high priority) and priority 2 tasks being the least urgent.
Examples
>>> fpq = FixedPriorityQueue()
>>> fpq.enqueue(0, 10)
>>> fpq.enqueue(1, 70)
>>> fpq.enqueue(0, 100)
>>> fpq.enqueue(2, 1)
>>> fpq.enqueue(2, 5)
>>> fpq.enqueue(1, 7)
>>> fpq.enqueue(2, 4)
>>> fpq.enqueue(1, 64)
>>> fpq.enqueue(0, 128)
>>> print(fpq)
Priority 0: [10, 100, 128]
Priority 1: [70, 7, 64]
Priority 2: [1, 5, 4]
>>> fpq.dequeue()
10
>>> fpq.dequeue()
100
>>> fpq.dequeue()
128
>>> fpq.dequeue()
70
>>> fpq.dequeue()
7
>>> print(fpq)
Priority 0: []
Priority 1: [64]
Priority 2: [1, 5, 4]
>>> fpq.dequeue()
64
>>> fpq.dequeue()
1
>>> fpq.dequeue()
5
>>> fpq.dequeue()
4
>>> fpq.dequeue()
Traceback (most recent call last):
...
data_structures.queue.priority_queue_using_list.UnderFlowError: All queues are empty
>>> print(fpq)
Priority 0: []
Priority 1: []
Priority 2: []
"""
def __init__(self):
self.queues = [
[],
[],
[],
]
def enqueue(self, priority: int, data: int) -> None:
"""
Add an element to a queue based on its priority.
If the priority is invalid ValueError is raised.
If the queue is full an OverFlowError is raised.
"""
try:
if len(self.queues[priority]) >= 100:
raise OverflowError("Maximum queue size is 100")
self.queues[priority].append(data)
except IndexError:
raise ValueError("Valid priorities are 0, 1, and 2")
def dequeue(self) -> int:
"""
Return the highest priority element in FIFO order.
If the queue is empty then an under flow exception is raised.
"""
for queue in self.queues:
if queue:
return queue.pop(0)
raise UnderFlowError("All queues are empty")
def __str__(self) -> str:
return "\n".join(f"Priority {i}: {q}" for i, q in enumerate(self.queues))
class ElementPriorityQueue:
"""
Element Priority Queue is the same as Fixed Priority Queue except that the value of
the element itself is the priority. The rules for priorities are the same the as
Fixed Priority Queue.
>>> epq = ElementPriorityQueue()
>>> epq.enqueue(10)
>>> epq.enqueue(70)
>>> epq.enqueue(4)
>>> epq.enqueue(1)
>>> epq.enqueue(5)
>>> epq.enqueue(7)
>>> epq.enqueue(4)
>>> epq.enqueue(64)
>>> epq.enqueue(128)
>>> print(epq)
[10, 70, 4, 1, 5, 7, 4, 64, 128]
>>> epq.dequeue()
1
>>> epq.dequeue()
4
>>> epq.dequeue()
4
>>> epq.dequeue()
5
>>> epq.dequeue()
7
>>> epq.dequeue()
10
>>> print(epq)
[70, 64, 128]
>>> epq.dequeue()
64
>>> epq.dequeue()
70
>>> epq.dequeue()
128
>>> epq.dequeue()
Traceback (most recent call last):
...
data_structures.queue.priority_queue_using_list.UnderFlowError: The queue is empty
>>> print(epq)
[]
"""
def __init__(self):
self.queue = []
def enqueue(self, data: int) -> None:
"""
This function enters the element into the queue
If the queue is full an Exception is raised saying Over Flow!
"""
if len(self.queue) == 100:
raise OverFlowError("Maximum queue size is 100")
self.queue.append(data)
def dequeue(self) -> int:
"""
Return the highest priority element in FIFO order.
If the queue is empty then an under flow exception is raised.
"""
if not self.queue:
raise UnderFlowError("The queue is empty")
else:
data = min(self.queue)
self.queue.remove(data)
return data
def __str__(self) -> str:
"""
Prints all the elements within the Element Priority Queue
"""
return str(self.queue)
def fixed_priority_queue():
fpq = FixedPriorityQueue()
fpq.enqueue(0, 10)
fpq.enqueue(1, 70)
fpq.enqueue(0, 100)
fpq.enqueue(2, 1)
fpq.enqueue(2, 5)
fpq.enqueue(1, 7)
fpq.enqueue(2, 4)
fpq.enqueue(1, 64)
fpq.enqueue(0, 128)
print(fpq)
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq)
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
print(fpq.dequeue())
def element_priority_queue():
epq = ElementPriorityQueue()
epq.enqueue(10)
epq.enqueue(70)
epq.enqueue(100)
epq.enqueue(1)
epq.enqueue(5)
epq.enqueue(7)
epq.enqueue(4)
epq.enqueue(64)
epq.enqueue(128)
print(epq)
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq)
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
print(epq.dequeue())
if __name__ == "__main__":
fixed_priority_queue()
element_priority_queue()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Convert a Decimal Number to an Octal Number."""
import math
# Modified from:
# https://github.com/TheAlgorithms/Javascript/blob/master/Conversions/DecimalToOctal.js
def decimal_to_octal(num: int) -> str:
"""Convert a Decimal Number to an Octal Number.
>>> all(decimal_to_octal(i) == oct(i) for i
... in (0, 2, 8, 64, 65, 216, 255, 256, 512))
True
"""
octal = 0
counter = 0
while num > 0:
remainder = num % 8
octal = octal + (remainder * math.floor(math.pow(10, counter)))
counter += 1
num = math.floor(num / 8) # basically /= 8 without remainder if any
# This formatting removes trailing '.0' from `octal`.
return f"0o{int(octal)}"
def main() -> None:
"""Print octal equivalents of decimal numbers."""
print("\n2 in octal is:")
print(decimal_to_octal(2)) # = 2
print("\n8 in octal is:")
print(decimal_to_octal(8)) # = 10
print("\n65 in octal is:")
print(decimal_to_octal(65)) # = 101
print("\n216 in octal is:")
print(decimal_to_octal(216)) # = 330
print("\n512 in octal is:")
print(decimal_to_octal(512)) # = 1000
print("\n")
if __name__ == "__main__":
main()
| """Convert a Decimal Number to an Octal Number."""
import math
# Modified from:
# https://github.com/TheAlgorithms/Javascript/blob/master/Conversions/DecimalToOctal.js
def decimal_to_octal(num: int) -> str:
"""Convert a Decimal Number to an Octal Number.
>>> all(decimal_to_octal(i) == oct(i) for i
... in (0, 2, 8, 64, 65, 216, 255, 256, 512))
True
"""
octal = 0
counter = 0
while num > 0:
remainder = num % 8
octal = octal + (remainder * math.floor(math.pow(10, counter)))
counter += 1
num = math.floor(num / 8) # basically /= 8 without remainder if any
# This formatting removes trailing '.0' from `octal`.
return f"0o{int(octal)}"
def main() -> None:
"""Print octal equivalents of decimal numbers."""
print("\n2 in octal is:")
print(decimal_to_octal(2)) # = 2
print("\n8 in octal is:")
print(decimal_to_octal(8)) # = 10
print("\n65 in octal is:")
print(decimal_to_octal(65)) # = 101
print("\n216 in octal is:")
print(decimal_to_octal(216)) # = 330
print("\n512 in octal is:")
print(decimal_to_octal(512)) # = 1000
print("\n")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| class Matrix:
"""
<class Matrix>
Matrix structure.
"""
def __init__(self, row: int, column: int, default_value: float = 0):
"""
<method Matrix.__init__>
Initialize matrix with given size and default value.
Example:
>>> a = Matrix(2, 3, 1)
>>> a
Matrix consist of 2 rows and 3 columns
[1, 1, 1]
[1, 1, 1]
"""
self.row, self.column = row, column
self.array = [[default_value for c in range(column)] for r in range(row)]
def __str__(self):
"""
<method Matrix.__str__>
Return string representation of this matrix.
"""
# Prefix
s = "Matrix consist of %d rows and %d columns\n" % (self.row, self.column)
# Make string identifier
max_element_length = 0
for row_vector in self.array:
for obj in row_vector:
max_element_length = max(max_element_length, len(str(obj)))
string_format_identifier = "%%%ds" % (max_element_length,)
# Make string and return
def single_line(row_vector):
nonlocal string_format_identifier
line = "["
line += ", ".join(string_format_identifier % (obj,) for obj in row_vector)
line += "]"
return line
s += "\n".join(single_line(row_vector) for row_vector in self.array)
return s
def __repr__(self):
return str(self)
def validateIndices(self, loc: tuple):
"""
<method Matrix.validateIndices>
Check if given indices are valid to pick element from matrix.
Example:
>>> a = Matrix(2, 6, 0)
>>> a.validateIndices((2, 7))
False
>>> a.validateIndices((0, 0))
True
"""
if not (isinstance(loc, (list, tuple)) and len(loc) == 2):
return False
elif not (0 <= loc[0] < self.row and 0 <= loc[1] < self.column):
return False
else:
return True
def __getitem__(self, loc: tuple):
"""
<method Matrix.__getitem__>
Return array[row][column] where loc = (row, column).
Example:
>>> a = Matrix(3, 2, 7)
>>> a[1, 0]
7
"""
assert self.validateIndices(loc)
return self.array[loc[0]][loc[1]]
def __setitem__(self, loc: tuple, value: float):
"""
<method Matrix.__setitem__>
Set array[row][column] = value where loc = (row, column).
Example:
>>> a = Matrix(2, 3, 1)
>>> a[1, 2] = 51
>>> a
Matrix consist of 2 rows and 3 columns
[ 1, 1, 1]
[ 1, 1, 51]
"""
assert self.validateIndices(loc)
self.array[loc[0]][loc[1]] = value
def __add__(self, another):
"""
<method Matrix.__add__>
Return self + another.
Example:
>>> a = Matrix(2, 1, -4)
>>> b = Matrix(2, 1, 3)
>>> a+b
Matrix consist of 2 rows and 1 columns
[-1]
[-1]
"""
# Validation
assert isinstance(another, Matrix)
assert self.row == another.row and self.column == another.column
# Add
result = Matrix(self.row, self.column)
for r in range(self.row):
for c in range(self.column):
result[r, c] = self[r, c] + another[r, c]
return result
def __neg__(self):
"""
<method Matrix.__neg__>
Return -self.
Example:
>>> a = Matrix(2, 2, 3)
>>> a[0, 1] = a[1, 0] = -2
>>> -a
Matrix consist of 2 rows and 2 columns
[-3, 2]
[ 2, -3]
"""
result = Matrix(self.row, self.column)
for r in range(self.row):
for c in range(self.column):
result[r, c] = -self[r, c]
return result
def __sub__(self, another):
return self + (-another)
def __mul__(self, another):
"""
<method Matrix.__mul__>
Return self * another.
Example:
>>> a = Matrix(2, 3, 1)
>>> a[0,2] = a[1,2] = 3
>>> a * -2
Matrix consist of 2 rows and 3 columns
[-2, -2, -6]
[-2, -2, -6]
"""
if isinstance(another, (int, float)): # Scalar multiplication
result = Matrix(self.row, self.column)
for r in range(self.row):
for c in range(self.column):
result[r, c] = self[r, c] * another
return result
elif isinstance(another, Matrix): # Matrix multiplication
assert self.column == another.row
result = Matrix(self.row, another.column)
for r in range(self.row):
for c in range(another.column):
for i in range(self.column):
result[r, c] += self[r, i] * another[i, c]
return result
else:
raise TypeError(f"Unsupported type given for another ({type(another)})")
def transpose(self):
"""
<method Matrix.transpose>
Return self^T.
Example:
>>> a = Matrix(2, 3)
>>> for r in range(2):
... for c in range(3):
... a[r,c] = r*c
...
>>> a.transpose()
Matrix consist of 3 rows and 2 columns
[0, 0]
[0, 1]
[0, 2]
"""
result = Matrix(self.column, self.row)
for r in range(self.row):
for c in range(self.column):
result[c, r] = self[r, c]
return result
def ShermanMorrison(self, u, v):
"""
<method Matrix.ShermanMorrison>
Apply Sherman-Morrison formula in O(n^2).
To learn this formula, please look this:
https://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula
This method returns (A + uv^T)^(-1) where A^(-1) is self. Returns None if it's
impossible to calculate.
Warning: This method doesn't check if self is invertible.
Make sure self is invertible before execute this method.
Example:
>>> ainv = Matrix(3, 3, 0)
>>> for i in range(3): ainv[i,i] = 1
...
>>> u = Matrix(3, 1, 0)
>>> u[0,0], u[1,0], u[2,0] = 1, 2, -3
>>> v = Matrix(3, 1, 0)
>>> v[0,0], v[1,0], v[2,0] = 4, -2, 5
>>> ainv.ShermanMorrison(u, v)
Matrix consist of 3 rows and 3 columns
[ 1.2857142857142856, -0.14285714285714285, 0.3571428571428571]
[ 0.5714285714285714, 0.7142857142857143, 0.7142857142857142]
[ -0.8571428571428571, 0.42857142857142855, -0.0714285714285714]
"""
# Size validation
assert isinstance(u, Matrix) and isinstance(v, Matrix)
assert self.row == self.column == u.row == v.row # u, v should be column vector
assert u.column == v.column == 1 # u, v should be column vector
# Calculate
vT = v.transpose()
numerator_factor = (vT * self * u)[0, 0] + 1
if numerator_factor == 0:
return None # It's not invertable
return self - ((self * u) * (vT * self) * (1.0 / numerator_factor))
# Testing
if __name__ == "__main__":
def test1():
# a^(-1)
ainv = Matrix(3, 3, 0)
for i in range(3):
ainv[i, i] = 1
print(f"a^(-1) is {ainv}")
# u, v
u = Matrix(3, 1, 0)
u[0, 0], u[1, 0], u[2, 0] = 1, 2, -3
v = Matrix(3, 1, 0)
v[0, 0], v[1, 0], v[2, 0] = 4, -2, 5
print(f"u is {u}")
print(f"v is {v}")
print("uv^T is %s" % (u * v.transpose()))
# Sherman Morrison
print(f"(a + uv^T)^(-1) is {ainv.ShermanMorrison(u, v)}")
def test2():
import doctest
doctest.testmod()
test2()
| class Matrix:
"""
<class Matrix>
Matrix structure.
"""
def __init__(self, row: int, column: int, default_value: float = 0):
"""
<method Matrix.__init__>
Initialize matrix with given size and default value.
Example:
>>> a = Matrix(2, 3, 1)
>>> a
Matrix consist of 2 rows and 3 columns
[1, 1, 1]
[1, 1, 1]
"""
self.row, self.column = row, column
self.array = [[default_value for c in range(column)] for r in range(row)]
def __str__(self):
"""
<method Matrix.__str__>
Return string representation of this matrix.
"""
# Prefix
s = "Matrix consist of %d rows and %d columns\n" % (self.row, self.column)
# Make string identifier
max_element_length = 0
for row_vector in self.array:
for obj in row_vector:
max_element_length = max(max_element_length, len(str(obj)))
string_format_identifier = "%%%ds" % (max_element_length,)
# Make string and return
def single_line(row_vector):
nonlocal string_format_identifier
line = "["
line += ", ".join(string_format_identifier % (obj,) for obj in row_vector)
line += "]"
return line
s += "\n".join(single_line(row_vector) for row_vector in self.array)
return s
def __repr__(self):
return str(self)
def validateIndices(self, loc: tuple):
"""
<method Matrix.validateIndices>
Check if given indices are valid to pick element from matrix.
Example:
>>> a = Matrix(2, 6, 0)
>>> a.validateIndices((2, 7))
False
>>> a.validateIndices((0, 0))
True
"""
if not (isinstance(loc, (list, tuple)) and len(loc) == 2):
return False
elif not (0 <= loc[0] < self.row and 0 <= loc[1] < self.column):
return False
else:
return True
def __getitem__(self, loc: tuple):
"""
<method Matrix.__getitem__>
Return array[row][column] where loc = (row, column).
Example:
>>> a = Matrix(3, 2, 7)
>>> a[1, 0]
7
"""
assert self.validateIndices(loc)
return self.array[loc[0]][loc[1]]
def __setitem__(self, loc: tuple, value: float):
"""
<method Matrix.__setitem__>
Set array[row][column] = value where loc = (row, column).
Example:
>>> a = Matrix(2, 3, 1)
>>> a[1, 2] = 51
>>> a
Matrix consist of 2 rows and 3 columns
[ 1, 1, 1]
[ 1, 1, 51]
"""
assert self.validateIndices(loc)
self.array[loc[0]][loc[1]] = value
def __add__(self, another):
"""
<method Matrix.__add__>
Return self + another.
Example:
>>> a = Matrix(2, 1, -4)
>>> b = Matrix(2, 1, 3)
>>> a+b
Matrix consist of 2 rows and 1 columns
[-1]
[-1]
"""
# Validation
assert isinstance(another, Matrix)
assert self.row == another.row and self.column == another.column
# Add
result = Matrix(self.row, self.column)
for r in range(self.row):
for c in range(self.column):
result[r, c] = self[r, c] + another[r, c]
return result
def __neg__(self):
"""
<method Matrix.__neg__>
Return -self.
Example:
>>> a = Matrix(2, 2, 3)
>>> a[0, 1] = a[1, 0] = -2
>>> -a
Matrix consist of 2 rows and 2 columns
[-3, 2]
[ 2, -3]
"""
result = Matrix(self.row, self.column)
for r in range(self.row):
for c in range(self.column):
result[r, c] = -self[r, c]
return result
def __sub__(self, another):
return self + (-another)
def __mul__(self, another):
"""
<method Matrix.__mul__>
Return self * another.
Example:
>>> a = Matrix(2, 3, 1)
>>> a[0,2] = a[1,2] = 3
>>> a * -2
Matrix consist of 2 rows and 3 columns
[-2, -2, -6]
[-2, -2, -6]
"""
if isinstance(another, (int, float)): # Scalar multiplication
result = Matrix(self.row, self.column)
for r in range(self.row):
for c in range(self.column):
result[r, c] = self[r, c] * another
return result
elif isinstance(another, Matrix): # Matrix multiplication
assert self.column == another.row
result = Matrix(self.row, another.column)
for r in range(self.row):
for c in range(another.column):
for i in range(self.column):
result[r, c] += self[r, i] * another[i, c]
return result
else:
raise TypeError(f"Unsupported type given for another ({type(another)})")
def transpose(self):
"""
<method Matrix.transpose>
Return self^T.
Example:
>>> a = Matrix(2, 3)
>>> for r in range(2):
... for c in range(3):
... a[r,c] = r*c
...
>>> a.transpose()
Matrix consist of 3 rows and 2 columns
[0, 0]
[0, 1]
[0, 2]
"""
result = Matrix(self.column, self.row)
for r in range(self.row):
for c in range(self.column):
result[c, r] = self[r, c]
return result
def ShermanMorrison(self, u, v):
"""
<method Matrix.ShermanMorrison>
Apply Sherman-Morrison formula in O(n^2).
To learn this formula, please look this:
https://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula
This method returns (A + uv^T)^(-1) where A^(-1) is self. Returns None if it's
impossible to calculate.
Warning: This method doesn't check if self is invertible.
Make sure self is invertible before execute this method.
Example:
>>> ainv = Matrix(3, 3, 0)
>>> for i in range(3): ainv[i,i] = 1
...
>>> u = Matrix(3, 1, 0)
>>> u[0,0], u[1,0], u[2,0] = 1, 2, -3
>>> v = Matrix(3, 1, 0)
>>> v[0,0], v[1,0], v[2,0] = 4, -2, 5
>>> ainv.ShermanMorrison(u, v)
Matrix consist of 3 rows and 3 columns
[ 1.2857142857142856, -0.14285714285714285, 0.3571428571428571]
[ 0.5714285714285714, 0.7142857142857143, 0.7142857142857142]
[ -0.8571428571428571, 0.42857142857142855, -0.0714285714285714]
"""
# Size validation
assert isinstance(u, Matrix) and isinstance(v, Matrix)
assert self.row == self.column == u.row == v.row # u, v should be column vector
assert u.column == v.column == 1 # u, v should be column vector
# Calculate
vT = v.transpose()
numerator_factor = (vT * self * u)[0, 0] + 1
if numerator_factor == 0:
return None # It's not invertable
return self - ((self * u) * (vT * self) * (1.0 / numerator_factor))
# Testing
if __name__ == "__main__":
def test1():
# a^(-1)
ainv = Matrix(3, 3, 0)
for i in range(3):
ainv[i, i] = 1
print(f"a^(-1) is {ainv}")
# u, v
u = Matrix(3, 1, 0)
u[0, 0], u[1, 0], u[2, 0] = 1, 2, -3
v = Matrix(3, 1, 0)
v[0, 0], v[1, 0], v[2, 0] = 4, -2, 5
print(f"u is {u}")
print(f"v is {v}")
print("uv^T is %s" % (u * v.transpose()))
# Sherman Morrison
print(f"(a + uv^T)^(-1) is {ainv.ShermanMorrison(u, v)}")
def test2():
import doctest
doctest.testmod()
test2()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from collections import deque
def tarjan(g):
"""
Tarjan's algo for finding strongly connected components in a directed graph
Uses two main attributes of each node to track reachability, the index of that node
within a component(index), and the lowest index reachable from that node(lowlink).
We then perform a dfs of the each component making sure to update these parameters
for each node and saving the nodes we visit on the way.
If ever we find that the lowest reachable node from a current node is equal to the
index of the current node then it must be the root of a strongly connected
component and so we save it and it's equireachable vertices as a strongly
connected component.
Complexity: strong_connect() is called at most once for each node and has a
complexity of O(|E|) as it is DFS.
Therefore this has complexity O(|V| + |E|) for a graph G = (V, E)
"""
n = len(g)
stack = deque()
on_stack = [False for _ in range(n)]
index_of = [-1 for _ in range(n)]
lowlink_of = index_of[:]
def strong_connect(v, index, components):
index_of[v] = index # the number when this node is seen
lowlink_of[v] = index # lowest rank node reachable from here
index += 1
stack.append(v)
on_stack[v] = True
for w in g[v]:
if index_of[w] == -1:
index = strong_connect(w, index, components)
lowlink_of[v] = (
lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v]
)
elif on_stack[w]:
lowlink_of[v] = (
lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v]
)
if lowlink_of[v] == index_of[v]:
component = []
w = stack.pop()
on_stack[w] = False
component.append(w)
while w != v:
w = stack.pop()
on_stack[w] = False
component.append(w)
components.append(component)
return index
components = []
for v in range(n):
if index_of[v] == -1:
strong_connect(v, 0, components)
return components
def create_graph(n, edges):
g = [[] for _ in range(n)]
for u, v in edges:
g[u].append(v)
return g
if __name__ == "__main__":
# Test
n_vertices = 7
source = [0, 0, 1, 2, 3, 3, 4, 4, 6]
target = [1, 3, 2, 0, 1, 4, 5, 6, 5]
edges = [(u, v) for u, v in zip(source, target)]
g = create_graph(n_vertices, edges)
assert [[5], [6], [4], [3, 2, 1, 0]] == tarjan(g)
| from collections import deque
def tarjan(g):
"""
Tarjan's algo for finding strongly connected components in a directed graph
Uses two main attributes of each node to track reachability, the index of that node
within a component(index), and the lowest index reachable from that node(lowlink).
We then perform a dfs of the each component making sure to update these parameters
for each node and saving the nodes we visit on the way.
If ever we find that the lowest reachable node from a current node is equal to the
index of the current node then it must be the root of a strongly connected
component and so we save it and it's equireachable vertices as a strongly
connected component.
Complexity: strong_connect() is called at most once for each node and has a
complexity of O(|E|) as it is DFS.
Therefore this has complexity O(|V| + |E|) for a graph G = (V, E)
"""
n = len(g)
stack = deque()
on_stack = [False for _ in range(n)]
index_of = [-1 for _ in range(n)]
lowlink_of = index_of[:]
def strong_connect(v, index, components):
index_of[v] = index # the number when this node is seen
lowlink_of[v] = index # lowest rank node reachable from here
index += 1
stack.append(v)
on_stack[v] = True
for w in g[v]:
if index_of[w] == -1:
index = strong_connect(w, index, components)
lowlink_of[v] = (
lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v]
)
elif on_stack[w]:
lowlink_of[v] = (
lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v]
)
if lowlink_of[v] == index_of[v]:
component = []
w = stack.pop()
on_stack[w] = False
component.append(w)
while w != v:
w = stack.pop()
on_stack[w] = False
component.append(w)
components.append(component)
return index
components = []
for v in range(n):
if index_of[v] == -1:
strong_connect(v, 0, components)
return components
def create_graph(n, edges):
g = [[] for _ in range(n)]
for u, v in edges:
g[u].append(v)
return g
if __name__ == "__main__":
# Test
n_vertices = 7
source = [0, 0, 1, 2, 3, 3, 4, 4, 6]
target = [1, 3, 2, 0, 1, 4, 5, 6, 5]
edges = [(u, v) for u, v in zip(source, target)]
g = create_graph(n_vertices, edges)
assert [[5], [6], [4], [3, 2, 1, 0]] == tarjan(g)
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import os
import random
import sys
from . import cryptomath_module as cryptomath
from . import rabin_miller
min_primitive_root = 3
# I have written my code naively same as definition of primitive root
# however every time I run this program, memory exceeded...
# so I used 4.80 Algorithm in
# Handbook of Applied Cryptography(CRC Press, ISBN : 0-8493-8523-7, October 1996)
# and it seems to run nicely!
def primitive_root(p_val: int) -> int:
print("Generating primitive root of p")
while True:
g = random.randrange(3, p_val)
if pow(g, 2, p_val) == 1:
continue
if pow(g, p_val, p_val) == 1:
continue
return g
def generate_key(key_size: int) -> tuple[tuple[int, int, int, int], tuple[int, int]]:
print("Generating prime p...")
p = rabin_miller.generateLargePrime(key_size) # select large prime number.
e_1 = primitive_root(p) # one primitive root on modulo p.
d = random.randrange(3, p) # private_key -> have to be greater than 2 for safety.
e_2 = cryptomath.find_mod_inverse(pow(e_1, d, p), p)
public_key = (key_size, e_1, e_2, p)
private_key = (key_size, d)
return public_key, private_key
def make_key_files(name: str, keySize: int) -> None:
if os.path.exists("%s_pubkey.txt" % name) or os.path.exists(
"%s_privkey.txt" % name
):
print("\nWARNING:")
print(
'"%s_pubkey.txt" or "%s_privkey.txt" already exists. \n'
"Use a different name or delete these files and re-run this program."
% (name, name)
)
sys.exit()
publicKey, privateKey = generate_key(keySize)
print("\nWriting public key to file %s_pubkey.txt..." % name)
with open("%s_pubkey.txt" % name, "w") as fo:
fo.write(
"%d,%d,%d,%d" % (publicKey[0], publicKey[1], publicKey[2], publicKey[3])
)
print("Writing private key to file %s_privkey.txt..." % name)
with open("%s_privkey.txt" % name, "w") as fo:
fo.write("%d,%d" % (privateKey[0], privateKey[1]))
def main() -> None:
print("Making key files...")
make_key_files("elgamal", 2048)
print("Key files generation successful")
if __name__ == "__main__":
main()
| import os
import random
import sys
from . import cryptomath_module as cryptomath
from . import rabin_miller
min_primitive_root = 3
# I have written my code naively same as definition of primitive root
# however every time I run this program, memory exceeded...
# so I used 4.80 Algorithm in
# Handbook of Applied Cryptography(CRC Press, ISBN : 0-8493-8523-7, October 1996)
# and it seems to run nicely!
def primitive_root(p_val: int) -> int:
print("Generating primitive root of p")
while True:
g = random.randrange(3, p_val)
if pow(g, 2, p_val) == 1:
continue
if pow(g, p_val, p_val) == 1:
continue
return g
def generate_key(key_size: int) -> tuple[tuple[int, int, int, int], tuple[int, int]]:
print("Generating prime p...")
p = rabin_miller.generateLargePrime(key_size) # select large prime number.
e_1 = primitive_root(p) # one primitive root on modulo p.
d = random.randrange(3, p) # private_key -> have to be greater than 2 for safety.
e_2 = cryptomath.find_mod_inverse(pow(e_1, d, p), p)
public_key = (key_size, e_1, e_2, p)
private_key = (key_size, d)
return public_key, private_key
def make_key_files(name: str, keySize: int) -> None:
if os.path.exists("%s_pubkey.txt" % name) or os.path.exists(
"%s_privkey.txt" % name
):
print("\nWARNING:")
print(
'"%s_pubkey.txt" or "%s_privkey.txt" already exists. \n'
"Use a different name or delete these files and re-run this program."
% (name, name)
)
sys.exit()
publicKey, privateKey = generate_key(keySize)
print("\nWriting public key to file %s_pubkey.txt..." % name)
with open("%s_pubkey.txt" % name, "w") as fo:
fo.write(
"%d,%d,%d,%d" % (publicKey[0], publicKey[1], publicKey[2], publicKey[3])
)
print("Writing private key to file %s_privkey.txt..." % name)
with open("%s_privkey.txt" % name, "w") as fo:
fo.write("%d,%d" % (privateKey[0], privateKey[1]))
def main() -> None:
print("Making key files...")
make_key_files("elgamal", 2048)
print("Key files generation successful")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Check whether Graph is Bipartite or Not using BFS
# A Bipartite Graph is a graph whose vertices can be divided into two independent sets,
# U and V such that every edge (u, v) either connects a vertex from U to V or a vertex
# from V to U. In other words, for every edge (u, v), either u belongs to U and v to V,
# or u belongs to V and v to U. We can also say that there is no edge that connects
# vertices of same set.
def checkBipartite(graph):
queue = []
visited = [False] * len(graph)
color = [-1] * len(graph)
def bfs():
while queue:
u = queue.pop(0)
visited[u] = True
for neighbour in graph[u]:
if neighbour == u:
return False
if color[neighbour] == -1:
color[neighbour] = 1 - color[u]
queue.append(neighbour)
elif color[neighbour] == color[u]:
return False
return True
for i in range(len(graph)):
if not visited[i]:
queue.append(i)
color[i] = 0
if bfs() is False:
return False
return True
if __name__ == "__main__":
# Adjacency List of graph
print(checkBipartite({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}))
| # Check whether Graph is Bipartite or Not using BFS
# A Bipartite Graph is a graph whose vertices can be divided into two independent sets,
# U and V such that every edge (u, v) either connects a vertex from U to V or a vertex
# from V to U. In other words, for every edge (u, v), either u belongs to U and v to V,
# or u belongs to V and v to U. We can also say that there is no edge that connects
# vertices of same set.
def checkBipartite(graph):
queue = []
visited = [False] * len(graph)
color = [-1] * len(graph)
def bfs():
while queue:
u = queue.pop(0)
visited[u] = True
for neighbour in graph[u]:
if neighbour == u:
return False
if color[neighbour] == -1:
color[neighbour] = 1 - color[u]
queue.append(neighbour)
elif color[neighbour] == color[u]:
return False
return True
for i in range(len(graph)):
if not visited[i]:
queue.append(i)
color[i] = 0
if bfs() is False:
return False
return True
if __name__ == "__main__":
# Adjacency List of graph
print(checkBipartite({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}))
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """example of simple chaos machine"""
# Chaos Machine (K, t, m)
K = [0.33, 0.44, 0.55, 0.44, 0.33]
t = 3
m = 5
# Buffer Space (with Parameters Space)
buffer_space: list[float] = []
params_space: list[float] = []
# Machine Time
machine_time = 0
def push(seed):
global buffer_space, params_space, machine_time, K, m, t
# Choosing Dynamical Systems (All)
for key, value in enumerate(buffer_space):
# Evolution Parameter
e = float(seed / value)
# Control Theory: Orbit Change
value = (buffer_space[(key + 1) % m] + e) % 1
# Control Theory: Trajectory Change
r = (params_space[key] + e) % 1 + 3
# Modification (Transition Function) - Jumps
buffer_space[key] = round(float(r * value * (1 - value)), 10)
params_space[key] = r # Saving to Parameters Space
# Logistic Map
assert max(buffer_space) < 1
assert max(params_space) < 4
# Machine Time
machine_time += 1
def pull():
global buffer_space, params_space, machine_time, K, m, t
# PRNG (Xorshift by George Marsaglia)
def xorshift(X, Y):
X ^= Y >> 13
Y ^= X << 17
X ^= Y >> 5
return X
# Choosing Dynamical Systems (Increment)
key = machine_time % m
# Evolution (Time Length)
for i in range(0, t):
# Variables (Position + Parameters)
r = params_space[key]
value = buffer_space[key]
# Modification (Transition Function) - Flow
buffer_space[key] = round(float(r * value * (1 - value)), 10)
params_space[key] = (machine_time * 0.01 + r * 1.01) % 1 + 3
# Choosing Chaotic Data
X = int(buffer_space[(key + 2) % m] * (10 ** 10))
Y = int(buffer_space[(key - 2) % m] * (10 ** 10))
# Machine Time
machine_time += 1
return xorshift(X, Y) % 0xFFFFFFFF
def reset():
global buffer_space, params_space, machine_time, K, m, t
buffer_space = K
params_space = [0] * m
machine_time = 0
if __name__ == "__main__":
# Initialization
reset()
# Pushing Data (Input)
import random
message = random.sample(range(0xFFFFFFFF), 100)
for chunk in message:
push(chunk)
# for controlling
inp = ""
# Pulling Data (Output)
while inp in ("e", "E"):
print("%s" % format(pull(), "#04x"))
print(buffer_space)
print(params_space)
inp = input("(e)exit? ").strip()
| """example of simple chaos machine"""
# Chaos Machine (K, t, m)
K = [0.33, 0.44, 0.55, 0.44, 0.33]
t = 3
m = 5
# Buffer Space (with Parameters Space)
buffer_space: list[float] = []
params_space: list[float] = []
# Machine Time
machine_time = 0
def push(seed):
global buffer_space, params_space, machine_time, K, m, t
# Choosing Dynamical Systems (All)
for key, value in enumerate(buffer_space):
# Evolution Parameter
e = float(seed / value)
# Control Theory: Orbit Change
value = (buffer_space[(key + 1) % m] + e) % 1
# Control Theory: Trajectory Change
r = (params_space[key] + e) % 1 + 3
# Modification (Transition Function) - Jumps
buffer_space[key] = round(float(r * value * (1 - value)), 10)
params_space[key] = r # Saving to Parameters Space
# Logistic Map
assert max(buffer_space) < 1
assert max(params_space) < 4
# Machine Time
machine_time += 1
def pull():
global buffer_space, params_space, machine_time, K, m, t
# PRNG (Xorshift by George Marsaglia)
def xorshift(X, Y):
X ^= Y >> 13
Y ^= X << 17
X ^= Y >> 5
return X
# Choosing Dynamical Systems (Increment)
key = machine_time % m
# Evolution (Time Length)
for i in range(0, t):
# Variables (Position + Parameters)
r = params_space[key]
value = buffer_space[key]
# Modification (Transition Function) - Flow
buffer_space[key] = round(float(r * value * (1 - value)), 10)
params_space[key] = (machine_time * 0.01 + r * 1.01) % 1 + 3
# Choosing Chaotic Data
X = int(buffer_space[(key + 2) % m] * (10 ** 10))
Y = int(buffer_space[(key - 2) % m] * (10 ** 10))
# Machine Time
machine_time += 1
return xorshift(X, Y) % 0xFFFFFFFF
def reset():
global buffer_space, params_space, machine_time, K, m, t
buffer_space = K
params_space = [0] * m
machine_time = 0
if __name__ == "__main__":
# Initialization
reset()
# Pushing Data (Input)
import random
message = random.sample(range(0xFFFFFFFF), 100)
for chunk in message:
push(chunk)
# for controlling
inp = ""
# Pulling Data (Output)
while inp in ("e", "E"):
print("%s" % format(pull(), "#04x"))
print(buffer_space)
print(params_space)
inp = input("(e)exit? ").strip()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| class things:
def __init__(self, name, value, weight):
self.name = name
self.value = value
self.weight = weight
def __repr__(self):
return f"{self.__class__.__name__}({self.name}, {self.value}, {self.weight})"
def get_value(self):
return self.value
def get_name(self):
return self.name
def get_weight(self):
return self.weight
def value_Weight(self):
return self.value / self.weight
def build_menu(name, value, weight):
menu = []
for i in range(len(value)):
menu.append(things(name[i], value[i], weight[i]))
return menu
def greedy(item, maxCost, keyFunc):
itemsCopy = sorted(item, key=keyFunc, reverse=True)
result = []
totalValue, total_cost = 0.0, 0.0
for i in range(len(itemsCopy)):
if (total_cost + itemsCopy[i].get_weight()) <= maxCost:
result.append(itemsCopy[i])
total_cost += itemsCopy[i].get_weight()
totalValue += itemsCopy[i].get_value()
return (result, totalValue)
def test_greedy():
"""
>>> food = ["Burger", "Pizza", "Coca Cola", "Rice",
... "Sambhar", "Chicken", "Fries", "Milk"]
>>> value = [80, 100, 60, 70, 50, 110, 90, 60]
>>> weight = [40, 60, 40, 70, 100, 85, 55, 70]
>>> foods = build_menu(food, value, weight)
>>> foods # doctest: +NORMALIZE_WHITESPACE
[things(Burger, 80, 40), things(Pizza, 100, 60), things(Coca Cola, 60, 40),
things(Rice, 70, 70), things(Sambhar, 50, 100), things(Chicken, 110, 85),
things(Fries, 90, 55), things(Milk, 60, 70)]
>>> greedy(foods, 500, things.get_value) # doctest: +NORMALIZE_WHITESPACE
([things(Chicken, 110, 85), things(Pizza, 100, 60), things(Fries, 90, 55),
things(Burger, 80, 40), things(Rice, 70, 70), things(Coca Cola, 60, 40),
things(Milk, 60, 70)], 570.0)
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
| class things:
def __init__(self, name, value, weight):
self.name = name
self.value = value
self.weight = weight
def __repr__(self):
return f"{self.__class__.__name__}({self.name}, {self.value}, {self.weight})"
def get_value(self):
return self.value
def get_name(self):
return self.name
def get_weight(self):
return self.weight
def value_Weight(self):
return self.value / self.weight
def build_menu(name, value, weight):
menu = []
for i in range(len(value)):
menu.append(things(name[i], value[i], weight[i]))
return menu
def greedy(item, maxCost, keyFunc):
itemsCopy = sorted(item, key=keyFunc, reverse=True)
result = []
totalValue, total_cost = 0.0, 0.0
for i in range(len(itemsCopy)):
if (total_cost + itemsCopy[i].get_weight()) <= maxCost:
result.append(itemsCopy[i])
total_cost += itemsCopy[i].get_weight()
totalValue += itemsCopy[i].get_value()
return (result, totalValue)
def test_greedy():
"""
>>> food = ["Burger", "Pizza", "Coca Cola", "Rice",
... "Sambhar", "Chicken", "Fries", "Milk"]
>>> value = [80, 100, 60, 70, 50, 110, 90, 60]
>>> weight = [40, 60, 40, 70, 100, 85, 55, 70]
>>> foods = build_menu(food, value, weight)
>>> foods # doctest: +NORMALIZE_WHITESPACE
[things(Burger, 80, 40), things(Pizza, 100, 60), things(Coca Cola, 60, 40),
things(Rice, 70, 70), things(Sambhar, 50, 100), things(Chicken, 110, 85),
things(Fries, 90, 55), things(Milk, 60, 70)]
>>> greedy(foods, 500, things.get_value) # doctest: +NORMALIZE_WHITESPACE
([things(Chicken, 110, 85), things(Pizza, 100, 60), things(Fries, 90, 55),
things(Burger, 80, 40), things(Rice, 70, 70), things(Coca Cola, 60, 40),
things(Milk, 60, 70)], 570.0)
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| END = "#"
class Trie:
def __init__(self):
self._trie = {}
def insert_word(self, text):
trie = self._trie
for char in text:
if char not in trie:
trie[char] = {}
trie = trie[char]
trie[END] = True
def find_word(self, prefix):
trie = self._trie
for char in prefix:
if char in trie:
trie = trie[char]
else:
return []
return self._elements(trie)
def _elements(self, d):
result = []
for c, v in d.items():
if c == END:
sub_result = [" "]
else:
sub_result = [c + s for s in self._elements(v)]
result.extend(sub_result)
return tuple(result)
trie = Trie()
words = ("depart", "detergent", "daring", "dog", "deer", "deal")
for word in words:
trie.insert_word(word)
def autocomplete_using_trie(s):
"""
>>> trie = Trie()
>>> for word in words:
... trie.insert_word(word)
...
>>> matches = autocomplete_using_trie("de")
"detergent " in matches
True
"dog " in matches
False
"""
suffixes = trie.find_word(s)
return tuple(s + w for w in suffixes)
def main():
print(autocomplete_using_trie("de"))
if __name__ == "__main__":
main()
| END = "#"
class Trie:
def __init__(self):
self._trie = {}
def insert_word(self, text):
trie = self._trie
for char in text:
if char not in trie:
trie[char] = {}
trie = trie[char]
trie[END] = True
def find_word(self, prefix):
trie = self._trie
for char in prefix:
if char in trie:
trie = trie[char]
else:
return []
return self._elements(trie)
def _elements(self, d):
result = []
for c, v in d.items():
if c == END:
sub_result = [" "]
else:
sub_result = [c + s for s in self._elements(v)]
result.extend(sub_result)
return tuple(result)
trie = Trie()
words = ("depart", "detergent", "daring", "dog", "deer", "deal")
for word in words:
trie.insert_word(word)
def autocomplete_using_trie(s):
"""
>>> trie = Trie()
>>> for word in words:
... trie.insert_word(word)
...
>>> matches = autocomplete_using_trie("de")
"detergent " in matches
True
"dog " in matches
False
"""
suffixes = trie.find_word(s)
return tuple(s + w for w in suffixes)
def main():
print(autocomplete_using_trie("de"))
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import argparse
import datetime
def zeller(date_input: str) -> str:
"""
Zellers Congruence Algorithm
Find the day of the week for nearly any Gregorian or Julian calendar date
>>> zeller('01-31-2010')
'Your date 01-31-2010, is a Sunday!'
Validate out of range month
>>> zeller('13-31-2010')
Traceback (most recent call last):
...
ValueError: Month must be between 1 - 12
>>> zeller('.2-31-2010')
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: '.2'
Validate out of range date:
>>> zeller('01-33-2010')
Traceback (most recent call last):
...
ValueError: Date must be between 1 - 31
>>> zeller('01-.4-2010')
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: '.4'
Validate second separator:
>>> zeller('01-31*2010')
Traceback (most recent call last):
...
ValueError: Date separator must be '-' or '/'
Validate first separator:
>>> zeller('01^31-2010')
Traceback (most recent call last):
...
ValueError: Date separator must be '-' or '/'
Validate out of range year:
>>> zeller('01-31-8999')
Traceback (most recent call last):
...
ValueError: Year out of range. There has to be some sort of limit...right?
Test null input:
>>> zeller()
Traceback (most recent call last):
...
TypeError: zeller() missing 1 required positional argument: 'date_input'
Test length of date_input:
>>> zeller('')
Traceback (most recent call last):
...
ValueError: Must be 10 characters long
>>> zeller('01-31-19082939')
Traceback (most recent call last):
...
ValueError: Must be 10 characters long"""
# Days of the week for response
days = {
"0": "Sunday",
"1": "Monday",
"2": "Tuesday",
"3": "Wednesday",
"4": "Thursday",
"5": "Friday",
"6": "Saturday",
}
convert_datetime_days = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 0}
# Validate
if not 0 < len(date_input) < 11:
raise ValueError("Must be 10 characters long")
# Get month
m: int = int(date_input[0] + date_input[1])
# Validate
if not 0 < m < 13:
raise ValueError("Month must be between 1 - 12")
sep_1: str = date_input[2]
# Validate
if sep_1 not in ["-", "/"]:
raise ValueError("Date separator must be '-' or '/'")
# Get day
d: int = int(date_input[3] + date_input[4])
# Validate
if not 0 < d < 32:
raise ValueError("Date must be between 1 - 31")
# Get second separator
sep_2: str = date_input[5]
# Validate
if sep_2 not in ["-", "/"]:
raise ValueError("Date separator must be '-' or '/'")
# Get year
y: int = int(date_input[6] + date_input[7] + date_input[8] + date_input[9])
# Arbitrary year range
if not 45 < y < 8500:
raise ValueError(
"Year out of range. There has to be some sort of limit...right?"
)
# Get datetime obj for validation
dt_ck = datetime.date(int(y), int(m), int(d))
# Start math
if m <= 2:
y = y - 1
m = m + 12
# maths var
c: int = int(str(y)[:2])
k: int = int(str(y)[2:])
t: int = int(2.6 * m - 5.39)
u: int = int(c / 4)
v: int = int(k / 4)
x: int = int(d + k)
z: int = int(t + u + v + x)
w: int = int(z - (2 * c))
f: int = round(w % 7)
# End math
# Validate math
if f != convert_datetime_days[dt_ck.weekday()]:
raise AssertionError("The date was evaluated incorrectly. Contact developer.")
# Response
response: str = f"Your date {date_input}, is a {days[str(f)]}!"
return response
if __name__ == "__main__":
import doctest
doctest.testmod()
parser = argparse.ArgumentParser(
description=(
"Find out what day of the week nearly any date is or was. Enter "
"date as a string in the mm-dd-yyyy or mm/dd/yyyy format"
)
)
parser.add_argument(
"date_input", type=str, help="Date as a string (mm-dd-yyyy or mm/dd/yyyy)"
)
args = parser.parse_args()
zeller(args.date_input)
| import argparse
import datetime
def zeller(date_input: str) -> str:
"""
Zellers Congruence Algorithm
Find the day of the week for nearly any Gregorian or Julian calendar date
>>> zeller('01-31-2010')
'Your date 01-31-2010, is a Sunday!'
Validate out of range month
>>> zeller('13-31-2010')
Traceback (most recent call last):
...
ValueError: Month must be between 1 - 12
>>> zeller('.2-31-2010')
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: '.2'
Validate out of range date:
>>> zeller('01-33-2010')
Traceback (most recent call last):
...
ValueError: Date must be between 1 - 31
>>> zeller('01-.4-2010')
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: '.4'
Validate second separator:
>>> zeller('01-31*2010')
Traceback (most recent call last):
...
ValueError: Date separator must be '-' or '/'
Validate first separator:
>>> zeller('01^31-2010')
Traceback (most recent call last):
...
ValueError: Date separator must be '-' or '/'
Validate out of range year:
>>> zeller('01-31-8999')
Traceback (most recent call last):
...
ValueError: Year out of range. There has to be some sort of limit...right?
Test null input:
>>> zeller()
Traceback (most recent call last):
...
TypeError: zeller() missing 1 required positional argument: 'date_input'
Test length of date_input:
>>> zeller('')
Traceback (most recent call last):
...
ValueError: Must be 10 characters long
>>> zeller('01-31-19082939')
Traceback (most recent call last):
...
ValueError: Must be 10 characters long"""
# Days of the week for response
days = {
"0": "Sunday",
"1": "Monday",
"2": "Tuesday",
"3": "Wednesday",
"4": "Thursday",
"5": "Friday",
"6": "Saturday",
}
convert_datetime_days = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 0}
# Validate
if not 0 < len(date_input) < 11:
raise ValueError("Must be 10 characters long")
# Get month
m: int = int(date_input[0] + date_input[1])
# Validate
if not 0 < m < 13:
raise ValueError("Month must be between 1 - 12")
sep_1: str = date_input[2]
# Validate
if sep_1 not in ["-", "/"]:
raise ValueError("Date separator must be '-' or '/'")
# Get day
d: int = int(date_input[3] + date_input[4])
# Validate
if not 0 < d < 32:
raise ValueError("Date must be between 1 - 31")
# Get second separator
sep_2: str = date_input[5]
# Validate
if sep_2 not in ["-", "/"]:
raise ValueError("Date separator must be '-' or '/'")
# Get year
y: int = int(date_input[6] + date_input[7] + date_input[8] + date_input[9])
# Arbitrary year range
if not 45 < y < 8500:
raise ValueError(
"Year out of range. There has to be some sort of limit...right?"
)
# Get datetime obj for validation
dt_ck = datetime.date(int(y), int(m), int(d))
# Start math
if m <= 2:
y = y - 1
m = m + 12
# maths var
c: int = int(str(y)[:2])
k: int = int(str(y)[2:])
t: int = int(2.6 * m - 5.39)
u: int = int(c / 4)
v: int = int(k / 4)
x: int = int(d + k)
z: int = int(t + u + v + x)
w: int = int(z - (2 * c))
f: int = round(w % 7)
# End math
# Validate math
if f != convert_datetime_days[dt_ck.weekday()]:
raise AssertionError("The date was evaluated incorrectly. Contact developer.")
# Response
response: str = f"Your date {date_input}, is a {days[str(f)]}!"
return response
if __name__ == "__main__":
import doctest
doctest.testmod()
parser = argparse.ArgumentParser(
description=(
"Find out what day of the week nearly any date is or was. Enter "
"date as a string in the mm-dd-yyyy or mm/dd/yyyy format"
)
)
parser.add_argument(
"date_input", type=str, help="Date as a string (mm-dd-yyyy or mm/dd/yyyy)"
)
args = parser.parse_args()
zeller(args.date_input)
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance"""
def jaro_winkler(str1: str, str2: str) -> float:
"""
Jaro–Winkler distance is a string metric measuring an edit distance between two
sequences.
Output value is between 0.0 and 1.0.
>>> jaro_winkler("martha", "marhta")
0.9611111111111111
>>> jaro_winkler("CRATE", "TRACE")
0.7333333333333334
>>> jaro_winkler("test", "dbdbdbdb")
0.0
>>> jaro_winkler("test", "test")
1.0
>>> jaro_winkler("hello world", "HeLLo W0rlD")
0.6363636363636364
>>> jaro_winkler("test", "")
0.0
>>> jaro_winkler("hello", "world")
0.4666666666666666
>>> jaro_winkler("hell**o", "*world")
0.4365079365079365
"""
def get_matched_characters(_str1: str, _str2: str) -> str:
matched = []
limit = min(len(_str1), len(_str2)) // 2
for i, l in enumerate(_str1):
left = int(max(0, i - limit))
right = int(min(i + limit + 1, len(_str2)))
if l in _str2[left:right]:
matched.append(l)
_str2 = f"{_str2[0:_str2.index(l)]} {_str2[_str2.index(l) + 1:]}"
return "".join(matched)
# matching characters
matching_1 = get_matched_characters(str1, str2)
matching_2 = get_matched_characters(str2, str1)
match_count = len(matching_1)
# transposition
transpositions = (
len([(c1, c2) for c1, c2 in zip(matching_1, matching_2) if c1 != c2]) // 2
)
if not match_count:
jaro = 0.0
else:
jaro = (
1
/ 3
* (
match_count / len(str1)
+ match_count / len(str2)
+ (match_count - transpositions) / match_count
)
)
# common prefix up to 4 characters
prefix_len = 0
for c1, c2 in zip(str1[:4], str2[:4]):
if c1 == c2:
prefix_len += 1
else:
break
return jaro + 0.1 * prefix_len * (1 - jaro)
if __name__ == "__main__":
import doctest
doctest.testmod()
print(jaro_winkler("hello", "world"))
| """https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance"""
def jaro_winkler(str1: str, str2: str) -> float:
"""
Jaro–Winkler distance is a string metric measuring an edit distance between two
sequences.
Output value is between 0.0 and 1.0.
>>> jaro_winkler("martha", "marhta")
0.9611111111111111
>>> jaro_winkler("CRATE", "TRACE")
0.7333333333333334
>>> jaro_winkler("test", "dbdbdbdb")
0.0
>>> jaro_winkler("test", "test")
1.0
>>> jaro_winkler("hello world", "HeLLo W0rlD")
0.6363636363636364
>>> jaro_winkler("test", "")
0.0
>>> jaro_winkler("hello", "world")
0.4666666666666666
>>> jaro_winkler("hell**o", "*world")
0.4365079365079365
"""
def get_matched_characters(_str1: str, _str2: str) -> str:
matched = []
limit = min(len(_str1), len(_str2)) // 2
for i, l in enumerate(_str1):
left = int(max(0, i - limit))
right = int(min(i + limit + 1, len(_str2)))
if l in _str2[left:right]:
matched.append(l)
_str2 = f"{_str2[0:_str2.index(l)]} {_str2[_str2.index(l) + 1:]}"
return "".join(matched)
# matching characters
matching_1 = get_matched_characters(str1, str2)
matching_2 = get_matched_characters(str2, str1)
match_count = len(matching_1)
# transposition
transpositions = (
len([(c1, c2) for c1, c2 in zip(matching_1, matching_2) if c1 != c2]) // 2
)
if not match_count:
jaro = 0.0
else:
jaro = (
1
/ 3
* (
match_count / len(str1)
+ match_count / len(str2)
+ (match_count - transpositions) / match_count
)
)
# common prefix up to 4 characters
prefix_len = 0
for c1, c2 in zip(str1[:4], str2[:4]):
if c1 == c2:
prefix_len += 1
else:
break
return jaro + 0.1 * prefix_len * (1 - jaro)
if __name__ == "__main__":
import doctest
doctest.testmod()
print(jaro_winkler("hello", "world"))
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Introspective Sort is hybrid sort (Quick Sort + Heap Sort + Insertion Sort)
if the size of the list is under 16, use insertion sort
https://en.wikipedia.org/wiki/Introsort
"""
import math
def insertion_sort(array: list, start: int = 0, end: int = 0) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> insertion_sort(array, 0, len(array))
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
end = end or len(array)
for i in range(start, end):
temp_index = i
temp_index_value = array[i]
while temp_index != start and temp_index_value < array[temp_index - 1]:
array[temp_index] = array[temp_index - 1]
temp_index -= 1
array[temp_index] = temp_index_value
return array
def heapify(array: list, index: int, heap_size: int) -> None: # Max Heap
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> heapify(array, len(array) // 2 ,len(array))
"""
largest = index
left_index = 2 * index + 1 # Left Node
right_index = 2 * index + 2 # Right Node
if left_index < heap_size and array[largest] < array[left_index]:
largest = left_index
if right_index < heap_size and array[largest] < array[right_index]:
largest = right_index
if largest != index:
array[index], array[largest] = array[largest], array[index]
heapify(array, largest, heap_size)
def heap_sort(array: list) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> heap_sort(array)
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
n = len(array)
for i in range(n // 2, -1, -1):
heapify(array, i, n)
for i in range(n - 1, 0, -1):
array[i], array[0] = array[0], array[i]
heapify(array, 0, i)
return array
def median_of_3(
array: list, first_index: int, middle_index: int, last_index: int
) -> int:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> median_of_3(array, 0, 0 + ((len(array) - 0) // 2) + 1, len(array) - 1)
12
"""
if (array[first_index] > array[middle_index]) != (
array[first_index] > array[last_index]
):
return array[first_index]
elif (array[middle_index] > array[first_index]) != (
array[middle_index] > array[last_index]
):
return array[middle_index]
else:
return array[last_index]
def partition(array: list, low: int, high: int, pivot: int) -> int:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> partition(array, 0, len(array), 12)
8
"""
i = low
j = high
while True:
while array[i] < pivot:
i += 1
j -= 1
while pivot < array[j]:
j -= 1
if i >= j:
return i
array[i], array[j] = array[j], array[i]
i += 1
def sort(array: list) -> list:
"""
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> sort([4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12])
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
>>> sort([-1, -5, -3, -13, -44])
[-44, -13, -5, -3, -1]
>>> sort([])
[]
>>> sort([5])
[5]
>>> sort([-3, 0, -7, 6, 23, -34])
[-34, -7, -3, 0, 6, 23]
>>> sort([1.7, 1.0, 3.3, 2.1, 0.3 ])
[0.3, 1.0, 1.7, 2.1, 3.3]
>>> sort(['d', 'a', 'b', 'e', 'c'])
['a', 'b', 'c', 'd', 'e']
"""
if len(array) == 0:
return array
max_depth = 2 * math.ceil(math.log2(len(array)))
size_threshold = 16
return intro_sort(array, 0, len(array), size_threshold, max_depth)
def intro_sort(
array: list, start: int, end: int, size_threshold: int, max_depth: int
) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> max_depth = 2 * math.ceil(math.log2(len(array)))
>>> intro_sort(array, 0, len(array), 16, max_depth)
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
while end - start > size_threshold:
if max_depth == 0:
return heap_sort(array)
max_depth -= 1
pivot = median_of_3(array, start, start + ((end - start) // 2) + 1, end - 1)
p = partition(array, start, end, pivot)
intro_sort(array, p, end, size_threshold, max_depth)
end = p
return insertion_sort(array, start, end)
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma : ").strip()
unsorted = [float(item) for item in user_input.split(",")]
print(sort(unsorted))
| """
Introspective Sort is hybrid sort (Quick Sort + Heap Sort + Insertion Sort)
if the size of the list is under 16, use insertion sort
https://en.wikipedia.org/wiki/Introsort
"""
import math
def insertion_sort(array: list, start: int = 0, end: int = 0) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> insertion_sort(array, 0, len(array))
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
end = end or len(array)
for i in range(start, end):
temp_index = i
temp_index_value = array[i]
while temp_index != start and temp_index_value < array[temp_index - 1]:
array[temp_index] = array[temp_index - 1]
temp_index -= 1
array[temp_index] = temp_index_value
return array
def heapify(array: list, index: int, heap_size: int) -> None: # Max Heap
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> heapify(array, len(array) // 2 ,len(array))
"""
largest = index
left_index = 2 * index + 1 # Left Node
right_index = 2 * index + 2 # Right Node
if left_index < heap_size and array[largest] < array[left_index]:
largest = left_index
if right_index < heap_size and array[largest] < array[right_index]:
largest = right_index
if largest != index:
array[index], array[largest] = array[largest], array[index]
heapify(array, largest, heap_size)
def heap_sort(array: list) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> heap_sort(array)
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
n = len(array)
for i in range(n // 2, -1, -1):
heapify(array, i, n)
for i in range(n - 1, 0, -1):
array[i], array[0] = array[0], array[i]
heapify(array, 0, i)
return array
def median_of_3(
array: list, first_index: int, middle_index: int, last_index: int
) -> int:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> median_of_3(array, 0, 0 + ((len(array) - 0) // 2) + 1, len(array) - 1)
12
"""
if (array[first_index] > array[middle_index]) != (
array[first_index] > array[last_index]
):
return array[first_index]
elif (array[middle_index] > array[first_index]) != (
array[middle_index] > array[last_index]
):
return array[middle_index]
else:
return array[last_index]
def partition(array: list, low: int, high: int, pivot: int) -> int:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> partition(array, 0, len(array), 12)
8
"""
i = low
j = high
while True:
while array[i] < pivot:
i += 1
j -= 1
while pivot < array[j]:
j -= 1
if i >= j:
return i
array[i], array[j] = array[j], array[i]
i += 1
def sort(array: list) -> list:
"""
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> sort([4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12])
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
>>> sort([-1, -5, -3, -13, -44])
[-44, -13, -5, -3, -1]
>>> sort([])
[]
>>> sort([5])
[5]
>>> sort([-3, 0, -7, 6, 23, -34])
[-34, -7, -3, 0, 6, 23]
>>> sort([1.7, 1.0, 3.3, 2.1, 0.3 ])
[0.3, 1.0, 1.7, 2.1, 3.3]
>>> sort(['d', 'a', 'b', 'e', 'c'])
['a', 'b', 'c', 'd', 'e']
"""
if len(array) == 0:
return array
max_depth = 2 * math.ceil(math.log2(len(array)))
size_threshold = 16
return intro_sort(array, 0, len(array), size_threshold, max_depth)
def intro_sort(
array: list, start: int, end: int, size_threshold: int, max_depth: int
) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> max_depth = 2 * math.ceil(math.log2(len(array)))
>>> intro_sort(array, 0, len(array), 16, max_depth)
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
while end - start > size_threshold:
if max_depth == 0:
return heap_sort(array)
max_depth -= 1
pivot = median_of_3(array, start, start + ((end - start) // 2) + 1, end - 1)
p = partition(array, start, end, pivot)
intro_sort(array, p, end, size_threshold, max_depth)
end = p
return insertion_sort(array, start, end)
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma : ").strip()
unsorted = [float(item) for item in user_input.split(",")]
print(sort(unsorted))
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from collections import defaultdict
from graphs.minimum_spanning_tree_prims import PrimsAlgorithm as mst
def test_prim_successful_result():
num_nodes, num_edges = 9, 14 # noqa: F841
edges = [
[0, 1, 4],
[0, 7, 8],
[1, 2, 8],
[7, 8, 7],
[7, 6, 1],
[2, 8, 2],
[8, 6, 6],
[2, 3, 7],
[2, 5, 4],
[6, 5, 2],
[3, 5, 14],
[3, 4, 9],
[5, 4, 10],
[1, 7, 11],
]
adjancency = defaultdict(list)
for node1, node2, cost in edges:
adjancency[node1].append([node2, cost])
adjancency[node2].append([node1, cost])
result = mst(adjancency)
expected = [
[7, 6, 1],
[2, 8, 2],
[6, 5, 2],
[0, 1, 4],
[2, 5, 4],
[2, 3, 7],
[0, 7, 8],
[3, 4, 9],
]
for answer in expected:
edge = tuple(answer[:2])
reverse = tuple(edge[::-1])
assert edge in result or reverse in result
| from collections import defaultdict
from graphs.minimum_spanning_tree_prims import PrimsAlgorithm as mst
def test_prim_successful_result():
num_nodes, num_edges = 9, 14 # noqa: F841
edges = [
[0, 1, 4],
[0, 7, 8],
[1, 2, 8],
[7, 8, 7],
[7, 6, 1],
[2, 8, 2],
[8, 6, 6],
[2, 3, 7],
[2, 5, 4],
[6, 5, 2],
[3, 5, 14],
[3, 4, 9],
[5, 4, 10],
[1, 7, 11],
]
adjancency = defaultdict(list)
for node1, node2, cost in edges:
adjancency[node1].append([node2, cost])
adjancency[node2].append([node1, cost])
result = mst(adjancency)
expected = [
[7, 6, 1],
[2, 8, 2],
[6, 5, 2],
[0, 1, 4],
[2, 5, 4],
[2, 3, 7],
[0, 7, 8],
[3, 4, 9],
]
for answer in expected:
edge = tuple(answer[:2])
reverse = tuple(edge[::-1])
assert edge in result or reverse in result
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 91: https://projecteuler.net/problem=91
The points P (x1, y1) and Q (x2, y2) are plotted at integer coordinates and
are joined to the origin, O(0,0), to form ΔOPQ.

There are exactly fourteen triangles containing a right angle that can be formed
when each coordinate lies between 0 and 2 inclusive; that is,
0 ≤ x1, y1, x2, y2 ≤ 2.

Given that 0 ≤ x1, y1, x2, y2 ≤ 50, how many right triangles can be formed?
"""
from itertools import combinations, product
def is_right(x1: int, y1: int, x2: int, y2: int) -> bool:
"""
Check if the triangle described by P(x1,y1), Q(x2,y2) and O(0,0) is right-angled.
Note: this doesn't check if P and Q are equal, but that's handled by the use of
itertools.combinations in the solution function.
>>> is_right(0, 1, 2, 0)
True
>>> is_right(1, 0, 2, 2)
False
"""
if x1 == y1 == 0 or x2 == y2 == 0:
return False
a_square = x1 * x1 + y1 * y1
b_square = x2 * x2 + y2 * y2
c_square = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2)
return (
a_square + b_square == c_square
or a_square + c_square == b_square
or b_square + c_square == a_square
)
def solution(limit: int = 50) -> int:
"""
Return the number of right triangles OPQ that can be formed by two points P, Q
which have both x- and y- coordinates between 0 and limit inclusive.
>>> solution(2)
14
>>> solution(10)
448
"""
return sum(
1
for pt1, pt2 in combinations(product(range(limit + 1), repeat=2), 2)
if is_right(*pt1, *pt2)
)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 91: https://projecteuler.net/problem=91
The points P (x1, y1) and Q (x2, y2) are plotted at integer coordinates and
are joined to the origin, O(0,0), to form ΔOPQ.

There are exactly fourteen triangles containing a right angle that can be formed
when each coordinate lies between 0 and 2 inclusive; that is,
0 ≤ x1, y1, x2, y2 ≤ 2.

Given that 0 ≤ x1, y1, x2, y2 ≤ 50, how many right triangles can be formed?
"""
from itertools import combinations, product
def is_right(x1: int, y1: int, x2: int, y2: int) -> bool:
"""
Check if the triangle described by P(x1,y1), Q(x2,y2) and O(0,0) is right-angled.
Note: this doesn't check if P and Q are equal, but that's handled by the use of
itertools.combinations in the solution function.
>>> is_right(0, 1, 2, 0)
True
>>> is_right(1, 0, 2, 2)
False
"""
if x1 == y1 == 0 or x2 == y2 == 0:
return False
a_square = x1 * x1 + y1 * y1
b_square = x2 * x2 + y2 * y2
c_square = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2)
return (
a_square + b_square == c_square
or a_square + c_square == b_square
or b_square + c_square == a_square
)
def solution(limit: int = 50) -> int:
"""
Return the number of right triangles OPQ that can be formed by two points P, Q
which have both x- and y- coordinates between 0 and limit inclusive.
>>> solution(2)
14
>>> solution(10)
448
"""
return sum(
1
for pt1, pt2 in combinations(product(range(limit + 1), repeat=2), 2)
if is_right(*pt1, *pt2)
)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Counting Sundays
Problem 19
You are given the following information, but you may prefer to do some research
for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century
unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century
(1 Jan 1901 to 31 Dec 2000)?
"""
def solution():
"""Returns the number of mondays that fall on the first of the month during
the twentieth century (1 Jan 1901 to 31 Dec 2000)?
>>> solution()
171
"""
days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
day = 6
month = 1
year = 1901
sundays = 0
while year < 2001:
day += 7
if (year % 4 == 0 and not year % 100 == 0) or (year % 400 == 0):
if day > days_per_month[month - 1] and month != 2:
month += 1
day = day - days_per_month[month - 2]
elif day > 29 and month == 2:
month += 1
day = day - 29
else:
if day > days_per_month[month - 1]:
month += 1
day = day - days_per_month[month - 2]
if month > 12:
year += 1
month = 1
if year < 2001 and day == 1:
sundays += 1
return sundays
if __name__ == "__main__":
print(solution())
| """
Counting Sundays
Problem 19
You are given the following information, but you may prefer to do some research
for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century
unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century
(1 Jan 1901 to 31 Dec 2000)?
"""
def solution():
"""Returns the number of mondays that fall on the first of the month during
the twentieth century (1 Jan 1901 to 31 Dec 2000)?
>>> solution()
171
"""
days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
day = 6
month = 1
year = 1901
sundays = 0
while year < 2001:
day += 7
if (year % 4 == 0 and not year % 100 == 0) or (year % 400 == 0):
if day > days_per_month[month - 1] and month != 2:
month += 1
day = day - days_per_month[month - 2]
elif day > 29 and month == 2:
month += 1
day = day - 29
else:
if day > days_per_month[month - 1]:
month += 1
day = day - days_per_month[month - 2]
if month > 12:
year += 1
month = 1
if year < 2001 and day == 1:
sundays += 1
return sundays
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The 5-digit number, 16807=75, is also a fifth power. Similarly, the 9-digit number,
134217728=89, is a ninth power.
How many n-digit positive integers exist which are also an nth power?
"""
"""
The maximum base can be 9 because all n-digit numbers < 10^n.
Now 9**23 has 22 digits so the maximum power can be 22.
Using these conclusions, we will calculate the result.
"""
def solution(max_base: int = 10, max_power: int = 22) -> int:
"""
Returns the count of all n-digit numbers which are nth power
>>> solution(10, 22)
49
>>> solution(0, 0)
0
>>> solution(1, 1)
0
>>> solution(-1, -1)
0
"""
bases = range(1, max_base)
powers = range(1, max_power)
return sum(
1 for power in powers for base in bases if len(str(base ** power)) == power
)
if __name__ == "__main__":
print(f"{solution(10, 22) = }")
| """
The 5-digit number, 16807=75, is also a fifth power. Similarly, the 9-digit number,
134217728=89, is a ninth power.
How many n-digit positive integers exist which are also an nth power?
"""
"""
The maximum base can be 9 because all n-digit numbers < 10^n.
Now 9**23 has 22 digits so the maximum power can be 22.
Using these conclusions, we will calculate the result.
"""
def solution(max_base: int = 10, max_power: int = 22) -> int:
"""
Returns the count of all n-digit numbers which are nth power
>>> solution(10, 22)
49
>>> solution(0, 0)
0
>>> solution(1, 1)
0
>>> solution(-1, -1)
0
"""
bases = range(1, max_base)
powers = range(1, max_power)
return sum(
1 for power in powers for base in bases if len(str(base ** power)) == power
)
if __name__ == "__main__":
print(f"{solution(10, 22) = }")
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #
| #
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author: Alexander Joslin
GitHub: github.com/echoaj
Explanation: https://medium.com/@haleesammar/implemented-in-js-dijkstras-2-stack-
algorithm-for-evaluating-mathematical-expressions-fc0837dae1ea
We can use Dijkstra's two stack algorithm to solve an equation
such as: (5 + ((4 * 2) * (2 + 3)))
THESE ARE THE ALGORITHM'S RULES:
RULE 1: Scan the expression from left to right. When an operand is encountered,
push it onto the the operand stack.
RULE 2: When an operator is encountered in the expression,
push it onto the operator stack.
RULE 3: When a left parenthesis is encountered in the expression, ignore it.
RULE 4: When a right parenthesis is encountered in the expression,
pop an operator off the operator stack. The two operands it must
operate on must be the last two operands pushed onto the operand stack.
We therefore pop the operand stack twice, perform the operation,
and push the result back onto the operand stack so it will be available
for use as an operand of the next operator popped off the operator stack.
RULE 5: When the entire infix expression has been scanned, the value left on
the operand stack represents the value of the expression.
NOTE: It only works with whole numbers.
"""
__author__ = "Alexander Joslin"
import operator as op
from .stack import Stack
def dijkstras_two_stack_algorithm(equation: str) -> int:
"""
DocTests
>>> dijkstras_two_stack_algorithm("(5 + 3)")
8
>>> dijkstras_two_stack_algorithm("((9 - (2 + 9)) + (8 - 1))")
5
>>> dijkstras_two_stack_algorithm("((((3 - 2) - (2 + 3)) + (2 - 4)) + 3)")
-3
:param equation: a string
:return: result: an integer
"""
operators = {"*": op.mul, "/": op.truediv, "+": op.add, "-": op.sub}
operand_stack = Stack()
operator_stack = Stack()
for i in equation:
if i.isdigit():
# RULE 1
operand_stack.push(int(i))
elif i in operators:
# RULE 2
operator_stack.push(i)
elif i == ")":
# RULE 4
opr = operator_stack.peek()
operator_stack.pop()
num1 = operand_stack.peek()
operand_stack.pop()
num2 = operand_stack.peek()
operand_stack.pop()
total = operators[opr](num2, num1)
operand_stack.push(total)
# RULE 5
return operand_stack.peek()
if __name__ == "__main__":
equation = "(5 + ((4 * 2) * (2 + 3)))"
# answer = 45
print(f"{equation} = {dijkstras_two_stack_algorithm(equation)}")
| """
Author: Alexander Joslin
GitHub: github.com/echoaj
Explanation: https://medium.com/@haleesammar/implemented-in-js-dijkstras-2-stack-
algorithm-for-evaluating-mathematical-expressions-fc0837dae1ea
We can use Dijkstra's two stack algorithm to solve an equation
such as: (5 + ((4 * 2) * (2 + 3)))
THESE ARE THE ALGORITHM'S RULES:
RULE 1: Scan the expression from left to right. When an operand is encountered,
push it onto the the operand stack.
RULE 2: When an operator is encountered in the expression,
push it onto the operator stack.
RULE 3: When a left parenthesis is encountered in the expression, ignore it.
RULE 4: When a right parenthesis is encountered in the expression,
pop an operator off the operator stack. The two operands it must
operate on must be the last two operands pushed onto the operand stack.
We therefore pop the operand stack twice, perform the operation,
and push the result back onto the operand stack so it will be available
for use as an operand of the next operator popped off the operator stack.
RULE 5: When the entire infix expression has been scanned, the value left on
the operand stack represents the value of the expression.
NOTE: It only works with whole numbers.
"""
__author__ = "Alexander Joslin"
import operator as op
from .stack import Stack
def dijkstras_two_stack_algorithm(equation: str) -> int:
"""
DocTests
>>> dijkstras_two_stack_algorithm("(5 + 3)")
8
>>> dijkstras_two_stack_algorithm("((9 - (2 + 9)) + (8 - 1))")
5
>>> dijkstras_two_stack_algorithm("((((3 - 2) - (2 + 3)) + (2 - 4)) + 3)")
-3
:param equation: a string
:return: result: an integer
"""
operators = {"*": op.mul, "/": op.truediv, "+": op.add, "-": op.sub}
operand_stack = Stack()
operator_stack = Stack()
for i in equation:
if i.isdigit():
# RULE 1
operand_stack.push(int(i))
elif i in operators:
# RULE 2
operator_stack.push(i)
elif i == ")":
# RULE 4
opr = operator_stack.peek()
operator_stack.pop()
num1 = operand_stack.peek()
operand_stack.pop()
num2 = operand_stack.peek()
operand_stack.pop()
total = operators[opr](num2, num1)
operand_stack.push(total)
# RULE 5
return operand_stack.peek()
if __name__ == "__main__":
equation = "(5 + ((4 * 2) * (2 + 3)))"
# answer = 45
print(f"{equation} = {dijkstras_two_stack_algorithm(equation)}")
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
def stable_matching(
donor_pref: list[list[int]], recipient_pref: list[list[int]]
) -> list[int]:
"""
Finds the stable match in any bipartite graph, i.e a pairing where no 2 objects
prefer each other over their partner. The function accepts the preferences of
oegan donors and recipients (where both are assigned numbers from 0 to n-1) and
returns a list where the index position corresponds to the donor and value at the
index is the organ recipient.
To better understand the algorithm, see also:
https://github.com/akashvshroff/Gale_Shapley_Stable_Matching (README).
https://www.youtube.com/watch?v=Qcv1IqHWAzg&t=13s (Numberphile YouTube).
>>> donor_pref = [[0, 1, 3, 2], [0, 2, 3, 1], [1, 0, 2, 3], [0, 3, 1, 2]]
>>> recipient_pref = [[3, 1, 2, 0], [3, 1, 0, 2], [0, 3, 1, 2], [1, 0, 3, 2]]
>>> print(stable_matching(donor_pref, recipient_pref))
[1, 2, 3, 0]
"""
assert len(donor_pref) == len(recipient_pref)
n = len(donor_pref)
unmatched_donors = list(range(n))
donor_record = [-1] * n # who the donor has donated to
rec_record = [-1] * n # who the recipient has received from
num_donations = [0] * n
while unmatched_donors:
donor = unmatched_donors[0]
donor_preference = donor_pref[donor]
recipient = donor_preference[num_donations[donor]]
num_donations[donor] += 1
rec_preference = recipient_pref[recipient]
prev_donor = rec_record[recipient]
if prev_donor != -1:
if rec_preference.index(prev_donor) > rec_preference.index(donor):
rec_record[recipient] = donor
donor_record[donor] = recipient
unmatched_donors.append(prev_donor)
unmatched_donors.remove(donor)
else:
rec_record[recipient] = donor
donor_record[donor] = recipient
unmatched_donors.remove(donor)
return donor_record
| from __future__ import annotations
def stable_matching(
donor_pref: list[list[int]], recipient_pref: list[list[int]]
) -> list[int]:
"""
Finds the stable match in any bipartite graph, i.e a pairing where no 2 objects
prefer each other over their partner. The function accepts the preferences of
oegan donors and recipients (where both are assigned numbers from 0 to n-1) and
returns a list where the index position corresponds to the donor and value at the
index is the organ recipient.
To better understand the algorithm, see also:
https://github.com/akashvshroff/Gale_Shapley_Stable_Matching (README).
https://www.youtube.com/watch?v=Qcv1IqHWAzg&t=13s (Numberphile YouTube).
>>> donor_pref = [[0, 1, 3, 2], [0, 2, 3, 1], [1, 0, 2, 3], [0, 3, 1, 2]]
>>> recipient_pref = [[3, 1, 2, 0], [3, 1, 0, 2], [0, 3, 1, 2], [1, 0, 3, 2]]
>>> print(stable_matching(donor_pref, recipient_pref))
[1, 2, 3, 0]
"""
assert len(donor_pref) == len(recipient_pref)
n = len(donor_pref)
unmatched_donors = list(range(n))
donor_record = [-1] * n # who the donor has donated to
rec_record = [-1] * n # who the recipient has received from
num_donations = [0] * n
while unmatched_donors:
donor = unmatched_donors[0]
donor_preference = donor_pref[donor]
recipient = donor_preference[num_donations[donor]]
num_donations[donor] += 1
rec_preference = recipient_pref[recipient]
prev_donor = rec_record[recipient]
if prev_donor != -1:
if rec_preference.index(prev_donor) > rec_preference.index(donor):
rec_record[recipient] = donor
donor_record[donor] = recipient
unmatched_donors.append(prev_donor)
unmatched_donors.remove(donor)
else:
rec_record[recipient] = donor
donor_record[donor] = recipient
unmatched_donors.remove(donor)
return donor_record
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Print all the Catalan numbers from 0 to n, n being the user input.
* The Catalan numbers are a sequence of positive integers that
* appear in many counting problems in combinatorics [1]. Such
* problems include counting [2]:
* - The number of Dyck words of length 2n
* - The number well-formed expressions with n pairs of parentheses
* (e.g., `()()` is valid but `())(` is not)
* - The number of different ways n + 1 factors can be completely
* parenthesized (e.g., for n = 2, C(n) = 2 and (ab)c and a(bc)
* are the two valid ways to parenthesize.
* - The number of full binary trees with n + 1 leaves
* A Catalan number satisfies the following recurrence relation
* which we will use in this algorithm [1].
* C(0) = C(1) = 1
* C(n) = sum(C(i).C(n-i-1)), from i = 0 to n-1
* In addition, the n-th Catalan number can be calculated using
* the closed form formula below [1]:
* C(n) = (1 / (n + 1)) * (2n choose n)
* Sources:
* [1] https://brilliant.org/wiki/catalan-numbers/
* [2] https://en.wikipedia.org/wiki/Catalan_number
"""
def catalan_numbers(upper_limit: int) -> "list[int]":
"""
Return a list of the Catalan number sequence from 0 through `upper_limit`.
>>> catalan_numbers(5)
[1, 1, 2, 5, 14, 42]
>>> catalan_numbers(2)
[1, 1, 2]
>>> catalan_numbers(-1)
Traceback (most recent call last):
ValueError: Limit for the Catalan sequence must be ≥ 0
"""
if upper_limit < 0:
raise ValueError("Limit for the Catalan sequence must be ≥ 0")
catalan_list = [0] * (upper_limit + 1)
# Base case: C(0) = C(1) = 1
catalan_list[0] = 1
if upper_limit > 0:
catalan_list[1] = 1
# Recurrence relation: C(i) = sum(C(j).C(i-j-1)), from j = 0 to i
for i in range(2, upper_limit + 1):
for j in range(i):
catalan_list[i] += catalan_list[j] * catalan_list[i - j - 1]
return catalan_list
if __name__ == "__main__":
print("\n********* Catalan Numbers Using Dynamic Programming ************\n")
print("\n*** Enter -1 at any time to quit ***")
print("\nEnter the upper limit (≥ 0) for the Catalan number sequence: ", end="")
try:
while True:
N = int(input().strip())
if N < 0:
print("\n********* Goodbye!! ************")
break
else:
print(f"The Catalan numbers from 0 through {N} are:")
print(catalan_numbers(N))
print("Try another upper limit for the sequence: ", end="")
except (NameError, ValueError):
print("\n********* Invalid input, goodbye! ************\n")
import doctest
doctest.testmod()
| """
Print all the Catalan numbers from 0 to n, n being the user input.
* The Catalan numbers are a sequence of positive integers that
* appear in many counting problems in combinatorics [1]. Such
* problems include counting [2]:
* - The number of Dyck words of length 2n
* - The number well-formed expressions with n pairs of parentheses
* (e.g., `()()` is valid but `())(` is not)
* - The number of different ways n + 1 factors can be completely
* parenthesized (e.g., for n = 2, C(n) = 2 and (ab)c and a(bc)
* are the two valid ways to parenthesize.
* - The number of full binary trees with n + 1 leaves
* A Catalan number satisfies the following recurrence relation
* which we will use in this algorithm [1].
* C(0) = C(1) = 1
* C(n) = sum(C(i).C(n-i-1)), from i = 0 to n-1
* In addition, the n-th Catalan number can be calculated using
* the closed form formula below [1]:
* C(n) = (1 / (n + 1)) * (2n choose n)
* Sources:
* [1] https://brilliant.org/wiki/catalan-numbers/
* [2] https://en.wikipedia.org/wiki/Catalan_number
"""
def catalan_numbers(upper_limit: int) -> "list[int]":
"""
Return a list of the Catalan number sequence from 0 through `upper_limit`.
>>> catalan_numbers(5)
[1, 1, 2, 5, 14, 42]
>>> catalan_numbers(2)
[1, 1, 2]
>>> catalan_numbers(-1)
Traceback (most recent call last):
ValueError: Limit for the Catalan sequence must be ≥ 0
"""
if upper_limit < 0:
raise ValueError("Limit for the Catalan sequence must be ≥ 0")
catalan_list = [0] * (upper_limit + 1)
# Base case: C(0) = C(1) = 1
catalan_list[0] = 1
if upper_limit > 0:
catalan_list[1] = 1
# Recurrence relation: C(i) = sum(C(j).C(i-j-1)), from j = 0 to i
for i in range(2, upper_limit + 1):
for j in range(i):
catalan_list[i] += catalan_list[j] * catalan_list[i - j - 1]
return catalan_list
if __name__ == "__main__":
print("\n********* Catalan Numbers Using Dynamic Programming ************\n")
print("\n*** Enter -1 at any time to quit ***")
print("\nEnter the upper limit (≥ 0) for the Catalan number sequence: ", end="")
try:
while True:
N = int(input().strip())
if N < 0:
print("\n********* Goodbye!! ************")
break
else:
print(f"The Catalan numbers from 0 through {N} are:")
print(catalan_numbers(N))
print("Try another upper limit for the sequence: ", end="")
except (NameError, ValueError):
print("\n********* Invalid input, goodbye! ************\n")
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 8: https://projecteuler.net/problem=8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest
product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the
greatest product. What is the value of this product?
"""
import sys
N = """73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450"""
def str_eval(s: str) -> int:
"""
Returns product of digits in given string n
>>> str_eval("987654321")
362880
>>> str_eval("22222222")
256
"""
product = 1
for digit in s:
product *= int(digit)
return product
def solution(n: str = N) -> int:
"""
Find the thirteen adjacent digits in the 1000-digit number n that have
the greatest product and returns it.
"""
largest_product = -sys.maxsize - 1
substr = n[:13]
cur_index = 13
while cur_index < len(n) - 13:
if int(n[cur_index]) >= int(substr[0]):
substr = substr[1:] + n[cur_index]
cur_index += 1
else:
largest_product = max(largest_product, str_eval(substr))
substr = n[cur_index : cur_index + 13]
cur_index += 13
return largest_product
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 8: https://projecteuler.net/problem=8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest
product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the
greatest product. What is the value of this product?
"""
import sys
N = """73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450"""
def str_eval(s: str) -> int:
"""
Returns product of digits in given string n
>>> str_eval("987654321")
362880
>>> str_eval("22222222")
256
"""
product = 1
for digit in s:
product *= int(digit)
return product
def solution(n: str = N) -> int:
"""
Find the thirteen adjacent digits in the 1000-digit number n that have
the greatest product and returns it.
"""
largest_product = -sys.maxsize - 1
substr = n[:13]
cur_index = 13
while cur_index < len(n) - 13:
if int(n[cur_index]) >= int(substr[0]):
substr = substr[1:] + n[cur_index]
cur_index += 1
else:
largest_product = max(largest_product, str_eval(substr))
substr = n[cur_index : cur_index + 13]
cur_index += 13
return largest_product
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import cv2
import numpy as np
from digital_image_processing.filters.convolve import img_convolve
from digital_image_processing.filters.sobel_filter import sobel_filter
PI = 180
def gen_gaussian_kernel(k_size, sigma):
center = k_size // 2
x, y = np.mgrid[0 - center : k_size - center, 0 - center : k_size - center]
g = (
1
/ (2 * np.pi * sigma)
* np.exp(-(np.square(x) + np.square(y)) / (2 * np.square(sigma)))
)
return g
def canny(image, threshold_low=15, threshold_high=30, weak=128, strong=255):
image_row, image_col = image.shape[0], image.shape[1]
# gaussian_filter
gaussian_out = img_convolve(image, gen_gaussian_kernel(9, sigma=1.4))
# get the gradient and degree by sobel_filter
sobel_grad, sobel_theta = sobel_filter(gaussian_out)
gradient_direction = np.rad2deg(sobel_theta)
gradient_direction += PI
dst = np.zeros((image_row, image_col))
"""
Non-maximum suppression. If the edge strength of the current pixel is the largest
compared to the other pixels in the mask with the same direction, the value will be
preserved. Otherwise, the value will be suppressed.
"""
for row in range(1, image_row - 1):
for col in range(1, image_col - 1):
direction = gradient_direction[row, col]
if (
0 <= direction < 22.5
or 15 * PI / 8 <= direction <= 2 * PI
or 7 * PI / 8 <= direction <= 9 * PI / 8
):
W = sobel_grad[row, col - 1]
E = sobel_grad[row, col + 1]
if sobel_grad[row, col] >= W and sobel_grad[row, col] >= E:
dst[row, col] = sobel_grad[row, col]
elif (PI / 8 <= direction < 3 * PI / 8) or (
9 * PI / 8 <= direction < 11 * PI / 8
):
SW = sobel_grad[row + 1, col - 1]
NE = sobel_grad[row - 1, col + 1]
if sobel_grad[row, col] >= SW and sobel_grad[row, col] >= NE:
dst[row, col] = sobel_grad[row, col]
elif (3 * PI / 8 <= direction < 5 * PI / 8) or (
11 * PI / 8 <= direction < 13 * PI / 8
):
N = sobel_grad[row - 1, col]
S = sobel_grad[row + 1, col]
if sobel_grad[row, col] >= N and sobel_grad[row, col] >= S:
dst[row, col] = sobel_grad[row, col]
elif (5 * PI / 8 <= direction < 7 * PI / 8) or (
13 * PI / 8 <= direction < 15 * PI / 8
):
NW = sobel_grad[row - 1, col - 1]
SE = sobel_grad[row + 1, col + 1]
if sobel_grad[row, col] >= NW and sobel_grad[row, col] >= SE:
dst[row, col] = sobel_grad[row, col]
"""
High-Low threshold detection. If an edge pixel’s gradient value is higher
than the high threshold value, it is marked as a strong edge pixel. If an
edge pixel’s gradient value is smaller than the high threshold value and
larger than the low threshold value, it is marked as a weak edge pixel. If
an edge pixel's value is smaller than the low threshold value, it will be
suppressed.
"""
if dst[row, col] >= threshold_high:
dst[row, col] = strong
elif dst[row, col] <= threshold_low:
dst[row, col] = 0
else:
dst[row, col] = weak
"""
Edge tracking. Usually a weak edge pixel caused from true edges will be connected
to a strong edge pixel while noise responses are unconnected. As long as there is
one strong edge pixel that is involved in its 8-connected neighborhood, that weak
edge point can be identified as one that should be preserved.
"""
for row in range(1, image_row):
for col in range(1, image_col):
if dst[row, col] == weak:
if 255 in (
dst[row, col + 1],
dst[row, col - 1],
dst[row - 1, col],
dst[row + 1, col],
dst[row - 1, col - 1],
dst[row + 1, col - 1],
dst[row - 1, col + 1],
dst[row + 1, col + 1],
):
dst[row, col] = strong
else:
dst[row, col] = 0
return dst
if __name__ == "__main__":
# read original image in gray mode
lena = cv2.imread(r"../image_data/lena.jpg", 0)
# canny edge detection
canny_dst = canny(lena)
cv2.imshow("canny", canny_dst)
cv2.waitKey(0)
| import cv2
import numpy as np
from digital_image_processing.filters.convolve import img_convolve
from digital_image_processing.filters.sobel_filter import sobel_filter
PI = 180
def gen_gaussian_kernel(k_size, sigma):
center = k_size // 2
x, y = np.mgrid[0 - center : k_size - center, 0 - center : k_size - center]
g = (
1
/ (2 * np.pi * sigma)
* np.exp(-(np.square(x) + np.square(y)) / (2 * np.square(sigma)))
)
return g
def canny(image, threshold_low=15, threshold_high=30, weak=128, strong=255):
image_row, image_col = image.shape[0], image.shape[1]
# gaussian_filter
gaussian_out = img_convolve(image, gen_gaussian_kernel(9, sigma=1.4))
# get the gradient and degree by sobel_filter
sobel_grad, sobel_theta = sobel_filter(gaussian_out)
gradient_direction = np.rad2deg(sobel_theta)
gradient_direction += PI
dst = np.zeros((image_row, image_col))
"""
Non-maximum suppression. If the edge strength of the current pixel is the largest
compared to the other pixels in the mask with the same direction, the value will be
preserved. Otherwise, the value will be suppressed.
"""
for row in range(1, image_row - 1):
for col in range(1, image_col - 1):
direction = gradient_direction[row, col]
if (
0 <= direction < 22.5
or 15 * PI / 8 <= direction <= 2 * PI
or 7 * PI / 8 <= direction <= 9 * PI / 8
):
W = sobel_grad[row, col - 1]
E = sobel_grad[row, col + 1]
if sobel_grad[row, col] >= W and sobel_grad[row, col] >= E:
dst[row, col] = sobel_grad[row, col]
elif (PI / 8 <= direction < 3 * PI / 8) or (
9 * PI / 8 <= direction < 11 * PI / 8
):
SW = sobel_grad[row + 1, col - 1]
NE = sobel_grad[row - 1, col + 1]
if sobel_grad[row, col] >= SW and sobel_grad[row, col] >= NE:
dst[row, col] = sobel_grad[row, col]
elif (3 * PI / 8 <= direction < 5 * PI / 8) or (
11 * PI / 8 <= direction < 13 * PI / 8
):
N = sobel_grad[row - 1, col]
S = sobel_grad[row + 1, col]
if sobel_grad[row, col] >= N and sobel_grad[row, col] >= S:
dst[row, col] = sobel_grad[row, col]
elif (5 * PI / 8 <= direction < 7 * PI / 8) or (
13 * PI / 8 <= direction < 15 * PI / 8
):
NW = sobel_grad[row - 1, col - 1]
SE = sobel_grad[row + 1, col + 1]
if sobel_grad[row, col] >= NW and sobel_grad[row, col] >= SE:
dst[row, col] = sobel_grad[row, col]
"""
High-Low threshold detection. If an edge pixel’s gradient value is higher
than the high threshold value, it is marked as a strong edge pixel. If an
edge pixel’s gradient value is smaller than the high threshold value and
larger than the low threshold value, it is marked as a weak edge pixel. If
an edge pixel's value is smaller than the low threshold value, it will be
suppressed.
"""
if dst[row, col] >= threshold_high:
dst[row, col] = strong
elif dst[row, col] <= threshold_low:
dst[row, col] = 0
else:
dst[row, col] = weak
"""
Edge tracking. Usually a weak edge pixel caused from true edges will be connected
to a strong edge pixel while noise responses are unconnected. As long as there is
one strong edge pixel that is involved in its 8-connected neighborhood, that weak
edge point can be identified as one that should be preserved.
"""
for row in range(1, image_row):
for col in range(1, image_col):
if dst[row, col] == weak:
if 255 in (
dst[row, col + 1],
dst[row, col - 1],
dst[row - 1, col],
dst[row + 1, col],
dst[row - 1, col - 1],
dst[row + 1, col - 1],
dst[row - 1, col + 1],
dst[row + 1, col + 1],
):
dst[row, col] = strong
else:
dst[row, col] = 0
return dst
if __name__ == "__main__":
# read original image in gray mode
lena = cv2.imread(r"../image_data/lena.jpg", 0)
# canny edge detection
canny_dst = canny(lena)
cv2.imshow("canny", canny_dst)
cv2.waitKey(0)
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
- https://en.wikipedia.org/wiki/Euclidean_algorithm
- https://en.wikipedia.org/wiki/Least_common_multiple
"""
def gcd(x: int, y: int) -> int:
"""
Euclidean GCD algorithm (Greatest Common Divisor)
>>> gcd(0, 0)
0
>>> gcd(23, 42)
1
>>> gcd(15, 33)
3
>>> gcd(12345, 67890)
15
"""
return x if y == 0 else gcd(y, x % y)
def lcm(x: int, y: int) -> int:
"""
Least Common Multiple.
Using the property that lcm(a, b) * gcd(a, b) = a*b
>>> lcm(3, 15)
15
>>> lcm(1, 27)
27
>>> lcm(13, 27)
351
>>> lcm(64, 48)
192
"""
return (x * y) // gcd(x, y)
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
"""
g = 1
for i in range(1, n + 1):
g = lcm(g, i)
return g
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
- https://en.wikipedia.org/wiki/Euclidean_algorithm
- https://en.wikipedia.org/wiki/Least_common_multiple
"""
def gcd(x: int, y: int) -> int:
"""
Euclidean GCD algorithm (Greatest Common Divisor)
>>> gcd(0, 0)
0
>>> gcd(23, 42)
1
>>> gcd(15, 33)
3
>>> gcd(12345, 67890)
15
"""
return x if y == 0 else gcd(y, x % y)
def lcm(x: int, y: int) -> int:
"""
Least Common Multiple.
Using the property that lcm(a, b) * gcd(a, b) = a*b
>>> lcm(3, 15)
15
>>> lcm(1, 27)
27
>>> lcm(13, 27)
351
>>> lcm(64, 48)
192
"""
return (x * y) // gcd(x, y)
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
"""
g = 1
for i in range(1, n + 1):
g = lcm(g, i)
return g
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Hey, we are going to find an exciting number called Catalan number which is use to find
the number of possible binary search trees from tree of a given number of nodes.
We will use the formula: t(n) = SUMMATION(i = 1 to n)t(i-1)t(n-i)
Further details at Wikipedia: https://en.wikipedia.org/wiki/Catalan_number
"""
"""
Our Contribution:
Basically we Create the 2 function:
1. catalan_number(node_count: int) -> int
Returns the number of possible binary search trees for n nodes.
2. binary_tree_count(node_count: int) -> int
Returns the number of possible binary trees for n nodes.
"""
def binomial_coefficient(n: int, k: int) -> int:
"""
Since Here we Find the Binomial Coefficient:
https://en.wikipedia.org/wiki/Binomial_coefficient
C(n,k) = n! / k!(n-k)!
:param n: 2 times of Number of nodes
:param k: Number of nodes
:return: Integer Value
>>> binomial_coefficient(4, 2)
6
"""
result = 1 # To kept the Calculated Value
# Since C(n, k) = C(n, n-k)
if k > (n - k):
k = n - k
# Calculate C(n,k)
for i in range(k):
result *= n - i
result //= i + 1
return result
def catalan_number(node_count: int) -> int:
"""
We can find Catalan number many ways but here we use Binomial Coefficient because it
does the job in O(n)
return the Catalan number of n using 2nCn/(n+1).
:param n: number of nodes
:return: Catalan number of n nodes
>>> catalan_number(5)
42
>>> catalan_number(6)
132
"""
return binomial_coefficient(2 * node_count, node_count) // (node_count + 1)
def factorial(n: int) -> int:
"""
Return the factorial of a number.
:param n: Number to find the Factorial of.
:return: Factorial of n.
>>> import math
>>> all(factorial(i) == math.factorial(i) for i in range(10))
True
>>> factorial(-5) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: factorial() not defined for negative values
"""
if n < 0:
raise ValueError("factorial() not defined for negative values")
result = 1
for i in range(1, n + 1):
result *= i
return result
def binary_tree_count(node_count: int) -> int:
"""
Return the number of possible of binary trees.
:param n: number of nodes
:return: Number of possible binary trees
>>> binary_tree_count(5)
5040
>>> binary_tree_count(6)
95040
"""
return catalan_number(node_count) * factorial(node_count)
if __name__ == "__main__":
node_count = int(input("Enter the number of nodes: ").strip() or 0)
if node_count <= 0:
raise ValueError("We need some nodes to work with.")
print(
f"Given {node_count} nodes, there are {binary_tree_count(node_count)} "
f"binary trees and {catalan_number(node_count)} binary search trees."
)
| """
Hey, we are going to find an exciting number called Catalan number which is use to find
the number of possible binary search trees from tree of a given number of nodes.
We will use the formula: t(n) = SUMMATION(i = 1 to n)t(i-1)t(n-i)
Further details at Wikipedia: https://en.wikipedia.org/wiki/Catalan_number
"""
"""
Our Contribution:
Basically we Create the 2 function:
1. catalan_number(node_count: int) -> int
Returns the number of possible binary search trees for n nodes.
2. binary_tree_count(node_count: int) -> int
Returns the number of possible binary trees for n nodes.
"""
def binomial_coefficient(n: int, k: int) -> int:
"""
Since Here we Find the Binomial Coefficient:
https://en.wikipedia.org/wiki/Binomial_coefficient
C(n,k) = n! / k!(n-k)!
:param n: 2 times of Number of nodes
:param k: Number of nodes
:return: Integer Value
>>> binomial_coefficient(4, 2)
6
"""
result = 1 # To kept the Calculated Value
# Since C(n, k) = C(n, n-k)
if k > (n - k):
k = n - k
# Calculate C(n,k)
for i in range(k):
result *= n - i
result //= i + 1
return result
def catalan_number(node_count: int) -> int:
"""
We can find Catalan number many ways but here we use Binomial Coefficient because it
does the job in O(n)
return the Catalan number of n using 2nCn/(n+1).
:param n: number of nodes
:return: Catalan number of n nodes
>>> catalan_number(5)
42
>>> catalan_number(6)
132
"""
return binomial_coefficient(2 * node_count, node_count) // (node_count + 1)
def factorial(n: int) -> int:
"""
Return the factorial of a number.
:param n: Number to find the Factorial of.
:return: Factorial of n.
>>> import math
>>> all(factorial(i) == math.factorial(i) for i in range(10))
True
>>> factorial(-5) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: factorial() not defined for negative values
"""
if n < 0:
raise ValueError("factorial() not defined for negative values")
result = 1
for i in range(1, n + 1):
result *= i
return result
def binary_tree_count(node_count: int) -> int:
"""
Return the number of possible of binary trees.
:param n: number of nodes
:return: Number of possible binary trees
>>> binary_tree_count(5)
5040
>>> binary_tree_count(6)
95040
"""
return catalan_number(node_count) * factorial(node_count)
if __name__ == "__main__":
node_count = int(input("Enter the number of nodes: ").strip() or 0)
if node_count <= 0:
raise ValueError("We need some nodes to work with.")
print(
f"Given {node_count} nodes, there are {binary_tree_count(node_count)} "
f"binary trees and {catalan_number(node_count)} binary search trees."
)
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This algorithm helps you to swap cases.
User will give input and then program will perform swap cases.
In other words, convert all lowercase letters to uppercase letters and vice versa.
For example:
1. Please input sentence: Algorithm.Python@89
aLGORITHM.pYTHON@89
2. Please input sentence: github.com/mayur200
GITHUB.COM/MAYUR200
"""
def swap_case(sentence: str) -> str:
"""
This function will convert all lowercase letters to uppercase letters
and vice versa.
>>> swap_case('Algorithm.Python@89')
'aLGORITHM.pYTHON@89'
"""
new_string = ""
for char in sentence:
if char.isupper():
new_string += char.lower()
elif char.islower():
new_string += char.upper()
else:
new_string += char
return new_string
if __name__ == "__main__":
print(swap_case(input("Please input sentence: ")))
| """
This algorithm helps you to swap cases.
User will give input and then program will perform swap cases.
In other words, convert all lowercase letters to uppercase letters and vice versa.
For example:
1. Please input sentence: Algorithm.Python@89
aLGORITHM.pYTHON@89
2. Please input sentence: github.com/mayur200
GITHUB.COM/MAYUR200
"""
def swap_case(sentence: str) -> str:
"""
This function will convert all lowercase letters to uppercase letters
and vice versa.
>>> swap_case('Algorithm.Python@89')
'aLGORITHM.pYTHON@89'
"""
new_string = ""
for char in sentence:
if char.isupper():
new_string += char.lower()
elif char.islower():
new_string += char.upper()
else:
new_string += char
return new_string
if __name__ == "__main__":
print(swap_case(input("Please input sentence: ")))
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 28
Url: https://projecteuler.net/problem=28
Statement:
Starting with the number 1 and moving to the right in a clockwise direction a 5
by 5 spiral is formed as follows:
21 22 23 24 25
20 7 8 9 10
19 6 1 2 11
18 5 4 3 12
17 16 15 14 13
It can be verified that the sum of the numbers on the diagonals is 101.
What is the sum of the numbers on the diagonals in a 1001 by 1001 spiral formed
in the same way?
"""
from math import ceil
def solution(n: int = 1001) -> int:
"""Returns the sum of the numbers on the diagonals in a n by n spiral
formed in the same way.
>>> solution(1001)
669171001
>>> solution(500)
82959497
>>> solution(100)
651897
>>> solution(50)
79697
>>> solution(10)
537
"""
total = 1
for i in range(1, int(ceil(n / 2.0))):
odd = 2 * i + 1
even = 2 * i
total = total + 4 * odd ** 2 - 6 * even
return total
if __name__ == "__main__":
import sys
if len(sys.argv) == 1:
print(solution())
else:
try:
n = int(sys.argv[1])
print(solution(n))
except ValueError:
print("Invalid entry - please enter a number")
| """
Problem 28
Url: https://projecteuler.net/problem=28
Statement:
Starting with the number 1 and moving to the right in a clockwise direction a 5
by 5 spiral is formed as follows:
21 22 23 24 25
20 7 8 9 10
19 6 1 2 11
18 5 4 3 12
17 16 15 14 13
It can be verified that the sum of the numbers on the diagonals is 101.
What is the sum of the numbers on the diagonals in a 1001 by 1001 spiral formed
in the same way?
"""
from math import ceil
def solution(n: int = 1001) -> int:
"""Returns the sum of the numbers on the diagonals in a n by n spiral
formed in the same way.
>>> solution(1001)
669171001
>>> solution(500)
82959497
>>> solution(100)
651897
>>> solution(50)
79697
>>> solution(10)
537
"""
total = 1
for i in range(1, int(ceil(n / 2.0))):
odd = 2 * i + 1
even = 2 * i
total = total + 4 * odd ** 2 - 6 * even
return total
if __name__ == "__main__":
import sys
if len(sys.argv) == 1:
print(solution())
else:
try:
n = int(sys.argv[1])
print(solution(n))
except ValueError:
print("Invalid entry - please enter a number")
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from collections import deque
from .hash_table import HashTable
class HashTableWithLinkedList(HashTable):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def _set_value(self, key, data):
self.values[key] = deque([]) if self.values[key] is None else self.values[key]
self.values[key].appendleft(data)
self._keys[key] = self.values[key]
def balanced_factor(self):
return (
sum(self.charge_factor - len(slot) for slot in self.values)
/ self.size_table
* self.charge_factor
)
def _collision_resolution(self, key, data=None):
if not (
len(self.values[key]) == self.charge_factor and self.values.count(None) == 0
):
return key
return super()._collision_resolution(key, data)
| from collections import deque
from .hash_table import HashTable
class HashTableWithLinkedList(HashTable):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def _set_value(self, key, data):
self.values[key] = deque([]) if self.values[key] is None else self.values[key]
self.values[key].appendleft(data)
self._keys[key] = self.values[key]
def balanced_factor(self):
return (
sum(self.charge_factor - len(slot) for slot in self.values)
/ self.size_table
* self.charge_factor
)
def _collision_resolution(self, key, data=None):
if not (
len(self.values[key]) == self.charge_factor and self.values.count(None) == 0
):
return key
return super()._collision_resolution(key, data)
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 125: https://projecteuler.net/problem=125
The palindromic number 595 is interesting because it can be written as the sum
of consecutive squares: 6^2 + 7^2 + 8^2 + 9^2 + 10^2 + 11^2 + 12^2.
There are exactly eleven palindromes below one-thousand that can be written as
consecutive square sums, and the sum of these palindromes is 4164. Note that
1 = 0^2 + 1^2 has not been included as this problem is concerned with the
squares of positive integers.
Find the sum of all the numbers less than 10^8 that are both palindromic and can
be written as the sum of consecutive squares.
"""
def is_palindrome(n: int) -> bool:
"""
Check if an integer is palindromic.
>>> is_palindrome(12521)
True
>>> is_palindrome(12522)
False
>>> is_palindrome(12210)
False
"""
if n % 10 == 0:
return False
s = str(n)
return s == s[::-1]
def solution() -> int:
"""
Returns the sum of all numbers less than 1e8 that are both palindromic and
can be written as the sum of consecutive squares.
"""
LIMIT = 10 ** 8
answer = set()
first_square = 1
sum_squares = 5
while sum_squares < LIMIT:
last_square = first_square + 1
while sum_squares < LIMIT:
if is_palindrome(sum_squares):
answer.add(sum_squares)
last_square += 1
sum_squares += last_square ** 2
first_square += 1
sum_squares = first_square ** 2 + (first_square + 1) ** 2
return sum(answer)
if __name__ == "__main__":
print(solution())
| """
Problem 125: https://projecteuler.net/problem=125
The palindromic number 595 is interesting because it can be written as the sum
of consecutive squares: 6^2 + 7^2 + 8^2 + 9^2 + 10^2 + 11^2 + 12^2.
There are exactly eleven palindromes below one-thousand that can be written as
consecutive square sums, and the sum of these palindromes is 4164. Note that
1 = 0^2 + 1^2 has not been included as this problem is concerned with the
squares of positive integers.
Find the sum of all the numbers less than 10^8 that are both palindromic and can
be written as the sum of consecutive squares.
"""
def is_palindrome(n: int) -> bool:
"""
Check if an integer is palindromic.
>>> is_palindrome(12521)
True
>>> is_palindrome(12522)
False
>>> is_palindrome(12210)
False
"""
if n % 10 == 0:
return False
s = str(n)
return s == s[::-1]
def solution() -> int:
"""
Returns the sum of all numbers less than 1e8 that are both palindromic and
can be written as the sum of consecutive squares.
"""
LIMIT = 10 ** 8
answer = set()
first_square = 1
sum_squares = 5
while sum_squares < LIMIT:
last_square = first_square + 1
while sum_squares < LIMIT:
if is_palindrome(sum_squares):
answer.add(sum_squares)
last_square += 1
sum_squares += last_square ** 2
first_square += 1
sum_squares = first_square ** 2 + (first_square + 1) ** 2
return sum(answer)
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://en.wikipedia.org/wiki/Hill_climbing
import math
class SearchProblem:
"""
An interface to define search problems.
The interface will be illustrated using the example of mathematical function.
"""
def __init__(self, x: int, y: int, step_size: int, function_to_optimize):
"""
The constructor of the search problem.
x: the x coordinate of the current search state.
y: the y coordinate of the current search state.
step_size: size of the step to take when looking for neighbors.
function_to_optimize: a function to optimize having the signature f(x, y).
"""
self.x = x
self.y = y
self.step_size = step_size
self.function = function_to_optimize
def score(self) -> int:
"""
Returns the output of the function called with current x and y coordinates.
>>> def test_function(x, y):
... return x + y
>>> SearchProblem(0, 0, 1, test_function).score() # 0 + 0 = 0
0
>>> SearchProblem(5, 7, 1, test_function).score() # 5 + 7 = 12
12
"""
return self.function(self.x, self.y)
def get_neighbors(self):
"""
Returns a list of coordinates of neighbors adjacent to the current coordinates.
Neighbors:
| 0 | 1 | 2 |
| 3 | _ | 4 |
| 5 | 6 | 7 |
"""
step_size = self.step_size
return [
SearchProblem(x, y, step_size, self.function)
for x, y in (
(self.x - step_size, self.y - step_size),
(self.x - step_size, self.y),
(self.x - step_size, self.y + step_size),
(self.x, self.y - step_size),
(self.x, self.y + step_size),
(self.x + step_size, self.y - step_size),
(self.x + step_size, self.y),
(self.x + step_size, self.y + step_size),
)
]
def __hash__(self):
"""
hash the string representation of the current search state.
"""
return hash(str(self))
def __eq__(self, obj):
"""
Check if the 2 objects are equal.
"""
if isinstance(obj, SearchProblem):
return hash(str(self)) == hash(str(obj))
return False
def __str__(self):
"""
string representation of the current search state.
>>> str(SearchProblem(0, 0, 1, None))
'x: 0 y: 0'
>>> str(SearchProblem(2, 5, 1, None))
'x: 2 y: 5'
"""
return f"x: {self.x} y: {self.y}"
def hill_climbing(
search_prob,
find_max: bool = True,
max_x: float = math.inf,
min_x: float = -math.inf,
max_y: float = math.inf,
min_y: float = -math.inf,
visualization: bool = False,
max_iter: int = 10000,
) -> SearchProblem:
"""
Implementation of the hill climbling algorithm.
We start with a given state, find all its neighbors,
move towards the neighbor which provides the maximum (or minimum) change.
We keep doing this until we are at a state where we do not have any
neighbors which can improve the solution.
Args:
search_prob: The search state at the start.
find_max: If True, the algorithm should find the maximum else the minimum.
max_x, min_x, max_y, min_y: the maximum and minimum bounds of x and y.
visualization: If True, a matplotlib graph is displayed.
max_iter: number of times to run the iteration.
Returns a search state having the maximum (or minimum) score.
"""
current_state = search_prob
scores = [] # list to store the current score at each iteration
iterations = 0
solution_found = False
visited = set()
while not solution_found and iterations < max_iter:
visited.add(current_state)
iterations += 1
current_score = current_state.score()
scores.append(current_score)
neighbors = current_state.get_neighbors()
max_change = -math.inf
min_change = math.inf
next_state = None # to hold the next best neighbor
for neighbor in neighbors:
if neighbor in visited:
continue # do not want to visit the same state again
if (
neighbor.x > max_x
or neighbor.x < min_x
or neighbor.y > max_y
or neighbor.y < min_y
):
continue # neighbor outside our bounds
change = neighbor.score() - current_score
if find_max: # finding max
# going to direction with greatest ascent
if change > max_change and change > 0:
max_change = change
next_state = neighbor
else: # finding min
# to direction with greatest descent
if change < min_change and change < 0:
min_change = change
next_state = neighbor
if next_state is not None:
# we found at least one neighbor which improved the current state
current_state = next_state
else:
# since we have no neighbor that improves the solution we stop the search
solution_found = True
if visualization:
from matplotlib import pyplot as plt
plt.plot(range(iterations), scores)
plt.xlabel("Iterations")
plt.ylabel("Function values")
plt.show()
return current_state
if __name__ == "__main__":
import doctest
doctest.testmod()
def test_f1(x, y):
return (x ** 2) + (y ** 2)
# starting the problem with initial coordinates (3, 4)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = hill_climbing(prob, find_max=False)
print(
"The minimum score for f(x, y) = x^2 + y^2 found via hill climbing: "
f"{local_min.score()}"
)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = hill_climbing(
prob, find_max=False, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The minimum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
def test_f2(x, y):
return (3 * x ** 2) - (6 * y)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = hill_climbing(prob, find_max=True)
print(
"The maximum score for f(x, y) = x^2 + y^2 found via hill climbing: "
f"{local_min.score()}"
)
| # https://en.wikipedia.org/wiki/Hill_climbing
import math
class SearchProblem:
"""
An interface to define search problems.
The interface will be illustrated using the example of mathematical function.
"""
def __init__(self, x: int, y: int, step_size: int, function_to_optimize):
"""
The constructor of the search problem.
x: the x coordinate of the current search state.
y: the y coordinate of the current search state.
step_size: size of the step to take when looking for neighbors.
function_to_optimize: a function to optimize having the signature f(x, y).
"""
self.x = x
self.y = y
self.step_size = step_size
self.function = function_to_optimize
def score(self) -> int:
"""
Returns the output of the function called with current x and y coordinates.
>>> def test_function(x, y):
... return x + y
>>> SearchProblem(0, 0, 1, test_function).score() # 0 + 0 = 0
0
>>> SearchProblem(5, 7, 1, test_function).score() # 5 + 7 = 12
12
"""
return self.function(self.x, self.y)
def get_neighbors(self):
"""
Returns a list of coordinates of neighbors adjacent to the current coordinates.
Neighbors:
| 0 | 1 | 2 |
| 3 | _ | 4 |
| 5 | 6 | 7 |
"""
step_size = self.step_size
return [
SearchProblem(x, y, step_size, self.function)
for x, y in (
(self.x - step_size, self.y - step_size),
(self.x - step_size, self.y),
(self.x - step_size, self.y + step_size),
(self.x, self.y - step_size),
(self.x, self.y + step_size),
(self.x + step_size, self.y - step_size),
(self.x + step_size, self.y),
(self.x + step_size, self.y + step_size),
)
]
def __hash__(self):
"""
hash the string representation of the current search state.
"""
return hash(str(self))
def __eq__(self, obj):
"""
Check if the 2 objects are equal.
"""
if isinstance(obj, SearchProblem):
return hash(str(self)) == hash(str(obj))
return False
def __str__(self):
"""
string representation of the current search state.
>>> str(SearchProblem(0, 0, 1, None))
'x: 0 y: 0'
>>> str(SearchProblem(2, 5, 1, None))
'x: 2 y: 5'
"""
return f"x: {self.x} y: {self.y}"
def hill_climbing(
search_prob,
find_max: bool = True,
max_x: float = math.inf,
min_x: float = -math.inf,
max_y: float = math.inf,
min_y: float = -math.inf,
visualization: bool = False,
max_iter: int = 10000,
) -> SearchProblem:
"""
Implementation of the hill climbling algorithm.
We start with a given state, find all its neighbors,
move towards the neighbor which provides the maximum (or minimum) change.
We keep doing this until we are at a state where we do not have any
neighbors which can improve the solution.
Args:
search_prob: The search state at the start.
find_max: If True, the algorithm should find the maximum else the minimum.
max_x, min_x, max_y, min_y: the maximum and minimum bounds of x and y.
visualization: If True, a matplotlib graph is displayed.
max_iter: number of times to run the iteration.
Returns a search state having the maximum (or minimum) score.
"""
current_state = search_prob
scores = [] # list to store the current score at each iteration
iterations = 0
solution_found = False
visited = set()
while not solution_found and iterations < max_iter:
visited.add(current_state)
iterations += 1
current_score = current_state.score()
scores.append(current_score)
neighbors = current_state.get_neighbors()
max_change = -math.inf
min_change = math.inf
next_state = None # to hold the next best neighbor
for neighbor in neighbors:
if neighbor in visited:
continue # do not want to visit the same state again
if (
neighbor.x > max_x
or neighbor.x < min_x
or neighbor.y > max_y
or neighbor.y < min_y
):
continue # neighbor outside our bounds
change = neighbor.score() - current_score
if find_max: # finding max
# going to direction with greatest ascent
if change > max_change and change > 0:
max_change = change
next_state = neighbor
else: # finding min
# to direction with greatest descent
if change < min_change and change < 0:
min_change = change
next_state = neighbor
if next_state is not None:
# we found at least one neighbor which improved the current state
current_state = next_state
else:
# since we have no neighbor that improves the solution we stop the search
solution_found = True
if visualization:
from matplotlib import pyplot as plt
plt.plot(range(iterations), scores)
plt.xlabel("Iterations")
plt.ylabel("Function values")
plt.show()
return current_state
if __name__ == "__main__":
import doctest
doctest.testmod()
def test_f1(x, y):
return (x ** 2) + (y ** 2)
# starting the problem with initial coordinates (3, 4)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = hill_climbing(prob, find_max=False)
print(
"The minimum score for f(x, y) = x^2 + y^2 found via hill climbing: "
f"{local_min.score()}"
)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = hill_climbing(
prob, find_max=False, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The minimum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
def test_f2(x, y):
return (3 * x ** 2) - (6 * y)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = hill_climbing(prob, find_max=True)
print(
"The maximum score for f(x, y) = x^2 + y^2 found via hill climbing: "
f"{local_min.score()}"
)
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
== Krishnamurthy Number ==
It is also known as Peterson Number
A Krishnamurthy Number is a number whose sum of the
factorial of the digits equals to the original
number itself.
For example: 145 = 1! + 4! + 5!
So, 145 is a Krishnamurthy Number
"""
def factorial(digit: int) -> int:
"""
>>> factorial(3)
6
>>> factorial(0)
1
>>> factorial(5)
120
"""
return 1 if digit in (0, 1) else (digit * factorial(digit - 1))
def krishnamurthy(number: int) -> bool:
"""
>>> krishnamurthy(145)
True
>>> krishnamurthy(240)
False
>>> krishnamurthy(1)
True
"""
factSum = 0
duplicate = number
while duplicate > 0:
duplicate, digit = divmod(duplicate, 10)
factSum += factorial(digit)
return factSum == number
if __name__ == "__main__":
print("Program to check whether a number is a Krisnamurthy Number or not.")
number = int(input("Enter number: ").strip())
print(
f"{number} is {'' if krishnamurthy(number) else 'not '}a Krishnamurthy Number."
)
| """
== Krishnamurthy Number ==
It is also known as Peterson Number
A Krishnamurthy Number is a number whose sum of the
factorial of the digits equals to the original
number itself.
For example: 145 = 1! + 4! + 5!
So, 145 is a Krishnamurthy Number
"""
def factorial(digit: int) -> int:
"""
>>> factorial(3)
6
>>> factorial(0)
1
>>> factorial(5)
120
"""
return 1 if digit in (0, 1) else (digit * factorial(digit - 1))
def krishnamurthy(number: int) -> bool:
"""
>>> krishnamurthy(145)
True
>>> krishnamurthy(240)
False
>>> krishnamurthy(1)
True
"""
factSum = 0
duplicate = number
while duplicate > 0:
duplicate, digit = divmod(duplicate, 10)
factSum += factorial(digit)
return factSum == number
if __name__ == "__main__":
print("Program to check whether a number is a Krisnamurthy Number or not.")
number = int(input("Enter number: ").strip())
print(
f"{number} is {'' if krishnamurthy(number) else 'not '}a Krishnamurthy Number."
)
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
def retroactive_resolution(coefficients: np.matrix, vector: np.ndarray) -> np.ndarray:
"""
This function performs a retroactive linear system resolution
for triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
sum = 0
for col in range(row + 1, columns):
sum += coefficients[row, col] * x[col]
x[row, 0] = (vector[row] - sum) / coefficients[row, row]
return x
def gaussian_elimination(coefficients: np.matrix, vector: np.ndarray) -> np.ndarray:
"""
This function performs Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
# coefficients must to be a square matrix so we need to check first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
def retroactive_resolution(coefficients: np.matrix, vector: np.ndarray) -> np.ndarray:
"""
This function performs a retroactive linear system resolution
for triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
sum = 0
for col in range(row + 1, columns):
sum += coefficients[row, col] * x[col]
x[row, 0] = (vector[row] - sum) / coefficients[row, row]
return x
def gaussian_elimination(coefficients: np.matrix, vector: np.ndarray) -> np.ndarray:
"""
This function performs Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
# coefficients must to be a square matrix so we need to check first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Functions useful for doing molecular chemistry:
* molarity_to_normality
* moles_to_pressure
* moles_to_volume
* pressure_and_volume_to_temperature
"""
def molarity_to_normality(nfactor: int, moles: float, volume: float) -> float:
"""
Convert molarity to normality.
Volume is taken in litres.
Wikipedia reference: https://en.wikipedia.org/wiki/Equivalent_concentration
Wikipedia reference: https://en.wikipedia.org/wiki/Molar_concentration
>>> molarity_to_normality(2, 3.1, 0.31)
20
>>> molarity_to_normality(4, 11.4, 5.7)
8
"""
return round(float(moles / volume) * nfactor)
def moles_to_pressure(volume: float, moles: float, temperature: float) -> float:
"""
Convert moles to pressure.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> moles_to_pressure(0.82, 3, 300)
90
>>> moles_to_pressure(8.2, 5, 200)
10
"""
return round(float((moles * 0.0821 * temperature) / (volume)))
def moles_to_volume(pressure: float, moles: float, temperature: float) -> float:
"""
Convert moles to volume.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> moles_to_volume(0.82, 3, 300)
90
>>> moles_to_volume(8.2, 5, 200)
10
"""
return round(float((moles * 0.0821 * temperature) / (pressure)))
def pressure_and_volume_to_temperature(
pressure: float, moles: float, volume: float
) -> float:
"""
Convert pressure and volume to temperature.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> pressure_and_volume_to_temperature(0.82, 1, 2)
20
>>> pressure_and_volume_to_temperature(8.2, 5, 3)
60
"""
return round(float((pressure * volume) / (0.0821 * moles)))
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Functions useful for doing molecular chemistry:
* molarity_to_normality
* moles_to_pressure
* moles_to_volume
* pressure_and_volume_to_temperature
"""
def molarity_to_normality(nfactor: int, moles: float, volume: float) -> float:
"""
Convert molarity to normality.
Volume is taken in litres.
Wikipedia reference: https://en.wikipedia.org/wiki/Equivalent_concentration
Wikipedia reference: https://en.wikipedia.org/wiki/Molar_concentration
>>> molarity_to_normality(2, 3.1, 0.31)
20
>>> molarity_to_normality(4, 11.4, 5.7)
8
"""
return round(float(moles / volume) * nfactor)
def moles_to_pressure(volume: float, moles: float, temperature: float) -> float:
"""
Convert moles to pressure.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> moles_to_pressure(0.82, 3, 300)
90
>>> moles_to_pressure(8.2, 5, 200)
10
"""
return round(float((moles * 0.0821 * temperature) / (volume)))
def moles_to_volume(pressure: float, moles: float, temperature: float) -> float:
"""
Convert moles to volume.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> moles_to_volume(0.82, 3, 300)
90
>>> moles_to_volume(8.2, 5, 200)
10
"""
return round(float((moles * 0.0821 * temperature) / (pressure)))
def pressure_and_volume_to_temperature(
pressure: float, moles: float, volume: float
) -> float:
"""
Convert pressure and volume to temperature.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> pressure_and_volume_to_temperature(0.82, 1, 2)
20
>>> pressure_and_volume_to_temperature(8.2, 5, 3)
60
"""
return round(float((pressure * volume) / (0.0821 * moles)))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Password generator allows you to generate a random password of length N."""
from random import choice, shuffle
from string import ascii_letters, digits, punctuation
def password_generator(length=8):
"""
>>> len(password_generator())
8
>>> len(password_generator(length=16))
16
>>> len(password_generator(257))
257
>>> len(password_generator(length=0))
0
>>> len(password_generator(-1))
0
"""
chars = ascii_letters + digits + punctuation
return "".join(choice(chars) for x in range(length))
# ALTERNATIVE METHODS
# ctbi= characters that must be in password
# i= how many letters or characters the password length will be
def alternative_password_generator(ctbi, i):
# Password generator = full boot with random_number, random_letters, and
# random_character FUNCTIONS
# Put your code here...
i = i - len(ctbi)
quotient = int(i / 3)
remainder = i % 3
# chars = ctbi + random_letters(ascii_letters, i / 3 + remainder) +
# random_number(digits, i / 3) + random_characters(punctuation, i / 3)
chars = (
ctbi
+ random(ascii_letters, quotient + remainder)
+ random(digits, quotient)
+ random(punctuation, quotient)
)
chars = list(chars)
shuffle(chars)
return "".join(chars)
# random is a generalised function for letters, characters and numbers
def random(ctbi, i):
return "".join(choice(ctbi) for x in range(i))
def random_number(ctbi, i):
pass # Put your code here...
def random_letters(ctbi, i):
pass # Put your code here...
def random_characters(ctbi, i):
pass # Put your code here...
def main():
length = int(input("Please indicate the max length of your password: ").strip())
ctbi = input(
"Please indicate the characters that must be in your password: "
).strip()
print("Password generated:", password_generator(length))
print(
"Alternative Password generated:", alternative_password_generator(ctbi, length)
)
print("[If you are thinking of using this passsword, You better save it.]")
if __name__ == "__main__":
main()
| """Password generator allows you to generate a random password of length N."""
from random import choice, shuffle
from string import ascii_letters, digits, punctuation
def password_generator(length=8):
"""
>>> len(password_generator())
8
>>> len(password_generator(length=16))
16
>>> len(password_generator(257))
257
>>> len(password_generator(length=0))
0
>>> len(password_generator(-1))
0
"""
chars = ascii_letters + digits + punctuation
return "".join(choice(chars) for x in range(length))
# ALTERNATIVE METHODS
# ctbi= characters that must be in password
# i= how many letters or characters the password length will be
def alternative_password_generator(ctbi, i):
# Password generator = full boot with random_number, random_letters, and
# random_character FUNCTIONS
# Put your code here...
i = i - len(ctbi)
quotient = int(i / 3)
remainder = i % 3
# chars = ctbi + random_letters(ascii_letters, i / 3 + remainder) +
# random_number(digits, i / 3) + random_characters(punctuation, i / 3)
chars = (
ctbi
+ random(ascii_letters, quotient + remainder)
+ random(digits, quotient)
+ random(punctuation, quotient)
)
chars = list(chars)
shuffle(chars)
return "".join(chars)
# random is a generalised function for letters, characters and numbers
def random(ctbi, i):
return "".join(choice(ctbi) for x in range(i))
def random_number(ctbi, i):
pass # Put your code here...
def random_letters(ctbi, i):
pass # Put your code here...
def random_characters(ctbi, i):
pass # Put your code here...
def main():
length = int(input("Please indicate the max length of your password: ").strip())
ctbi = input(
"Please indicate the characters that must be in your password: "
).strip()
print("Password generated:", password_generator(length))
print(
"Alternative Password generated:", alternative_password_generator(ctbi, length)
)
print("[If you are thinking of using this passsword, You better save it.]")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is pure Python implementation of linear search algorithm
For doctests run following command:
python3 -m doctest -v linear_search.py
For manual testing run:
python3 linear_search.py
"""
def linear_search(sequence: list, target: int) -> int:
"""A pure Python implementation of a linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param target: item value to search
:return: index of found item or None if item is not found
Examples:
>>> linear_search([0, 5, 7, 10, 15], 0)
0
>>> linear_search([0, 5, 7, 10, 15], 15)
4
>>> linear_search([0, 5, 7, 10, 15], 5)
1
>>> linear_search([0, 5, 7, 10, 15], 6)
-1
"""
for index, item in enumerate(sequence):
if item == target:
return index
return -1
def rec_linear_search(sequence: list, low: int, high: int, target: int) -> int:
"""
A pure Python implementation of a recursive linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param low: Lower bound of the array
:param high: Higher bound of the array
:param target: The element to be found
:return: Index of the key or -1 if key not found
Examples:
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 0)
0
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 700)
4
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 30)
1
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, -6)
-1
"""
if not (0 <= high < len(sequence) and 0 <= low < len(sequence)):
raise Exception("Invalid upper or lower bound!")
if high < low:
return -1
if sequence[low] == target:
return low
if sequence[high] == target:
return high
return rec_linear_search(sequence, low + 1, high - 1, target)
if __name__ == "__main__":
user_input = input("Enter numbers separated by comma:\n").strip()
sequence = [int(item.strip()) for item in user_input.split(",")]
target = int(input("Enter a single number to be found in the list:\n").strip())
result = linear_search(sequence, target)
if result != -1:
print(f"linear_search({sequence}, {target}) = {result}")
else:
print(f"{target} was not found in {sequence}")
| """
This is pure Python implementation of linear search algorithm
For doctests run following command:
python3 -m doctest -v linear_search.py
For manual testing run:
python3 linear_search.py
"""
def linear_search(sequence: list, target: int) -> int:
"""A pure Python implementation of a linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param target: item value to search
:return: index of found item or None if item is not found
Examples:
>>> linear_search([0, 5, 7, 10, 15], 0)
0
>>> linear_search([0, 5, 7, 10, 15], 15)
4
>>> linear_search([0, 5, 7, 10, 15], 5)
1
>>> linear_search([0, 5, 7, 10, 15], 6)
-1
"""
for index, item in enumerate(sequence):
if item == target:
return index
return -1
def rec_linear_search(sequence: list, low: int, high: int, target: int) -> int:
"""
A pure Python implementation of a recursive linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param low: Lower bound of the array
:param high: Higher bound of the array
:param target: The element to be found
:return: Index of the key or -1 if key not found
Examples:
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 0)
0
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 700)
4
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 30)
1
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, -6)
-1
"""
if not (0 <= high < len(sequence) and 0 <= low < len(sequence)):
raise Exception("Invalid upper or lower bound!")
if high < low:
return -1
if sequence[low] == target:
return low
if sequence[high] == target:
return high
return rec_linear_search(sequence, low + 1, high - 1, target)
if __name__ == "__main__":
user_input = input("Enter numbers separated by comma:\n").strip()
sequence = [int(item.strip()) for item in user_input.split(",")]
target = int(input("Enter a single number to be found in the list:\n").strip())
result = linear_search(sequence, target)
if result != -1:
print(f"linear_search({sequence}, {target}) = {result}")
else:
print(f"{target} was not found in {sequence}")
| -1 |
TheAlgorithms/Python | 4,928 | Fix word typos in comments | ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| sarvesh4396 | 2021-10-02T18:54:17Z | 2021-10-04T04:07:58Z | d530d2bcf42391a8172c720e8d7c7d354a748abf | 90db98304e06a60a3c578e9f55fe139524a4c3f8 | Fix word typos in comments. ### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 9: https://projecteuler.net/problem=9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product a*b*c.
References:
- https://en.wikipedia.org/wiki/Pythagorean_triple
"""
def solution(n: int = 1000) -> int:
"""
Return the product of a,b,c which are Pythagorean Triplet that satisfies
the following:
1. a < b < c
2. a**2 + b**2 = c**2
3. a + b + c = n
>>> solution(36)
1620
>>> solution(126)
66780
"""
product = -1
candidate = 0
for a in range(1, n // 3):
# Solving the two equations a**2+b**2=c**2 and a+b+c=N eliminating c
b = (n * n - 2 * a * n) // (2 * n - 2 * a)
c = n - a - b
if c * c == (a * a + b * b):
candidate = a * b * c
if candidate >= product:
product = candidate
return product
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 9: https://projecteuler.net/problem=9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product a*b*c.
References:
- https://en.wikipedia.org/wiki/Pythagorean_triple
"""
def solution(n: int = 1000) -> int:
"""
Return the product of a,b,c which are Pythagorean Triplet that satisfies
the following:
1. a < b < c
2. a**2 + b**2 = c**2
3. a + b + c = n
>>> solution(36)
1620
>>> solution(126)
66780
"""
product = -1
candidate = 0
for a in range(1, n // 3):
# Solving the two equations a**2+b**2=c**2 and a+b+c=N eliminating c
b = (n * n - 2 * a * n) // (2 * n - 2 * a)
c = n - a - b
if c * c == (a * a + b * b):
candidate = a * b * c
if candidate >= product:
product = candidate
return product
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Linked Lists consists of Nodes.
Nodes contain data and also may link to other nodes:
- Head Node: First node, the address of the
head node gives us access of the complete list
- Last node: points to null
"""
from typing import Any
class Node:
def __init__(self, item: Any, next: Any) -> None:
self.item = item
self.next = next
class LinkedList:
def __init__(self) -> None:
self.head = None
self.size = 0
def add(self, item: Any) -> None:
self.head = Node(item, self.head)
self.size += 1
def remove(self) -> Any:
if self.is_empty():
return None
else:
item = self.head.item
self.head = self.head.next
self.size -= 1
return item
def is_empty(self) -> bool:
return self.head is None
def __str__(self) -> str:
"""
>>> linked_list = LinkedList()
>>> linked_list.add(23)
>>> linked_list.add(14)
>>> linked_list.add(9)
>>> print(linked_list)
9 --> 14 --> 23
"""
if not self.is_empty:
return ""
else:
iterate = self.head
item_str = ""
item_list = []
while iterate:
item_list.append(str(iterate.item))
iterate = iterate.next
item_str = " --> ".join(item_list)
return item_str
def __len__(self) -> int:
"""
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.add("a")
>>> len(linked_list)
1
>>> linked_list.add("b")
>>> len(linked_list)
2
>>> _ = linked_list.remove()
>>> len(linked_list)
1
>>> _ = linked_list.remove()
>>> len(linked_list)
0
"""
return self.size
| """
Linked Lists consists of Nodes.
Nodes contain data and also may link to other nodes:
- Head Node: First node, the address of the
head node gives us access of the complete list
- Last node: points to null
"""
from typing import Any, Optional
class Node:
def __init__(self, item: Any, next: Any) -> None:
self.item = item
self.next = next
class LinkedList:
def __init__(self) -> None:
self.head: Optional[Node] = None
self.size = 0
def add(self, item: Any) -> None:
self.head = Node(item, self.head)
self.size += 1
def remove(self) -> Any:
# Switched 'self.is_empty()' to 'self.head is None'
# because mypy was considering the possibility that 'self.head'
# can be None in below else part and giving error
if self.head is None:
return None
else:
item = self.head.item
self.head = self.head.next
self.size -= 1
return item
def is_empty(self) -> bool:
return self.head is None
def __str__(self) -> str:
"""
>>> linked_list = LinkedList()
>>> linked_list.add(23)
>>> linked_list.add(14)
>>> linked_list.add(9)
>>> print(linked_list)
9 --> 14 --> 23
"""
if not self.is_empty:
return ""
else:
iterate = self.head
item_str = ""
item_list: list[str] = []
while iterate:
item_list.append(str(iterate.item))
iterate = iterate.next
item_str = " --> ".join(item_list)
return item_str
def __len__(self) -> int:
"""
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.add("a")
>>> len(linked_list)
1
>>> linked_list.add("b")
>>> len(linked_list)
2
>>> _ = linked_list.remove()
>>> len(linked_list)
1
>>> _ = linked_list.remove()
>>> len(linked_list)
0
"""
return self.size
| 1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from typing import Any
class Node:
def __init__(self, data: Any):
self.data = data
self.next = None
class CircularLinkedList:
def __init__(self):
self.head = None
self.tail = None
def __iter__(self):
node = self.head
while self.head:
yield node.data
node = node.next
if node == self.head:
break
def __len__(self) -> int:
return len(tuple(iter(self)))
def __repr__(self):
return "->".join(str(item) for item in iter(self))
def insert_tail(self, data: Any) -> None:
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
if index < 0 or index > len(self):
raise IndexError("list index out of range.")
new_node = Node(data)
if self.head is None:
new_node.next = new_node # first node points itself
self.tail = self.head = new_node
elif index == 0: # insert at head
new_node.next = self.head
self.head = self.tail.next = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
if index == len(self) - 1: # insert at tail
self.tail = new_node
def delete_front(self):
return self.delete_nth(0)
def delete_tail(self) -> None:
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0):
if not 0 <= index < len(self):
raise IndexError("list index out of range.")
delete_node = self.head
if self.head == self.tail: # just one node
self.head = self.tail = None
elif index == 0: # delete head node
self.tail.next = self.tail.next.next
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
if index == len(self) - 1: # delete at tail
self.tail = temp
return delete_node.data
def is_empty(self):
return len(self) == 0
def test_circular_linked_list() -> None:
"""
>>> test_circular_linked_list()
"""
circular_linked_list = CircularLinkedList()
assert len(circular_linked_list) == 0
assert circular_linked_list.is_empty() is True
assert str(circular_linked_list) == ""
try:
circular_linked_list.delete_front()
assert False # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_tail()
assert False # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_nth(-1)
assert False
except IndexError:
assert True
try:
circular_linked_list.delete_nth(0)
assert False
except IndexError:
assert True
assert circular_linked_list.is_empty() is True
for i in range(5):
assert len(circular_linked_list) == i
circular_linked_list.insert_nth(i, i + 1)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
circular_linked_list.insert_tail(6)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 7))
circular_linked_list.insert_head(0)
assert str(circular_linked_list) == "->".join(str(i) for i in range(0, 7))
assert circular_linked_list.delete_front() == 0
assert circular_linked_list.delete_tail() == 6
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
assert circular_linked_list.delete_nth(2) == 3
circular_linked_list.insert_nth(2, 3)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
assert circular_linked_list.is_empty() is False
if __name__ == "__main__":
import doctest
doctest.testmod()
| from typing import Any, Iterator, Optional
class Node:
def __init__(self, data: Any):
self.data: Any = data
self.next: Optional[Node] = None
class CircularLinkedList:
def __init__(self):
self.head = None
self.tail = None
def __iter__(self) -> Iterator[Any]:
node = self.head
while self.head:
yield node.data
node = node.next
if node == self.head:
break
def __len__(self) -> int:
return len(tuple(iter(self)))
def __repr__(self):
return "->".join(str(item) for item in iter(self))
def insert_tail(self, data: Any) -> None:
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
if index < 0 or index > len(self):
raise IndexError("list index out of range.")
new_node = Node(data)
if self.head is None:
new_node.next = new_node # first node points itself
self.tail = self.head = new_node
elif index == 0: # insert at head
new_node.next = self.head
self.head = self.tail.next = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
if index == len(self) - 1: # insert at tail
self.tail = new_node
def delete_front(self):
return self.delete_nth(0)
def delete_tail(self) -> Any:
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
if not 0 <= index < len(self):
raise IndexError("list index out of range.")
delete_node = self.head
if self.head == self.tail: # just one node
self.head = self.tail = None
elif index == 0: # delete head node
self.tail.next = self.tail.next.next
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
if index == len(self) - 1: # delete at tail
self.tail = temp
return delete_node.data
def is_empty(self) -> bool:
return len(self) == 0
def test_circular_linked_list() -> None:
"""
>>> test_circular_linked_list()
"""
circular_linked_list = CircularLinkedList()
assert len(circular_linked_list) == 0
assert circular_linked_list.is_empty() is True
assert str(circular_linked_list) == ""
try:
circular_linked_list.delete_front()
assert False # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_tail()
assert False # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_nth(-1)
assert False
except IndexError:
assert True
try:
circular_linked_list.delete_nth(0)
assert False
except IndexError:
assert True
assert circular_linked_list.is_empty() is True
for i in range(5):
assert len(circular_linked_list) == i
circular_linked_list.insert_nth(i, i + 1)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
circular_linked_list.insert_tail(6)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 7))
circular_linked_list.insert_head(0)
assert str(circular_linked_list) == "->".join(str(i) for i in range(0, 7))
assert circular_linked_list.delete_front() == 0
assert circular_linked_list.delete_tail() == 6
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
assert circular_linked_list.delete_nth(2) == 3
circular_linked_list.insert_nth(2, 3)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
assert circular_linked_list.is_empty() is False
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from typing import Any
class ContainsLoopError(Exception):
pass
class Node:
def __init__(self, data: Any) -> None:
self.data = data
self.next_node = None
def __iter__(self):
node = self
visited = []
while node:
if node in visited:
raise ContainsLoopError
visited.append(node)
yield node.data
node = node.next_node
@property
def has_loop(self) -> bool:
"""
A loop is when the exact same Node appears more than once in a linked list.
>>> root_node = Node(1)
>>> root_node.next_node = Node(2)
>>> root_node.next_node.next_node = Node(3)
>>> root_node.next_node.next_node.next_node = Node(4)
>>> root_node.has_loop
False
>>> root_node.next_node.next_node.next_node = root_node.next_node
>>> root_node.has_loop
True
"""
try:
list(self)
return False
except ContainsLoopError:
return True
if __name__ == "__main__":
root_node = Node(1)
root_node.next_node = Node(2)
root_node.next_node.next_node = Node(3)
root_node.next_node.next_node.next_node = Node(4)
print(root_node.has_loop) # False
root_node.next_node.next_node.next_node = root_node.next_node
print(root_node.has_loop) # True
root_node = Node(5)
root_node.next_node = Node(6)
root_node.next_node.next_node = Node(5)
root_node.next_node.next_node.next_node = Node(6)
print(root_node.has_loop) # False
root_node = Node(1)
print(root_node.has_loop) # False
| from typing import Any, Optional
class ContainsLoopError(Exception):
pass
class Node:
def __init__(self, data: Any) -> None:
self.data: Any = data
self.next_node: Optional[Node] = None
def __iter__(self):
node = self
visited = []
while node:
if node in visited:
raise ContainsLoopError
visited.append(node)
yield node.data
node = node.next_node
@property
def has_loop(self) -> bool:
"""
A loop is when the exact same Node appears more than once in a linked list.
>>> root_node = Node(1)
>>> root_node.next_node = Node(2)
>>> root_node.next_node.next_node = Node(3)
>>> root_node.next_node.next_node.next_node = Node(4)
>>> root_node.has_loop
False
>>> root_node.next_node.next_node.next_node = root_node.next_node
>>> root_node.has_loop
True
"""
try:
list(self)
return False
except ContainsLoopError:
return True
if __name__ == "__main__":
root_node = Node(1)
root_node.next_node = Node(2)
root_node.next_node.next_node = Node(3)
root_node.next_node.next_node.next_node = Node(4)
print(root_node.has_loop) # False
root_node.next_node.next_node.next_node = root_node.next_node
print(root_node.has_loop) # True
root_node = Node(5)
root_node.next_node = Node(6)
root_node.next_node.next_node = Node(5)
root_node.next_node.next_node.next_node = Node(6)
print(root_node.has_loop) # False
root_node = Node(1)
print(root_node.has_loop) # False
| 1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| class Node:
def __init__(self, data: int) -> int:
self.data = data
self.next = None
class LinkedList:
def __init__(self):
self.head = None
def push(self, new_data: int) -> int:
new_node = Node(new_data)
new_node.next = self.head
self.head = new_node
return self.head.data
def middle_element(self) -> int:
"""
>>> link = LinkedList()
>>> link.middle_element()
No element found.
>>> link.push(5)
5
>>> link.push(6)
6
>>> link.push(8)
8
>>> link.push(8)
8
>>> link.push(10)
10
>>> link.push(12)
12
>>> link.push(17)
17
>>> link.push(7)
7
>>> link.push(3)
3
>>> link.push(20)
20
>>> link.push(-20)
-20
>>> link.middle_element()
12
>>>
"""
slow_pointer = self.head
fast_pointer = self.head
if self.head:
while fast_pointer and fast_pointer.next:
fast_pointer = fast_pointer.next.next
slow_pointer = slow_pointer.next
return slow_pointer.data
else:
print("No element found.")
if __name__ == "__main__":
link = LinkedList()
for i in range(int(input().strip())):
data = int(input().strip())
link.push(data)
print(link.middle_element())
| from typing import Optional
class Node:
def __init__(self, data: int) -> None:
self.data = data
self.next = None
class LinkedList:
def __init__(self):
self.head = None
def push(self, new_data: int) -> int:
new_node = Node(new_data)
new_node.next = self.head
self.head = new_node
return self.head.data
def middle_element(self) -> Optional[int]:
"""
>>> link = LinkedList()
>>> link.middle_element()
No element found.
>>> link.push(5)
5
>>> link.push(6)
6
>>> link.push(8)
8
>>> link.push(8)
8
>>> link.push(10)
10
>>> link.push(12)
12
>>> link.push(17)
17
>>> link.push(7)
7
>>> link.push(3)
3
>>> link.push(20)
20
>>> link.push(-20)
-20
>>> link.middle_element()
12
>>>
"""
slow_pointer = self.head
fast_pointer = self.head
if self.head:
while fast_pointer and fast_pointer.next:
fast_pointer = fast_pointer.next.next
slow_pointer = slow_pointer.next
return slow_pointer.data
else:
print("No element found.")
return None
if __name__ == "__main__":
link = LinkedList()
for i in range(int(input().strip())):
data = int(input().strip())
link.push(data)
print(link.middle_element())
| 1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Based on "Skip Lists: A Probabilistic Alternative to Balanced Trees" by William Pugh
https://epaperpress.com/sortsearch/download/skiplist.pdf
"""
from __future__ import annotations
from random import random
from typing import Generic, TypeVar
KT = TypeVar("KT")
VT = TypeVar("VT")
class Node(Generic[KT, VT]):
def __init__(self, key: KT, value: VT):
self.key = key
self.value = value
self.forward: list[Node[KT, VT]] = []
def __repr__(self) -> str:
"""
:return: Visual representation of Node
>>> node = Node("Key", 2)
>>> repr(node)
'Node(Key: 2)'
"""
return f"Node({self.key}: {self.value})"
@property
def level(self) -> int:
"""
:return: Number of forward references
>>> node = Node("Key", 2)
>>> node.level
0
>>> node.forward.append(Node("Key2", 4))
>>> node.level
1
>>> node.forward.append(Node("Key3", 6))
>>> node.level
2
"""
return len(self.forward)
class SkipList(Generic[KT, VT]):
def __init__(self, p: float = 0.5, max_level: int = 16):
self.head = Node("root", None)
self.level = 0
self.p = p
self.max_level = max_level
def __str__(self) -> str:
"""
:return: Visual representation of SkipList
>>> skip_list = SkipList()
>>> print(skip_list)
SkipList(level=0)
>>> skip_list.insert("Key1", "Value")
>>> print(skip_list) # doctest: +ELLIPSIS
SkipList(level=...
[root]--...
[Key1]--Key1...
None *...
>>> skip_list.insert("Key2", "OtherValue")
>>> print(skip_list) # doctest: +ELLIPSIS
SkipList(level=...
[root]--...
[Key1]--Key1...
[Key2]--Key2...
None *...
"""
items = list(self)
if len(items) == 0:
return f"SkipList(level={self.level})"
label_size = max((len(str(item)) for item in items), default=4)
label_size = max(label_size, 4) + 4
node = self.head
lines = []
forwards = node.forward.copy()
lines.append(f"[{node.key}]".ljust(label_size, "-") + "* " * len(forwards))
lines.append(" " * label_size + "| " * len(forwards))
while len(node.forward) != 0:
node = node.forward[0]
lines.append(
f"[{node.key}]".ljust(label_size, "-")
+ " ".join(str(n.key) if n.key == node.key else "|" for n in forwards)
)
lines.append(" " * label_size + "| " * len(forwards))
forwards[: node.level] = node.forward
lines.append("None".ljust(label_size) + "* " * len(forwards))
return f"SkipList(level={self.level})\n" + "\n".join(lines)
def __iter__(self):
node = self.head
while len(node.forward) != 0:
yield node.forward[0].key
node = node.forward[0]
def random_level(self) -> int:
"""
:return: Random level from [1, self.max_level] interval.
Higher values are less likely.
"""
level = 1
while random() < self.p and level < self.max_level:
level += 1
return level
def _locate_node(self, key) -> tuple[Node[KT, VT] | None, list[Node[KT, VT]]]:
"""
:param key: Searched key,
:return: Tuple with searched node (or None if given key is not present)
and list of nodes that refer (if key is present) of should refer to
given node.
"""
# Nodes with refer or should refer to output node
update_vector = []
node = self.head
for i in reversed(range(self.level)):
# i < node.level - When node level is lesser than `i` decrement `i`.
# node.forward[i].key < key - Jumping to node with key value higher
# or equal to searched key would result
# in skipping searched key.
while i < node.level and node.forward[i].key < key:
node = node.forward[i]
# Each leftmost node (relative to searched node) will potentially have to
# be updated.
update_vector.append(node)
update_vector.reverse() # Note that we were inserting values in reverse order.
# len(node.forward) != 0 - If current node doesn't contain any further
# references then searched key is not present.
# node.forward[0].key == key - Next node key should be equal to search key
# if key is present.
if len(node.forward) != 0 and node.forward[0].key == key:
return node.forward[0], update_vector
else:
return None, update_vector
def delete(self, key: KT):
"""
:param key: Key to remove from list.
>>> skip_list = SkipList()
>>> skip_list.insert(2, "Two")
>>> skip_list.insert(1, "One")
>>> skip_list.insert(3, "Three")
>>> list(skip_list)
[1, 2, 3]
>>> skip_list.delete(2)
>>> list(skip_list)
[1, 3]
"""
node, update_vector = self._locate_node(key)
if node is not None:
for i, update_node in enumerate(update_vector):
# Remove or replace all references to removed node.
if update_node.level > i and update_node.forward[i].key == key:
if node.level > i:
update_node.forward[i] = node.forward[i]
else:
update_node.forward = update_node.forward[:i]
def insert(self, key: KT, value: VT):
"""
:param key: Key to insert.
:param value: Value associated with given key.
>>> skip_list = SkipList()
>>> skip_list.insert(2, "Two")
>>> skip_list.find(2)
'Two'
>>> list(skip_list)
[2]
"""
node, update_vector = self._locate_node(key)
if node is not None:
node.value = value
else:
level = self.random_level()
if level > self.level:
# After level increase we have to add additional nodes to head.
for i in range(self.level - 1, level):
update_vector.append(self.head)
self.level = level
new_node = Node(key, value)
for i, update_node in enumerate(update_vector[:level]):
# Change references to pass through new node.
if update_node.level > i:
new_node.forward.append(update_node.forward[i])
if update_node.level < i + 1:
update_node.forward.append(new_node)
else:
update_node.forward[i] = new_node
def find(self, key: VT) -> VT | None:
"""
:param key: Search key.
:return: Value associated with given key or None if given key is not present.
>>> skip_list = SkipList()
>>> skip_list.find(2)
>>> skip_list.insert(2, "Two")
>>> skip_list.find(2)
'Two'
>>> skip_list.insert(2, "Three")
>>> skip_list.find(2)
'Three'
"""
node, _ = self._locate_node(key)
if node is not None:
return node.value
return None
def test_insert():
skip_list = SkipList()
skip_list.insert("Key1", 3)
skip_list.insert("Key2", 12)
skip_list.insert("Key3", 41)
skip_list.insert("Key4", -19)
node = skip_list.head
all_values = {}
while node.level != 0:
node = node.forward[0]
all_values[node.key] = node.value
assert len(all_values) == 4
assert all_values["Key1"] == 3
assert all_values["Key2"] == 12
assert all_values["Key3"] == 41
assert all_values["Key4"] == -19
def test_insert_overrides_existing_value():
skip_list = SkipList()
skip_list.insert("Key1", 10)
skip_list.insert("Key1", 12)
skip_list.insert("Key5", 7)
skip_list.insert("Key7", 10)
skip_list.insert("Key10", 5)
skip_list.insert("Key7", 7)
skip_list.insert("Key5", 5)
skip_list.insert("Key10", 10)
node = skip_list.head
all_values = {}
while node.level != 0:
node = node.forward[0]
all_values[node.key] = node.value
if len(all_values) != 4:
print()
assert len(all_values) == 4
assert all_values["Key1"] == 12
assert all_values["Key7"] == 7
assert all_values["Key5"] == 5
assert all_values["Key10"] == 10
def test_searching_empty_list_returns_none():
skip_list = SkipList()
assert skip_list.find("Some key") is None
def test_search():
skip_list = SkipList()
skip_list.insert("Key2", 20)
assert skip_list.find("Key2") == 20
skip_list.insert("Some Key", 10)
skip_list.insert("Key2", 8)
skip_list.insert("V", 13)
assert skip_list.find("Y") is None
assert skip_list.find("Key2") == 8
assert skip_list.find("Some Key") == 10
assert skip_list.find("V") == 13
def test_deleting_item_from_empty_list_do_nothing():
skip_list = SkipList()
skip_list.delete("Some key")
assert len(skip_list.head.forward) == 0
def test_deleted_items_are_not_founded_by_find_method():
skip_list = SkipList()
skip_list.insert("Key1", 12)
skip_list.insert("V", 13)
skip_list.insert("X", 14)
skip_list.insert("Key2", 15)
skip_list.delete("V")
skip_list.delete("Key2")
assert skip_list.find("V") is None
assert skip_list.find("Key2") is None
def test_delete_removes_only_given_key():
skip_list = SkipList()
skip_list.insert("Key1", 12)
skip_list.insert("V", 13)
skip_list.insert("X", 14)
skip_list.insert("Key2", 15)
skip_list.delete("V")
assert skip_list.find("V") is None
assert skip_list.find("X") == 14
assert skip_list.find("Key1") == 12
assert skip_list.find("Key2") == 15
skip_list.delete("X")
assert skip_list.find("V") is None
assert skip_list.find("X") is None
assert skip_list.find("Key1") == 12
assert skip_list.find("Key2") == 15
skip_list.delete("Key1")
assert skip_list.find("V") is None
assert skip_list.find("X") is None
assert skip_list.find("Key1") is None
assert skip_list.find("Key2") == 15
skip_list.delete("Key2")
assert skip_list.find("V") is None
assert skip_list.find("X") is None
assert skip_list.find("Key1") is None
assert skip_list.find("Key2") is None
def test_delete_doesnt_leave_dead_nodes():
skip_list = SkipList()
skip_list.insert("Key1", 12)
skip_list.insert("V", 13)
skip_list.insert("X", 142)
skip_list.insert("Key2", 15)
skip_list.delete("X")
def traverse_keys(node):
yield node.key
for forward_node in node.forward:
yield from traverse_keys(forward_node)
assert len(set(traverse_keys(skip_list.head))) == 4
def test_iter_always_yields_sorted_values():
def is_sorted(lst):
for item, next_item in zip(lst, lst[1:]):
if next_item < item:
return False
return True
skip_list = SkipList()
for i in range(10):
skip_list.insert(i, i)
assert is_sorted(list(skip_list))
skip_list.delete(5)
skip_list.delete(8)
skip_list.delete(2)
assert is_sorted(list(skip_list))
skip_list.insert(-12, -12)
skip_list.insert(77, 77)
assert is_sorted(list(skip_list))
def pytests():
for i in range(100):
# Repeat test 100 times due to the probabilistic nature of skip list
# random values == random bugs
test_insert()
test_insert_overrides_existing_value()
test_searching_empty_list_returns_none()
test_search()
test_deleting_item_from_empty_list_do_nothing()
test_deleted_items_are_not_founded_by_find_method()
test_delete_removes_only_given_key()
test_delete_doesnt_leave_dead_nodes()
test_iter_always_yields_sorted_values()
def main():
"""
>>> pytests()
"""
skip_list = SkipList()
skip_list.insert(2, "2")
skip_list.insert(4, "4")
skip_list.insert(6, "4")
skip_list.insert(4, "5")
skip_list.insert(8, "4")
skip_list.insert(9, "4")
skip_list.delete(4)
print(skip_list)
if __name__ == "__main__":
main()
| """
Based on "Skip Lists: A Probabilistic Alternative to Balanced Trees" by William Pugh
https://epaperpress.com/sortsearch/download/skiplist.pdf
"""
from __future__ import annotations
from random import random
from typing import Generic, Optional, TypeVar, Union
KT = TypeVar("KT")
VT = TypeVar("VT")
class Node(Generic[KT, VT]):
def __init__(self, key: Union[KT, str] = "root", value: Optional[VT] = None):
self.key = key
self.value = value
self.forward: list[Node[KT, VT]] = []
def __repr__(self) -> str:
"""
:return: Visual representation of Node
>>> node = Node("Key", 2)
>>> repr(node)
'Node(Key: 2)'
"""
return f"Node({self.key}: {self.value})"
@property
def level(self) -> int:
"""
:return: Number of forward references
>>> node = Node("Key", 2)
>>> node.level
0
>>> node.forward.append(Node("Key2", 4))
>>> node.level
1
>>> node.forward.append(Node("Key3", 6))
>>> node.level
2
"""
return len(self.forward)
class SkipList(Generic[KT, VT]):
def __init__(self, p: float = 0.5, max_level: int = 16):
self.head: Node[KT, VT] = Node[KT, VT]()
self.level = 0
self.p = p
self.max_level = max_level
def __str__(self) -> str:
"""
:return: Visual representation of SkipList
>>> skip_list = SkipList()
>>> print(skip_list)
SkipList(level=0)
>>> skip_list.insert("Key1", "Value")
>>> print(skip_list) # doctest: +ELLIPSIS
SkipList(level=...
[root]--...
[Key1]--Key1...
None *...
>>> skip_list.insert("Key2", "OtherValue")
>>> print(skip_list) # doctest: +ELLIPSIS
SkipList(level=...
[root]--...
[Key1]--Key1...
[Key2]--Key2...
None *...
"""
items = list(self)
if len(items) == 0:
return f"SkipList(level={self.level})"
label_size = max((len(str(item)) for item in items), default=4)
label_size = max(label_size, 4) + 4
node = self.head
lines = []
forwards = node.forward.copy()
lines.append(f"[{node.key}]".ljust(label_size, "-") + "* " * len(forwards))
lines.append(" " * label_size + "| " * len(forwards))
while len(node.forward) != 0:
node = node.forward[0]
lines.append(
f"[{node.key}]".ljust(label_size, "-")
+ " ".join(str(n.key) if n.key == node.key else "|" for n in forwards)
)
lines.append(" " * label_size + "| " * len(forwards))
forwards[: node.level] = node.forward
lines.append("None".ljust(label_size) + "* " * len(forwards))
return f"SkipList(level={self.level})\n" + "\n".join(lines)
def __iter__(self):
node = self.head
while len(node.forward) != 0:
yield node.forward[0].key
node = node.forward[0]
def random_level(self) -> int:
"""
:return: Random level from [1, self.max_level] interval.
Higher values are less likely.
"""
level = 1
while random() < self.p and level < self.max_level:
level += 1
return level
def _locate_node(self, key) -> tuple[Node[KT, VT] | None, list[Node[KT, VT]]]:
"""
:param key: Searched key,
:return: Tuple with searched node (or None if given key is not present)
and list of nodes that refer (if key is present) of should refer to
given node.
"""
# Nodes with refer or should refer to output node
update_vector = []
node = self.head
for i in reversed(range(self.level)):
# i < node.level - When node level is lesser than `i` decrement `i`.
# node.forward[i].key < key - Jumping to node with key value higher
# or equal to searched key would result
# in skipping searched key.
while i < node.level and node.forward[i].key < key:
node = node.forward[i]
# Each leftmost node (relative to searched node) will potentially have to
# be updated.
update_vector.append(node)
update_vector.reverse() # Note that we were inserting values in reverse order.
# len(node.forward) != 0 - If current node doesn't contain any further
# references then searched key is not present.
# node.forward[0].key == key - Next node key should be equal to search key
# if key is present.
if len(node.forward) != 0 and node.forward[0].key == key:
return node.forward[0], update_vector
else:
return None, update_vector
def delete(self, key: KT):
"""
:param key: Key to remove from list.
>>> skip_list = SkipList()
>>> skip_list.insert(2, "Two")
>>> skip_list.insert(1, "One")
>>> skip_list.insert(3, "Three")
>>> list(skip_list)
[1, 2, 3]
>>> skip_list.delete(2)
>>> list(skip_list)
[1, 3]
"""
node, update_vector = self._locate_node(key)
if node is not None:
for i, update_node in enumerate(update_vector):
# Remove or replace all references to removed node.
if update_node.level > i and update_node.forward[i].key == key:
if node.level > i:
update_node.forward[i] = node.forward[i]
else:
update_node.forward = update_node.forward[:i]
def insert(self, key: KT, value: VT):
"""
:param key: Key to insert.
:param value: Value associated with given key.
>>> skip_list = SkipList()
>>> skip_list.insert(2, "Two")
>>> skip_list.find(2)
'Two'
>>> list(skip_list)
[2]
"""
node, update_vector = self._locate_node(key)
if node is not None:
node.value = value
else:
level = self.random_level()
if level > self.level:
# After level increase we have to add additional nodes to head.
for i in range(self.level - 1, level):
update_vector.append(self.head)
self.level = level
new_node = Node(key, value)
for i, update_node in enumerate(update_vector[:level]):
# Change references to pass through new node.
if update_node.level > i:
new_node.forward.append(update_node.forward[i])
if update_node.level < i + 1:
update_node.forward.append(new_node)
else:
update_node.forward[i] = new_node
def find(self, key: VT) -> VT | None:
"""
:param key: Search key.
:return: Value associated with given key or None if given key is not present.
>>> skip_list = SkipList()
>>> skip_list.find(2)
>>> skip_list.insert(2, "Two")
>>> skip_list.find(2)
'Two'
>>> skip_list.insert(2, "Three")
>>> skip_list.find(2)
'Three'
"""
node, _ = self._locate_node(key)
if node is not None:
return node.value
return None
def test_insert():
skip_list = SkipList()
skip_list.insert("Key1", 3)
skip_list.insert("Key2", 12)
skip_list.insert("Key3", 41)
skip_list.insert("Key4", -19)
node = skip_list.head
all_values = {}
while node.level != 0:
node = node.forward[0]
all_values[node.key] = node.value
assert len(all_values) == 4
assert all_values["Key1"] == 3
assert all_values["Key2"] == 12
assert all_values["Key3"] == 41
assert all_values["Key4"] == -19
def test_insert_overrides_existing_value():
skip_list = SkipList()
skip_list.insert("Key1", 10)
skip_list.insert("Key1", 12)
skip_list.insert("Key5", 7)
skip_list.insert("Key7", 10)
skip_list.insert("Key10", 5)
skip_list.insert("Key7", 7)
skip_list.insert("Key5", 5)
skip_list.insert("Key10", 10)
node = skip_list.head
all_values = {}
while node.level != 0:
node = node.forward[0]
all_values[node.key] = node.value
if len(all_values) != 4:
print()
assert len(all_values) == 4
assert all_values["Key1"] == 12
assert all_values["Key7"] == 7
assert all_values["Key5"] == 5
assert all_values["Key10"] == 10
def test_searching_empty_list_returns_none():
skip_list = SkipList()
assert skip_list.find("Some key") is None
def test_search():
skip_list = SkipList()
skip_list.insert("Key2", 20)
assert skip_list.find("Key2") == 20
skip_list.insert("Some Key", 10)
skip_list.insert("Key2", 8)
skip_list.insert("V", 13)
assert skip_list.find("Y") is None
assert skip_list.find("Key2") == 8
assert skip_list.find("Some Key") == 10
assert skip_list.find("V") == 13
def test_deleting_item_from_empty_list_do_nothing():
skip_list = SkipList()
skip_list.delete("Some key")
assert len(skip_list.head.forward) == 0
def test_deleted_items_are_not_founded_by_find_method():
skip_list = SkipList()
skip_list.insert("Key1", 12)
skip_list.insert("V", 13)
skip_list.insert("X", 14)
skip_list.insert("Key2", 15)
skip_list.delete("V")
skip_list.delete("Key2")
assert skip_list.find("V") is None
assert skip_list.find("Key2") is None
def test_delete_removes_only_given_key():
skip_list = SkipList()
skip_list.insert("Key1", 12)
skip_list.insert("V", 13)
skip_list.insert("X", 14)
skip_list.insert("Key2", 15)
skip_list.delete("V")
assert skip_list.find("V") is None
assert skip_list.find("X") == 14
assert skip_list.find("Key1") == 12
assert skip_list.find("Key2") == 15
skip_list.delete("X")
assert skip_list.find("V") is None
assert skip_list.find("X") is None
assert skip_list.find("Key1") == 12
assert skip_list.find("Key2") == 15
skip_list.delete("Key1")
assert skip_list.find("V") is None
assert skip_list.find("X") is None
assert skip_list.find("Key1") is None
assert skip_list.find("Key2") == 15
skip_list.delete("Key2")
assert skip_list.find("V") is None
assert skip_list.find("X") is None
assert skip_list.find("Key1") is None
assert skip_list.find("Key2") is None
def test_delete_doesnt_leave_dead_nodes():
skip_list = SkipList()
skip_list.insert("Key1", 12)
skip_list.insert("V", 13)
skip_list.insert("X", 142)
skip_list.insert("Key2", 15)
skip_list.delete("X")
def traverse_keys(node):
yield node.key
for forward_node in node.forward:
yield from traverse_keys(forward_node)
assert len(set(traverse_keys(skip_list.head))) == 4
def test_iter_always_yields_sorted_values():
def is_sorted(lst):
for item, next_item in zip(lst, lst[1:]):
if next_item < item:
return False
return True
skip_list = SkipList()
for i in range(10):
skip_list.insert(i, i)
assert is_sorted(list(skip_list))
skip_list.delete(5)
skip_list.delete(8)
skip_list.delete(2)
assert is_sorted(list(skip_list))
skip_list.insert(-12, -12)
skip_list.insert(77, 77)
assert is_sorted(list(skip_list))
def pytests():
for i in range(100):
# Repeat test 100 times due to the probabilistic nature of skip list
# random values == random bugs
test_insert()
test_insert_overrides_existing_value()
test_searching_empty_list_returns_none()
test_search()
test_deleting_item_from_empty_list_do_nothing()
test_deleted_items_are_not_founded_by_find_method()
test_delete_removes_only_given_key()
test_delete_doesnt_leave_dead_nodes()
test_iter_always_yields_sorted_values()
def main():
"""
>>> pytests()
"""
skip_list = SkipList()
skip_list.insert(2, "2")
skip_list.insert(4, "4")
skip_list.insert(6, "4")
skip_list.insert(4, "5")
skip_list.insert(8, "4")
skip_list.insert(9, "4")
skip_list.delete(4)
print(skip_list)
if __name__ == "__main__":
main()
| 1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Non recursive implementation of a DFS algorithm."""
from __future__ import annotations
def depth_first_search(graph: dict, start: str) -> set[str]:
"""Depth First Search on Graph
:param graph: directed graph in dictionary format
:param start: starting vertex as a string
:returns: the trace of the search
>>> input_G = { "A": ["B", "C", "D"], "B": ["A", "D", "E"],
... "C": ["A", "F"], "D": ["B", "D"], "E": ["B", "F"],
... "F": ["C", "E", "G"], "G": ["F"] }
>>> output_G = list({'A', 'B', 'C', 'D', 'E', 'F', 'G'})
>>> all(x in output_G for x in list(depth_first_search(input_G, "A")))
True
>>> all(x in output_G for x in list(depth_first_search(input_G, "G")))
True
"""
explored, stack = set(start), [start]
while stack:
v = stack.pop()
explored.add(v)
# Differences from BFS:
# 1) pop last element instead of first one
# 2) add adjacent elements to stack without exploring them
for adj in reversed(graph[v]):
if adj not in explored:
stack.append(adj)
return explored
G = {
"A": ["B", "C", "D"],
"B": ["A", "D", "E"],
"C": ["A", "F"],
"D": ["B", "D"],
"E": ["B", "F"],
"F": ["C", "E", "G"],
"G": ["F"],
}
if __name__ == "__main__":
import doctest
doctest.testmod()
print(depth_first_search(G, "A"))
| """Non recursive implementation of a DFS algorithm."""
from __future__ import annotations
def depth_first_search(graph: dict, start: str) -> set[str]:
"""Depth First Search on Graph
:param graph: directed graph in dictionary format
:param start: starting vertex as a string
:returns: the trace of the search
>>> input_G = { "A": ["B", "C", "D"], "B": ["A", "D", "E"],
... "C": ["A", "F"], "D": ["B", "D"], "E": ["B", "F"],
... "F": ["C", "E", "G"], "G": ["F"] }
>>> output_G = list({'A', 'B', 'C', 'D', 'E', 'F', 'G'})
>>> all(x in output_G for x in list(depth_first_search(input_G, "A")))
True
>>> all(x in output_G for x in list(depth_first_search(input_G, "G")))
True
"""
explored, stack = set(start), [start]
while stack:
v = stack.pop()
explored.add(v)
# Differences from BFS:
# 1) pop last element instead of first one
# 2) add adjacent elements to stack without exploring them
for adj in reversed(graph[v]):
if adj not in explored:
stack.append(adj)
return explored
G = {
"A": ["B", "C", "D"],
"B": ["A", "D", "E"],
"C": ["A", "F"],
"D": ["B", "D"],
"E": ["B", "F"],
"F": ["C", "E", "G"],
"G": ["F"],
}
if __name__ == "__main__":
import doctest
doctest.testmod()
print(depth_first_search(G, "A"))
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 234: https://projecteuler.net/problem=234
For any integer n, consider the three functions
f1,n(x,y,z) = x^(n+1) + y^(n+1) - z^(n+1)
f2,n(x,y,z) = (xy + yz + zx)*(x^(n-1) + y^(n-1) - z^(n-1))
f3,n(x,y,z) = xyz*(xn-2 + yn-2 - zn-2)
and their combination
fn(x,y,z) = f1,n(x,y,z) + f2,n(x,y,z) - f3,n(x,y,z)
We call (x,y,z) a golden triple of order k if x, y, and z are all rational numbers
of the form a / b with 0 < a < b ≤ k and there is (at least) one integer n,
so that fn(x,y,z) = 0.
Let s(x,y,z) = x + y + z.
Let t = u / v be the sum of all distinct s(x,y,z) for all golden triples
(x,y,z) of order 35.
All the s(x,y,z) and t must be in reduced form.
Find u + v.
Solution:
By expanding the brackets it is easy to show that
fn(x, y, z) = (x + y + z) * (x^n + y^n - z^n).
Since x,y,z are positive, the requirement fn(x, y, z) = 0 is fulfilled if and
only if x^n + y^n = z^n.
By Fermat's Last Theorem, this means that the absolute value of n can not
exceed 2, i.e. n is in {-2, -1, 0, 1, 2}. We can eliminate n = 0 since then the
equation would reduce to 1 + 1 = 1, for which there are no solutions.
So all we have to do is iterate through the possible numerators and denominators
of x and y, calculate the corresponding z, and check if the corresponding numerator and
denominator are integer and satisfy 0 < z_num < z_den <= 0. We use a set "uniquq_s"
to make sure there are no duplicates, and the fractions.Fraction class to make sure
we get the right numerator and denominator.
Reference:
https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem
"""
from __future__ import annotations
from fractions import Fraction
from math import gcd, sqrt
def is_sq(number: int) -> bool:
"""
Check if number is a perfect square.
>>> is_sq(1)
True
>>> is_sq(1000001)
False
>>> is_sq(1000000)
True
"""
sq: int = int(number ** 0.5)
return number == sq * sq
def add_three(
x_num: int, x_den: int, y_num: int, y_den: int, z_num: int, z_den: int
) -> tuple[int, int]:
"""
Given the numerators and denominators of three fractions, return the
numerator and denominator of their sum in lowest form.
>>> add_three(1, 3, 1, 3, 1, 3)
(1, 1)
>>> add_three(2, 5, 4, 11, 12, 3)
(262, 55)
"""
top: int = x_num * y_den * z_den + y_num * x_den * z_den + z_num * x_den * y_den
bottom: int = x_den * y_den * z_den
hcf: int = gcd(top, bottom)
top //= hcf
bottom //= hcf
return top, bottom
def solution(order: int = 35) -> int:
"""
Find the sum of the numerator and denominator of the sum of all s(x,y,z) for
golden triples (x,y,z) of the given order.
>>> solution(5)
296
>>> solution(10)
12519
>>> solution(20)
19408891927
"""
unique_s: set = set()
hcf: int
total: Fraction = Fraction(0)
fraction_sum: tuple[int, int]
for x_num in range(1, order + 1):
for x_den in range(x_num + 1, order + 1):
for y_num in range(1, order + 1):
for y_den in range(y_num + 1, order + 1):
# n=1
z_num = x_num * y_den + x_den * y_num
z_den = x_den * y_den
hcf = gcd(z_num, z_den)
z_num //= hcf
z_den //= hcf
if 0 < z_num < z_den <= order:
fraction_sum = add_three(
x_num, x_den, y_num, y_den, z_num, z_den
)
unique_s.add(fraction_sum)
# n=2
z_num = (
x_num * x_num * y_den * y_den + x_den * x_den * y_num * y_num
)
z_den = x_den * x_den * y_den * y_den
if is_sq(z_num) and is_sq(z_den):
z_num = int(sqrt(z_num))
z_den = int(sqrt(z_den))
hcf = gcd(z_num, z_den)
z_num //= hcf
z_den //= hcf
if 0 < z_num < z_den <= order:
fraction_sum = add_three(
x_num, x_den, y_num, y_den, z_num, z_den
)
unique_s.add(fraction_sum)
# n=-1
z_num = x_num * y_num
z_den = x_den * y_num + x_num * y_den
hcf = gcd(z_num, z_den)
z_num //= hcf
z_den //= hcf
if 0 < z_num < z_den <= order:
fraction_sum = add_three(
x_num, x_den, y_num, y_den, z_num, z_den
)
unique_s.add(fraction_sum)
# n=2
z_num = x_num * x_num * y_num * y_num
z_den = (
x_den * x_den * y_num * y_num + x_num * x_num * y_den * y_den
)
if is_sq(z_num) and is_sq(z_den):
z_num = int(sqrt(z_num))
z_den = int(sqrt(z_den))
hcf = gcd(z_num, z_den)
z_num //= hcf
z_den //= hcf
if 0 < z_num < z_den <= order:
fraction_sum = add_three(
x_num, x_den, y_num, y_den, z_num, z_den
)
unique_s.add(fraction_sum)
for num, den in unique_s:
total += Fraction(num, den)
return total.denominator + total.numerator
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 234: https://projecteuler.net/problem=234
For any integer n, consider the three functions
f1,n(x,y,z) = x^(n+1) + y^(n+1) - z^(n+1)
f2,n(x,y,z) = (xy + yz + zx)*(x^(n-1) + y^(n-1) - z^(n-1))
f3,n(x,y,z) = xyz*(xn-2 + yn-2 - zn-2)
and their combination
fn(x,y,z) = f1,n(x,y,z) + f2,n(x,y,z) - f3,n(x,y,z)
We call (x,y,z) a golden triple of order k if x, y, and z are all rational numbers
of the form a / b with 0 < a < b ≤ k and there is (at least) one integer n,
so that fn(x,y,z) = 0.
Let s(x,y,z) = x + y + z.
Let t = u / v be the sum of all distinct s(x,y,z) for all golden triples
(x,y,z) of order 35.
All the s(x,y,z) and t must be in reduced form.
Find u + v.
Solution:
By expanding the brackets it is easy to show that
fn(x, y, z) = (x + y + z) * (x^n + y^n - z^n).
Since x,y,z are positive, the requirement fn(x, y, z) = 0 is fulfilled if and
only if x^n + y^n = z^n.
By Fermat's Last Theorem, this means that the absolute value of n can not
exceed 2, i.e. n is in {-2, -1, 0, 1, 2}. We can eliminate n = 0 since then the
equation would reduce to 1 + 1 = 1, for which there are no solutions.
So all we have to do is iterate through the possible numerators and denominators
of x and y, calculate the corresponding z, and check if the corresponding numerator and
denominator are integer and satisfy 0 < z_num < z_den <= 0. We use a set "uniquq_s"
to make sure there are no duplicates, and the fractions.Fraction class to make sure
we get the right numerator and denominator.
Reference:
https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem
"""
from __future__ import annotations
from fractions import Fraction
from math import gcd, sqrt
def is_sq(number: int) -> bool:
"""
Check if number is a perfect square.
>>> is_sq(1)
True
>>> is_sq(1000001)
False
>>> is_sq(1000000)
True
"""
sq: int = int(number ** 0.5)
return number == sq * sq
def add_three(
x_num: int, x_den: int, y_num: int, y_den: int, z_num: int, z_den: int
) -> tuple[int, int]:
"""
Given the numerators and denominators of three fractions, return the
numerator and denominator of their sum in lowest form.
>>> add_three(1, 3, 1, 3, 1, 3)
(1, 1)
>>> add_three(2, 5, 4, 11, 12, 3)
(262, 55)
"""
top: int = x_num * y_den * z_den + y_num * x_den * z_den + z_num * x_den * y_den
bottom: int = x_den * y_den * z_den
hcf: int = gcd(top, bottom)
top //= hcf
bottom //= hcf
return top, bottom
def solution(order: int = 35) -> int:
"""
Find the sum of the numerator and denominator of the sum of all s(x,y,z) for
golden triples (x,y,z) of the given order.
>>> solution(5)
296
>>> solution(10)
12519
>>> solution(20)
19408891927
"""
unique_s: set = set()
hcf: int
total: Fraction = Fraction(0)
fraction_sum: tuple[int, int]
for x_num in range(1, order + 1):
for x_den in range(x_num + 1, order + 1):
for y_num in range(1, order + 1):
for y_den in range(y_num + 1, order + 1):
# n=1
z_num = x_num * y_den + x_den * y_num
z_den = x_den * y_den
hcf = gcd(z_num, z_den)
z_num //= hcf
z_den //= hcf
if 0 < z_num < z_den <= order:
fraction_sum = add_three(
x_num, x_den, y_num, y_den, z_num, z_den
)
unique_s.add(fraction_sum)
# n=2
z_num = (
x_num * x_num * y_den * y_den + x_den * x_den * y_num * y_num
)
z_den = x_den * x_den * y_den * y_den
if is_sq(z_num) and is_sq(z_den):
z_num = int(sqrt(z_num))
z_den = int(sqrt(z_den))
hcf = gcd(z_num, z_den)
z_num //= hcf
z_den //= hcf
if 0 < z_num < z_den <= order:
fraction_sum = add_three(
x_num, x_den, y_num, y_den, z_num, z_den
)
unique_s.add(fraction_sum)
# n=-1
z_num = x_num * y_num
z_den = x_den * y_num + x_num * y_den
hcf = gcd(z_num, z_den)
z_num //= hcf
z_den //= hcf
if 0 < z_num < z_den <= order:
fraction_sum = add_three(
x_num, x_den, y_num, y_den, z_num, z_den
)
unique_s.add(fraction_sum)
# n=2
z_num = x_num * x_num * y_num * y_num
z_den = (
x_den * x_den * y_num * y_num + x_num * x_num * y_den * y_den
)
if is_sq(z_num) and is_sq(z_den):
z_num = int(sqrt(z_num))
z_den = int(sqrt(z_den))
hcf = gcd(z_num, z_den)
z_num //= hcf
z_den //= hcf
if 0 < z_num < z_den <= order:
fraction_sum = add_three(
x_num, x_den, y_num, y_den, z_num, z_den
)
unique_s.add(fraction_sum)
for num, den in unique_s:
total += Fraction(num, den)
return total.denominator + total.numerator
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def kruskal(
num_nodes: int, edges: list[tuple[int, int, int]]
) -> list[tuple[int, int, int]]:
"""
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1)])
[(2, 3, 1), (0, 1, 3), (1, 2, 5)]
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1), (0, 2, 1), (0, 3, 2)])
[(2, 3, 1), (0, 2, 1), (0, 1, 3)]
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1), (0, 2, 1), (0, 3, 2),
... (2, 1, 1)])
[(2, 3, 1), (0, 2, 1), (2, 1, 1)]
"""
edges = sorted(edges, key=lambda edge: edge[2])
parent = list(range(num_nodes))
def find_parent(i):
if i != parent[i]:
parent[i] = find_parent(parent[i])
return parent[i]
minimum_spanning_tree_cost = 0
minimum_spanning_tree = []
for edge in edges:
parent_a = find_parent(edge[0])
parent_b = find_parent(edge[1])
if parent_a != parent_b:
minimum_spanning_tree_cost += edge[2]
minimum_spanning_tree.append(edge)
parent[parent_a] = parent_b
return minimum_spanning_tree
if __name__ == "__main__": # pragma: no cover
num_nodes, num_edges = list(map(int, input().strip().split()))
edges = []
for _ in range(num_edges):
node1, node2, cost = (int(x) for x in input().strip().split())
edges.append((node1, node2, cost))
kruskal(num_nodes, edges)
| def kruskal(
num_nodes: int, edges: list[tuple[int, int, int]]
) -> list[tuple[int, int, int]]:
"""
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1)])
[(2, 3, 1), (0, 1, 3), (1, 2, 5)]
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1), (0, 2, 1), (0, 3, 2)])
[(2, 3, 1), (0, 2, 1), (0, 1, 3)]
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1), (0, 2, 1), (0, 3, 2),
... (2, 1, 1)])
[(2, 3, 1), (0, 2, 1), (2, 1, 1)]
"""
edges = sorted(edges, key=lambda edge: edge[2])
parent = list(range(num_nodes))
def find_parent(i):
if i != parent[i]:
parent[i] = find_parent(parent[i])
return parent[i]
minimum_spanning_tree_cost = 0
minimum_spanning_tree = []
for edge in edges:
parent_a = find_parent(edge[0])
parent_b = find_parent(edge[1])
if parent_a != parent_b:
minimum_spanning_tree_cost += edge[2]
minimum_spanning_tree.append(edge)
parent[parent_a] = parent_b
return minimum_spanning_tree
if __name__ == "__main__": # pragma: no cover
num_nodes, num_edges = list(map(int, input().strip().split()))
edges = []
for _ in range(num_edges):
node1, node2, cost = (int(x) for x in input().strip().split())
edges.append((node1, node2, cost))
kruskal(num_nodes, edges)
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 120 Square remainders: https://projecteuler.net/problem=120
Description:
Let r be the remainder when (a−1)^n + (a+1)^n is divided by a^2.
For example, if a = 7 and n = 3, then r = 42: 6^3 + 8^3 = 728 ≡ 42 mod 49.
And as n varies, so too will r, but for a = 7 it turns out that r_max = 42.
For 3 ≤ a ≤ 1000, find ∑ r_max.
Solution:
On expanding the terms, we get 2 if n is even and 2an if n is odd.
For maximizing the value, 2an < a*a => n <= (a - 1)/2 (integer division)
"""
def solution(n: int = 1000) -> int:
"""
Returns ∑ r_max for 3 <= a <= n as explained above
>>> solution(10)
300
>>> solution(100)
330750
>>> solution(1000)
333082500
"""
return sum(2 * a * ((a - 1) // 2) for a in range(3, n + 1))
if __name__ == "__main__":
print(solution())
| """
Problem 120 Square remainders: https://projecteuler.net/problem=120
Description:
Let r be the remainder when (a−1)^n + (a+1)^n is divided by a^2.
For example, if a = 7 and n = 3, then r = 42: 6^3 + 8^3 = 728 ≡ 42 mod 49.
And as n varies, so too will r, but for a = 7 it turns out that r_max = 42.
For 3 ≤ a ≤ 1000, find ∑ r_max.
Solution:
On expanding the terms, we get 2 if n is even and 2an if n is odd.
For maximizing the value, 2an < a*a => n <= (a - 1)/2 (integer division)
"""
def solution(n: int = 1000) -> int:
"""
Returns ∑ r_max for 3 <= a <= n as explained above
>>> solution(10)
300
>>> solution(100)
330750
>>> solution(1000)
333082500
"""
return sum(2 * a * ((a - 1) // 2) for a in range(3, n + 1))
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Binary Exponentiation."""
# Author : Junth Basnet
# Time Complexity : O(logn)
def binary_exponentiation(a, n):
if n == 0:
return 1
elif n % 2 == 1:
return binary_exponentiation(a, n - 1) * a
else:
b = binary_exponentiation(a, n / 2)
return b * b
if __name__ == "__main__":
try:
BASE = int(input("Enter Base : ").strip())
POWER = int(input("Enter Power : ").strip())
except ValueError:
print("Invalid literal for integer")
RESULT = binary_exponentiation(BASE, POWER)
print(f"{BASE}^({POWER}) : {RESULT}")
| """Binary Exponentiation."""
# Author : Junth Basnet
# Time Complexity : O(logn)
def binary_exponentiation(a, n):
if n == 0:
return 1
elif n % 2 == 1:
return binary_exponentiation(a, n - 1) * a
else:
b = binary_exponentiation(a, n / 2)
return b * b
if __name__ == "__main__":
try:
BASE = int(input("Enter Base : ").strip())
POWER = int(input("Enter Power : ").strip())
except ValueError:
print("Invalid literal for integer")
RESULT = binary_exponentiation(BASE, POWER)
print(f"{BASE}^({POWER}) : {RESULT}")
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The RGB color model is an additive color model in which red, green, and blue light
are added together in various ways to reproduce a broad array of colors. The name
of the model comes from the initials of the three additive primary colors, red,
green, and blue. Meanwhile, the HSV representation models how colors appear under
light. In it, colors are represented using three components: hue, saturation and
(brightness-)value. This file provides functions for converting colors from one
representation to the other.
(description adapted from https://en.wikipedia.org/wiki/RGB_color_model and
https://en.wikipedia.org/wiki/HSL_and_HSV).
"""
def hsv_to_rgb(hue: float, saturation: float, value: float) -> list[int]:
"""
Conversion from the HSV-representation to the RGB-representation.
Expected RGB-values taken from
https://www.rapidtables.com/convert/color/hsv-to-rgb.html
>>> hsv_to_rgb(0, 0, 0)
[0, 0, 0]
>>> hsv_to_rgb(0, 0, 1)
[255, 255, 255]
>>> hsv_to_rgb(0, 1, 1)
[255, 0, 0]
>>> hsv_to_rgb(60, 1, 1)
[255, 255, 0]
>>> hsv_to_rgb(120, 1, 1)
[0, 255, 0]
>>> hsv_to_rgb(240, 1, 1)
[0, 0, 255]
>>> hsv_to_rgb(300, 1, 1)
[255, 0, 255]
>>> hsv_to_rgb(180, 0.5, 0.5)
[64, 128, 128]
>>> hsv_to_rgb(234, 0.14, 0.88)
[193, 196, 224]
>>> hsv_to_rgb(330, 0.75, 0.5)
[128, 32, 80]
"""
if hue < 0 or hue > 360:
raise Exception("hue should be between 0 and 360")
if saturation < 0 or saturation > 1:
raise Exception("saturation should be between 0 and 1")
if value < 0 or value > 1:
raise Exception("value should be between 0 and 1")
chroma = value * saturation
hue_section = hue / 60
second_largest_component = chroma * (1 - abs(hue_section % 2 - 1))
match_value = value - chroma
if hue_section >= 0 and hue_section <= 1:
red = round(255 * (chroma + match_value))
green = round(255 * (second_largest_component + match_value))
blue = round(255 * (match_value))
elif hue_section > 1 and hue_section <= 2:
red = round(255 * (second_largest_component + match_value))
green = round(255 * (chroma + match_value))
blue = round(255 * (match_value))
elif hue_section > 2 and hue_section <= 3:
red = round(255 * (match_value))
green = round(255 * (chroma + match_value))
blue = round(255 * (second_largest_component + match_value))
elif hue_section > 3 and hue_section <= 4:
red = round(255 * (match_value))
green = round(255 * (second_largest_component + match_value))
blue = round(255 * (chroma + match_value))
elif hue_section > 4 and hue_section <= 5:
red = round(255 * (second_largest_component + match_value))
green = round(255 * (match_value))
blue = round(255 * (chroma + match_value))
else:
red = round(255 * (chroma + match_value))
green = round(255 * (match_value))
blue = round(255 * (second_largest_component + match_value))
return [red, green, blue]
def rgb_to_hsv(red: int, green: int, blue: int) -> list[float]:
"""
Conversion from the RGB-representation to the HSV-representation.
The tested values are the reverse values from the hsv_to_rgb-doctests.
Function "approximately_equal_hsv" is needed because of small deviations due to
rounding for the RGB-values.
>>> approximately_equal_hsv(rgb_to_hsv(0, 0, 0), [0, 0, 0])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 255, 255), [0, 0, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 0, 0), [0, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 255, 0), [60, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(0, 255, 0), [120, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(0, 0, 255), [240, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 0, 255), [300, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(64, 128, 128), [180, 0.5, 0.5])
True
>>> approximately_equal_hsv(rgb_to_hsv(193, 196, 224), [234, 0.14, 0.88])
True
>>> approximately_equal_hsv(rgb_to_hsv(128, 32, 80), [330, 0.75, 0.5])
True
"""
if red < 0 or red > 255:
raise Exception("red should be between 0 and 255")
if green < 0 or green > 255:
raise Exception("green should be between 0 and 255")
if blue < 0 or blue > 255:
raise Exception("blue should be between 0 and 255")
float_red = red / 255
float_green = green / 255
float_blue = blue / 255
value = max(max(float_red, float_green), float_blue)
chroma = value - min(min(float_red, float_green), float_blue)
saturation = 0 if value == 0 else chroma / value
if chroma == 0:
hue = 0.0
elif value == float_red:
hue = 60 * (0 + (float_green - float_blue) / chroma)
elif value == float_green:
hue = 60 * (2 + (float_blue - float_red) / chroma)
else:
hue = 60 * (4 + (float_red - float_green) / chroma)
hue = (hue + 360) % 360
return [hue, saturation, value]
def approximately_equal_hsv(hsv_1: list[float], hsv_2: list[float]) -> bool:
"""
Utility-function to check that two hsv-colors are approximately equal
>>> approximately_equal_hsv([0, 0, 0], [0, 0, 0])
True
>>> approximately_equal_hsv([180, 0.5, 0.3], [179.9999, 0.500001, 0.30001])
True
>>> approximately_equal_hsv([0, 0, 0], [1, 0, 0])
False
>>> approximately_equal_hsv([180, 0.5, 0.3], [179.9999, 0.6, 0.30001])
False
"""
check_hue = abs(hsv_1[0] - hsv_2[0]) < 0.2
check_saturation = abs(hsv_1[1] - hsv_2[1]) < 0.002
check_value = abs(hsv_1[2] - hsv_2[2]) < 0.002
return check_hue and check_saturation and check_value
| """
The RGB color model is an additive color model in which red, green, and blue light
are added together in various ways to reproduce a broad array of colors. The name
of the model comes from the initials of the three additive primary colors, red,
green, and blue. Meanwhile, the HSV representation models how colors appear under
light. In it, colors are represented using three components: hue, saturation and
(brightness-)value. This file provides functions for converting colors from one
representation to the other.
(description adapted from https://en.wikipedia.org/wiki/RGB_color_model and
https://en.wikipedia.org/wiki/HSL_and_HSV).
"""
def hsv_to_rgb(hue: float, saturation: float, value: float) -> list[int]:
"""
Conversion from the HSV-representation to the RGB-representation.
Expected RGB-values taken from
https://www.rapidtables.com/convert/color/hsv-to-rgb.html
>>> hsv_to_rgb(0, 0, 0)
[0, 0, 0]
>>> hsv_to_rgb(0, 0, 1)
[255, 255, 255]
>>> hsv_to_rgb(0, 1, 1)
[255, 0, 0]
>>> hsv_to_rgb(60, 1, 1)
[255, 255, 0]
>>> hsv_to_rgb(120, 1, 1)
[0, 255, 0]
>>> hsv_to_rgb(240, 1, 1)
[0, 0, 255]
>>> hsv_to_rgb(300, 1, 1)
[255, 0, 255]
>>> hsv_to_rgb(180, 0.5, 0.5)
[64, 128, 128]
>>> hsv_to_rgb(234, 0.14, 0.88)
[193, 196, 224]
>>> hsv_to_rgb(330, 0.75, 0.5)
[128, 32, 80]
"""
if hue < 0 or hue > 360:
raise Exception("hue should be between 0 and 360")
if saturation < 0 or saturation > 1:
raise Exception("saturation should be between 0 and 1")
if value < 0 or value > 1:
raise Exception("value should be between 0 and 1")
chroma = value * saturation
hue_section = hue / 60
second_largest_component = chroma * (1 - abs(hue_section % 2 - 1))
match_value = value - chroma
if hue_section >= 0 and hue_section <= 1:
red = round(255 * (chroma + match_value))
green = round(255 * (second_largest_component + match_value))
blue = round(255 * (match_value))
elif hue_section > 1 and hue_section <= 2:
red = round(255 * (second_largest_component + match_value))
green = round(255 * (chroma + match_value))
blue = round(255 * (match_value))
elif hue_section > 2 and hue_section <= 3:
red = round(255 * (match_value))
green = round(255 * (chroma + match_value))
blue = round(255 * (second_largest_component + match_value))
elif hue_section > 3 and hue_section <= 4:
red = round(255 * (match_value))
green = round(255 * (second_largest_component + match_value))
blue = round(255 * (chroma + match_value))
elif hue_section > 4 and hue_section <= 5:
red = round(255 * (second_largest_component + match_value))
green = round(255 * (match_value))
blue = round(255 * (chroma + match_value))
else:
red = round(255 * (chroma + match_value))
green = round(255 * (match_value))
blue = round(255 * (second_largest_component + match_value))
return [red, green, blue]
def rgb_to_hsv(red: int, green: int, blue: int) -> list[float]:
"""
Conversion from the RGB-representation to the HSV-representation.
The tested values are the reverse values from the hsv_to_rgb-doctests.
Function "approximately_equal_hsv" is needed because of small deviations due to
rounding for the RGB-values.
>>> approximately_equal_hsv(rgb_to_hsv(0, 0, 0), [0, 0, 0])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 255, 255), [0, 0, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 0, 0), [0, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 255, 0), [60, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(0, 255, 0), [120, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(0, 0, 255), [240, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(255, 0, 255), [300, 1, 1])
True
>>> approximately_equal_hsv(rgb_to_hsv(64, 128, 128), [180, 0.5, 0.5])
True
>>> approximately_equal_hsv(rgb_to_hsv(193, 196, 224), [234, 0.14, 0.88])
True
>>> approximately_equal_hsv(rgb_to_hsv(128, 32, 80), [330, 0.75, 0.5])
True
"""
if red < 0 or red > 255:
raise Exception("red should be between 0 and 255")
if green < 0 or green > 255:
raise Exception("green should be between 0 and 255")
if blue < 0 or blue > 255:
raise Exception("blue should be between 0 and 255")
float_red = red / 255
float_green = green / 255
float_blue = blue / 255
value = max(max(float_red, float_green), float_blue)
chroma = value - min(min(float_red, float_green), float_blue)
saturation = 0 if value == 0 else chroma / value
if chroma == 0:
hue = 0.0
elif value == float_red:
hue = 60 * (0 + (float_green - float_blue) / chroma)
elif value == float_green:
hue = 60 * (2 + (float_blue - float_red) / chroma)
else:
hue = 60 * (4 + (float_red - float_green) / chroma)
hue = (hue + 360) % 360
return [hue, saturation, value]
def approximately_equal_hsv(hsv_1: list[float], hsv_2: list[float]) -> bool:
"""
Utility-function to check that two hsv-colors are approximately equal
>>> approximately_equal_hsv([0, 0, 0], [0, 0, 0])
True
>>> approximately_equal_hsv([180, 0.5, 0.3], [179.9999, 0.500001, 0.30001])
True
>>> approximately_equal_hsv([0, 0, 0], [1, 0, 0])
False
>>> approximately_equal_hsv([180, 0.5, 0.3], [179.9999, 0.6, 0.30001])
False
"""
check_hue = abs(hsv_1[0] - hsv_2[0]) < 0.2
check_saturation = abs(hsv_1[1] - hsv_2[1]) < 0.002
check_value = abs(hsv_1[2] - hsv_2[2]) < 0.002
return check_hue and check_saturation and check_value
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
from .hash_table import HashTable
from .number_theory.prime_numbers import check_prime, next_prime
class DoubleHash(HashTable):
"""
Hash Table example with open addressing and Double Hash
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def __hash_function_2(self, value, data):
next_prime_gt = (
next_prime(value % self.size_table)
if not check_prime(value % self.size_table)
else value % self.size_table
) # gt = bigger than
return next_prime_gt - (data % next_prime_gt)
def __hash_double_function(self, key, data, increment):
return (increment * self.__hash_function_2(key, data)) % self.size_table
def _collision_resolution(self, key, data=None):
i = 1
new_key = self.hash_function(data)
while self.values[new_key] is not None and self.values[new_key] != key:
new_key = (
self.__hash_double_function(key, data, i)
if self.balanced_factor() >= self.lim_charge
else None
)
if new_key is None:
break
else:
i += 1
return new_key
| #!/usr/bin/env python3
from .hash_table import HashTable
from .number_theory.prime_numbers import check_prime, next_prime
class DoubleHash(HashTable):
"""
Hash Table example with open addressing and Double Hash
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def __hash_function_2(self, value, data):
next_prime_gt = (
next_prime(value % self.size_table)
if not check_prime(value % self.size_table)
else value % self.size_table
) # gt = bigger than
return next_prime_gt - (data % next_prime_gt)
def __hash_double_function(self, key, data, increment):
return (increment * self.__hash_function_2(key, data)) % self.size_table
def _collision_resolution(self, key, data=None):
i = 1
new_key = self.hash_function(data)
while self.values[new_key] is not None and self.values[new_key] != key:
new_key = (
self.__hash_double_function(key, data, i)
if self.balanced_factor() >= self.lim_charge
else None
)
if new_key is None:
break
else:
i += 1
return new_key
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from string import ascii_lowercase, ascii_uppercase
def capitalize(sentence: str) -> str:
"""
This function will capitalize the first letter of a sentence or a word
>>> capitalize("hello world")
'Hello world'
>>> capitalize("123 hello world")
'123 hello world'
>>> capitalize(" hello world")
' hello world'
>>> capitalize("a")
'A'
>>> capitalize("")
''
"""
if not sentence:
return ""
lower_to_upper = {lc: uc for lc, uc in zip(ascii_lowercase, ascii_uppercase)}
return lower_to_upper.get(sentence[0], sentence[0]) + sentence[1:]
if __name__ == "__main__":
from doctest import testmod
testmod()
| from string import ascii_lowercase, ascii_uppercase
def capitalize(sentence: str) -> str:
"""
This function will capitalize the first letter of a sentence or a word
>>> capitalize("hello world")
'Hello world'
>>> capitalize("123 hello world")
'123 hello world'
>>> capitalize(" hello world")
' hello world'
>>> capitalize("a")
'A'
>>> capitalize("")
''
"""
if not sentence:
return ""
lower_to_upper = {lc: uc for lc, uc in zip(ascii_lowercase, ascii_uppercase)}
return lower_to_upper.get(sentence[0], sentence[0]) + sentence[1:]
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Best-first_search#Greedy_BFS
"""
from __future__ import annotations
Path = list[tuple[int, int]]
grid = [
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0], # 0 are free path whereas 1's are obstacles
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
]
delta = ([-1, 0], [0, -1], [1, 0], [0, 1]) # up, left, down, right
class Node:
"""
>>> k = Node(0, 0, 4, 5, 0, None)
>>> k.calculate_heuristic()
9
>>> n = Node(1, 4, 3, 4, 2, None)
>>> n.calculate_heuristic()
2
>>> l = [k, n]
>>> n == l[0]
False
>>> l.sort()
>>> n == l[0]
True
"""
def __init__(
self,
pos_x: int,
pos_y: int,
goal_x: int,
goal_y: int,
g_cost: float,
parent: Node | None,
):
self.pos_x = pos_x
self.pos_y = pos_y
self.pos = (pos_y, pos_x)
self.goal_x = goal_x
self.goal_y = goal_y
self.g_cost = g_cost
self.parent = parent
self.f_cost = self.calculate_heuristic()
def calculate_heuristic(self) -> float:
"""
The heuristic here is the Manhattan Distance
Could elaborate to offer more than one choice
"""
dy = abs(self.pos_x - self.goal_x)
dx = abs(self.pos_y - self.goal_y)
return dx + dy
def __lt__(self, other) -> bool:
return self.f_cost < other.f_cost
class GreedyBestFirst:
"""
>>> gbf = GreedyBestFirst((0, 0), (len(grid) - 1, len(grid[0]) - 1))
>>> [x.pos for x in gbf.get_successors(gbf.start)]
[(1, 0), (0, 1)]
>>> (gbf.start.pos_y + delta[3][0], gbf.start.pos_x + delta[3][1])
(0, 1)
>>> (gbf.start.pos_y + delta[2][0], gbf.start.pos_x + delta[2][1])
(1, 0)
>>> gbf.retrace_path(gbf.start)
[(0, 0)]
>>> gbf.search() # doctest: +NORMALIZE_WHITESPACE
[(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (4, 1), (5, 1), (6, 1),
(6, 2), (6, 3), (5, 3), (5, 4), (5, 5), (6, 5), (6, 6)]
"""
def __init__(self, start: tuple[int, int], goal: tuple[int, int]):
self.start = Node(start[1], start[0], goal[1], goal[0], 0, None)
self.target = Node(goal[1], goal[0], goal[1], goal[0], 99999, None)
self.open_nodes = [self.start]
self.closed_nodes: list[Node] = []
self.reached = False
def search(self) -> Path | None:
"""
Search for the path,
if a path is not found, only the starting position is returned
"""
while self.open_nodes:
# Open Nodes are sorted using __lt__
self.open_nodes.sort()
current_node = self.open_nodes.pop(0)
if current_node.pos == self.target.pos:
self.reached = True
return self.retrace_path(current_node)
self.closed_nodes.append(current_node)
successors = self.get_successors(current_node)
for child_node in successors:
if child_node in self.closed_nodes:
continue
if child_node not in self.open_nodes:
self.open_nodes.append(child_node)
else:
# retrieve the best current path
better_node = self.open_nodes.pop(self.open_nodes.index(child_node))
if child_node.g_cost < better_node.g_cost:
self.open_nodes.append(child_node)
else:
self.open_nodes.append(better_node)
if not self.reached:
return [self.start.pos]
return None
def get_successors(self, parent: Node) -> list[Node]:
"""
Returns a list of successors (both in the grid and free spaces)
"""
successors = []
for action in delta:
pos_x = parent.pos_x + action[1]
pos_y = parent.pos_y + action[0]
if not (0 <= pos_x <= len(grid[0]) - 1 and 0 <= pos_y <= len(grid) - 1):
continue
if grid[pos_y][pos_x] != 0:
continue
successors.append(
Node(
pos_x,
pos_y,
self.target.pos_y,
self.target.pos_x,
parent.g_cost + 1,
parent,
)
)
return successors
def retrace_path(self, node: Node | None) -> Path:
"""
Retrace the path from parents to parents until start node
"""
current_node = node
path = []
while current_node is not None:
path.append((current_node.pos_y, current_node.pos_x))
current_node = current_node.parent
path.reverse()
return path
if __name__ == "__main__":
init = (0, 0)
goal = (len(grid) - 1, len(grid[0]) - 1)
for elem in grid:
print(elem)
print("------")
greedy_bf = GreedyBestFirst(init, goal)
path = greedy_bf.search()
if path:
for pos_x, pos_y in path:
grid[pos_x][pos_y] = 2
for elem in grid:
print(elem)
| """
https://en.wikipedia.org/wiki/Best-first_search#Greedy_BFS
"""
from __future__ import annotations
Path = list[tuple[int, int]]
grid = [
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0], # 0 are free path whereas 1's are obstacles
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
]
delta = ([-1, 0], [0, -1], [1, 0], [0, 1]) # up, left, down, right
class Node:
"""
>>> k = Node(0, 0, 4, 5, 0, None)
>>> k.calculate_heuristic()
9
>>> n = Node(1, 4, 3, 4, 2, None)
>>> n.calculate_heuristic()
2
>>> l = [k, n]
>>> n == l[0]
False
>>> l.sort()
>>> n == l[0]
True
"""
def __init__(
self,
pos_x: int,
pos_y: int,
goal_x: int,
goal_y: int,
g_cost: float,
parent: Node | None,
):
self.pos_x = pos_x
self.pos_y = pos_y
self.pos = (pos_y, pos_x)
self.goal_x = goal_x
self.goal_y = goal_y
self.g_cost = g_cost
self.parent = parent
self.f_cost = self.calculate_heuristic()
def calculate_heuristic(self) -> float:
"""
The heuristic here is the Manhattan Distance
Could elaborate to offer more than one choice
"""
dy = abs(self.pos_x - self.goal_x)
dx = abs(self.pos_y - self.goal_y)
return dx + dy
def __lt__(self, other) -> bool:
return self.f_cost < other.f_cost
class GreedyBestFirst:
"""
>>> gbf = GreedyBestFirst((0, 0), (len(grid) - 1, len(grid[0]) - 1))
>>> [x.pos for x in gbf.get_successors(gbf.start)]
[(1, 0), (0, 1)]
>>> (gbf.start.pos_y + delta[3][0], gbf.start.pos_x + delta[3][1])
(0, 1)
>>> (gbf.start.pos_y + delta[2][0], gbf.start.pos_x + delta[2][1])
(1, 0)
>>> gbf.retrace_path(gbf.start)
[(0, 0)]
>>> gbf.search() # doctest: +NORMALIZE_WHITESPACE
[(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (4, 1), (5, 1), (6, 1),
(6, 2), (6, 3), (5, 3), (5, 4), (5, 5), (6, 5), (6, 6)]
"""
def __init__(self, start: tuple[int, int], goal: tuple[int, int]):
self.start = Node(start[1], start[0], goal[1], goal[0], 0, None)
self.target = Node(goal[1], goal[0], goal[1], goal[0], 99999, None)
self.open_nodes = [self.start]
self.closed_nodes: list[Node] = []
self.reached = False
def search(self) -> Path | None:
"""
Search for the path,
if a path is not found, only the starting position is returned
"""
while self.open_nodes:
# Open Nodes are sorted using __lt__
self.open_nodes.sort()
current_node = self.open_nodes.pop(0)
if current_node.pos == self.target.pos:
self.reached = True
return self.retrace_path(current_node)
self.closed_nodes.append(current_node)
successors = self.get_successors(current_node)
for child_node in successors:
if child_node in self.closed_nodes:
continue
if child_node not in self.open_nodes:
self.open_nodes.append(child_node)
else:
# retrieve the best current path
better_node = self.open_nodes.pop(self.open_nodes.index(child_node))
if child_node.g_cost < better_node.g_cost:
self.open_nodes.append(child_node)
else:
self.open_nodes.append(better_node)
if not self.reached:
return [self.start.pos]
return None
def get_successors(self, parent: Node) -> list[Node]:
"""
Returns a list of successors (both in the grid and free spaces)
"""
successors = []
for action in delta:
pos_x = parent.pos_x + action[1]
pos_y = parent.pos_y + action[0]
if not (0 <= pos_x <= len(grid[0]) - 1 and 0 <= pos_y <= len(grid) - 1):
continue
if grid[pos_y][pos_x] != 0:
continue
successors.append(
Node(
pos_x,
pos_y,
self.target.pos_y,
self.target.pos_x,
parent.g_cost + 1,
parent,
)
)
return successors
def retrace_path(self, node: Node | None) -> Path:
"""
Retrace the path from parents to parents until start node
"""
current_node = node
path = []
while current_node is not None:
path.append((current_node.pos_y, current_node.pos_x))
current_node = current_node.parent
path.reverse()
return path
if __name__ == "__main__":
init = (0, 0)
goal = (len(grid) - 1, len(grid[0]) - 1)
for elem in grid:
print(elem)
print("------")
greedy_bf = GreedyBestFirst(init, goal)
path = greedy_bf.search()
if path:
for pos_x, pos_y in path:
grid[pos_x][pos_y] = 2
for elem in grid:
print(elem)
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # An island in matrix is a group of linked areas, all having the same value.
# This code counts number of islands in a given matrix, with including diagonal
# connections.
class matrix: # Public class to implement a graph
def __init__(self, row: int, col: int, graph: list):
self.ROW = row
self.COL = col
self.graph = graph
def is_safe(self, i, j, visited) -> bool:
return (
0 <= i < self.ROW
and 0 <= j < self.COL
and not visited[i][j]
and self.graph[i][j]
)
def diffs(self, i, j, visited): # Checking all 8 elements surrounding nth element
rowNbr = [-1, -1, -1, 0, 0, 1, 1, 1] # Coordinate order
colNbr = [-1, 0, 1, -1, 1, -1, 0, 1]
visited[i][j] = True # Make those cells visited
for k in range(8):
if self.is_safe(i + rowNbr[k], j + colNbr[k], visited):
self.diffs(i + rowNbr[k], j + colNbr[k], visited)
def count_islands(self) -> int: # And finally, count all islands.
visited = [[False for j in range(self.COL)] for i in range(self.ROW)]
count = 0
for i in range(self.ROW):
for j in range(self.COL):
if visited[i][j] is False and self.graph[i][j] == 1:
self.diffs(i, j, visited)
count += 1
return count
| # An island in matrix is a group of linked areas, all having the same value.
# This code counts number of islands in a given matrix, with including diagonal
# connections.
class matrix: # Public class to implement a graph
def __init__(self, row: int, col: int, graph: list):
self.ROW = row
self.COL = col
self.graph = graph
def is_safe(self, i, j, visited) -> bool:
return (
0 <= i < self.ROW
and 0 <= j < self.COL
and not visited[i][j]
and self.graph[i][j]
)
def diffs(self, i, j, visited): # Checking all 8 elements surrounding nth element
rowNbr = [-1, -1, -1, 0, 0, 1, 1, 1] # Coordinate order
colNbr = [-1, 0, 1, -1, 1, -1, 0, 1]
visited[i][j] = True # Make those cells visited
for k in range(8):
if self.is_safe(i + rowNbr[k], j + colNbr[k], visited):
self.diffs(i + rowNbr[k], j + colNbr[k], visited)
def count_islands(self) -> int: # And finally, count all islands.
visited = [[False for j in range(self.COL)] for i in range(self.ROW)]
count = 0
for i in range(self.ROW):
for j in range(self.COL):
if visited[i][j] is False and self.graph[i][j] == 1:
self.diffs(i, j, visited)
count += 1
return count
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Ugly numbers are numbers whose only prime factors are 2, 3 or 5. The sequence
1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, … shows the first 11 ugly numbers. By convention,
1 is included.
Given an integer n, we have to find the nth ugly number.
For more details, refer this article
https://www.geeksforgeeks.org/ugly-numbers/
"""
def ugly_numbers(n: int) -> int:
"""
Returns the nth ugly number.
>>> ugly_numbers(100)
1536
>>> ugly_numbers(0)
1
>>> ugly_numbers(20)
36
>>> ugly_numbers(-5)
1
>>> ugly_numbers(-5.5)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
"""
ugly_nums = [1]
i2, i3, i5 = 0, 0, 0
next_2 = ugly_nums[i2] * 2
next_3 = ugly_nums[i3] * 3
next_5 = ugly_nums[i5] * 5
for i in range(1, n):
next_num = min(next_2, next_3, next_5)
ugly_nums.append(next_num)
if next_num == next_2:
i2 += 1
next_2 = ugly_nums[i2] * 2
if next_num == next_3:
i3 += 1
next_3 = ugly_nums[i3] * 3
if next_num == next_5:
i5 += 1
next_5 = ugly_nums[i5] * 5
return ugly_nums[-1]
if __name__ == "__main__":
from doctest import testmod
testmod(verbose=True)
print(f"{ugly_numbers(200) = }")
| """
Ugly numbers are numbers whose only prime factors are 2, 3 or 5. The sequence
1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, … shows the first 11 ugly numbers. By convention,
1 is included.
Given an integer n, we have to find the nth ugly number.
For more details, refer this article
https://www.geeksforgeeks.org/ugly-numbers/
"""
def ugly_numbers(n: int) -> int:
"""
Returns the nth ugly number.
>>> ugly_numbers(100)
1536
>>> ugly_numbers(0)
1
>>> ugly_numbers(20)
36
>>> ugly_numbers(-5)
1
>>> ugly_numbers(-5.5)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
"""
ugly_nums = [1]
i2, i3, i5 = 0, 0, 0
next_2 = ugly_nums[i2] * 2
next_3 = ugly_nums[i3] * 3
next_5 = ugly_nums[i5] * 5
for i in range(1, n):
next_num = min(next_2, next_3, next_5)
ugly_nums.append(next_num)
if next_num == next_2:
i2 += 1
next_2 = ugly_nums[i2] * 2
if next_num == next_3:
i3 += 1
next_3 = ugly_nums[i3] * 3
if next_num == next_5:
i5 += 1
next_5 = ugly_nums[i5] * 5
return ugly_nums[-1]
if __name__ == "__main__":
from doctest import testmod
testmod(verbose=True)
print(f"{ugly_numbers(200) = }")
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from typing import Literal
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def translate_message(
key: str, message: str, mode: Literal["encrypt", "decrypt"]
) -> str:
"""
>>> translate_message("QWERTYUIOPASDFGHJKLZXCVBNM","Hello World","encrypt")
'Pcssi Bidsm'
"""
chars_a = LETTERS if mode == "decrypt" else key
chars_b = key if mode == "decrypt" else LETTERS
translated = ""
# loop through each symbol in the message
for symbol in message:
if symbol.upper() in chars_a:
# encrypt/decrypt the symbol
sym_index = chars_a.find(symbol.upper())
if symbol.isupper():
translated += chars_b[sym_index].upper()
else:
translated += chars_b[sym_index].lower()
else:
# symbol is not in LETTERS, just add it
translated += symbol
return translated
def encrypt_message(key: str, message: str) -> str:
"""
>>> encrypt_message("QWERTYUIOPASDFGHJKLZXCVBNM", "Hello World")
'Pcssi Bidsm'
"""
return translate_message(key, message, "encrypt")
def decrypt_message(key: str, message: str) -> str:
"""
>>> decrypt_message("QWERTYUIOPASDFGHJKLZXCVBNM", "Hello World")
'Itssg Vgksr'
"""
return translate_message(key, message, "decrypt")
def main() -> None:
message = "Hello World"
key = "QWERTYUIOPASDFGHJKLZXCVBNM"
mode = "decrypt" # set to 'encrypt' or 'decrypt'
if mode == "encrypt":
translated = encrypt_message(key, message)
elif mode == "decrypt":
translated = decrypt_message(key, message)
print(f"Using the key {key}, the {mode}ed message is: {translated}")
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| from typing import Literal
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def translate_message(
key: str, message: str, mode: Literal["encrypt", "decrypt"]
) -> str:
"""
>>> translate_message("QWERTYUIOPASDFGHJKLZXCVBNM","Hello World","encrypt")
'Pcssi Bidsm'
"""
chars_a = LETTERS if mode == "decrypt" else key
chars_b = key if mode == "decrypt" else LETTERS
translated = ""
# loop through each symbol in the message
for symbol in message:
if symbol.upper() in chars_a:
# encrypt/decrypt the symbol
sym_index = chars_a.find(symbol.upper())
if symbol.isupper():
translated += chars_b[sym_index].upper()
else:
translated += chars_b[sym_index].lower()
else:
# symbol is not in LETTERS, just add it
translated += symbol
return translated
def encrypt_message(key: str, message: str) -> str:
"""
>>> encrypt_message("QWERTYUIOPASDFGHJKLZXCVBNM", "Hello World")
'Pcssi Bidsm'
"""
return translate_message(key, message, "encrypt")
def decrypt_message(key: str, message: str) -> str:
"""
>>> decrypt_message("QWERTYUIOPASDFGHJKLZXCVBNM", "Hello World")
'Itssg Vgksr'
"""
return translate_message(key, message, "decrypt")
def main() -> None:
message = "Hello World"
key = "QWERTYUIOPASDFGHJKLZXCVBNM"
mode = "decrypt" # set to 'encrypt' or 'decrypt'
if mode == "encrypt":
translated = encrypt_message(key, message)
elif mode == "decrypt":
translated = decrypt_message(key, message)
print(f"Using the key {key}, the {mode}ed message is: {translated}")
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is pure Python implementation of linear search algorithm
For doctests run following command:
python3 -m doctest -v linear_search.py
For manual testing run:
python3 linear_search.py
"""
def linear_search(sequence: list, target: int) -> int:
"""A pure Python implementation of a linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param target: item value to search
:return: index of found item or None if item is not found
Examples:
>>> linear_search([0, 5, 7, 10, 15], 0)
0
>>> linear_search([0, 5, 7, 10, 15], 15)
4
>>> linear_search([0, 5, 7, 10, 15], 5)
1
>>> linear_search([0, 5, 7, 10, 15], 6)
-1
"""
for index, item in enumerate(sequence):
if item == target:
return index
return -1
def rec_linear_search(sequence: list, low: int, high: int, target: int) -> int:
"""
A pure Python implementation of a recursive linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param low: Lower bound of the array
:param high: Higher bound of the array
:param target: The element to be found
:return: Index of the key or -1 if key not found
Examples:
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 0)
0
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 700)
4
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 30)
1
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, -6)
-1
"""
if not (0 <= high < len(sequence) and 0 <= low < len(sequence)):
raise Exception("Invalid upper or lower bound!")
if high < low:
return -1
if sequence[low] == target:
return low
if sequence[high] == target:
return high
return rec_linear_search(sequence, low + 1, high - 1, target)
if __name__ == "__main__":
user_input = input("Enter numbers separated by comma:\n").strip()
sequence = [int(item.strip()) for item in user_input.split(",")]
target = int(input("Enter a single number to be found in the list:\n").strip())
result = linear_search(sequence, target)
if result != -1:
print(f"linear_search({sequence}, {target}) = {result}")
else:
print(f"{target} was not found in {sequence}")
| """
This is pure Python implementation of linear search algorithm
For doctests run following command:
python3 -m doctest -v linear_search.py
For manual testing run:
python3 linear_search.py
"""
def linear_search(sequence: list, target: int) -> int:
"""A pure Python implementation of a linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param target: item value to search
:return: index of found item or None if item is not found
Examples:
>>> linear_search([0, 5, 7, 10, 15], 0)
0
>>> linear_search([0, 5, 7, 10, 15], 15)
4
>>> linear_search([0, 5, 7, 10, 15], 5)
1
>>> linear_search([0, 5, 7, 10, 15], 6)
-1
"""
for index, item in enumerate(sequence):
if item == target:
return index
return -1
def rec_linear_search(sequence: list, low: int, high: int, target: int) -> int:
"""
A pure Python implementation of a recursive linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param low: Lower bound of the array
:param high: Higher bound of the array
:param target: The element to be found
:return: Index of the key or -1 if key not found
Examples:
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 0)
0
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 700)
4
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 30)
1
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, -6)
-1
"""
if not (0 <= high < len(sequence) and 0 <= low < len(sequence)):
raise Exception("Invalid upper or lower bound!")
if high < low:
return -1
if sequence[low] == target:
return low
if sequence[high] == target:
return high
return rec_linear_search(sequence, low + 1, high - 1, target)
if __name__ == "__main__":
user_input = input("Enter numbers separated by comma:\n").strip()
sequence = [int(item.strip()) for item in user_input.split(",")]
target = int(input("Enter a single number to be found in the list:\n").strip())
result = linear_search(sequence, target)
if result != -1:
print(f"linear_search({sequence}, {target}) = {result}")
else:
print(f"{target} was not found in {sequence}")
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Information on 2's complement: https://en.wikipedia.org/wiki/Two%27s_complement
def twos_complement(number: int) -> str:
"""
Take in a negative integer 'number'.
Return the two's complement representation of 'number'.
>>> twos_complement(0)
'0b0'
>>> twos_complement(-1)
'0b11'
>>> twos_complement(-5)
'0b1011'
>>> twos_complement(-17)
'0b101111'
>>> twos_complement(-207)
'0b100110001'
>>> twos_complement(1)
Traceback (most recent call last):
...
ValueError: input must be a negative integer
"""
if number > 0:
raise ValueError("input must be a negative integer")
binary_number_length = len(bin(number)[3:])
twos_complement_number = bin(abs(number) - (1 << binary_number_length))[3:]
twos_complement_number = (
(
"1"
+ "0" * (binary_number_length - len(twos_complement_number))
+ twos_complement_number
)
if number < 0
else "0"
)
return "0b" + twos_complement_number
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Information on 2's complement: https://en.wikipedia.org/wiki/Two%27s_complement
def twos_complement(number: int) -> str:
"""
Take in a negative integer 'number'.
Return the two's complement representation of 'number'.
>>> twos_complement(0)
'0b0'
>>> twos_complement(-1)
'0b11'
>>> twos_complement(-5)
'0b1011'
>>> twos_complement(-17)
'0b101111'
>>> twos_complement(-207)
'0b100110001'
>>> twos_complement(1)
Traceback (most recent call last):
...
ValueError: input must be a negative integer
"""
if number > 0:
raise ValueError("input must be a negative integer")
binary_number_length = len(bin(number)[3:])
twos_complement_number = bin(abs(number) - (1 << binary_number_length))[3:]
twos_complement_number = (
(
"1"
+ "0" * (binary_number_length - len(twos_complement_number))
+ twos_complement_number
)
if number < 0
else "0"
)
return "0b" + twos_complement_number
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author: Mohit Radadiya
"""
from string import ascii_uppercase
dict1 = {char: i for i, char in enumerate(ascii_uppercase)}
dict2 = {i: char for i, char in enumerate(ascii_uppercase)}
# This function generates the key in
# a cyclic manner until it's length isn't
# equal to the length of original text
def generate_key(message: str, key: str) -> str:
"""
>>> generate_key("THE GERMAN ATTACK","SECRET")
'SECRETSECRETSECRE'
"""
x = len(message)
i = 0
while True:
if x == i:
i = 0
if len(key) == len(message):
break
key += key[i]
i += 1
return key
# This function returns the encrypted text
# generated with the help of the key
def cipher_text(message: str, key_new: str) -> str:
"""
>>> cipher_text("THE GERMAN ATTACK","SECRETSECRETSECRE")
'BDC PAYUWL JPAIYI'
"""
cipher_text = ""
i = 0
for letter in message:
if letter == " ":
cipher_text += " "
else:
x = (dict1[letter] - dict1[key_new[i]]) % 26
i += 1
cipher_text += dict2[x]
return cipher_text
# This function decrypts the encrypted text
# and returns the original text
def original_text(cipher_text: str, key_new: str) -> str:
"""
>>> original_text("BDC PAYUWL JPAIYI","SECRETSECRETSECRE")
'THE GERMAN ATTACK'
"""
or_txt = ""
i = 0
for letter in cipher_text:
if letter == " ":
or_txt += " "
else:
x = (dict1[letter] + dict1[key_new[i]] + 26) % 26
i += 1
or_txt += dict2[x]
return or_txt
def main() -> None:
message = "THE GERMAN ATTACK"
key = "SECRET"
key_new = generate_key(message, key)
s = cipher_text(message, key_new)
print(f"Encrypted Text = {s}")
print(f"Original Text = {original_text(s, key_new)}")
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| """
Author: Mohit Radadiya
"""
from string import ascii_uppercase
dict1 = {char: i for i, char in enumerate(ascii_uppercase)}
dict2 = {i: char for i, char in enumerate(ascii_uppercase)}
# This function generates the key in
# a cyclic manner until it's length isn't
# equal to the length of original text
def generate_key(message: str, key: str) -> str:
"""
>>> generate_key("THE GERMAN ATTACK","SECRET")
'SECRETSECRETSECRE'
"""
x = len(message)
i = 0
while True:
if x == i:
i = 0
if len(key) == len(message):
break
key += key[i]
i += 1
return key
# This function returns the encrypted text
# generated with the help of the key
def cipher_text(message: str, key_new: str) -> str:
"""
>>> cipher_text("THE GERMAN ATTACK","SECRETSECRETSECRE")
'BDC PAYUWL JPAIYI'
"""
cipher_text = ""
i = 0
for letter in message:
if letter == " ":
cipher_text += " "
else:
x = (dict1[letter] - dict1[key_new[i]]) % 26
i += 1
cipher_text += dict2[x]
return cipher_text
# This function decrypts the encrypted text
# and returns the original text
def original_text(cipher_text: str, key_new: str) -> str:
"""
>>> original_text("BDC PAYUWL JPAIYI","SECRETSECRETSECRE")
'THE GERMAN ATTACK'
"""
or_txt = ""
i = 0
for letter in cipher_text:
if letter == " ":
or_txt += " "
else:
x = (dict1[letter] + dict1[key_new[i]] + 26) % 26
i += 1
or_txt += dict2[x]
return or_txt
def main() -> None:
message = "THE GERMAN ATTACK"
key = "SECRET"
key_new = generate_key(message, key)
s = cipher_text(message, key_new)
print(f"Encrypted Text = {s}")
print(f"Original Text = {original_text(s, key_new)}")
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # flake8: noqa
"""
Binomial Heap
Reference: Advanced Data Structures, Peter Brass
"""
class Node:
"""
Node in a doubly-linked binomial tree, containing:
- value
- size of left subtree
- link to left, right and parent nodes
"""
def __init__(self, val):
self.val = val
# Number of nodes in left subtree
self.left_tree_size = 0
self.left = None
self.right = None
self.parent = None
def mergeTrees(self, other):
"""
In-place merge of two binomial trees of equal size.
Returns the root of the resulting tree
"""
assert self.left_tree_size == other.left_tree_size, "Unequal Sizes of Blocks"
if self.val < other.val:
other.left = self.right
other.parent = None
if self.right:
self.right.parent = other
self.right = other
self.left_tree_size = self.left_tree_size * 2 + 1
return self
else:
self.left = other.right
self.parent = None
if other.right:
other.right.parent = self
other.right = self
other.left_tree_size = other.left_tree_size * 2 + 1
return other
class BinomialHeap:
r"""
Min-oriented priority queue implemented with the Binomial Heap data
structure implemented with the BinomialHeap class. It supports:
- Insert element in a heap with n elements: Guaranteed logn, amoratized 1
- Merge (meld) heaps of size m and n: O(logn + logm)
- Delete Min: O(logn)
- Peek (return min without deleting it): O(1)
Example:
Create a random permutation of 30 integers to be inserted and 19 of them deleted
>>> import numpy as np
>>> permutation = np.random.permutation(list(range(30)))
Create a Heap and insert the 30 integers
__init__() test
>>> first_heap = BinomialHeap()
30 inserts - insert() test
>>> for number in permutation:
... first_heap.insert(number)
Size test
>>> print(first_heap.size)
30
Deleting - delete() test
>>> for i in range(25):
... print(first_heap.deleteMin(), end=" ")
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Create a new Heap
>>> second_heap = BinomialHeap()
>>> vals = [17, 20, 31, 34]
>>> for value in vals:
... second_heap.insert(value)
The heap should have the following structure:
17
/ \
# 31
/ \
20 34
/ \ / \
# # # #
preOrder() test
>>> print(second_heap.preOrder())
[(17, 0), ('#', 1), (31, 1), (20, 2), ('#', 3), ('#', 3), (34, 2), ('#', 3), ('#', 3)]
printing Heap - __str__() test
>>> print(second_heap)
17
-#
-31
--20
---#
---#
--34
---#
---#
mergeHeaps() test
>>> merged = second_heap.mergeHeaps(first_heap)
>>> merged.peek()
17
values in merged heap; (merge is inplace)
>>> while not first_heap.isEmpty():
... print(first_heap.deleteMin(), end=" ")
17 20 25 26 27 28 29 31 34
"""
def __init__(self, bottom_root=None, min_node=None, heap_size=0):
self.size = heap_size
self.bottom_root = bottom_root
self.min_node = min_node
def mergeHeaps(self, other):
"""
In-place merge of two binomial heaps.
Both of them become the resulting merged heap
"""
# Empty heaps corner cases
if other.size == 0:
return
if self.size == 0:
self.size = other.size
self.bottom_root = other.bottom_root
self.min_node = other.min_node
return
# Update size
self.size = self.size + other.size
# Update min.node
if self.min_node.val > other.min_node.val:
self.min_node = other.min_node
# Merge
# Order roots by left_subtree_size
combined_roots_list = []
i, j = self.bottom_root, other.bottom_root
while i or j:
if i and ((not j) or i.left_tree_size < j.left_tree_size):
combined_roots_list.append((i, True))
i = i.parent
else:
combined_roots_list.append((j, False))
j = j.parent
# Insert links between them
for i in range(len(combined_roots_list) - 1):
if combined_roots_list[i][1] != combined_roots_list[i + 1][1]:
combined_roots_list[i][0].parent = combined_roots_list[i + 1][0]
combined_roots_list[i + 1][0].left = combined_roots_list[i][0]
# Consecutively merge roots with same left_tree_size
i = combined_roots_list[0][0]
while i.parent:
if (
(i.left_tree_size == i.parent.left_tree_size) and (not i.parent.parent)
) or (
i.left_tree_size == i.parent.left_tree_size
and i.left_tree_size != i.parent.parent.left_tree_size
):
# Neighbouring Nodes
previous_node = i.left
next_node = i.parent.parent
# Merging trees
i = i.mergeTrees(i.parent)
# Updating links
i.left = previous_node
i.parent = next_node
if previous_node:
previous_node.parent = i
if next_node:
next_node.left = i
else:
i = i.parent
# Updating self.bottom_root
while i.left:
i = i.left
self.bottom_root = i
# Update other
other.size = self.size
other.bottom_root = self.bottom_root
other.min_node = self.min_node
# Return the merged heap
return self
def insert(self, val):
"""
insert a value in the heap
"""
if self.size == 0:
self.bottom_root = Node(val)
self.size = 1
self.min_node = self.bottom_root
else:
# Create new node
new_node = Node(val)
# Update size
self.size += 1
# update min_node
if val < self.min_node.val:
self.min_node = new_node
# Put new_node as a bottom_root in heap
self.bottom_root.left = new_node
new_node.parent = self.bottom_root
self.bottom_root = new_node
# Consecutively merge roots with same left_tree_size
while (
self.bottom_root.parent
and self.bottom_root.left_tree_size
== self.bottom_root.parent.left_tree_size
):
# Next node
next_node = self.bottom_root.parent.parent
# Merge
self.bottom_root = self.bottom_root.mergeTrees(self.bottom_root.parent)
# Update Links
self.bottom_root.parent = next_node
self.bottom_root.left = None
if next_node:
next_node.left = self.bottom_root
def peek(self):
"""
return min element without deleting it
"""
return self.min_node.val
def isEmpty(self):
return self.size == 0
def deleteMin(self):
"""
delete min element and return it
"""
# assert not self.isEmpty(), "Empty Heap"
# Save minimal value
min_value = self.min_node.val
# Last element in heap corner case
if self.size == 1:
# Update size
self.size = 0
# Update bottom root
self.bottom_root = None
# Update min_node
self.min_node = None
return min_value
# No right subtree corner case
# The structure of the tree implies that this should be the bottom root
# and there is at least one other root
if self.min_node.right is None:
# Update size
self.size -= 1
# Update bottom root
self.bottom_root = self.bottom_root.parent
self.bottom_root.left = None
# Update min_node
self.min_node = self.bottom_root
i = self.bottom_root.parent
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
return min_value
# General case
# Find the BinomialHeap of the right subtree of min_node
bottom_of_new = self.min_node.right
bottom_of_new.parent = None
min_of_new = bottom_of_new
size_of_new = 1
# Size, min_node and bottom_root
while bottom_of_new.left:
size_of_new = size_of_new * 2 + 1
bottom_of_new = bottom_of_new.left
if bottom_of_new.val < min_of_new.val:
min_of_new = bottom_of_new
# Corner case of single root on top left path
if (not self.min_node.left) and (not self.min_node.parent):
self.size = size_of_new
self.bottom_root = bottom_of_new
self.min_node = min_of_new
# print("Single root, multiple nodes case")
return min_value
# Remaining cases
# Construct heap of right subtree
newHeap = BinomialHeap(
bottom_root=bottom_of_new, min_node=min_of_new, heap_size=size_of_new
)
# Update size
self.size = self.size - 1 - size_of_new
# Neighbour nodes
previous_node = self.min_node.left
next_node = self.min_node.parent
# Initialize new bottom_root and min_node
self.min_node = previous_node or next_node
self.bottom_root = next_node
# Update links of previous_node and search below for new min_node and
# bottom_root
if previous_node:
previous_node.parent = next_node
# Update bottom_root and search for min_node below
self.bottom_root = previous_node
self.min_node = previous_node
while self.bottom_root.left:
self.bottom_root = self.bottom_root.left
if self.bottom_root.val < self.min_node.val:
self.min_node = self.bottom_root
if next_node:
next_node.left = previous_node
# Search for new min_node above min_node
i = next_node
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
# Merge heaps
self.mergeHeaps(newHeap)
return min_value
def preOrder(self):
"""
Returns the Pre-order representation of the heap including
values of nodes plus their level distance from the root;
Empty nodes appear as #
"""
# Find top root
top_root = self.bottom_root
while top_root.parent:
top_root = top_root.parent
# preorder
heap_preOrder = []
self.__traversal(top_root, heap_preOrder)
return heap_preOrder
def __traversal(self, curr_node, preorder, level=0):
"""
Pre-order traversal of nodes
"""
if curr_node:
preorder.append((curr_node.val, level))
self.__traversal(curr_node.left, preorder, level + 1)
self.__traversal(curr_node.right, preorder, level + 1)
else:
preorder.append(("#", level))
def __str__(self):
"""
Overwriting str for a pre-order print of nodes in heap;
Performance is poor, so use only for small examples
"""
if self.isEmpty():
return ""
preorder_heap = self.preOrder()
return "\n".join(("-" * level + str(value)) for value, level in preorder_heap)
# Unit Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| # flake8: noqa
"""
Binomial Heap
Reference: Advanced Data Structures, Peter Brass
"""
class Node:
"""
Node in a doubly-linked binomial tree, containing:
- value
- size of left subtree
- link to left, right and parent nodes
"""
def __init__(self, val):
self.val = val
# Number of nodes in left subtree
self.left_tree_size = 0
self.left = None
self.right = None
self.parent = None
def mergeTrees(self, other):
"""
In-place merge of two binomial trees of equal size.
Returns the root of the resulting tree
"""
assert self.left_tree_size == other.left_tree_size, "Unequal Sizes of Blocks"
if self.val < other.val:
other.left = self.right
other.parent = None
if self.right:
self.right.parent = other
self.right = other
self.left_tree_size = self.left_tree_size * 2 + 1
return self
else:
self.left = other.right
self.parent = None
if other.right:
other.right.parent = self
other.right = self
other.left_tree_size = other.left_tree_size * 2 + 1
return other
class BinomialHeap:
r"""
Min-oriented priority queue implemented with the Binomial Heap data
structure implemented with the BinomialHeap class. It supports:
- Insert element in a heap with n elements: Guaranteed logn, amoratized 1
- Merge (meld) heaps of size m and n: O(logn + logm)
- Delete Min: O(logn)
- Peek (return min without deleting it): O(1)
Example:
Create a random permutation of 30 integers to be inserted and 19 of them deleted
>>> import numpy as np
>>> permutation = np.random.permutation(list(range(30)))
Create a Heap and insert the 30 integers
__init__() test
>>> first_heap = BinomialHeap()
30 inserts - insert() test
>>> for number in permutation:
... first_heap.insert(number)
Size test
>>> print(first_heap.size)
30
Deleting - delete() test
>>> for i in range(25):
... print(first_heap.deleteMin(), end=" ")
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Create a new Heap
>>> second_heap = BinomialHeap()
>>> vals = [17, 20, 31, 34]
>>> for value in vals:
... second_heap.insert(value)
The heap should have the following structure:
17
/ \
# 31
/ \
20 34
/ \ / \
# # # #
preOrder() test
>>> print(second_heap.preOrder())
[(17, 0), ('#', 1), (31, 1), (20, 2), ('#', 3), ('#', 3), (34, 2), ('#', 3), ('#', 3)]
printing Heap - __str__() test
>>> print(second_heap)
17
-#
-31
--20
---#
---#
--34
---#
---#
mergeHeaps() test
>>> merged = second_heap.mergeHeaps(first_heap)
>>> merged.peek()
17
values in merged heap; (merge is inplace)
>>> while not first_heap.isEmpty():
... print(first_heap.deleteMin(), end=" ")
17 20 25 26 27 28 29 31 34
"""
def __init__(self, bottom_root=None, min_node=None, heap_size=0):
self.size = heap_size
self.bottom_root = bottom_root
self.min_node = min_node
def mergeHeaps(self, other):
"""
In-place merge of two binomial heaps.
Both of them become the resulting merged heap
"""
# Empty heaps corner cases
if other.size == 0:
return
if self.size == 0:
self.size = other.size
self.bottom_root = other.bottom_root
self.min_node = other.min_node
return
# Update size
self.size = self.size + other.size
# Update min.node
if self.min_node.val > other.min_node.val:
self.min_node = other.min_node
# Merge
# Order roots by left_subtree_size
combined_roots_list = []
i, j = self.bottom_root, other.bottom_root
while i or j:
if i and ((not j) or i.left_tree_size < j.left_tree_size):
combined_roots_list.append((i, True))
i = i.parent
else:
combined_roots_list.append((j, False))
j = j.parent
# Insert links between them
for i in range(len(combined_roots_list) - 1):
if combined_roots_list[i][1] != combined_roots_list[i + 1][1]:
combined_roots_list[i][0].parent = combined_roots_list[i + 1][0]
combined_roots_list[i + 1][0].left = combined_roots_list[i][0]
# Consecutively merge roots with same left_tree_size
i = combined_roots_list[0][0]
while i.parent:
if (
(i.left_tree_size == i.parent.left_tree_size) and (not i.parent.parent)
) or (
i.left_tree_size == i.parent.left_tree_size
and i.left_tree_size != i.parent.parent.left_tree_size
):
# Neighbouring Nodes
previous_node = i.left
next_node = i.parent.parent
# Merging trees
i = i.mergeTrees(i.parent)
# Updating links
i.left = previous_node
i.parent = next_node
if previous_node:
previous_node.parent = i
if next_node:
next_node.left = i
else:
i = i.parent
# Updating self.bottom_root
while i.left:
i = i.left
self.bottom_root = i
# Update other
other.size = self.size
other.bottom_root = self.bottom_root
other.min_node = self.min_node
# Return the merged heap
return self
def insert(self, val):
"""
insert a value in the heap
"""
if self.size == 0:
self.bottom_root = Node(val)
self.size = 1
self.min_node = self.bottom_root
else:
# Create new node
new_node = Node(val)
# Update size
self.size += 1
# update min_node
if val < self.min_node.val:
self.min_node = new_node
# Put new_node as a bottom_root in heap
self.bottom_root.left = new_node
new_node.parent = self.bottom_root
self.bottom_root = new_node
# Consecutively merge roots with same left_tree_size
while (
self.bottom_root.parent
and self.bottom_root.left_tree_size
== self.bottom_root.parent.left_tree_size
):
# Next node
next_node = self.bottom_root.parent.parent
# Merge
self.bottom_root = self.bottom_root.mergeTrees(self.bottom_root.parent)
# Update Links
self.bottom_root.parent = next_node
self.bottom_root.left = None
if next_node:
next_node.left = self.bottom_root
def peek(self):
"""
return min element without deleting it
"""
return self.min_node.val
def isEmpty(self):
return self.size == 0
def deleteMin(self):
"""
delete min element and return it
"""
# assert not self.isEmpty(), "Empty Heap"
# Save minimal value
min_value = self.min_node.val
# Last element in heap corner case
if self.size == 1:
# Update size
self.size = 0
# Update bottom root
self.bottom_root = None
# Update min_node
self.min_node = None
return min_value
# No right subtree corner case
# The structure of the tree implies that this should be the bottom root
# and there is at least one other root
if self.min_node.right is None:
# Update size
self.size -= 1
# Update bottom root
self.bottom_root = self.bottom_root.parent
self.bottom_root.left = None
# Update min_node
self.min_node = self.bottom_root
i = self.bottom_root.parent
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
return min_value
# General case
# Find the BinomialHeap of the right subtree of min_node
bottom_of_new = self.min_node.right
bottom_of_new.parent = None
min_of_new = bottom_of_new
size_of_new = 1
# Size, min_node and bottom_root
while bottom_of_new.left:
size_of_new = size_of_new * 2 + 1
bottom_of_new = bottom_of_new.left
if bottom_of_new.val < min_of_new.val:
min_of_new = bottom_of_new
# Corner case of single root on top left path
if (not self.min_node.left) and (not self.min_node.parent):
self.size = size_of_new
self.bottom_root = bottom_of_new
self.min_node = min_of_new
# print("Single root, multiple nodes case")
return min_value
# Remaining cases
# Construct heap of right subtree
newHeap = BinomialHeap(
bottom_root=bottom_of_new, min_node=min_of_new, heap_size=size_of_new
)
# Update size
self.size = self.size - 1 - size_of_new
# Neighbour nodes
previous_node = self.min_node.left
next_node = self.min_node.parent
# Initialize new bottom_root and min_node
self.min_node = previous_node or next_node
self.bottom_root = next_node
# Update links of previous_node and search below for new min_node and
# bottom_root
if previous_node:
previous_node.parent = next_node
# Update bottom_root and search for min_node below
self.bottom_root = previous_node
self.min_node = previous_node
while self.bottom_root.left:
self.bottom_root = self.bottom_root.left
if self.bottom_root.val < self.min_node.val:
self.min_node = self.bottom_root
if next_node:
next_node.left = previous_node
# Search for new min_node above min_node
i = next_node
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
# Merge heaps
self.mergeHeaps(newHeap)
return min_value
def preOrder(self):
"""
Returns the Pre-order representation of the heap including
values of nodes plus their level distance from the root;
Empty nodes appear as #
"""
# Find top root
top_root = self.bottom_root
while top_root.parent:
top_root = top_root.parent
# preorder
heap_preOrder = []
self.__traversal(top_root, heap_preOrder)
return heap_preOrder
def __traversal(self, curr_node, preorder, level=0):
"""
Pre-order traversal of nodes
"""
if curr_node:
preorder.append((curr_node.val, level))
self.__traversal(curr_node.left, preorder, level + 1)
self.__traversal(curr_node.right, preorder, level + 1)
else:
preorder.append(("#", level))
def __str__(self):
"""
Overwriting str for a pre-order print of nodes in heap;
Performance is poor, so use only for small examples
"""
if self.isEmpty():
return ""
preorder_heap = self.preOrder()
return "\n".join(("-" * level + str(value)) for value, level in preorder_heap)
# Unit Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Computus#Gauss'_Easter_algorithm
"""
import math
from datetime import datetime, timedelta
def gauss_easter(year: int) -> datetime:
"""
Calculation Gregorian easter date for given year
>>> gauss_easter(2007)
datetime.datetime(2007, 4, 8, 0, 0)
>>> gauss_easter(2008)
datetime.datetime(2008, 3, 23, 0, 0)
>>> gauss_easter(2020)
datetime.datetime(2020, 4, 12, 0, 0)
>>> gauss_easter(2021)
datetime.datetime(2021, 4, 4, 0, 0)
"""
metonic_cycle = year % 19
julian_leap_year = year % 4
non_leap_year = year % 7
leap_day_inhibits = math.floor(year / 100)
lunar_orbit_correction = math.floor((13 + 8 * leap_day_inhibits) / 25)
leap_day_reinstall_number = leap_day_inhibits / 4
secular_moon_shift = (
15 - lunar_orbit_correction + leap_day_inhibits - leap_day_reinstall_number
) % 30
century_starting_point = (4 + leap_day_inhibits - leap_day_reinstall_number) % 7
# days to be added to March 21
days_to_add = (19 * metonic_cycle + secular_moon_shift) % 30
# PHM -> Paschal Full Moon
days_from_phm_to_sunday = (
2 * julian_leap_year
+ 4 * non_leap_year
+ 6 * days_to_add
+ century_starting_point
) % 7
if days_to_add == 29 and days_from_phm_to_sunday == 6:
return datetime(year, 4, 19)
elif days_to_add == 28 and days_from_phm_to_sunday == 6:
return datetime(year, 4, 18)
else:
return datetime(year, 3, 22) + timedelta(
days=int(days_to_add + days_from_phm_to_sunday)
)
if __name__ == "__main__":
for year in (1994, 2000, 2010, 2021, 2023):
tense = "will be" if year > datetime.now().year else "was"
print(f"Easter in {year} {tense} {gauss_easter(year)}")
| """
https://en.wikipedia.org/wiki/Computus#Gauss'_Easter_algorithm
"""
import math
from datetime import datetime, timedelta
def gauss_easter(year: int) -> datetime:
"""
Calculation Gregorian easter date for given year
>>> gauss_easter(2007)
datetime.datetime(2007, 4, 8, 0, 0)
>>> gauss_easter(2008)
datetime.datetime(2008, 3, 23, 0, 0)
>>> gauss_easter(2020)
datetime.datetime(2020, 4, 12, 0, 0)
>>> gauss_easter(2021)
datetime.datetime(2021, 4, 4, 0, 0)
"""
metonic_cycle = year % 19
julian_leap_year = year % 4
non_leap_year = year % 7
leap_day_inhibits = math.floor(year / 100)
lunar_orbit_correction = math.floor((13 + 8 * leap_day_inhibits) / 25)
leap_day_reinstall_number = leap_day_inhibits / 4
secular_moon_shift = (
15 - lunar_orbit_correction + leap_day_inhibits - leap_day_reinstall_number
) % 30
century_starting_point = (4 + leap_day_inhibits - leap_day_reinstall_number) % 7
# days to be added to March 21
days_to_add = (19 * metonic_cycle + secular_moon_shift) % 30
# PHM -> Paschal Full Moon
days_from_phm_to_sunday = (
2 * julian_leap_year
+ 4 * non_leap_year
+ 6 * days_to_add
+ century_starting_point
) % 7
if days_to_add == 29 and days_from_phm_to_sunday == 6:
return datetime(year, 4, 19)
elif days_to_add == 28 and days_from_phm_to_sunday == 6:
return datetime(year, 4, 18)
else:
return datetime(year, 3, 22) + timedelta(
days=int(days_to_add + days_from_phm_to_sunday)
)
if __name__ == "__main__":
for year in (1994, 2000, 2010, 2021, 2023):
tense = "will be" if year > datetime.now().year else "was"
print(f"Easter in {year} {tense} {gauss_easter(year)}")
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of the heap sort algorithm.
For doctests run following command:
python -m doctest -v heap_sort.py
or
python3 -m doctest -v heap_sort.py
For manual testing run:
python heap_sort.py
"""
def heapify(unsorted, index, heap_size):
largest = index
left_index = 2 * index + 1
right_index = 2 * index + 2
if left_index < heap_size and unsorted[left_index] > unsorted[largest]:
largest = left_index
if right_index < heap_size and unsorted[right_index] > unsorted[largest]:
largest = right_index
if largest != index:
unsorted[largest], unsorted[index] = unsorted[index], unsorted[largest]
heapify(unsorted, largest, heap_size)
def heap_sort(unsorted):
"""
Pure implementation of the heap sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> heap_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> heap_sort([])
[]
>>> heap_sort([-2, -5, -45])
[-45, -5, -2]
"""
n = len(unsorted)
for i in range(n // 2 - 1, -1, -1):
heapify(unsorted, i, n)
for i in range(n - 1, 0, -1):
unsorted[0], unsorted[i] = unsorted[i], unsorted[0]
heapify(unsorted, 0, i)
return unsorted
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(heap_sort(unsorted))
| """
This is a pure Python implementation of the heap sort algorithm.
For doctests run following command:
python -m doctest -v heap_sort.py
or
python3 -m doctest -v heap_sort.py
For manual testing run:
python heap_sort.py
"""
def heapify(unsorted, index, heap_size):
largest = index
left_index = 2 * index + 1
right_index = 2 * index + 2
if left_index < heap_size and unsorted[left_index] > unsorted[largest]:
largest = left_index
if right_index < heap_size and unsorted[right_index] > unsorted[largest]:
largest = right_index
if largest != index:
unsorted[largest], unsorted[index] = unsorted[index], unsorted[largest]
heapify(unsorted, largest, heap_size)
def heap_sort(unsorted):
"""
Pure implementation of the heap sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> heap_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> heap_sort([])
[]
>>> heap_sort([-2, -5, -45])
[-45, -5, -2]
"""
n = len(unsorted)
for i in range(n // 2 - 1, -1, -1):
heapify(unsorted, i, n)
for i in range(n - 1, 0, -1):
unsorted[0], unsorted[i] = unsorted[i], unsorted[0]
heapify(unsorted, 0, i)
return unsorted
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(heap_sort(unsorted))
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/bin/python3
# Doomsday algorithm info: https://en.wikipedia.org/wiki/Doomsday_rule
DOOMSDAY_LEAP = [4, 1, 7, 4, 2, 6, 4, 1, 5, 3, 7, 5]
DOOMSDAY_NOT_LEAP = [3, 7, 7, 4, 2, 6, 4, 1, 5, 3, 7, 5]
WEEK_DAY_NAMES = {
0: "Sunday",
1: "Monday",
2: "Tuesday",
3: "Wednesday",
4: "Thursday",
5: "Friday",
6: "Saturday",
}
def get_week_day(year: int, month: int, day: int) -> str:
"""Returns the week-day name out of a given date.
>>> get_week_day(2020, 10, 24)
'Saturday'
>>> get_week_day(2017, 10, 24)
'Tuesday'
>>> get_week_day(2019, 5, 3)
'Friday'
>>> get_week_day(1970, 9, 16)
'Wednesday'
>>> get_week_day(1870, 8, 13)
'Saturday'
>>> get_week_day(2040, 3, 14)
'Wednesday'
"""
# minimal input check:
assert len(str(year)) > 2, "year should be in YYYY format"
assert 1 <= month <= 12, "month should be between 1 to 12"
assert 1 <= day <= 31, "day should be between 1 to 31"
# Doomsday algorithm:
century = year // 100
century_anchor = (5 * (century % 4) + 2) % 7
centurian = year % 100
centurian_m = centurian % 12
dooms_day = (
(centurian // 12) + centurian_m + (centurian_m // 4) + century_anchor
) % 7
day_anchor = (
DOOMSDAY_NOT_LEAP[month - 1]
if (year % 4 != 0) or (centurian == 0 and (year % 400) == 0)
else DOOMSDAY_LEAP[month - 1]
)
week_day = (dooms_day + day - day_anchor) % 7
return WEEK_DAY_NAMES[week_day]
if __name__ == "__main__":
import doctest
doctest.testmod()
| #!/bin/python3
# Doomsday algorithm info: https://en.wikipedia.org/wiki/Doomsday_rule
DOOMSDAY_LEAP = [4, 1, 7, 4, 2, 6, 4, 1, 5, 3, 7, 5]
DOOMSDAY_NOT_LEAP = [3, 7, 7, 4, 2, 6, 4, 1, 5, 3, 7, 5]
WEEK_DAY_NAMES = {
0: "Sunday",
1: "Monday",
2: "Tuesday",
3: "Wednesday",
4: "Thursday",
5: "Friday",
6: "Saturday",
}
def get_week_day(year: int, month: int, day: int) -> str:
"""Returns the week-day name out of a given date.
>>> get_week_day(2020, 10, 24)
'Saturday'
>>> get_week_day(2017, 10, 24)
'Tuesday'
>>> get_week_day(2019, 5, 3)
'Friday'
>>> get_week_day(1970, 9, 16)
'Wednesday'
>>> get_week_day(1870, 8, 13)
'Saturday'
>>> get_week_day(2040, 3, 14)
'Wednesday'
"""
# minimal input check:
assert len(str(year)) > 2, "year should be in YYYY format"
assert 1 <= month <= 12, "month should be between 1 to 12"
assert 1 <= day <= 31, "day should be between 1 to 31"
# Doomsday algorithm:
century = year // 100
century_anchor = (5 * (century % 4) + 2) % 7
centurian = year % 100
centurian_m = centurian % 12
dooms_day = (
(centurian // 12) + centurian_m + (centurian_m // 4) + century_anchor
) % 7
day_anchor = (
DOOMSDAY_NOT_LEAP[month - 1]
if (year % 4 != 0) or (centurian == 0 and (year % 400) == 0)
else DOOMSDAY_LEAP[month - 1]
)
week_day = (dooms_day + day - day_anchor) % 7
return WEEK_DAY_NAMES[week_day]
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
# Author: OMKAR PATHAK, Nwachukwu Chidiebere
# Use a Python dictionary to construct the graph.
from __future__ import annotations
from pprint import pformat
from typing import Generic, TypeVar
T = TypeVar("T")
class GraphAdjacencyList(Generic[T]):
"""
Adjacency List type Graph Data Structure that accounts for directed and undirected
Graphs. Initialize graph object indicating whether it's directed or undirected.
Directed graph example:
>>> d_graph = GraphAdjacencyList()
>>> d_graph
{}
>>> d_graph.add_edge(0, 1)
{0: [1], 1: []}
>>> d_graph.add_edge(1, 2).add_edge(1, 4).add_edge(1, 5)
{0: [1], 1: [2, 4, 5], 2: [], 4: [], 5: []}
>>> d_graph.add_edge(2, 0).add_edge(2, 6).add_edge(2, 7)
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
>>> print(d_graph)
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
>>> print(repr(d_graph))
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
Undirected graph example:
>>> u_graph = GraphAdjacencyList(directed=False)
>>> u_graph.add_edge(0, 1)
{0: [1], 1: [0]}
>>> u_graph.add_edge(1, 2).add_edge(1, 4).add_edge(1, 5)
{0: [1], 1: [0, 2, 4, 5], 2: [1], 4: [1], 5: [1]}
>>> u_graph.add_edge(2, 0).add_edge(2, 6).add_edge(2, 7)
{0: [1, 2], 1: [0, 2, 4, 5], 2: [1, 0, 6, 7], 4: [1], 5: [1], 6: [2], 7: [2]}
>>> u_graph.add_edge(4, 5)
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> print(u_graph)
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> print(repr(u_graph))
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> char_graph = GraphAdjacencyList(directed=False)
>>> char_graph.add_edge('a', 'b')
{'a': ['b'], 'b': ['a']}
>>> char_graph.add_edge('b', 'c').add_edge('b', 'e').add_edge('b', 'f')
{'a': ['b'], 'b': ['a', 'c', 'e', 'f'], 'c': ['b'], 'e': ['b'], 'f': ['b']}
>>> print(char_graph)
{'a': ['b'], 'b': ['a', 'c', 'e', 'f'], 'c': ['b'], 'e': ['b'], 'f': ['b']}
"""
def __init__(self, directed: bool = True) -> None:
"""
Parameters:
directed: (bool) Indicates if graph is directed or undirected. Default is True.
"""
self.adj_list: dict[T, list[T]] = {} # dictionary of lists
self.directed = directed
def add_edge(
self, source_vertex: T, destination_vertex: T
) -> GraphAdjacencyList[T]:
"""
Connects vertices together. Creates and Edge from source vertex to destination
vertex.
Vertices will be created if not found in graph
"""
if not self.directed: # For undirected graphs
# if both source vertex and destination vertex are both present in the
# adjacency list, add destination vertex to source vertex list of adjacent
# vertices and add source vertex to destination vertex list of adjacent
# vertices.
if source_vertex in self.adj_list and destination_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex].append(source_vertex)
# if only source vertex is present in adjacency list, add destination vertex
# to source vertex list of adjacent vertices, then create a new vertex with
# destination vertex as key and assign a list containing the source vertex
# as it's first adjacent vertex.
elif source_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex] = [source_vertex]
# if only destination vertex is present in adjacency list, add source vertex
# to destination vertex list of adjacent vertices, then create a new vertex
# with source vertex as key and assign a list containing the source vertex
# as it's first adjacent vertex.
elif destination_vertex in self.adj_list:
self.adj_list[destination_vertex].append(source_vertex)
self.adj_list[source_vertex] = [destination_vertex]
# if both source vertex and destination vertex are not present in adjacency
# list, create a new vertex with source vertex as key and assign a list
# containing the destination vertex as it's first adjacent vertex also
# create a new vertex with destination vertex as key and assign a list
# containing the source vertex as it's first adjacent vertex.
else:
self.adj_list[source_vertex] = [destination_vertex]
self.adj_list[destination_vertex] = [source_vertex]
else: # For directed graphs
# if both source vertex and destination vertex are present in adjacency
# list, add destination vertex to source vertex list of adjacent vertices.
if source_vertex in self.adj_list and destination_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
# if only source vertex is present in adjacency list, add destination
# vertex to source vertex list of adjacent vertices and create a new vertex
# with destination vertex as key, which has no adjacent vertex
elif source_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex] = []
# if only destination vertex is present in adjacency list, create a new
# vertex with source vertex as key and assign a list containing destination
# vertex as first adjacent vertex
elif destination_vertex in self.adj_list:
self.adj_list[source_vertex] = [destination_vertex]
# if both source vertex and destination vertex are not present in adjacency
# list, create a new vertex with source vertex as key and a list containing
# destination vertex as it's first adjacent vertex. Then create a new vertex
# with destination vertex as key, which has no adjacent vertex
else:
self.adj_list[source_vertex] = [destination_vertex]
self.adj_list[destination_vertex] = []
return self
def __repr__(self) -> str:
return pformat(self.adj_list)
| #!/usr/bin/env python3
# Author: OMKAR PATHAK, Nwachukwu Chidiebere
# Use a Python dictionary to construct the graph.
from __future__ import annotations
from pprint import pformat
from typing import Generic, TypeVar
T = TypeVar("T")
class GraphAdjacencyList(Generic[T]):
"""
Adjacency List type Graph Data Structure that accounts for directed and undirected
Graphs. Initialize graph object indicating whether it's directed or undirected.
Directed graph example:
>>> d_graph = GraphAdjacencyList()
>>> d_graph
{}
>>> d_graph.add_edge(0, 1)
{0: [1], 1: []}
>>> d_graph.add_edge(1, 2).add_edge(1, 4).add_edge(1, 5)
{0: [1], 1: [2, 4, 5], 2: [], 4: [], 5: []}
>>> d_graph.add_edge(2, 0).add_edge(2, 6).add_edge(2, 7)
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
>>> print(d_graph)
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
>>> print(repr(d_graph))
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
Undirected graph example:
>>> u_graph = GraphAdjacencyList(directed=False)
>>> u_graph.add_edge(0, 1)
{0: [1], 1: [0]}
>>> u_graph.add_edge(1, 2).add_edge(1, 4).add_edge(1, 5)
{0: [1], 1: [0, 2, 4, 5], 2: [1], 4: [1], 5: [1]}
>>> u_graph.add_edge(2, 0).add_edge(2, 6).add_edge(2, 7)
{0: [1, 2], 1: [0, 2, 4, 5], 2: [1, 0, 6, 7], 4: [1], 5: [1], 6: [2], 7: [2]}
>>> u_graph.add_edge(4, 5)
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> print(u_graph)
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> print(repr(u_graph))
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> char_graph = GraphAdjacencyList(directed=False)
>>> char_graph.add_edge('a', 'b')
{'a': ['b'], 'b': ['a']}
>>> char_graph.add_edge('b', 'c').add_edge('b', 'e').add_edge('b', 'f')
{'a': ['b'], 'b': ['a', 'c', 'e', 'f'], 'c': ['b'], 'e': ['b'], 'f': ['b']}
>>> print(char_graph)
{'a': ['b'], 'b': ['a', 'c', 'e', 'f'], 'c': ['b'], 'e': ['b'], 'f': ['b']}
"""
def __init__(self, directed: bool = True) -> None:
"""
Parameters:
directed: (bool) Indicates if graph is directed or undirected. Default is True.
"""
self.adj_list: dict[T, list[T]] = {} # dictionary of lists
self.directed = directed
def add_edge(
self, source_vertex: T, destination_vertex: T
) -> GraphAdjacencyList[T]:
"""
Connects vertices together. Creates and Edge from source vertex to destination
vertex.
Vertices will be created if not found in graph
"""
if not self.directed: # For undirected graphs
# if both source vertex and destination vertex are both present in the
# adjacency list, add destination vertex to source vertex list of adjacent
# vertices and add source vertex to destination vertex list of adjacent
# vertices.
if source_vertex in self.adj_list and destination_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex].append(source_vertex)
# if only source vertex is present in adjacency list, add destination vertex
# to source vertex list of adjacent vertices, then create a new vertex with
# destination vertex as key and assign a list containing the source vertex
# as it's first adjacent vertex.
elif source_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex] = [source_vertex]
# if only destination vertex is present in adjacency list, add source vertex
# to destination vertex list of adjacent vertices, then create a new vertex
# with source vertex as key and assign a list containing the source vertex
# as it's first adjacent vertex.
elif destination_vertex in self.adj_list:
self.adj_list[destination_vertex].append(source_vertex)
self.adj_list[source_vertex] = [destination_vertex]
# if both source vertex and destination vertex are not present in adjacency
# list, create a new vertex with source vertex as key and assign a list
# containing the destination vertex as it's first adjacent vertex also
# create a new vertex with destination vertex as key and assign a list
# containing the source vertex as it's first adjacent vertex.
else:
self.adj_list[source_vertex] = [destination_vertex]
self.adj_list[destination_vertex] = [source_vertex]
else: # For directed graphs
# if both source vertex and destination vertex are present in adjacency
# list, add destination vertex to source vertex list of adjacent vertices.
if source_vertex in self.adj_list and destination_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
# if only source vertex is present in adjacency list, add destination
# vertex to source vertex list of adjacent vertices and create a new vertex
# with destination vertex as key, which has no adjacent vertex
elif source_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex] = []
# if only destination vertex is present in adjacency list, create a new
# vertex with source vertex as key and assign a list containing destination
# vertex as first adjacent vertex
elif destination_vertex in self.adj_list:
self.adj_list[source_vertex] = [destination_vertex]
# if both source vertex and destination vertex are not present in adjacency
# list, create a new vertex with source vertex as key and a list containing
# destination vertex as it's first adjacent vertex. Then create a new vertex
# with destination vertex as key, which has no adjacent vertex
else:
self.adj_list[source_vertex] = [destination_vertex]
self.adj_list[destination_vertex] = []
return self
def __repr__(self) -> str:
return pformat(self.adj_list)
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Isolate the Decimal part of a Number
https://stackoverflow.com/questions/3886402/how-to-get-numbers-after-decimal-point
"""
def decimal_isolate(number, digitAmount):
"""
Isolates the decimal part of a number.
If digitAmount > 0 round to that decimal place, else print the entire decimal.
>>> decimal_isolate(1.53, 0)
0.53
>>> decimal_isolate(35.345, 1)
0.3
>>> decimal_isolate(35.345, 2)
0.34
>>> decimal_isolate(35.345, 3)
0.345
>>> decimal_isolate(-14.789, 3)
-0.789
>>> decimal_isolate(0, 2)
0
>>> decimal_isolate(-14.123, 1)
-0.1
>>> decimal_isolate(-14.123, 2)
-0.12
>>> decimal_isolate(-14.123, 3)
-0.123
"""
if digitAmount > 0:
return round(number - int(number), digitAmount)
return number - int(number)
if __name__ == "__main__":
print(decimal_isolate(1.53, 0))
print(decimal_isolate(35.345, 1))
print(decimal_isolate(35.345, 2))
print(decimal_isolate(35.345, 3))
print(decimal_isolate(-14.789, 3))
print(decimal_isolate(0, 2))
print(decimal_isolate(-14.123, 1))
print(decimal_isolate(-14.123, 2))
print(decimal_isolate(-14.123, 3))
| """
Isolate the Decimal part of a Number
https://stackoverflow.com/questions/3886402/how-to-get-numbers-after-decimal-point
"""
def decimal_isolate(number, digitAmount):
"""
Isolates the decimal part of a number.
If digitAmount > 0 round to that decimal place, else print the entire decimal.
>>> decimal_isolate(1.53, 0)
0.53
>>> decimal_isolate(35.345, 1)
0.3
>>> decimal_isolate(35.345, 2)
0.34
>>> decimal_isolate(35.345, 3)
0.345
>>> decimal_isolate(-14.789, 3)
-0.789
>>> decimal_isolate(0, 2)
0
>>> decimal_isolate(-14.123, 1)
-0.1
>>> decimal_isolate(-14.123, 2)
-0.12
>>> decimal_isolate(-14.123, 3)
-0.123
"""
if digitAmount > 0:
return round(number - int(number), digitAmount)
return number - int(number)
if __name__ == "__main__":
print(decimal_isolate(1.53, 0))
print(decimal_isolate(35.345, 1))
print(decimal_isolate(35.345, 2))
print(decimal_isolate(35.345, 3))
print(decimal_isolate(-14.789, 3))
print(decimal_isolate(0, 2))
print(decimal_isolate(-14.123, 1))
print(decimal_isolate(-14.123, 2))
print(decimal_isolate(-14.123, 3))
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://github.com/rupansh/QuantumComputing/blob/master/rippleadd.py
# https://en.wikipedia.org/wiki/Adder_(electronics)#Full_adder
# https://en.wikipedia.org/wiki/Controlled_NOT_gate
from qiskit import Aer, QuantumCircuit, execute
from qiskit.providers import BaseBackend
def store_two_classics(val1: int, val2: int) -> tuple[QuantumCircuit, str, str]:
"""
Generates a Quantum Circuit which stores two classical integers
Returns the circuit and binary representation of the integers
"""
x, y = bin(val1)[2:], bin(val2)[2:] # Remove leading '0b'
# Ensure that both strings are of the same length
if len(x) > len(y):
y = y.zfill(len(x))
else:
x = x.zfill(len(y))
# We need (3 * number of bits in the larger number)+1 qBits
# The second parameter is the number of classical registers, to measure the result
circuit = QuantumCircuit((len(x) * 3) + 1, len(x) + 1)
# We are essentially "not-ing" the bits that are 1
# Reversed because its easier to perform ops on more significant bits
for i in range(len(x)):
if x[::-1][i] == "1":
circuit.x(i)
for j in range(len(y)):
if y[::-1][j] == "1":
circuit.x(len(x) + j)
return circuit, x, y
def full_adder(
circuit: QuantumCircuit,
input1_loc: int,
input2_loc: int,
carry_in: int,
carry_out: int,
):
"""
Quantum Equivalent of a Full Adder Circuit
CX/CCX is like 2-way/3-way XOR
"""
circuit.ccx(input1_loc, input2_loc, carry_out)
circuit.cx(input1_loc, input2_loc)
circuit.ccx(input2_loc, carry_in, carry_out)
circuit.cx(input2_loc, carry_in)
circuit.cx(input1_loc, input2_loc)
# The default value for **backend** is the result of a function call which is not
# normally recommended and causes flake8-bugbear to raise a B008 error. However,
# in this case, this is accptable because `Aer.get_backend()` is called when the
# function is defined and that same backend is then reused for all function calls.
def ripple_adder(
val1: int,
val2: int,
backend: BaseBackend = Aer.get_backend("qasm_simulator"), # noqa: B008
) -> int:
"""
Quantum Equivalent of a Ripple Adder Circuit
Uses qasm_simulator backend by default
Currently only adds 'emulated' Classical Bits
but nothing prevents us from doing this with hadamard'd bits :)
Only supports adding positive integers
>>> ripple_adder(3, 4)
7
>>> ripple_adder(10, 4)
14
>>> ripple_adder(-1, 10)
Traceback (most recent call last):
...
ValueError: Both Integers must be positive!
"""
if val1 < 0 or val2 < 0:
raise ValueError("Both Integers must be positive!")
# Store the Integers
circuit, x, y = store_two_classics(val1, val2)
"""
We are essentially using each bit of x & y respectively as full_adder's input
the carry_input is used from the previous circuit (for circuit num > 1)
the carry_out is just below carry_input because
it will be essentially the carry_input for the next full_adder
"""
for i in range(len(x)):
full_adder(circuit, i, len(x) + i, len(x) + len(y) + i, len(x) + len(y) + i + 1)
circuit.barrier() # Optional, just for aesthetics
# Measure the resultant qBits
for i in range(len(x) + 1):
circuit.measure([(len(x) * 2) + i], [i])
res = execute(circuit, backend, shots=1).result()
# The result is in binary. Convert it back to int
return int(list(res.get_counts())[0], 2)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # https://github.com/rupansh/QuantumComputing/blob/master/rippleadd.py
# https://en.wikipedia.org/wiki/Adder_(electronics)#Full_adder
# https://en.wikipedia.org/wiki/Controlled_NOT_gate
from qiskit import Aer, QuantumCircuit, execute
from qiskit.providers import BaseBackend
def store_two_classics(val1: int, val2: int) -> tuple[QuantumCircuit, str, str]:
"""
Generates a Quantum Circuit which stores two classical integers
Returns the circuit and binary representation of the integers
"""
x, y = bin(val1)[2:], bin(val2)[2:] # Remove leading '0b'
# Ensure that both strings are of the same length
if len(x) > len(y):
y = y.zfill(len(x))
else:
x = x.zfill(len(y))
# We need (3 * number of bits in the larger number)+1 qBits
# The second parameter is the number of classical registers, to measure the result
circuit = QuantumCircuit((len(x) * 3) + 1, len(x) + 1)
# We are essentially "not-ing" the bits that are 1
# Reversed because its easier to perform ops on more significant bits
for i in range(len(x)):
if x[::-1][i] == "1":
circuit.x(i)
for j in range(len(y)):
if y[::-1][j] == "1":
circuit.x(len(x) + j)
return circuit, x, y
def full_adder(
circuit: QuantumCircuit,
input1_loc: int,
input2_loc: int,
carry_in: int,
carry_out: int,
):
"""
Quantum Equivalent of a Full Adder Circuit
CX/CCX is like 2-way/3-way XOR
"""
circuit.ccx(input1_loc, input2_loc, carry_out)
circuit.cx(input1_loc, input2_loc)
circuit.ccx(input2_loc, carry_in, carry_out)
circuit.cx(input2_loc, carry_in)
circuit.cx(input1_loc, input2_loc)
# The default value for **backend** is the result of a function call which is not
# normally recommended and causes flake8-bugbear to raise a B008 error. However,
# in this case, this is accptable because `Aer.get_backend()` is called when the
# function is defined and that same backend is then reused for all function calls.
def ripple_adder(
val1: int,
val2: int,
backend: BaseBackend = Aer.get_backend("qasm_simulator"), # noqa: B008
) -> int:
"""
Quantum Equivalent of a Ripple Adder Circuit
Uses qasm_simulator backend by default
Currently only adds 'emulated' Classical Bits
but nothing prevents us from doing this with hadamard'd bits :)
Only supports adding positive integers
>>> ripple_adder(3, 4)
7
>>> ripple_adder(10, 4)
14
>>> ripple_adder(-1, 10)
Traceback (most recent call last):
...
ValueError: Both Integers must be positive!
"""
if val1 < 0 or val2 < 0:
raise ValueError("Both Integers must be positive!")
# Store the Integers
circuit, x, y = store_two_classics(val1, val2)
"""
We are essentially using each bit of x & y respectively as full_adder's input
the carry_input is used from the previous circuit (for circuit num > 1)
the carry_out is just below carry_input because
it will be essentially the carry_input for the next full_adder
"""
for i in range(len(x)):
full_adder(circuit, i, len(x) + i, len(x) + len(y) + i, len(x) + len(y) + i + 1)
circuit.barrier() # Optional, just for aesthetics
# Measure the resultant qBits
for i in range(len(x) + 1):
circuit.measure([(len(x) * 2) + i], [i])
res = execute(circuit, backend, shots=1).result()
# The result is in binary. Convert it back to int
return int(list(res.get_counts())[0], 2)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Slowsort is a sorting algorithm. It is of humorous nature and not useful.
It's based on the principle of multiply and surrender,
a tongue-in-cheek joke of divide and conquer.
It was published in 1986 by Andrei Broder and Jorge Stolfi
in their paper Pessimal Algorithms and Simplexity Analysis
(a parody of optimal algorithms and complexity analysis).
Source: https://en.wikipedia.org/wiki/Slowsort
"""
from __future__ import annotations
def slowsort(sequence: list, start: int | None = None, end: int | None = None) -> None:
"""
Sorts sequence[start..end] (both inclusive) in-place.
start defaults to 0 if not given.
end defaults to len(sequence) - 1 if not given.
It returns None.
>>> seq = [1, 6, 2, 5, 3, 4, 4, 5]; slowsort(seq); seq
[1, 2, 3, 4, 4, 5, 5, 6]
>>> seq = []; slowsort(seq); seq
[]
>>> seq = [2]; slowsort(seq); seq
[2]
>>> seq = [1, 2, 3, 4]; slowsort(seq); seq
[1, 2, 3, 4]
>>> seq = [4, 3, 2, 1]; slowsort(seq); seq
[1, 2, 3, 4]
>>> seq = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]; slowsort(seq, 2, 7); seq
[9, 8, 2, 3, 4, 5, 6, 7, 1, 0]
>>> seq = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]; slowsort(seq, end = 4); seq
[5, 6, 7, 8, 9, 4, 3, 2, 1, 0]
>>> seq = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]; slowsort(seq, start = 5); seq
[9, 8, 7, 6, 5, 0, 1, 2, 3, 4]
"""
if start is None:
start = 0
if end is None:
end = len(sequence) - 1
if start >= end:
return
mid = (start + end) // 2
slowsort(sequence, start, mid)
slowsort(sequence, mid + 1, end)
if sequence[end] < sequence[mid]:
sequence[end], sequence[mid] = sequence[mid], sequence[end]
slowsort(sequence, start, end - 1)
if __name__ == "__main__":
from doctest import testmod
testmod()
| """
Slowsort is a sorting algorithm. It is of humorous nature and not useful.
It's based on the principle of multiply and surrender,
a tongue-in-cheek joke of divide and conquer.
It was published in 1986 by Andrei Broder and Jorge Stolfi
in their paper Pessimal Algorithms and Simplexity Analysis
(a parody of optimal algorithms and complexity analysis).
Source: https://en.wikipedia.org/wiki/Slowsort
"""
from __future__ import annotations
def slowsort(sequence: list, start: int | None = None, end: int | None = None) -> None:
"""
Sorts sequence[start..end] (both inclusive) in-place.
start defaults to 0 if not given.
end defaults to len(sequence) - 1 if not given.
It returns None.
>>> seq = [1, 6, 2, 5, 3, 4, 4, 5]; slowsort(seq); seq
[1, 2, 3, 4, 4, 5, 5, 6]
>>> seq = []; slowsort(seq); seq
[]
>>> seq = [2]; slowsort(seq); seq
[2]
>>> seq = [1, 2, 3, 4]; slowsort(seq); seq
[1, 2, 3, 4]
>>> seq = [4, 3, 2, 1]; slowsort(seq); seq
[1, 2, 3, 4]
>>> seq = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]; slowsort(seq, 2, 7); seq
[9, 8, 2, 3, 4, 5, 6, 7, 1, 0]
>>> seq = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]; slowsort(seq, end = 4); seq
[5, 6, 7, 8, 9, 4, 3, 2, 1, 0]
>>> seq = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]; slowsort(seq, start = 5); seq
[9, 8, 7, 6, 5, 0, 1, 2, 3, 4]
"""
if start is None:
start = 0
if end is None:
end = len(sequence) - 1
if start >= end:
return
mid = (start + end) // 2
slowsort(sequence, start, mid)
slowsort(sequence, mid + 1, end)
if sequence[end] < sequence[mid]:
sequence[end], sequence[mid] = sequence[mid], sequence[end]
slowsort(sequence, start, end - 1)
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
In this problem, we want to determine all possible permutations
of the given sequence. We use backtracking to solve this problem.
Time complexity: O(n! * n),
where n denotes the length of the given sequence.
"""
from __future__ import annotations
def generate_all_permutations(sequence: list[int | str]) -> None:
create_state_space_tree(sequence, [], 0, [0 for i in range(len(sequence))])
def create_state_space_tree(
sequence: list[int | str],
current_sequence: list[int | str],
index: int,
index_used: list[int],
) -> None:
"""
Creates a state space tree to iterate through each branch using DFS.
We know that each state has exactly len(sequence) - index children.
It terminates when it reaches the end of the given sequence.
"""
if index == len(sequence):
print(current_sequence)
return
for i in range(len(sequence)):
if not index_used[i]:
current_sequence.append(sequence[i])
index_used[i] = True
create_state_space_tree(sequence, current_sequence, index + 1, index_used)
current_sequence.pop()
index_used[i] = False
"""
remove the comment to take an input from the user
print("Enter the elements")
sequence = list(map(int, input().split()))
"""
sequence: list[int | str] = [3, 1, 2, 4]
generate_all_permutations(sequence)
sequence_2: list[int | str] = ["A", "B", "C"]
generate_all_permutations(sequence_2)
| """
In this problem, we want to determine all possible permutations
of the given sequence. We use backtracking to solve this problem.
Time complexity: O(n! * n),
where n denotes the length of the given sequence.
"""
from __future__ import annotations
def generate_all_permutations(sequence: list[int | str]) -> None:
create_state_space_tree(sequence, [], 0, [0 for i in range(len(sequence))])
def create_state_space_tree(
sequence: list[int | str],
current_sequence: list[int | str],
index: int,
index_used: list[int],
) -> None:
"""
Creates a state space tree to iterate through each branch using DFS.
We know that each state has exactly len(sequence) - index children.
It terminates when it reaches the end of the given sequence.
"""
if index == len(sequence):
print(current_sequence)
return
for i in range(len(sequence)):
if not index_used[i]:
current_sequence.append(sequence[i])
index_used[i] = True
create_state_space_tree(sequence, current_sequence, index + 1, index_used)
current_sequence.pop()
index_used[i] = False
"""
remove the comment to take an input from the user
print("Enter the elements")
sequence = list(map(int, input().split()))
"""
sequence: list[int | str] = [3, 1, 2, 4]
generate_all_permutations(sequence)
sequence_2: list[int | str] = ["A", "B", "C"]
generate_all_permutations(sequence_2)
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
An RSA prime factor algorithm.
The program can efficiently factor RSA prime number given the private key d and
public key e.
Source: on page 3 of https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf
More readable source: https://www.di-mgt.com.au/rsa_factorize_n.html
large number can take minutes to factor, therefore are not included in doctest.
"""
from __future__ import annotations
import math
import random
def rsafactor(d: int, e: int, N: int) -> list[int]:
"""
This function returns the factors of N, where p*q=N
Return: [p, q]
We call N the RSA modulus, e the encryption exponent, and d the decryption exponent.
The pair (N, e) is the public key. As its name suggests, it is public and is used to
encrypt messages.
The pair (N, d) is the secret key or private key and is known only to the recipient
of encrypted messages.
>>> rsafactor(3, 16971, 25777)
[149, 173]
>>> rsafactor(7331, 11, 27233)
[113, 241]
>>> rsafactor(4021, 13, 17711)
[89, 199]
"""
k = d * e - 1
p = 0
q = 0
while p == 0:
g = random.randint(2, N - 1)
t = k
while True:
if t % 2 == 0:
t = t // 2
x = (g ** t) % N
y = math.gcd(x - 1, N)
if x > 1 and y > 1:
p = y
q = N // y
break # find the correct factors
else:
break # t is not divisible by 2, break and choose another g
return sorted([p, q])
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
An RSA prime factor algorithm.
The program can efficiently factor RSA prime number given the private key d and
public key e.
Source: on page 3 of https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf
More readable source: https://www.di-mgt.com.au/rsa_factorize_n.html
large number can take minutes to factor, therefore are not included in doctest.
"""
from __future__ import annotations
import math
import random
def rsafactor(d: int, e: int, N: int) -> list[int]:
"""
This function returns the factors of N, where p*q=N
Return: [p, q]
We call N the RSA modulus, e the encryption exponent, and d the decryption exponent.
The pair (N, e) is the public key. As its name suggests, it is public and is used to
encrypt messages.
The pair (N, d) is the secret key or private key and is known only to the recipient
of encrypted messages.
>>> rsafactor(3, 16971, 25777)
[149, 173]
>>> rsafactor(7331, 11, 27233)
[113, 241]
>>> rsafactor(4021, 13, 17711)
[89, 199]
"""
k = d * e - 1
p = 0
q = 0
while p == 0:
g = random.randint(2, N - 1)
t = k
while True:
if t % 2 == 0:
t = t // 2
x = (g ** t) % N
y = math.gcd(x - 1, N)
if x > 1 and y > 1:
p = y
q = N // y
break # find the correct factors
else:
break # t is not divisible by 2, break and choose another g
return sorted([p, q])
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #
| #
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
A perfect number is a number for which the sum of its proper divisors is exactly
equal to the number. For example, the sum of the proper divisors of 28 would be
1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n
and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest
number that can be written as the sum of two abundant numbers is 24. By
mathematical analysis, it can be shown that all integers greater than 28123
can be written as the sum of two abundant numbers. However, this upper limit
cannot be reduced any further by analysis even though it is known that the
greatest number that cannot be expressed as the sum of two abundant numbers
is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum
of two abundant numbers.
"""
def solution(limit=28123):
"""
Finds the sum of all the positive integers which cannot be written as
the sum of two abundant numbers
as described by the statement above.
>>> solution()
4179871
"""
sumDivs = [1] * (limit + 1)
for i in range(2, int(limit ** 0.5) + 1):
sumDivs[i * i] += i
for k in range(i + 1, limit // i + 1):
sumDivs[k * i] += k + i
abundants = set()
res = 0
for n in range(1, limit + 1):
if sumDivs[n] > n:
abundants.add(n)
if not any((n - a in abundants) for a in abundants):
res += n
return res
if __name__ == "__main__":
print(solution())
| """
A perfect number is a number for which the sum of its proper divisors is exactly
equal to the number. For example, the sum of the proper divisors of 28 would be
1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n
and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest
number that can be written as the sum of two abundant numbers is 24. By
mathematical analysis, it can be shown that all integers greater than 28123
can be written as the sum of two abundant numbers. However, this upper limit
cannot be reduced any further by analysis even though it is known that the
greatest number that cannot be expressed as the sum of two abundant numbers
is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum
of two abundant numbers.
"""
def solution(limit=28123):
"""
Finds the sum of all the positive integers which cannot be written as
the sum of two abundant numbers
as described by the statement above.
>>> solution()
4179871
"""
sumDivs = [1] * (limit + 1)
for i in range(2, int(limit ** 0.5) + 1):
sumDivs[i * i] += i
for k in range(i + 1, limit // i + 1):
sumDivs[k * i] += k + i
abundants = set()
res = 0
for n in range(1, limit + 1):
if sumDivs[n] > n:
abundants.add(n)
if not any((n - a in abundants) for a in abundants):
res += n
return res
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python
#
# Sort large text files in a minimum amount of memory
#
import argparse
import os
class FileSplitter:
BLOCK_FILENAME_FORMAT = "block_{0}.dat"
def __init__(self, filename):
self.filename = filename
self.block_filenames = []
def write_block(self, data, block_number):
filename = self.BLOCK_FILENAME_FORMAT.format(block_number)
with open(filename, "w") as file:
file.write(data)
self.block_filenames.append(filename)
def get_block_filenames(self):
return self.block_filenames
def split(self, block_size, sort_key=None):
i = 0
with open(self.filename) as file:
while True:
lines = file.readlines(block_size)
if lines == []:
break
if sort_key is None:
lines.sort()
else:
lines.sort(key=sort_key)
self.write_block("".join(lines), i)
i += 1
def cleanup(self):
map(lambda f: os.remove(f), self.block_filenames)
class NWayMerge:
def select(self, choices):
min_index = -1
min_str = None
for i in range(len(choices)):
if min_str is None or choices[i] < min_str:
min_index = i
return min_index
class FilesArray:
def __init__(self, files):
self.files = files
self.empty = set()
self.num_buffers = len(files)
self.buffers = {i: None for i in range(self.num_buffers)}
def get_dict(self):
return {
i: self.buffers[i] for i in range(self.num_buffers) if i not in self.empty
}
def refresh(self):
for i in range(self.num_buffers):
if self.buffers[i] is None and i not in self.empty:
self.buffers[i] = self.files[i].readline()
if self.buffers[i] == "":
self.empty.add(i)
self.files[i].close()
if len(self.empty) == self.num_buffers:
return False
return True
def unshift(self, index):
value = self.buffers[index]
self.buffers[index] = None
return value
class FileMerger:
def __init__(self, merge_strategy):
self.merge_strategy = merge_strategy
def merge(self, filenames, outfilename, buffer_size):
buffers = FilesArray(self.get_file_handles(filenames, buffer_size))
with open(outfilename, "w", buffer_size) as outfile:
while buffers.refresh():
min_index = self.merge_strategy.select(buffers.get_dict())
outfile.write(buffers.unshift(min_index))
def get_file_handles(self, filenames, buffer_size):
files = {}
for i in range(len(filenames)):
files[i] = open(filenames[i], "r", buffer_size)
return files
class ExternalSort:
def __init__(self, block_size):
self.block_size = block_size
def sort(self, filename, sort_key=None):
num_blocks = self.get_number_blocks(filename, self.block_size)
splitter = FileSplitter(filename)
splitter.split(self.block_size, sort_key)
merger = FileMerger(NWayMerge())
buffer_size = self.block_size / (num_blocks + 1)
merger.merge(splitter.get_block_filenames(), filename + ".out", buffer_size)
splitter.cleanup()
def get_number_blocks(self, filename, block_size):
return (os.stat(filename).st_size / block_size) + 1
def parse_memory(string):
if string[-1].lower() == "k":
return int(string[:-1]) * 1024
elif string[-1].lower() == "m":
return int(string[:-1]) * 1024 * 1024
elif string[-1].lower() == "g":
return int(string[:-1]) * 1024 * 1024 * 1024
else:
return int(string)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"-m", "--mem", help="amount of memory to use for sorting", default="100M"
)
parser.add_argument(
"filename", metavar="<filename>", nargs=1, help="name of file to sort"
)
args = parser.parse_args()
sorter = ExternalSort(parse_memory(args.mem))
sorter.sort(args.filename[0])
if __name__ == "__main__":
main()
| #!/usr/bin/env python
#
# Sort large text files in a minimum amount of memory
#
import argparse
import os
class FileSplitter:
BLOCK_FILENAME_FORMAT = "block_{0}.dat"
def __init__(self, filename):
self.filename = filename
self.block_filenames = []
def write_block(self, data, block_number):
filename = self.BLOCK_FILENAME_FORMAT.format(block_number)
with open(filename, "w") as file:
file.write(data)
self.block_filenames.append(filename)
def get_block_filenames(self):
return self.block_filenames
def split(self, block_size, sort_key=None):
i = 0
with open(self.filename) as file:
while True:
lines = file.readlines(block_size)
if lines == []:
break
if sort_key is None:
lines.sort()
else:
lines.sort(key=sort_key)
self.write_block("".join(lines), i)
i += 1
def cleanup(self):
map(lambda f: os.remove(f), self.block_filenames)
class NWayMerge:
def select(self, choices):
min_index = -1
min_str = None
for i in range(len(choices)):
if min_str is None or choices[i] < min_str:
min_index = i
return min_index
class FilesArray:
def __init__(self, files):
self.files = files
self.empty = set()
self.num_buffers = len(files)
self.buffers = {i: None for i in range(self.num_buffers)}
def get_dict(self):
return {
i: self.buffers[i] for i in range(self.num_buffers) if i not in self.empty
}
def refresh(self):
for i in range(self.num_buffers):
if self.buffers[i] is None and i not in self.empty:
self.buffers[i] = self.files[i].readline()
if self.buffers[i] == "":
self.empty.add(i)
self.files[i].close()
if len(self.empty) == self.num_buffers:
return False
return True
def unshift(self, index):
value = self.buffers[index]
self.buffers[index] = None
return value
class FileMerger:
def __init__(self, merge_strategy):
self.merge_strategy = merge_strategy
def merge(self, filenames, outfilename, buffer_size):
buffers = FilesArray(self.get_file_handles(filenames, buffer_size))
with open(outfilename, "w", buffer_size) as outfile:
while buffers.refresh():
min_index = self.merge_strategy.select(buffers.get_dict())
outfile.write(buffers.unshift(min_index))
def get_file_handles(self, filenames, buffer_size):
files = {}
for i in range(len(filenames)):
files[i] = open(filenames[i], "r", buffer_size)
return files
class ExternalSort:
def __init__(self, block_size):
self.block_size = block_size
def sort(self, filename, sort_key=None):
num_blocks = self.get_number_blocks(filename, self.block_size)
splitter = FileSplitter(filename)
splitter.split(self.block_size, sort_key)
merger = FileMerger(NWayMerge())
buffer_size = self.block_size / (num_blocks + 1)
merger.merge(splitter.get_block_filenames(), filename + ".out", buffer_size)
splitter.cleanup()
def get_number_blocks(self, filename, block_size):
return (os.stat(filename).st_size / block_size) + 1
def parse_memory(string):
if string[-1].lower() == "k":
return int(string[:-1]) * 1024
elif string[-1].lower() == "m":
return int(string[:-1]) * 1024 * 1024
elif string[-1].lower() == "g":
return int(string[:-1]) * 1024 * 1024 * 1024
else:
return int(string)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"-m", "--mem", help="amount of memory to use for sorting", default="100M"
)
parser.add_argument(
"filename", metavar="<filename>", nargs=1, help="name of file to sort"
)
args = parser.parse_args()
sorter = ExternalSort(parse_memory(args.mem))
sorter.sort(args.filename[0])
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Graph Coloring also called "m coloring problem"
consists of coloring a given graph with at most m colors
such that no adjacent vertices are assigned the same color
Wikipedia: https://en.wikipedia.org/wiki/Graph_coloring
"""
def valid_coloring(
neighbours: list[int], colored_vertices: list[int], color: int
) -> bool:
"""
For each neighbour check if the coloring constraint is satisfied
If any of the neighbours fail the constraint return False
If all neighbours validate the constraint return True
>>> neighbours = [0,1,0,1,0]
>>> colored_vertices = [0, 2, 1, 2, 0]
>>> color = 1
>>> valid_coloring(neighbours, colored_vertices, color)
True
>>> color = 2
>>> valid_coloring(neighbours, colored_vertices, color)
False
"""
# Does any neighbour not satisfy the constraints
return not any(
neighbour == 1 and colored_vertices[i] == color
for i, neighbour in enumerate(neighbours)
)
def util_color(
graph: list[list[int]], max_colors: int, colored_vertices: list[int], index: int
) -> bool:
"""
Pseudo-Code
Base Case:
1. Check if coloring is complete
1.1 If complete return True (meaning that we successfully colored the graph)
Recursive Step:
2. Iterates over each color:
Check if the current coloring is valid:
2.1. Color given vertex
2.2. Do recursive call, check if this coloring leads to a solution
2.4. if current coloring leads to a solution return
2.5. Uncolor given vertex
>>> graph = [[0, 1, 0, 0, 0],
... [1, 0, 1, 0, 1],
... [0, 1, 0, 1, 0],
... [0, 1, 1, 0, 0],
... [0, 1, 0, 0, 0]]
>>> max_colors = 3
>>> colored_vertices = [0, 1, 0, 0, 0]
>>> index = 3
>>> util_color(graph, max_colors, colored_vertices, index)
True
>>> max_colors = 2
>>> util_color(graph, max_colors, colored_vertices, index)
False
"""
# Base Case
if index == len(graph):
return True
# Recursive Step
for i in range(max_colors):
if valid_coloring(graph[index], colored_vertices, i):
# Color current vertex
colored_vertices[index] = i
# Validate coloring
if util_color(graph, max_colors, colored_vertices, index + 1):
return True
# Backtrack
colored_vertices[index] = -1
return False
def color(graph: list[list[int]], max_colors: int) -> list[int]:
"""
Wrapper function to call subroutine called util_color
which will either return True or False.
If True is returned colored_vertices list is filled with correct colorings
>>> graph = [[0, 1, 0, 0, 0],
... [1, 0, 1, 0, 1],
... [0, 1, 0, 1, 0],
... [0, 1, 1, 0, 0],
... [0, 1, 0, 0, 0]]
>>> max_colors = 3
>>> color(graph, max_colors)
[0, 1, 0, 2, 0]
>>> max_colors = 2
>>> color(graph, max_colors)
[]
"""
colored_vertices = [-1] * len(graph)
if util_color(graph, max_colors, colored_vertices, 0):
return colored_vertices
return []
| """
Graph Coloring also called "m coloring problem"
consists of coloring a given graph with at most m colors
such that no adjacent vertices are assigned the same color
Wikipedia: https://en.wikipedia.org/wiki/Graph_coloring
"""
def valid_coloring(
neighbours: list[int], colored_vertices: list[int], color: int
) -> bool:
"""
For each neighbour check if the coloring constraint is satisfied
If any of the neighbours fail the constraint return False
If all neighbours validate the constraint return True
>>> neighbours = [0,1,0,1,0]
>>> colored_vertices = [0, 2, 1, 2, 0]
>>> color = 1
>>> valid_coloring(neighbours, colored_vertices, color)
True
>>> color = 2
>>> valid_coloring(neighbours, colored_vertices, color)
False
"""
# Does any neighbour not satisfy the constraints
return not any(
neighbour == 1 and colored_vertices[i] == color
for i, neighbour in enumerate(neighbours)
)
def util_color(
graph: list[list[int]], max_colors: int, colored_vertices: list[int], index: int
) -> bool:
"""
Pseudo-Code
Base Case:
1. Check if coloring is complete
1.1 If complete return True (meaning that we successfully colored the graph)
Recursive Step:
2. Iterates over each color:
Check if the current coloring is valid:
2.1. Color given vertex
2.2. Do recursive call, check if this coloring leads to a solution
2.4. if current coloring leads to a solution return
2.5. Uncolor given vertex
>>> graph = [[0, 1, 0, 0, 0],
... [1, 0, 1, 0, 1],
... [0, 1, 0, 1, 0],
... [0, 1, 1, 0, 0],
... [0, 1, 0, 0, 0]]
>>> max_colors = 3
>>> colored_vertices = [0, 1, 0, 0, 0]
>>> index = 3
>>> util_color(graph, max_colors, colored_vertices, index)
True
>>> max_colors = 2
>>> util_color(graph, max_colors, colored_vertices, index)
False
"""
# Base Case
if index == len(graph):
return True
# Recursive Step
for i in range(max_colors):
if valid_coloring(graph[index], colored_vertices, i):
# Color current vertex
colored_vertices[index] = i
# Validate coloring
if util_color(graph, max_colors, colored_vertices, index + 1):
return True
# Backtrack
colored_vertices[index] = -1
return False
def color(graph: list[list[int]], max_colors: int) -> list[int]:
"""
Wrapper function to call subroutine called util_color
which will either return True or False.
If True is returned colored_vertices list is filled with correct colorings
>>> graph = [[0, 1, 0, 0, 0],
... [1, 0, 1, 0, 1],
... [0, 1, 0, 1, 0],
... [0, 1, 1, 0, 0],
... [0, 1, 0, 0, 0]]
>>> max_colors = 3
>>> color(graph, max_colors)
[0, 1, 0, 2, 0]
>>> max_colors = 2
>>> color(graph, max_colors)
[]
"""
colored_vertices = [-1] * len(graph)
if util_color(graph, max_colors, colored_vertices, 0):
return colored_vertices
return []
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Doubly_linked_list
"""
class Node:
def __init__(self, data):
self.data = data
self.previous = None
self.next = None
def __str__(self):
return f"{self.data}"
class DoublyLinkedList:
def __init__(self):
self.head = None
self.tail = None
def __iter__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_head('b')
>>> linked_list.insert_at_head('a')
>>> linked_list.insert_at_tail('c')
>>> tuple(linked_list)
('a', 'b', 'c')
"""
node = self.head
while node:
yield node.data
node = node.next
def __str__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_tail('a')
>>> linked_list.insert_at_tail('b')
>>> linked_list.insert_at_tail('c')
>>> str(linked_list)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> for i in range(0, 5):
... linked_list.insert_at_nth(i, i + 1)
>>> len(linked_list) == 5
True
"""
return len(tuple(iter(self)))
def insert_at_head(self, data):
self.insert_at_nth(0, data)
def insert_at_tail(self, data):
self.insert_at_nth(len(self), data)
def insert_at_nth(self, index: int, data):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_nth(-1, 666)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> linked_list.insert_at_nth(1, 666)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> linked_list.insert_at_nth(0, 2)
>>> linked_list.insert_at_nth(0, 1)
>>> linked_list.insert_at_nth(2, 4)
>>> linked_list.insert_at_nth(2, 3)
>>> str(linked_list)
'1->2->3->4'
>>> linked_list.insert_at_nth(5, 5)
Traceback (most recent call last):
....
IndexError: list index out of range
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = self.tail = new_node
elif index == 0:
self.head.previous = new_node
new_node.next = self.head
self.head = new_node
elif index == len(self):
self.tail.next = new_node
new_node.previous = self.tail
self.tail = new_node
else:
temp = self.head
for i in range(0, index):
temp = temp.next
temp.previous.next = new_node
new_node.previous = temp.previous
new_node.next = temp
temp.previous = new_node
def delete_head(self):
return self.delete_at_nth(0)
def delete_tail(self):
return self.delete_at_nth(len(self) - 1)
def delete_at_nth(self, index: int):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.delete_at_nth(0)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> for i in range(0, 5):
... linked_list.insert_at_nth(i, i + 1)
>>> linked_list.delete_at_nth(0) == 1
True
>>> linked_list.delete_at_nth(3) == 5
True
>>> linked_list.delete_at_nth(1) == 3
True
>>> str(linked_list)
'2->4'
>>> linked_list.delete_at_nth(2)
Traceback (most recent call last):
....
IndexError: list index out of range
"""
if not 0 <= index <= len(self) - 1:
raise IndexError("list index out of range")
delete_node = self.head # default first node
if len(self) == 1:
self.head = self.tail = None
elif index == 0:
self.head = self.head.next
self.head.previous = None
elif index == len(self) - 1:
delete_node = self.tail
self.tail = self.tail.previous
self.tail.next = None
else:
temp = self.head
for i in range(0, index):
temp = temp.next
delete_node = temp
temp.next.previous = temp.previous
temp.previous.next = temp.next
return delete_node.data
def delete(self, data) -> str:
current = self.head
while current.data != data: # Find the position to delete
if current.next:
current = current.next
else: # We have reached the end an no value matches
return "No data matching given value"
if current == self.head:
self.delete_head()
elif current == self.tail:
self.delete_tail()
else: # Before: 1 <--> 2(current) <--> 3
current.previous.next = current.next # 1 --> 3
current.next.previous = current.previous # 1 <--> 3
return data
def is_empty(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_at_tail(1)
>>> linked_list.is_empty()
False
"""
return len(self) == 0
def test_doubly_linked_list() -> None:
"""
>>> test_doubly_linked_list()
"""
linked_list = DoublyLinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_at_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_at_head(0)
linked_list.insert_at_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_at_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
if __name__ == "__main__":
from doctest import testmod
testmod()
| """
https://en.wikipedia.org/wiki/Doubly_linked_list
"""
class Node:
def __init__(self, data):
self.data = data
self.previous = None
self.next = None
def __str__(self):
return f"{self.data}"
class DoublyLinkedList:
def __init__(self):
self.head = None
self.tail = None
def __iter__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_head('b')
>>> linked_list.insert_at_head('a')
>>> linked_list.insert_at_tail('c')
>>> tuple(linked_list)
('a', 'b', 'c')
"""
node = self.head
while node:
yield node.data
node = node.next
def __str__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_tail('a')
>>> linked_list.insert_at_tail('b')
>>> linked_list.insert_at_tail('c')
>>> str(linked_list)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> for i in range(0, 5):
... linked_list.insert_at_nth(i, i + 1)
>>> len(linked_list) == 5
True
"""
return len(tuple(iter(self)))
def insert_at_head(self, data):
self.insert_at_nth(0, data)
def insert_at_tail(self, data):
self.insert_at_nth(len(self), data)
def insert_at_nth(self, index: int, data):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_nth(-1, 666)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> linked_list.insert_at_nth(1, 666)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> linked_list.insert_at_nth(0, 2)
>>> linked_list.insert_at_nth(0, 1)
>>> linked_list.insert_at_nth(2, 4)
>>> linked_list.insert_at_nth(2, 3)
>>> str(linked_list)
'1->2->3->4'
>>> linked_list.insert_at_nth(5, 5)
Traceback (most recent call last):
....
IndexError: list index out of range
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = self.tail = new_node
elif index == 0:
self.head.previous = new_node
new_node.next = self.head
self.head = new_node
elif index == len(self):
self.tail.next = new_node
new_node.previous = self.tail
self.tail = new_node
else:
temp = self.head
for i in range(0, index):
temp = temp.next
temp.previous.next = new_node
new_node.previous = temp.previous
new_node.next = temp
temp.previous = new_node
def delete_head(self):
return self.delete_at_nth(0)
def delete_tail(self):
return self.delete_at_nth(len(self) - 1)
def delete_at_nth(self, index: int):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.delete_at_nth(0)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> for i in range(0, 5):
... linked_list.insert_at_nth(i, i + 1)
>>> linked_list.delete_at_nth(0) == 1
True
>>> linked_list.delete_at_nth(3) == 5
True
>>> linked_list.delete_at_nth(1) == 3
True
>>> str(linked_list)
'2->4'
>>> linked_list.delete_at_nth(2)
Traceback (most recent call last):
....
IndexError: list index out of range
"""
if not 0 <= index <= len(self) - 1:
raise IndexError("list index out of range")
delete_node = self.head # default first node
if len(self) == 1:
self.head = self.tail = None
elif index == 0:
self.head = self.head.next
self.head.previous = None
elif index == len(self) - 1:
delete_node = self.tail
self.tail = self.tail.previous
self.tail.next = None
else:
temp = self.head
for i in range(0, index):
temp = temp.next
delete_node = temp
temp.next.previous = temp.previous
temp.previous.next = temp.next
return delete_node.data
def delete(self, data) -> str:
current = self.head
while current.data != data: # Find the position to delete
if current.next:
current = current.next
else: # We have reached the end an no value matches
return "No data matching given value"
if current == self.head:
self.delete_head()
elif current == self.tail:
self.delete_tail()
else: # Before: 1 <--> 2(current) <--> 3
current.previous.next = current.next # 1 --> 3
current.next.previous = current.previous # 1 <--> 3
return data
def is_empty(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_at_tail(1)
>>> linked_list.is_empty()
False
"""
return len(self) == 0
def test_doubly_linked_list() -> None:
"""
>>> test_doubly_linked_list()
"""
linked_list = DoublyLinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_at_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_at_head(0)
linked_list.insert_at_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_at_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of the bogosort algorithm,
also known as permutation sort, stupid sort, slowsort, shotgun sort, or monkey sort.
Bogosort generates random permutations until it guesses the correct one.
More info on: https://en.wikipedia.org/wiki/Bogosort
For doctests run following command:
python -m doctest -v bogo_sort.py
or
python3 -m doctest -v bogo_sort.py
For manual testing run:
python bogo_sort.py
"""
import random
def bogo_sort(collection):
"""Pure implementation of the bogosort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> bogo_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> bogo_sort([])
[]
>>> bogo_sort([-2, -5, -45])
[-45, -5, -2]
"""
def is_sorted(collection):
if len(collection) < 2:
return True
for i in range(len(collection) - 1):
if collection[i] > collection[i + 1]:
return False
return True
while not is_sorted(collection):
random.shuffle(collection)
return collection
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(bogo_sort(unsorted))
| """
This is a pure Python implementation of the bogosort algorithm,
also known as permutation sort, stupid sort, slowsort, shotgun sort, or monkey sort.
Bogosort generates random permutations until it guesses the correct one.
More info on: https://en.wikipedia.org/wiki/Bogosort
For doctests run following command:
python -m doctest -v bogo_sort.py
or
python3 -m doctest -v bogo_sort.py
For manual testing run:
python bogo_sort.py
"""
import random
def bogo_sort(collection):
"""Pure implementation of the bogosort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> bogo_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> bogo_sort([])
[]
>>> bogo_sort([-2, -5, -45])
[-45, -5, -2]
"""
def is_sorted(collection):
if len(collection) < 2:
return True
for i in range(len(collection) - 1):
if collection[i] > collection[i + 1]:
return False
return True
while not is_sorted(collection):
random.shuffle(collection)
return collection
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(bogo_sort(unsorted))
| -1 |
TheAlgorithms/Python | 4,927 | [mypy] Fix type annotations for data_structures->linked_list directory files | ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ParthSatodiya | 2021-10-02T18:42:18Z | 2021-10-07T15:18:23Z | d654806eae5dc6027911424ea828e566a64641fd | d324f91fe75cc859335ee1f7c9c6307d958d0558 | [mypy] Fix type annotations for data_structures->linked_list directory files. ```Shell
$ python -m mypy -V
mypy 0.910
$ python -m mypy data_structures/linked_list/
Success: no issues found in 14 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
|