repo_name (string, 1 class) | pr_number (int64, 4.12k to 11.2k) | pr_title (string, length 9 to 107) | pr_description (string, length 107 to 5.48k) | author (string, length 4 to 18) | date_created (unknown) | date_merged (unknown) | previous_commit (string, length 40) | pr_commit (string, length 40) | query (string, length 118 to 5.52k) | before_content (string, length 0 to 7.93M) | after_content (string, length 0 to 7.93M) | label (int64, -1 to 1)
---|---|---|---|---|---|---|---|---|---|---|---|---|
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Approximates the area under the curve using the trapezoidal rule
"""
from __future__ import annotations
from collections.abc import Callable
def trapezoidal_area(
fnc: Callable[[int | float], int | float],
x_start: int | float,
x_end: int | float,
steps: int = 100,
) -> float:
"""
Treats the curve as a collection of straight line segments and sums the
areas of the trapezoids they form with the x-axis
:param fnc: a function which defines the curve
:param x_start: left end point of the interval
:param x_end: right end point of the interval
:param steps: an accuracy gauge; more steps increases the accuracy
:return: a float approximating the area under the curve
>>> def f(x):
... return 5
>>> '%.3f' % trapezoidal_area(f, 12.0, 14.0, 1000)
'10.000'
>>> def f(x):
... return 9*x**2
>>> '%.4f' % trapezoidal_area(f, -4.0, 0, 10000)
'192.0000'
>>> '%.4f' % trapezoidal_area(f, -4.0, 4.0, 10000)
'384.0000'
"""
x1 = x_start
fx1 = fnc(x_start)
area = 0.0
for i in range(steps):
# Approximates small segments of curve as linear and solve
# for trapezoidal area
x2 = (x_end - x_start) / steps + x1
fx2 = fnc(x2)
area += abs(fx2 + fx1) * (x2 - x1) / 2
# Increment step
x1 = x2
fx1 = fx2
return area
if __name__ == "__main__":
def f(x):
return x**3
print("f(x) = x^3")
print("The area between the curve, x = -10, x = 10 and the x axis is:")
i = 10
while i <= 100000:
area = trapezoidal_area(f, -5, 5, i)
print(f"with {i} steps: {area}")
i *= 10
| """
Approximates the area under the curve using the trapezoidal rule
"""
from __future__ import annotations
from collections.abc import Callable
def trapezoidal_area(
fnc: Callable[[int | float], int | float],
x_start: int | float,
x_end: int | float,
steps: int = 100,
) -> float:
"""
Treats the curve as a collection of straight line segments and sums the
areas of the trapezoids they form with the x-axis
:param fnc: a function which defines the curve
:param x_start: left end point of the interval
:param x_end: right end point of the interval
:param steps: an accuracy gauge; more steps increases the accuracy
:return: a float approximating the area under the curve
>>> def f(x):
... return 5
>>> '%.3f' % trapezoidal_area(f, 12.0, 14.0, 1000)
'10.000'
>>> def f(x):
... return 9*x**2
>>> '%.4f' % trapezoidal_area(f, -4.0, 0, 10000)
'192.0000'
>>> '%.4f' % trapezoidal_area(f, -4.0, 4.0, 10000)
'384.0000'
"""
x1 = x_start
fx1 = fnc(x_start)
area = 0.0
for i in range(steps):
# Approximates small segments of curve as linear and solve
# for trapezoidal area
x2 = (x_end - x_start) / steps + x1
fx2 = fnc(x2)
area += abs(fx2 + fx1) * (x2 - x1) / 2
# Increment step
x1 = x2
fx1 = fx2
return area
if __name__ == "__main__":
def f(x):
return x**3
print("f(x) = x^3")
print("The area between the curve, x = -10, x = 10 and the x axis is:")
i = 10
while i <= 100000:
area = trapezoidal_area(f, -5, 5, i)
print(f"with {i} steps: {area}")
i *= 10
| -1 |
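A minimal standalone sketch of the same composite trapezoidal rule, useful as a sanity check against a known integral; the helper name `trapezoid` and the test integrand are our own choices, not part of the dataset row above.

def trapezoid(f, a, b, steps=1000):
    # Composite trapezoidal rule: each sub-interval contributes (f(x1) + f(x2)) * h / 2
    h = (b - a) / steps
    total = 0.0
    for k in range(steps):
        x1 = a + k * h
        total += (f(x1) + f(x1 + h)) * h / 2
    return total

print(trapezoid(lambda x: x**2, 0.0, 1.0))  # close to the exact value 1/3 = 0.333...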
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Picks the random index as the pivot
"""
import random
def partition(A, left_index, right_index):
pivot = A[left_index]
i = left_index + 1
for j in range(left_index + 1, right_index):
if A[j] < pivot:
A[j], A[i] = A[i], A[j]
i += 1
A[left_index], A[i - 1] = A[i - 1], A[left_index]
return i - 1
def quick_sort_random(A, left, right):
if left < right:
pivot = random.randint(left, right - 1)
A[pivot], A[left] = (
A[left],
A[pivot],
) # swaps the pivot with the leftmost element
pivot_index = partition(A, left, right)
quick_sort_random(
A, left, pivot_index
) # recursive quicksort to the left of the pivot point
quick_sort_random(
A, pivot_index + 1, right
) # recursive quicksort to the right of the pivot point
def main():
user_input = input("Enter numbers separated by a comma:\n").strip()
arr = [int(item) for item in user_input.split(",")]
quick_sort_random(arr, 0, len(arr))
print(arr)
if __name__ == "__main__":
main()
| """
Picks the random index as the pivot
"""
import random
def partition(A, left_index, right_index):
pivot = A[left_index]
i = left_index + 1
for j in range(left_index + 1, right_index):
if A[j] < pivot:
A[j], A[i] = A[i], A[j]
i += 1
A[left_index], A[i - 1] = A[i - 1], A[left_index]
return i - 1
def quick_sort_random(A, left, right):
if left < right:
pivot = random.randint(left, right - 1)
A[pivot], A[left] = (
A[left],
A[pivot],
) # swaps the pivot with the leftmost element
pivot_index = partition(A, left, right)
quick_sort_random(
A, left, pivot_index
) # recursive quicksort to the left of the pivot point
quick_sort_random(
A, pivot_index + 1, right
) # recursive quicksort to the right of the pivot point
def main():
user_input = input("Enter numbers separated by a comma:\n").strip()
arr = [int(item) for item in user_input.split(",")]
quick_sort_random(arr, 0, len(arr))
print(arr)
if __name__ == "__main__":
main()
| -1 |
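For comparison with the in-place version above, here is a shorter, non-in-place sketch of randomized quicksort; the function name and the sample input are ours, chosen only for illustration.

import random

def quicksort_randomized(items):
    # Returns a new sorted list; the pivot is chosen uniformly at random,
    # which keeps the expected running time at O(n log n) on any input.
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort_randomized(smaller) + equal + quicksort_randomized(larger)

print(quicksort_randomized([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]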
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | -1 |
||
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | # An island in a matrix is a group of connected cells that all share the same value.
# This code counts the number of islands in a given matrix, treating diagonally
# adjacent cells as connected.
class matrix: # Public class to implement a graph
def __init__(self, row: int, col: int, graph: list):
self.ROW = row
self.COL = col
self.graph = graph
def is_safe(self, i, j, visited) -> bool:
return (
0 <= i < self.ROW
and 0 <= j < self.COL
and not visited[i][j]
and self.graph[i][j]
)
def diffs(self, i, j, visited): # Checking all 8 elements surrounding nth element
rowNbr = [-1, -1, -1, 0, 0, 1, 1, 1] # Coordinate order
colNbr = [-1, 0, 1, -1, 1, -1, 0, 1]
visited[i][j] = True # Make those cells visited
for k in range(8):
if self.is_safe(i + rowNbr[k], j + colNbr[k], visited):
self.diffs(i + rowNbr[k], j + colNbr[k], visited)
def count_islands(self) -> int: # And finally, count all islands.
visited = [[False for j in range(self.COL)] for i in range(self.ROW)]
count = 0
for i in range(self.ROW):
for j in range(self.COL):
if visited[i][j] is False and self.graph[i][j] == 1:
self.diffs(i, j, visited)
count += 1
return count
| # An island in a matrix is a group of connected cells that all share the same value.
# This code counts the number of islands in a given matrix, treating diagonally
# adjacent cells as connected.
class matrix: # Public class to implement a graph
def __init__(self, row: int, col: int, graph: list):
self.ROW = row
self.COL = col
self.graph = graph
def is_safe(self, i, j, visited) -> bool:
return (
0 <= i < self.ROW
and 0 <= j < self.COL
and not visited[i][j]
and self.graph[i][j]
)
def diffs(self, i, j, visited): # Checking all 8 elements surrounding nth element
rowNbr = [-1, -1, -1, 0, 0, 1, 1, 1] # Coordinate order
colNbr = [-1, 0, 1, -1, 1, -1, 0, 1]
visited[i][j] = True # Make those cells visited
for k in range(8):
if self.is_safe(i + rowNbr[k], j + colNbr[k], visited):
self.diffs(i + rowNbr[k], j + colNbr[k], visited)
def count_islands(self) -> int: # And finally, count all islands.
visited = [[False for j in range(self.COL)] for i in range(self.ROW)]
count = 0
for i in range(self.ROW):
for j in range(self.COL):
if visited[i][j] is False and self.graph[i][j] == 1:
self.diffs(i, j, visited)
count += 1
return count
| -1 |
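An iterative version of the same 8-directional flood fill, given as a standalone sketch (the function name and sample grid are ours); an explicit stack avoids deep recursion on large grids.

def count_islands(grid):
    # Count groups of 1-cells connected horizontally, vertically or diagonally.
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    islands = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                islands += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (
                                0 <= ni < rows
                                and 0 <= nj < cols
                                and grid[ni][nj] == 1
                                and not seen[ni][nj]
                            ):
                                seen[ni][nj] = True
                                stack.append((ni, nj))
    return islands

print(count_islands([[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]]))  # 2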
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Project Euler Problem 174: https://projecteuler.net/problem=174
We shall define a square lamina to be a square outline with a square "hole" so that
the shape possesses vertical and horizontal symmetry.
Given eight tiles it is possible to form a lamina in only one way: 3x3 square with a
1x1 hole in the middle. However, using thirty-two tiles it is possible to form two
distinct laminae.
If t represents the number of tiles used, we shall say that t = 8 is type L(1) and
t = 32 is type L(2).
Let N(n) be the number of t ≤ 1000000 such that t is type L(n); for example,
N(15) = 832.
What is ∑ N(n) for 1 ≤ n ≤ 10?
"""
from collections import defaultdict
from math import ceil, sqrt
def solution(t_limit: int = 1000000, n_limit: int = 10) -> int:
"""
Return the sum of N(n) for 1 <= n <= n_limit.
>>> solution(1000,5)
249
>>> solution(10000,10)
2383
"""
count: defaultdict = defaultdict(int)
for outer_width in range(3, (t_limit // 4) + 2):
if outer_width * outer_width > t_limit:
hole_width_lower_bound = max(
ceil(sqrt(outer_width * outer_width - t_limit)), 1
)
else:
hole_width_lower_bound = 1
hole_width_lower_bound += (outer_width - hole_width_lower_bound) % 2
for hole_width in range(hole_width_lower_bound, outer_width - 1, 2):
count[outer_width * outer_width - hole_width * hole_width] += 1
return sum(1 for n in count.values() if 1 <= n <= 10)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 174: https://projecteuler.net/problem=174
We shall define a square lamina to be a square outline with a square "hole" so that
the shape possesses vertical and horizontal symmetry.
Given eight tiles it is possible to form a lamina in only one way: 3x3 square with a
1x1 hole in the middle. However, using thirty-two tiles it is possible to form two
distinct laminae.
If t represents the number of tiles used, we shall say that t = 8 is type L(1) and
t = 32 is type L(2).
Let N(n) be the number of t ≤ 1000000 such that t is type L(n); for example,
N(15) = 832.
What is ∑ N(n) for 1 ≤ n ≤ 10?
"""
from collections import defaultdict
from math import ceil, sqrt
def solution(t_limit: int = 1000000, n_limit: int = 10) -> int:
"""
Return the sum of N(n) for 1 <= n <= n_limit.
>>> solution(1000,5)
249
>>> solution(10000,10)
2383
"""
count: defaultdict = defaultdict(int)
for outer_width in range(3, (t_limit // 4) + 2):
if outer_width * outer_width > t_limit:
hole_width_lower_bound = max(
ceil(sqrt(outer_width * outer_width - t_limit)), 1
)
else:
hole_width_lower_bound = 1
hole_width_lower_bound += (outer_width - hole_width_lower_bound) % 2
for hole_width in range(hole_width_lower_bound, outer_width - 1, 2):
count[outer_width * outer_width - hole_width * hole_width] += 1
return sum(1 for n in count.values() if 1 <= n <= 10)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
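The key identity behind the solution above is that a lamina with outer side outer and centred hole side hole uses outer**2 - hole**2 tiles, where outer and hole must have the same parity. A small brute-force check (names ours, for illustration only) confirms the problem statement's examples: t = 8 can be formed one way and t = 32 two ways.

def lamina_ways(tiles):
    # Count (outer, hole) pairs with outer**2 - hole**2 == tiles and matching parity.
    ways = 0
    outer = 3
    while outer * outer - (outer - 2) ** 2 <= tiles:  # thinnest lamina still fits
        for hole in range(outer - 2, 0, -2):
            if outer * outer - hole * hole == tiles:
                ways += 1
        outer += 1
    return ways

print(lamina_ways(8), lamina_ways(32))  # 1 2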
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | from xml.dom import NotFoundErr
import requests
from bs4 import BeautifulSoup, NavigableString
from fake_useragent import UserAgent
BASE_URL = "https://ww1.gogoanime2.org"
def search_scraper(anime_name: str) -> list:
"""[summary]
Take an anime name and
return a list of anime after scraping the site.
>>> type(search_scraper("demon_slayer"))
<class 'list'>
Args:
anime_name (str): [Name of anime]
Raises:
e: [Raises exception on failure]
Returns:
[list]: [List of animes]
"""
# concat the name to form the search url.
search_url = f"{BASE_URL}/search/{anime_name}"
response = requests.get(
search_url, headers={"User-Agent": UserAgent().chrome}
) # request the url.
# Is the response ok?
response.raise_for_status()
# parse with soup.
soup = BeautifulSoup(response.text, "html.parser")
# get list of anime
anime_ul = soup.find("ul", {"class": "items"})
anime_li = anime_ul.children
# for each anime, insert to list. the name and url.
anime_list = []
for anime in anime_li:
if not isinstance(anime, NavigableString):
try:
anime_url, anime_title = (
anime.find("a")["href"],
anime.find("a")["title"],
)
anime_list.append(
{
"title": anime_title,
"url": anime_url,
}
)
except (NotFoundErr, KeyError):
pass
return anime_list
def search_anime_episode_list(episode_endpoint: str) -> list:
"""[summary]
Take an episode endpoint and
return a list of episodes after scraping the site
for that endpoint.
>>> type(search_anime_episode_list("/anime/kimetsu-no-yaiba"))
<class 'list'>
Args:
episode_endpoint (str): [Endpoint of episode]
Raises:
e: [description]
Returns:
[list]: [List of episodes]
"""
request_url = f"{BASE_URL}{episode_endpoint}"
response = requests.get(url=request_url, headers={"User-Agent": UserAgent().chrome})
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
# With this id. get the episode list.
episode_page_ul = soup.find("ul", {"id": "episode_related"})
episode_page_li = episode_page_ul.children
episode_list = []
for episode in episode_page_li:
try:
if not isinstance(episode, NavigableString):
episode_list.append(
{
"title": episode.find("div", {"class": "name"}).text.replace(
" ", ""
),
"url": episode.find("a")["href"],
}
)
except (KeyError, NotFoundErr):
pass
return episode_list
def get_anime_episode(episode_endpoint: str) -> list:
"""[summary]
Get click url and download url from episode url
>>> type(get_anime_episode("/watch/kimetsu-no-yaiba/1"))
<class 'list'>
Args:
episode_endpoint (str): [Endpoint of episode]
Raises:
e: [description]
Returns:
[list]: [List of download and watch url]
"""
episode_page_url = f"{BASE_URL}{episode_endpoint}"
response = requests.get(
url=episode_page_url, headers={"User-Agent": UserAgent().chrome}
)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
try:
episode_url = soup.find("iframe", {"id": "playerframe"})["src"]
download_url = episode_url.replace("/embed/", "/playlist/") + ".m3u8"
except (KeyError, NotFoundErr) as e:
raise e
return [f"{BASE_URL}{episode_url}", f"{BASE_URL}{download_url}"]
if __name__ == "__main__":
anime_name = input("Enter anime name: ").strip()
anime_list = search_scraper(anime_name)
print("\n")
if len(anime_list) == 0:
print("No anime found with this name")
else:
print(f"Found {len(anime_list)} results: ")
for (i, anime) in enumerate(anime_list):
anime_title = anime["title"]
print(f"{i+1}. {anime_title}")
anime_choice = int(input("\nPlease choose from the following list: ").strip())
chosen_anime = anime_list[anime_choice - 1]
print(f"You chose {chosen_anime['title']}. Searching for episodes...")
episode_list = search_anime_episode_list(chosen_anime["url"])
if len(episode_list) == 0:
print("No episode found for this anime")
else:
print(f"Found {len(episode_list)} results: ")
for (i, episode) in enumerate(episode_list):
print(f"{i+1}. {episode['title']}")
episode_choice = int(input("\nChoose an episode by serial no: ").strip())
chosen_episode = episode_list[episode_choice - 1]
print(f"You chose {chosen_episode['title']}. Searching...")
episode_url, download_url = get_anime_episode(chosen_episode["url"])
print(f"\nTo watch, ctrl+click on {episode_url}.")
print(f"To download, ctrl+click on {download_url}.")
| from xml.dom import NotFoundErr
import requests
from bs4 import BeautifulSoup, NavigableString
from fake_useragent import UserAgent
BASE_URL = "https://ww1.gogoanime2.org"
def search_scraper(anime_name: str) -> list:
"""[summary]
Take an anime name and
return a list of anime after scraping the site.
>>> type(search_scraper("demon_slayer"))
<class 'list'>
Args:
anime_name (str): [Name of anime]
Raises:
e: [Raises exception on failure]
Returns:
[list]: [List of animes]
"""
# concat the name to form the search url.
search_url = f"{BASE_URL}/search/{anime_name}"
response = requests.get(
search_url, headers={"User-Agent": UserAgent().chrome}
) # request the url.
# Is the response ok?
response.raise_for_status()
# parse with soup.
soup = BeautifulSoup(response.text, "html.parser")
# get list of anime
anime_ul = soup.find("ul", {"class": "items"})
anime_li = anime_ul.children
# for each anime, insert to list. the name and url.
anime_list = []
for anime in anime_li:
if not isinstance(anime, NavigableString):
try:
anime_url, anime_title = (
anime.find("a")["href"],
anime.find("a")["title"],
)
anime_list.append(
{
"title": anime_title,
"url": anime_url,
}
)
except (NotFoundErr, KeyError):
pass
return anime_list
def search_anime_episode_list(episode_endpoint: str) -> list:
"""[summary]
Take an episode endpoint and
return a list of episodes after scraping the site
for that endpoint.
>>> type(search_anime_episode_list("/anime/kimetsu-no-yaiba"))
<class 'list'>
Args:
episode_endpoint (str): [Endpoint of episode]
Raises:
e: [description]
Returns:
[list]: [List of episodes]
"""
request_url = f"{BASE_URL}{episode_endpoint}"
response = requests.get(url=request_url, headers={"User-Agent": UserAgent().chrome})
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
# With this id. get the episode list.
episode_page_ul = soup.find("ul", {"id": "episode_related"})
episode_page_li = episode_page_ul.children
episode_list = []
for episode in episode_page_li:
try:
if not isinstance(episode, NavigableString):
episode_list.append(
{
"title": episode.find("div", {"class": "name"}).text.replace(
" ", ""
),
"url": episode.find("a")["href"],
}
)
except (KeyError, NotFoundErr):
pass
return episode_list
def get_anime_episode(episode_endpoint: str) -> list:
"""[summary]
Get click url and download url from episode url
>>> type(get_anime_episode("/watch/kimetsu-no-yaiba/1"))
<class 'list'>
Args:
episode_endpoint (str): [Endpoint of episode]
Raises:
e: [description]
Returns:
[list]: [List of download and watch url]
"""
episode_page_url = f"{BASE_URL}{episode_endpoint}"
response = requests.get(
url=episode_page_url, headers={"User-Agent": UserAgent().chrome}
)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
try:
episode_url = soup.find("iframe", {"id": "playerframe"})["src"]
download_url = episode_url.replace("/embed/", "/playlist/") + ".m3u8"
except (KeyError, NotFoundErr) as e:
raise e
return [f"{BASE_URL}{episode_url}", f"{BASE_URL}{download_url}"]
if __name__ == "__main__":
anime_name = input("Enter anime name: ").strip()
anime_list = search_scraper(anime_name)
print("\n")
if len(anime_list) == 0:
print("No anime found with this name")
else:
print(f"Found {len(anime_list)} results: ")
for (i, anime) in enumerate(anime_list):
anime_title = anime["title"]
print(f"{i+1}. {anime_title}")
anime_choice = int(input("\nPlease choose from the following list: ").strip())
chosen_anime = anime_list[anime_choice - 1]
print(f"You chose {chosen_anime['title']}. Searching for episodes...")
episode_list = search_anime_episode_list(chosen_anime["url"])
if len(episode_list) == 0:
print("No episode found for this anime")
else:
print(f"Found {len(episode_list)} results: ")
for (i, episode) in enumerate(episode_list):
print(f"{i+1}. {episode['title']}")
episode_choice = int(input("\nChoose an episode by serial no: ").strip())
chosen_episode = episode_list[episode_choice - 1]
print(f"You chose {chosen_episode['title']}. Searching...")
episode_url, download_url = get_anime_episode(chosen_episode["url"])
print(f"\nTo watch, ctrl+click on {episode_url}.")
print(f"To download, ctrl+click on {download_url}.")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | import bs4
import requests
def get_movie_data_from_soup(soup: bs4.element.ResultSet) -> dict[str, str]:
return {
"name": soup.h3.a.text,
"genre": soup.find("span", class_="genre").text.strip(),
"rating": soup.strong.text,
"page_link": f"https://www.imdb.com{soup.a.get('href')}",
}
def get_imdb_top_movies(num_movies: int = 5) -> tuple:
"""Get the top num_movies most highly rated movies from IMDB and
return a tuple of dicts describing each movie's name, genre, rating, and URL.
Args:
num_movies: The number of movies to get. Defaults to 5.
Returns:
A tuple of dicts containing information about the top num_movies movies.
>>> len(get_imdb_top_movies(5))
5
>>> len(get_imdb_top_movies(-3))
0
>>> len(get_imdb_top_movies(4.99999))
4
"""
num_movies = int(float(num_movies))
if num_movies < 1:
return ()
base_url = (
"https://www.imdb.com/search/title?title_type="
f"feature&sort=num_votes,desc&count={num_movies}"
)
source = bs4.BeautifulSoup(requests.get(base_url).content, "html.parser")
return tuple(
get_movie_data_from_soup(movie)
for movie in source.find_all("div", class_="lister-item mode-advanced")
)
if __name__ == "__main__":
import json
num_movies = int(input("How many movies would you like to see? "))
print(
", ".join(
json.dumps(movie, indent=4) for movie in get_imdb_top_movies(num_movies)
)
)
| import bs4
import requests
def get_movie_data_from_soup(soup: bs4.element.ResultSet) -> dict[str, str]:
return {
"name": soup.h3.a.text,
"genre": soup.find("span", class_="genre").text.strip(),
"rating": soup.strong.text,
"page_link": f"https://www.imdb.com{soup.a.get('href')}",
}
def get_imdb_top_movies(num_movies: int = 5) -> tuple:
"""Get the top num_movies most highly rated movies from IMDB and
return a tuple of dicts describing each movie's name, genre, rating, and URL.
Args:
num_movies: The number of movies to get. Defaults to 5.
Returns:
A tuple of dicts containing information about the top num_movies movies.
>>> len(get_imdb_top_movies(5))
5
>>> len(get_imdb_top_movies(-3))
0
>>> len(get_imdb_top_movies(4.99999))
4
"""
num_movies = int(float(num_movies))
if num_movies < 1:
return ()
base_url = (
"https://www.imdb.com/search/title?title_type="
f"feature&sort=num_votes,desc&count={num_movies}"
)
source = bs4.BeautifulSoup(requests.get(base_url).content, "html.parser")
return tuple(
get_movie_data_from_soup(movie)
for movie in source.find_all("div", class_="lister-item mode-advanced")
)
if __name__ == "__main__":
import json
num_movies = int(input("How many movies would you like to see? "))
print(
", ".join(
json.dumps(movie, indent=4) for movie in get_imdb_top_movies(num_movies)
)
)
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | def topologicalSort(graph):
"""
Kahn's algorithm finds a topological ordering of a directed acyclic graph
using BFS
"""
indegree = [0] * len(graph)
queue = []
topo = []
cnt = 0
for key, values in graph.items():
for i in values:
indegree[i] += 1
for i in range(len(indegree)):
if indegree[i] == 0:
queue.append(i)
while queue:
vertex = queue.pop(0)
cnt += 1
topo.append(vertex)
for x in graph[vertex]:
indegree[x] -= 1
if indegree[x] == 0:
queue.append(x)
if cnt != len(graph):
print("Cycle exists")
else:
print(topo)
# Adjacency List of Graph
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4, 5], 4: [], 5: []}
topologicalSort(graph)
| def topologicalSort(graph):
"""
Kahn's algorithm finds a topological ordering of a directed acyclic graph
using BFS
"""
indegree = [0] * len(graph)
queue = []
topo = []
cnt = 0
for key, values in graph.items():
for i in values:
indegree[i] += 1
for i in range(len(indegree)):
if indegree[i] == 0:
queue.append(i)
while queue:
vertex = queue.pop(0)
cnt += 1
topo.append(vertex)
for x in graph[vertex]:
indegree[x] -= 1
if indegree[x] == 0:
queue.append(x)
if cnt != len(graph):
print("Cycle exists")
else:
print(topo)
# Adjacency List of Graph
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4, 5], 4: [], 5: []}
topologicalSort(graph)
| -1 |
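A standalone sketch of the same idea using collections.deque and returning the ordering instead of printing it; the function name and the None cycle sentinel are our own choices, not part of the row above.

from collections import deque

def kahn_topological_order(graph):
    # graph: {node: [neighbours]}. Returns a topological order, or None if a cycle exists.
    indegree = {node: 0 for node in graph}
    for neighbours in graph.values():
        for node in neighbours:
            indegree[node] += 1
    queue = deque(node for node, degree in indegree.items() if degree == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            indegree[neighbour] -= 1
            if indegree[neighbour] == 0:
                queue.append(neighbour)
    return order if len(order) == len(graph) else None

print(kahn_topological_order({0: [1, 2], 1: [3], 2: [3], 3: [4, 5], 4: [], 5: []}))
# [0, 1, 2, 3, 4, 5]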
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | -1 |
||
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | # https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
def binary_and(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary,
return a binary number that is the
result of a binary and operation on the integers provided.
>>> binary_and(25, 32)
'0b000000'
>>> binary_and(37, 50)
'0b100000'
>>> binary_and(21, 30)
'0b10100'
>>> binary_and(58, 73)
'0b0001000'
>>> binary_and(0, 255)
'0b00000000'
>>> binary_and(256, 256)
'0b100000000'
>>> binary_and(0, -1)
Traceback (most recent call last):
...
ValueError: the value of both inputs must be positive
>>> binary_and(0, 1.1)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> binary_and("0", "1")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0 or b < 0:
raise ValueError("the value of both inputs must be positive")
a_binary = str(bin(a))[2:] # remove the leading "0b"
b_binary = str(bin(b))[2:] # remove the leading "0b"
max_len = max(len(a_binary), len(b_binary))
return "0b" + "".join(
str(int(char_a == "1" and char_b == "1"))
for char_a, char_b in zip(a_binary.zfill(max_len), b_binary.zfill(max_len))
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
def binary_and(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary,
return a binary number that is the
result of a binary and operation on the integers provided.
>>> binary_and(25, 32)
'0b000000'
>>> binary_and(37, 50)
'0b100000'
>>> binary_and(21, 30)
'0b10100'
>>> binary_and(58, 73)
'0b0001000'
>>> binary_and(0, 255)
'0b00000000'
>>> binary_and(256, 256)
'0b100000000'
>>> binary_and(0, -1)
Traceback (most recent call last):
...
ValueError: the value of both inputs must be positive
>>> binary_and(0, 1.1)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> binary_and("0", "1")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0 or b < 0:
raise ValueError("the value of both inputs must be positive")
a_binary = str(bin(a))[2:] # remove the leading "0b"
b_binary = str(bin(b))[2:] # remove the leading "0b"
max_len = max(len(a_binary), len(b_binary))
return "0b" + "".join(
str(int(char_a == "1" and char_b == "1"))
for char_a, char_b in zip(a_binary.zfill(max_len), b_binary.zfill(max_len))
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
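The same result can be cross-checked with Python's built-in bitwise AND; this sketch (function name ours) pads the output to the width of the longer operand so it matches the doctest formatting above.

def binary_and_builtin(a: int, b: int) -> str:
    # Built-in & gives the same bits; zfill pads to the longer operand's width.
    if a < 0 or b < 0:
        raise ValueError("the value of both inputs must be positive")
    width = max(len(bin(a)) - 2, len(bin(b)) - 2)
    return "0b" + format(a & b, "b").zfill(width)

print(binary_and_builtin(25, 32))  # 0b000000
print(binary_and_builtin(37, 50))  # 0b100000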
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
This is a pure Python implementation of the counting sort algorithm
For doctests run the following command:
python -m doctest -v counting_sort.py
or
python3 -m doctest -v counting_sort.py
For manual testing run:
python counting_sort.py
"""
def counting_sort(collection):
"""Pure implementation of counting sort algorithm in Python
:param collection: some mutable ordered collection of integers
:return: the same collection sorted in ascending order
Examples:
>>> counting_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> counting_sort([])
[]
>>> counting_sort([-2, -5, -45])
[-45, -5, -2]
"""
# if the collection is empty, returns empty
if collection == []:
return []
# get some information about the collection
coll_len = len(collection)
coll_max = max(collection)
coll_min = min(collection)
# create the counting array
counting_arr_length = coll_max + 1 - coll_min
counting_arr = [0] * counting_arr_length
# count how many times each number appears in the collection
for number in collection:
counting_arr[number - coll_min] += 1
# sum each position with its predecessors; now counting_arr[i] tells
# us how many elements <= i are in the collection
for i in range(1, counting_arr_length):
counting_arr[i] = counting_arr[i] + counting_arr[i - 1]
# create the output collection
ordered = [0] * coll_len
# place the elements in the output, respecting the original order (stable
# sort) from end to begin, updating counting_arr
for i in reversed(range(0, coll_len)):
ordered[counting_arr[collection[i] - coll_min] - 1] = collection[i]
counting_arr[collection[i] - coll_min] -= 1
return ordered
def counting_sort_string(string):
"""
>>> counting_sort_string("thisisthestring")
'eghhiiinrsssttt'
"""
return "".join([chr(i) for i in counting_sort([ord(c) for c in string])])
if __name__ == "__main__":
# Test string sort
assert "eghhiiinrsssttt" == counting_sort_string("thisisthestring")
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(counting_sort(unsorted))
| """
This is a pure Python implementation of the counting sort algorithm
For doctests run the following command:
python -m doctest -v counting_sort.py
or
python3 -m doctest -v counting_sort.py
For manual testing run:
python counting_sort.py
"""
def counting_sort(collection):
"""Pure implementation of counting sort algorithm in Python
:param collection: some mutable ordered collection of integers
:return: the same collection sorted in ascending order
Examples:
>>> counting_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> counting_sort([])
[]
>>> counting_sort([-2, -5, -45])
[-45, -5, -2]
"""
# if the collection is empty, returns empty
if collection == []:
return []
# get some information about the collection
coll_len = len(collection)
coll_max = max(collection)
coll_min = min(collection)
# create the counting array
counting_arr_length = coll_max + 1 - coll_min
counting_arr = [0] * counting_arr_length
# count how many times each number appears in the collection
for number in collection:
counting_arr[number - coll_min] += 1
# sum each position with its predecessors; now counting_arr[i] tells
# us how many elements <= i are in the collection
for i in range(1, counting_arr_length):
counting_arr[i] = counting_arr[i] + counting_arr[i - 1]
# create the output collection
ordered = [0] * coll_len
# place the elements in the output, respecting the original order (stable
# sort) from end to begin, updating counting_arr
for i in reversed(range(0, coll_len)):
ordered[counting_arr[collection[i] - coll_min] - 1] = collection[i]
counting_arr[collection[i] - coll_min] -= 1
return ordered
def counting_sort_string(string):
"""
>>> counting_sort_string("thisisthestring")
'eghhiiinrsssttt'
"""
return "".join([chr(i) for i in counting_sort([ord(c) for c in string])])
if __name__ == "__main__":
# Test string sort
assert "eghhiiinrsssttt" == counting_sort_string("thisisthestring")
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(counting_sort(unsorted))
| -1 |
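A short standalone trace of the counting and prefix-sum steps for the first doctest input above, added only for illustration (variable names are ours).

values = [0, 5, 3, 2, 2]
offset = min(values)
counts = [0] * (max(values) - offset + 1)
for v in values:
    counts[v - offset] += 1
print(counts)  # [1, 0, 2, 1, 0, 1] -- occurrences of each value 0..5
for i in range(1, len(counts)):
    counts[i] += counts[i - 1]
print(counts)  # [1, 1, 3, 4, 4, 5] -- how many elements are <= each value
# Walking the input from the right and using these prefix sums as positions
# yields the stable result [0, 2, 2, 3, 5].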
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | -1 |
||
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Implementation of an auto-balanced binary tree!
For doctests run following command:
python3 -m doctest -v avl_tree.py
For testing run:
python avl_tree.py
"""
from __future__ import annotations
import math
import random
from typing import Any
class my_queue:
def __init__(self) -> None:
self.data: list[Any] = []
self.head: int = 0
self.tail: int = 0
def is_empty(self) -> bool:
return self.head == self.tail
def push(self, data: Any) -> None:
self.data.append(data)
self.tail = self.tail + 1
def pop(self) -> Any:
ret = self.data[self.head]
self.head = self.head + 1
return ret
def count(self) -> int:
return self.tail - self.head
def print(self) -> None:
print(self.data)
print("**************")
print(self.data[self.head : self.tail])
class my_node:
def __init__(self, data: Any) -> None:
self.data = data
self.left: my_node | None = None
self.right: my_node | None = None
self.height: int = 1
def get_data(self) -> Any:
return self.data
def get_left(self) -> my_node | None:
return self.left
def get_right(self) -> my_node | None:
return self.right
def get_height(self) -> int:
return self.height
def set_data(self, data: Any) -> None:
self.data = data
return
def set_left(self, node: my_node | None) -> None:
self.left = node
return
def set_right(self, node: my_node | None) -> None:
self.right = node
return
def set_height(self, height: int) -> None:
self.height = height
return
def get_height(node: my_node | None) -> int:
if node is None:
return 0
return node.get_height()
def my_max(a: int, b: int) -> int:
if a > b:
return a
return b
def right_rotation(node: my_node) -> my_node:
r"""
A B
/ \ / \
B C Bl A
/ \ --> / / \
Bl Br UB Br C
/
UB
UB = unbalanced node
"""
print("left rotation node:", node.get_data())
ret = node.get_left()
assert ret is not None
node.set_left(ret.get_right())
ret.set_right(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1
ret.set_height(h2)
return ret
def left_rotation(node: my_node) -> my_node:
"""
a mirror symmetry rotation of the left_rotation
"""
print("right rotation node:", node.get_data())
ret = node.get_right()
assert ret is not None
node.set_right(ret.get_left())
ret.set_left(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1
ret.set_height(h2)
return ret
def lr_rotation(node: my_node) -> my_node:
r"""
A A Br
/ \ / \ / \
B C LR Br C RR B A
/ \ --> / \ --> / / \
Bl Br B UB Bl UB C
\ /
UB Bl
RR = right_rotation LR = left_rotation
"""
left_child = node.get_left()
assert left_child is not None
node.set_left(left_rotation(left_child))
return right_rotation(node)
def rl_rotation(node: my_node) -> my_node:
right_child = node.get_right()
assert right_child is not None
node.set_right(right_rotation(right_child))
return left_rotation(node)
def insert_node(node: my_node | None, data: Any) -> my_node | None:
if node is None:
return my_node(data)
if data < node.get_data():
node.set_left(insert_node(node.get_left(), data))
if (
get_height(node.get_left()) - get_height(node.get_right()) == 2
): # an unbalance detected
left_child = node.get_left()
assert left_child is not None
if (
data < left_child.get_data()
): # new node is the left child of the left child
node = right_rotation(node)
else:
node = lr_rotation(node)
else:
node.set_right(insert_node(node.get_right(), data))
if get_height(node.get_right()) - get_height(node.get_left()) == 2:
right_child = node.get_right()
assert right_child is not None
if data < right_child.get_data():
node = rl_rotation(node)
else:
node = left_rotation(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
return node
def get_rightMost(root: my_node) -> Any:
while True:
right_child = root.get_right()
if right_child is None:
break
root = right_child
return root.get_data()
def get_leftMost(root: my_node) -> Any:
while True:
left_child = root.get_left()
if left_child is None:
break
root = left_child
return root.get_data()
def del_node(root: my_node, data: Any) -> my_node | None:
left_child = root.get_left()
right_child = root.get_right()
if root.get_data() == data:
if left_child is not None and right_child is not None:
temp_data = get_leftMost(right_child)
root.set_data(temp_data)
root.set_right(del_node(right_child, temp_data))
elif left_child is not None:
root = left_child
elif right_child is not None:
root = right_child
else:
return None
elif root.get_data() > data:
if left_child is None:
print("No such data")
return root
else:
root.set_left(del_node(left_child, data))
else: # root.get_data() < data
if right_child is None:
return root
else:
root.set_right(del_node(right_child, data))
if get_height(right_child) - get_height(left_child) == 2:
assert right_child is not None
if get_height(right_child.get_right()) > get_height(right_child.get_left()):
root = left_rotation(root)
else:
root = rl_rotation(root)
elif get_height(right_child) - get_height(left_child) == -2:
assert left_child is not None
if get_height(left_child.get_left()) > get_height(left_child.get_right()):
root = right_rotation(root)
else:
root = lr_rotation(root)
height = my_max(get_height(root.get_right()), get_height(root.get_left())) + 1
root.set_height(height)
return root
class AVLtree:
"""
An AVL tree doctest
Examples:
>>> t = AVLtree()
>>> t.insert(4)
insert:4
>>> print(str(t).replace(" \\n","\\n"))
4
*************************************
>>> t.insert(2)
insert:2
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
4
2 *
*************************************
>>> t.insert(3)
insert:3
right rotation node: 2
left rotation node: 4
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
3
2 4
*************************************
>>> t.get_height()
2
>>> t.del_node(3)
delete:3
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
4
2 *
*************************************
"""
def __init__(self) -> None:
self.root: my_node | None = None
def get_height(self) -> int:
return get_height(self.root)
def insert(self, data: Any) -> None:
print("insert:" + str(data))
self.root = insert_node(self.root, data)
def del_node(self, data: Any) -> None:
print("delete:" + str(data))
if self.root is None:
print("Tree is empty!")
return
self.root = del_node(self.root, data)
def __str__(
self,
) -> str:  # a level-order traversal, gives a more intuitive look at the tree
output = ""
q = my_queue()
q.push(self.root)
layer = self.get_height()
if layer == 0:
return output
cnt = 0
while not q.is_empty():
node = q.pop()
space = " " * int(math.pow(2, layer - 1))
output += space
if node is None:
output += "*"
q.push(None)
q.push(None)
else:
output += str(node.get_data())
q.push(node.get_left())
q.push(node.get_right())
output += space
cnt = cnt + 1
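# a level is fully printed once cnt reaches 2**i - 1 nodes, so start a new
# line and move down to the next, less widely spaced, layer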
for i in range(100):
if cnt == math.pow(2, i) - 1:
layer = layer - 1
if layer == 0:
output += "\n*************************************"
return output
output += "\n"
break
output += "\n*************************************"
return output
def _test() -> None:
import doctest
doctest.testmod()
if __name__ == "__main__":
_test()
t = AVLtree()
lst = list(range(10))
random.shuffle(lst)
for i in lst:
t.insert(i)
print(str(t))
random.shuffle(lst)
for i in lst:
t.del_node(i)
print(str(t))
| """
Implementation of an auto-balanced binary tree!
For doctests run following command:
python3 -m doctest -v avl_tree.py
For testing run:
python avl_tree.py
"""
from __future__ import annotations
import math
import random
from typing import Any
class my_queue:
def __init__(self) -> None:
self.data: list[Any] = []
self.head: int = 0
self.tail: int = 0
def is_empty(self) -> bool:
return self.head == self.tail
def push(self, data: Any) -> None:
self.data.append(data)
self.tail = self.tail + 1
def pop(self) -> Any:
ret = self.data[self.head]
self.head = self.head + 1
return ret
def count(self) -> int:
return self.tail - self.head
def print(self) -> None:
print(self.data)
print("**************")
print(self.data[self.head : self.tail])
class my_node:
def __init__(self, data: Any) -> None:
self.data = data
self.left: my_node | None = None
self.right: my_node | None = None
self.height: int = 1
def get_data(self) -> Any:
return self.data
def get_left(self) -> my_node | None:
return self.left
def get_right(self) -> my_node | None:
return self.right
def get_height(self) -> int:
return self.height
def set_data(self, data: Any) -> None:
self.data = data
return
def set_left(self, node: my_node | None) -> None:
self.left = node
return
def set_right(self, node: my_node | None) -> None:
self.right = node
return
def set_height(self, height: int) -> None:
self.height = height
return
def get_height(node: my_node | None) -> int:
if node is None:
return 0
return node.get_height()
def my_max(a: int, b: int) -> int:
if a > b:
return a
return b
def right_rotation(node: my_node) -> my_node:
r"""
A B
/ \ / \
B C Bl A
/ \ --> / / \
Bl Br UB Br C
/
UB
UB = unbalanced node
"""
print("left rotation node:", node.get_data())
ret = node.get_left()
assert ret is not None
node.set_left(ret.get_right())
ret.set_right(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1
ret.set_height(h2)
return ret
def left_rotation(node: my_node) -> my_node:
"""
a mirror symmetry rotation of the right_rotation
"""
print("right rotation node:", node.get_data())
ret = node.get_right()
assert ret is not None
node.set_right(ret.get_left())
ret.set_left(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1
ret.set_height(h2)
return ret
def lr_rotation(node: my_node) -> my_node:
r"""
A A Br
/ \ / \ / \
B C LR Br C RR B A
/ \ --> / \ --> / / \
Bl Br B UB Bl UB C
\ /
UB Bl
RR = right_rotation LR = left_rotation
"""
left_child = node.get_left()
assert left_child is not None
node.set_left(left_rotation(left_child))
return right_rotation(node)
def rl_rotation(node: my_node) -> my_node:
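# mirror image of lr_rotation: first rotate the right child to the right,
# then rotate the unbalanced node to the left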
right_child = node.get_right()
assert right_child is not None
node.set_right(right_rotation(right_child))
return left_rotation(node)
def insert_node(node: my_node | None, data: Any) -> my_node | None:
if node is None:
return my_node(data)
if data < node.get_data():
node.set_left(insert_node(node.get_left(), data))
if (
get_height(node.get_left()) - get_height(node.get_right()) == 2
): # an unbalance detected
left_child = node.get_left()
assert left_child is not None
if (
data < left_child.get_data()
): # new node is the left child of the left child
node = right_rotation(node)
else:
node = lr_rotation(node)
else:
node.set_right(insert_node(node.get_right(), data))
if get_height(node.get_right()) - get_height(node.get_left()) == 2:
right_child = node.get_right()
assert right_child is not None
if data < right_child.get_data():
node = rl_rotation(node)
else:
node = left_rotation(node)
h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1
node.set_height(h1)
return node
def get_rightMost(root: my_node) -> Any:
while True:
right_child = root.get_right()
if right_child is None:
break
root = right_child
return root.get_data()
def get_leftMost(root: my_node) -> Any:
while True:
left_child = root.get_left()
if left_child is None:
break
root = left_child
return root.get_data()
def del_node(root: my_node, data: Any) -> my_node | None:
left_child = root.get_left()
right_child = root.get_right()
if root.get_data() == data:
if left_child is not None and right_child is not None:
temp_data = get_leftMost(right_child)
root.set_data(temp_data)
root.set_right(del_node(right_child, temp_data))
elif left_child is not None:
root = left_child
elif right_child is not None:
root = right_child
else:
return None
elif root.get_data() > data:
if left_child is None:
print("No such data")
return root
else:
root.set_left(del_node(left_child, data))
else: # root.get_data() < data
if right_child is None:
return root
else:
root.set_right(del_node(right_child, data))
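# rebalance if the deletion left the two subtree heights differing by 2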
if get_height(right_child) - get_height(left_child) == 2:
assert right_child is not None
if get_height(right_child.get_right()) > get_height(right_child.get_left()):
root = left_rotation(root)
else:
root = rl_rotation(root)
elif get_height(right_child) - get_height(left_child) == -2:
assert left_child is not None
if get_height(left_child.get_left()) > get_height(left_child.get_right()):
root = right_rotation(root)
else:
root = lr_rotation(root)
height = my_max(get_height(root.get_right()), get_height(root.get_left())) + 1
root.set_height(height)
return root
class AVLtree:
"""
An AVL tree doctest
Examples:
>>> t = AVLtree()
>>> t.insert(4)
insert:4
>>> print(str(t).replace(" \\n","\\n"))
4
*************************************
>>> t.insert(2)
insert:2
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
4
2 *
*************************************
>>> t.insert(3)
insert:3
right rotation node: 2
left rotation node: 4
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
3
2 4
*************************************
>>> t.get_height()
2
>>> t.del_node(3)
delete:3
>>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n"))
4
2 *
*************************************
"""
def __init__(self) -> None:
self.root: my_node | None = None
def get_height(self) -> int:
return get_height(self.root)
def insert(self, data: Any) -> None:
print("insert:" + str(data))
self.root = insert_node(self.root, data)
def del_node(self, data: Any) -> None:
print("delete:" + str(data))
if self.root is None:
print("Tree is empty!")
return
self.root = del_node(self.root, data)
def __str__(
self,
) -> str:  # a level-order traversal, gives a more intuitive look at the tree
output = ""
q = my_queue()
q.push(self.root)
layer = self.get_height()
if layer == 0:
return output
cnt = 0
while not q.is_empty():
node = q.pop()
space = " " * int(math.pow(2, layer - 1))
output += space
if node is None:
output += "*"
q.push(None)
q.push(None)
else:
output += str(node.get_data())
q.push(node.get_left())
q.push(node.get_right())
output += space
cnt = cnt + 1
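# a level is fully printed once cnt reaches 2**i - 1 nodes, so start a new
# line and move down to the next, less widely spaced, layer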
for i in range(100):
if cnt == math.pow(2, i) - 1:
layer = layer - 1
if layer == 0:
output += "\n*************************************"
return output
output += "\n"
break
output += "\n*************************************"
return output
def _test() -> None:
import doctest
doctest.testmod()
if __name__ == "__main__":
_test()
t = AVLtree()
lst = list(range(10))
random.shuffle(lst)
for i in lst:
t.insert(i)
print(str(t))
random.shuffle(lst)
for i in lst:
t.del_node(i)
print(str(t))
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
This file fetches quotes from the " ZenQuotes API ".
It does not require any API key as it uses the free tier.
For more details and premium features visit:
https://zenquotes.io/
"""
import pprint
import requests
def quote_of_the_day() -> list:
API_ENDPOINT_URL = "https://zenquotes.io/api/today/"
return requests.get(API_ENDPOINT_URL).json()
def random_quotes() -> list:
API_ENDPOINT_URL = "https://zenquotes.io/api/random/"
return requests.get(API_ENDPOINT_URL).json()
if __name__ == "__main__":
"""
The response object has all the info for the quote.
random_quotes() and quote_of_the_day() already return the parsed JSON,
which is a list of JSON objects, so the fields can be accessed as below:
response[0]['q'] = actual quote.
response[0]['a'] = author name.
response[0]['h'] = in html format.
"""
response = random_quotes()
pprint.pprint(response)
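# For example, to print only the quote text and its author (assuming the request
# above succeeded and returned at least one quote), one could write:
# print(f"{response[0]['q']} - {response[0]['a']}")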
| """
This file fetches quotes from the " ZenQuotes API ".
It does not require any API key as it uses the free tier.
For more details and premium features visit:
https://zenquotes.io/
"""
import pprint
import requests
def quote_of_the_day() -> list:
API_ENDPOINT_URL = "https://zenquotes.io/api/today/"
return requests.get(API_ENDPOINT_URL).json()
def random_quotes() -> list:
API_ENDPOINT_URL = "https://zenquotes.io/api/random/"
return requests.get(API_ENDPOINT_URL).json()
if __name__ == "__main__":
"""
The response object has all the info for the quote.
random_quotes() and quote_of_the_day() already return the parsed JSON,
which is a list of JSON objects, so the fields can be accessed as below:
response[0]['q'] = actual quote.
response[0]['a'] = author name.
response[0]['h'] = in html format.
"""
response = random_quotes()
pprint.pprint(response)
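# For example, to print only the quote text and its author (assuming the request
# above succeeded and returned at least one quote), one could write:
# print(f"{response[0]['q']} - {response[0]['a']}")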
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | import argparse
import datetime
def zeller(date_input: str) -> str:
"""
Zeller's Congruence Algorithm
Find the day of the week for nearly any Gregorian or Julian calendar date
>>> zeller('01-31-2010')
'Your date 01-31-2010, is a Sunday!'
Validate out of range month
>>> zeller('13-31-2010')
Traceback (most recent call last):
...
ValueError: Month must be between 1 - 12
>>> zeller('.2-31-2010')
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: '.2'
Validate out of range date:
>>> zeller('01-33-2010')
Traceback (most recent call last):
...
ValueError: Date must be between 1 - 31
>>> zeller('01-.4-2010')
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: '.4'
Validate second separator:
>>> zeller('01-31*2010')
Traceback (most recent call last):
...
ValueError: Date separator must be '-' or '/'
Validate first separator:
>>> zeller('01^31-2010')
Traceback (most recent call last):
...
ValueError: Date separator must be '-' or '/'
Validate out of range year:
>>> zeller('01-31-8999')
Traceback (most recent call last):
...
ValueError: Year out of range. There has to be some sort of limit...right?
Test null input:
>>> zeller()
Traceback (most recent call last):
...
TypeError: zeller() missing 1 required positional argument: 'date_input'
Test length of date_input:
>>> zeller('')
Traceback (most recent call last):
...
ValueError: Must be 10 characters long
>>> zeller('01-31-19082939')
Traceback (most recent call last):
...
ValueError: Must be 10 characters long"""
# Days of the week for response
days = {
"0": "Sunday",
"1": "Monday",
"2": "Tuesday",
"3": "Wednesday",
"4": "Thursday",
"5": "Friday",
"6": "Saturday",
}
convert_datetime_days = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 0}
# Validate
if not 0 < len(date_input) < 11:
raise ValueError("Must be 10 characters long")
# Get month
m: int = int(date_input[0] + date_input[1])
# Validate
if not 0 < m < 13:
raise ValueError("Month must be between 1 - 12")
sep_1: str = date_input[2]
# Validate
if sep_1 not in ["-", "/"]:
raise ValueError("Date separator must be '-' or '/'")
# Get day
d: int = int(date_input[3] + date_input[4])
# Validate
if not 0 < d < 32:
raise ValueError("Date must be between 1 - 31")
# Get second separator
sep_2: str = date_input[5]
# Validate
if sep_2 not in ["-", "/"]:
raise ValueError("Date separator must be '-' or '/'")
# Get year
y: int = int(date_input[6] + date_input[7] + date_input[8] + date_input[9])
# Arbitrary year range
if not 45 < y < 8500:
raise ValueError(
"Year out of range. There has to be some sort of limit...right?"
)
# Get datetime obj for validation
dt_ck = datetime.date(int(y), int(m), int(d))
# Start math
if m <= 2:
y = y - 1
m = m + 12
# maths var
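# c: first two digits of the year (the century part); k: last two digits of the year
# t: month term int(2.6 * m - 5.39); u, v: leap-year corrections int(c / 4) and int(k / 4)
# f below: resulting day-of-week index (0 = Sunday) from Zeller's congruence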
c: int = int(str(y)[:2])
k: int = int(str(y)[2:])
t: int = int(2.6 * m - 5.39)
u: int = int(c / 4)
v: int = int(k / 4)
x: int = int(d + k)
z: int = int(t + u + v + x)
w: int = int(z - (2 * c))
f: int = round(w % 7)
# End math
# Validate math
if f != convert_datetime_days[dt_ck.weekday()]:
raise AssertionError("The date was evaluated incorrectly. Contact developer.")
# Response
response: str = f"Your date {date_input}, is a {days[str(f)]}!"
return response
if __name__ == "__main__":
import doctest
doctest.testmod()
parser = argparse.ArgumentParser(
description=(
"Find out what day of the week nearly any date is or was. Enter "
"date as a string in the mm-dd-yyyy or mm/dd/yyyy format"
)
)
parser.add_argument(
"date_input", type=str, help="Date as a string (mm-dd-yyyy or mm/dd/yyyy)"
)
args = parser.parse_args()
zeller(args.date_input)
| import argparse
import datetime
def zeller(date_input: str) -> str:
"""
Zeller's Congruence Algorithm
Find the day of the week for nearly any Gregorian or Julian calendar date
>>> zeller('01-31-2010')
'Your date 01-31-2010, is a Sunday!'
Validate out of range month
>>> zeller('13-31-2010')
Traceback (most recent call last):
...
ValueError: Month must be between 1 - 12
>>> zeller('.2-31-2010')
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: '.2'
Validate out of range date:
>>> zeller('01-33-2010')
Traceback (most recent call last):
...
ValueError: Date must be between 1 - 31
>>> zeller('01-.4-2010')
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: '.4'
Validate second separator:
>>> zeller('01-31*2010')
Traceback (most recent call last):
...
ValueError: Date separator must be '-' or '/'
Validate first separator:
>>> zeller('01^31-2010')
Traceback (most recent call last):
...
ValueError: Date separator must be '-' or '/'
Validate out of range year:
>>> zeller('01-31-8999')
Traceback (most recent call last):
...
ValueError: Year out of range. There has to be some sort of limit...right?
Test null input:
>>> zeller()
Traceback (most recent call last):
...
TypeError: zeller() missing 1 required positional argument: 'date_input'
Test length of date_input:
>>> zeller('')
Traceback (most recent call last):
...
ValueError: Must be 10 characters long
>>> zeller('01-31-19082939')
Traceback (most recent call last):
...
ValueError: Must be 10 characters long"""
# Days of the week for response
days = {
"0": "Sunday",
"1": "Monday",
"2": "Tuesday",
"3": "Wednesday",
"4": "Thursday",
"5": "Friday",
"6": "Saturday",
}
convert_datetime_days = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 0}
# Validate
if not 0 < len(date_input) < 11:
raise ValueError("Must be 10 characters long")
# Get month
m: int = int(date_input[0] + date_input[1])
# Validate
if not 0 < m < 13:
raise ValueError("Month must be between 1 - 12")
sep_1: str = date_input[2]
# Validate
if sep_1 not in ["-", "/"]:
raise ValueError("Date separator must be '-' or '/'")
# Get day
d: int = int(date_input[3] + date_input[4])
# Validate
if not 0 < d < 32:
raise ValueError("Date must be between 1 - 31")
# Get second separator
sep_2: str = date_input[5]
# Validate
if sep_2 not in ["-", "/"]:
raise ValueError("Date separator must be '-' or '/'")
# Get year
y: int = int(date_input[6] + date_input[7] + date_input[8] + date_input[9])
# Arbitrary year range
if not 45 < y < 8500:
raise ValueError(
"Year out of range. There has to be some sort of limit...right?"
)
# Get datetime obj for validation
dt_ck = datetime.date(int(y), int(m), int(d))
# Start math
if m <= 2:
y = y - 1
m = m + 12
# maths var
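# c: first two digits of the year (the century part); k: last two digits of the year
# t: month term int(2.6 * m - 5.39); u, v: leap-year corrections int(c / 4) and int(k / 4)
# f below: resulting day-of-week index (0 = Sunday) from Zeller's congruence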
c: int = int(str(y)[:2])
k: int = int(str(y)[2:])
t: int = int(2.6 * m - 5.39)
u: int = int(c / 4)
v: int = int(k / 4)
x: int = int(d + k)
z: int = int(t + u + v + x)
w: int = int(z - (2 * c))
f: int = round(w % 7)
# End math
# Validate math
if f != convert_datetime_days[dt_ck.weekday()]:
raise AssertionError("The date was evaluated incorrectly. Contact developer.")
# Response
response: str = f"Your date {date_input}, is a {days[str(f)]}!"
return response
if __name__ == "__main__":
import doctest
doctest.testmod()
parser = argparse.ArgumentParser(
description=(
"Find out what day of the week nearly any date is or was. Enter "
"date as a string in the mm-dd-yyyy or mm/dd/yyyy format"
)
)
parser.add_argument(
"date_input", type=str, help="Date as a string (mm-dd-yyyy or mm/dd/yyyy)"
)
args = parser.parse_args()
zeller(args.date_input)
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """Source: https://github.com/jason9075/opencv-mosaic-data-aug"""
import glob
import os
import random
from string import ascii_lowercase, digits
import cv2
import numpy as np
# Parameters
OUTPUT_SIZE = (720, 1280) # Height, Width
SCALE_RANGE = (0.4, 0.6)  # range of the random split point, as a fraction of the output size
FILTER_TINY_SCALE = 1 / 100  # boxes with height or width smaller than this scale are dropped
LABEL_DIR = ""
IMG_DIR = ""
OUTPUT_DIR = ""
NUMBER_IMAGES = 250
def main() -> None:
"""
Get images list and annotations list from input dir.
Update new images and annotations.
Save images and annotations in output dir.
>>> pass # A doctest is not possible for this function.
"""
img_paths, annos = get_dataset(LABEL_DIR, IMG_DIR)
for index in range(NUMBER_IMAGES):
idxs = random.sample(range(len(annos)), 4)
new_image, new_annos, path = update_image_and_anno(
img_paths,
annos,
idxs,
OUTPUT_SIZE,
SCALE_RANGE,
filter_scale=FILTER_TINY_SCALE,
)
# Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
letter_code = random_chars(32)
file_name = path.split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_MOSAIC_{letter_code}"
cv2.imwrite(f"{file_root}.jpg", new_image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Succeeded {index+1}/{NUMBER_IMAGES} with {file_name}")
annos_list = []
for anno in new_annos:
width = anno[3] - anno[1]
height = anno[4] - anno[2]
x_center = anno[1] + width / 2
y_center = anno[2] + height / 2
obj = f"{anno[0]} {x_center} {y_center} {width} {height}"
annos_list.append(obj)
with open(f"{file_root}.txt", "w") as outfile:
outfile.write("\n".join(line for line in annos_list))
def get_dataset(label_dir: str, img_dir: str) -> tuple[list, list]:
"""
- label_dir <type: str>: Path to the folder containing the annotation (label) files
- img_dir <type: str>: Path to the folder containing the images
Return <type: list>: List of image paths and labels
>>> pass # A doctest is not possible for this function.
"""
img_paths = []
labels = []
for label_file in glob.glob(os.path.join(label_dir, "*.txt")):
label_name = label_file.split(os.sep)[-1].rsplit(".", 1)[0]
with open(label_file) as in_file:
obj_lists = in_file.readlines()
img_path = os.path.join(img_dir, f"{label_name}.jpg")
boxes = []
for obj_list in obj_lists:
obj = obj_list.rstrip("\n").split(" ")
xmin = float(obj[1]) - float(obj[3]) / 2
ymin = float(obj[2]) - float(obj[4]) / 2
xmax = float(obj[1]) + float(obj[3]) / 2
ymax = float(obj[2]) + float(obj[4]) / 2
boxes.append([int(obj[0]), xmin, ymin, xmax, ymax])
if not boxes:
continue
img_paths.append(img_path)
labels.append(boxes)
return img_paths, labels
def update_image_and_anno(
all_img_list: list,
all_annos: list,
idxs: list[int],
output_size: tuple[int, int],
scale_range: tuple[float, float],
filter_scale: float = 0.0,
) -> tuple[list, list, str]:
"""
- all_img_list <type: list>: list of all image paths
- all_annos <type: list>: list of the annotations of each image
- idxs <type: list>: indices of the images chosen from the list
- output_size <type: tuple>: size of output image (Height, Width)
- scale_range <type: tuple>: range of the random scale factor
- filter_scale <type: float>: bounding boxes smaller than this scale are dropped
Return:
- output_img <type: ndarray>: image after resize
- new_anno <type: list>: list of new annotations after scaling
- path[0] <type: string>: name of the first image file
>>> pass # A doctest is not possible for this function.
"""
output_img = np.zeros([output_size[0], output_size[1], 3], dtype=np.uint8)
scale_x = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
scale_y = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
divid_point_x = int(scale_x * output_size[1])
divid_point_y = int(scale_y * output_size[0])
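# (divid_point_x, divid_point_y) is the pixel at which the four resized source
# images meet, splitting the output canvas into four quadrants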
new_anno = []
path_list = []
for i, index in enumerate(idxs):
path = all_img_list[index]
path_list.append(path)
img_annos = all_annos[index]
img = cv2.imread(path)
if i == 0: # top-left
img = cv2.resize(img, (divid_point_x, divid_point_y))
output_img[:divid_point_y, :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = bbox[2] * scale_y
xmax = bbox[3] * scale_x
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 1: # top-right
img = cv2.resize(img, (output_size[1] - divid_point_x, divid_point_y))
output_img[:divid_point_y, divid_point_x : output_size[1], :] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = bbox[2] * scale_y
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 2: # bottom-left
img = cv2.resize(img, (divid_point_x, output_size[0] - divid_point_y))
output_img[divid_point_y : output_size[0], :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = bbox[3] * scale_x
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
else: # bottom-right
img = cv2.resize(
img, (output_size[1] - divid_point_x, output_size[0] - divid_point_y)
)
output_img[
divid_point_y : output_size[0], divid_point_x : output_size[1], :
] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
# Remove bounding boxes smaller than the filter scale
if 0 < filter_scale:
new_anno = [
anno
for anno in new_anno
if filter_scale < (anno[3] - anno[1]) and filter_scale < (anno[4] - anno[2])
]
return output_img, new_anno, path_list[0]
def random_chars(number_char: int) -> str:
"""
Automatically generate a random string of the given length.
Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
>>> len(random_chars(32))
32
"""
assert number_char > 1, "The number of characters should be greater than 1"
letter_code = ascii_lowercase + digits
return "".join(random.choice(letter_code) for _ in range(number_char))
if __name__ == "__main__":
main()
print("DONE ✅")
| """Source: https://github.com/jason9075/opencv-mosaic-data-aug"""
import glob
import os
import random
from string import ascii_lowercase, digits
import cv2
import numpy as np
# Parameters
OUTPUT_SIZE = (720, 1280) # Height, Width
SCALE_RANGE = (0.4, 0.6)  # range of the random split point, as a fraction of the output size
FILTER_TINY_SCALE = 1 / 100  # boxes with height or width smaller than this scale are dropped
LABEL_DIR = ""
IMG_DIR = ""
OUTPUT_DIR = ""
NUMBER_IMAGES = 250
def main() -> None:
"""
Get images list and annotations list from input dir.
Update new images and annotations.
Save images and annotations in output dir.
>>> pass # A doctest is not possible for this function.
"""
img_paths, annos = get_dataset(LABEL_DIR, IMG_DIR)
for index in range(NUMBER_IMAGES):
idxs = random.sample(range(len(annos)), 4)
new_image, new_annos, path = update_image_and_anno(
img_paths,
annos,
idxs,
OUTPUT_SIZE,
SCALE_RANGE,
filter_scale=FILTER_TINY_SCALE,
)
# Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
letter_code = random_chars(32)
file_name = path.split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_MOSAIC_{letter_code}"
cv2.imwrite(f"{file_root}.jpg", new_image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Succeeded {index+1}/{NUMBER_IMAGES} with {file_name}")
annos_list = []
for anno in new_annos:
width = anno[3] - anno[1]
height = anno[4] - anno[2]
x_center = anno[1] + width / 2
y_center = anno[2] + height / 2
obj = f"{anno[0]} {x_center} {y_center} {width} {height}"
annos_list.append(obj)
with open(f"{file_root}.txt", "w") as outfile:
outfile.write("\n".join(line for line in annos_list))
def get_dataset(label_dir: str, img_dir: str) -> tuple[list, list]:
"""
- label_dir <type: str>: Path to the folder containing the annotation (label) files
- img_dir <type: str>: Path to the folder containing the images
Return <type: list>: List of image paths and labels
>>> pass # A doctest is not possible for this function.
"""
img_paths = []
labels = []
for label_file in glob.glob(os.path.join(label_dir, "*.txt")):
label_name = label_file.split(os.sep)[-1].rsplit(".", 1)[0]
with open(label_file) as in_file:
obj_lists = in_file.readlines()
img_path = os.path.join(img_dir, f"{label_name}.jpg")
boxes = []
for obj_list in obj_lists:
obj = obj_list.rstrip("\n").split(" ")
xmin = float(obj[1]) - float(obj[3]) / 2
ymin = float(obj[2]) - float(obj[4]) / 2
xmax = float(obj[1]) + float(obj[3]) / 2
ymax = float(obj[2]) + float(obj[4]) / 2
boxes.append([int(obj[0]), xmin, ymin, xmax, ymax])
if not boxes:
continue
img_paths.append(img_path)
labels.append(boxes)
return img_paths, labels
def update_image_and_anno(
all_img_list: list,
all_annos: list,
idxs: list[int],
output_size: tuple[int, int],
scale_range: tuple[float, float],
filter_scale: float = 0.0,
) -> tuple[list, list, str]:
"""
- all_img_list <type: list>: list of all image paths
- all_annos <type: list>: list of the annotations of each image
- idxs <type: list>: indices of the images chosen from the list
- output_size <type: tuple>: size of output image (Height, Width)
- scale_range <type: tuple>: range of the random scale factor
- filter_scale <type: float>: bounding boxes smaller than this scale are dropped
Return:
- output_img <type: ndarray>: image after resize
- new_anno <type: list>: list of new annotations after scaling
- path[0] <type: string>: name of the first image file
>>> pass # A doctest is not possible for this function.
"""
output_img = np.zeros([output_size[0], output_size[1], 3], dtype=np.uint8)
scale_x = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
scale_y = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
divid_point_x = int(scale_x * output_size[1])
divid_point_y = int(scale_y * output_size[0])
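# (divid_point_x, divid_point_y) is the pixel at which the four resized source
# images meet, splitting the output canvas into four quadrants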
new_anno = []
path_list = []
for i, index in enumerate(idxs):
path = all_img_list[index]
path_list.append(path)
img_annos = all_annos[index]
img = cv2.imread(path)
if i == 0: # top-left
img = cv2.resize(img, (divid_point_x, divid_point_y))
output_img[:divid_point_y, :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = bbox[2] * scale_y
xmax = bbox[3] * scale_x
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 1: # top-right
img = cv2.resize(img, (output_size[1] - divid_point_x, divid_point_y))
output_img[:divid_point_y, divid_point_x : output_size[1], :] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = bbox[2] * scale_y
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 2: # bottom-left
img = cv2.resize(img, (divid_point_x, output_size[0] - divid_point_y))
output_img[divid_point_y : output_size[0], :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = bbox[3] * scale_x
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
else: # bottom-right
img = cv2.resize(
img, (output_size[1] - divid_point_x, output_size[0] - divid_point_y)
)
output_img[
divid_point_y : output_size[0], divid_point_x : output_size[1], :
] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
# Remove bounding boxes smaller than the filter scale
if 0 < filter_scale:
new_anno = [
anno
for anno in new_anno
if filter_scale < (anno[3] - anno[1]) and filter_scale < (anno[4] - anno[2])
]
return output_img, new_anno, path_list[0]
def random_chars(number_char: int) -> str:
"""
Automatically generate a random string of the given length.
Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
>>> len(random_chars(32))
32
"""
assert number_char > 1, "The number of characters should be greater than 1"
letter_code = ascii_lowercase + digits
return "".join(random.choice(letter_code) for _ in range(number_char))
if __name__ == "__main__":
main()
print("DONE ✅")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
"""
def ceil(x) -> int:
"""
Return the ceiling of x as an Integral.
:param x: the number
:return: the smallest integer >= x.
>>> import math
>>> all(ceil(n) == math.ceil(n) for n
... in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
"""
return int(x) if x - int(x) <= 0 else int(x) + 1
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
"""
def ceil(x) -> int:
"""
Return the ceiling of x as an Integral.
:param x: the number
:return: the smallest integer >= x.
>>> import math
>>> all(ceil(n) == math.ceil(n) for n
... in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
"""
return int(x) if x - int(x) <= 0 else int(x) + 1
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
This is to show simple COVID-19 info fetching from the worldometers site using lxml
* The main motivation to use lxml in place of bs4 is that it is faster and therefore
more convenient to use in Python web projects (e.g. Django or Flask-based)
"""
from collections import namedtuple
import requests
from lxml import html # type: ignore
covid_data = namedtuple("covid_data", "cases deaths recovered")
def covid_stats(url: str = "https://www.worldometers.info/coronavirus/") -> covid_data:
xpath_str = '//div[@class = "maincounter-number"]/span/text()'
return covid_data(*html.fromstring(requests.get(url).content).xpath(xpath_str))
fmt = """Total COVID-19 cases in the world: {}
Total deaths due to COVID-19 in the world: {}
Total COVID-19 patients recovered in the world: {}"""
print(fmt.format(*covid_stats()))
| """
This is to show simple COVID-19 info fetching from the worldometers site using lxml
* The main motivation to use lxml in place of bs4 is that it is faster and therefore
more convenient to use in Python web projects (e.g. Django or Flask-based)
"""
from collections import namedtuple
import requests
from lxml import html # type: ignore
covid_data = namedtuple("covid_data", "cases deaths recovered")
def covid_stats(url: str = "https://www.worldometers.info/coronavirus/") -> covid_data:
xpath_str = '//div[@class = "maincounter-number"]/span/text()'
return covid_data(*html.fromstring(requests.get(url).content).xpath(xpath_str))
fmt = """Total COVID-19 cases in the world: {}
Total deaths due to COVID-19 in the world: {}
Total COVID-19 patients recovered in the world: {}"""
print(fmt.format(*covid_stats()))
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | # Finding longest distance in Directed Acyclic Graph using KahnsAlgorithm
def longestDistance(graph):
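# Kahn's algorithm: repeatedly pop vertices with indegree 0 (a topological order)
# and relax longDist[x] = longDist[vertex] + 1 along each outgoing edge, so that
# longDist[x] holds the number of vertices on the longest path ending at x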
indegree = [0] * len(graph)
queue = []
longDist = [1] * len(graph)
for key, values in graph.items():
for i in values:
indegree[i] += 1
for i in range(len(indegree)):
if indegree[i] == 0:
queue.append(i)
while queue:
vertex = queue.pop(0)
for x in graph[vertex]:
indegree[x] -= 1
if longDist[vertex] + 1 > longDist[x]:
longDist[x] = longDist[vertex] + 1
if indegree[x] == 0:
queue.append(x)
print(max(longDist))
# Adjacency list of Graph
graph = {0: [2, 3, 4], 1: [2, 7], 2: [5], 3: [5, 7], 4: [7], 5: [6], 6: [7], 7: []}
longestDistance(graph)
# Finding the longest distance in a Directed Acyclic Graph using Kahn's Algorithm
def longestDistance(graph):
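# Kahn's algorithm: repeatedly pop vertices with indegree 0 (a topological order)
# and relax longDist[x] = longDist[vertex] + 1 along each outgoing edge, so that
# longDist[x] holds the number of vertices on the longest path ending at x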
indegree = [0] * len(graph)
queue = []
longDist = [1] * len(graph)
for key, values in graph.items():
for i in values:
indegree[i] += 1
for i in range(len(indegree)):
if indegree[i] == 0:
queue.append(i)
while queue:
vertex = queue.pop(0)
for x in graph[vertex]:
indegree[x] -= 1
if longDist[vertex] + 1 > longDist[x]:
longDist[x] = longDist[vertex] + 1
if indegree[x] == 0:
queue.append(x)
print(max(longDist))
# Adjacency list of Graph
graph = {0: [2, 3, 4], 1: [2, 7], 2: [5], 3: [5, 7], 4: [7], 5: [6], 6: [7], 7: []}
longestDistance(graph)
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Problem 20: https://projecteuler.net/problem=20
n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
"""
def factorial(num: int) -> int:
"""Find the factorial of a given number n"""
fact = 1
for i in range(1, num + 1):
fact *= i
return fact
def split_and_add(number: int) -> int:
"""Split number digits and add them."""
sum_of_digits = 0
while number > 0:
last_digit = number % 10
sum_of_digits += last_digit
number = number // 10 # Removing the last_digit from the given number
return sum_of_digits
def solution(num: int = 100) -> int:
"""Returns the sum of the digits in the factorial of num
>>> solution(100)
648
>>> solution(50)
216
>>> solution(10)
27
>>> solution(5)
3
>>> solution(3)
6
>>> solution(2)
2
>>> solution(1)
1
"""
nfact = factorial(num)
result = split_and_add(nfact)
return result
if __name__ == "__main__":
print(solution(int(input("Enter the Number: ").strip())))
| """
Problem 20: https://projecteuler.net/problem=20
n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
"""
def factorial(num: int) -> int:
"""Find the factorial of a given number n"""
fact = 1
for i in range(1, num + 1):
fact *= i
return fact
def split_and_add(number: int) -> int:
"""Split number digits and add them."""
sum_of_digits = 0
while number > 0:
last_digit = number % 10
sum_of_digits += last_digit
number = number // 10 # Removing the last_digit from the given number
return sum_of_digits
def solution(num: int = 100) -> int:
"""Returns the sum of the digits in the factorial of num
>>> solution(100)
648
>>> solution(50)
216
>>> solution(10)
27
>>> solution(5)
3
>>> solution(3)
6
>>> solution(2)
2
>>> solution(1)
1
"""
nfact = factorial(num)
result = split_and_add(nfact)
return result
if __name__ == "__main__":
print(solution(int(input("Enter the Number: ").strip())))
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | from __future__ import annotations
from typing import Generic, TypeVar
T = TypeVar("T")
class DisjointSetTreeNode(Generic[T]):
# Disjoint Set Node to store the parent and rank
def __init__(self, data: T) -> None:
self.data = data
self.parent = self
self.rank = 0
class DisjointSetTree(Generic[T]):
# Disjoint Set DataStructure
def __init__(self) -> None:
# map from node name to the node object
self.map: dict[T, DisjointSetTreeNode[T]] = {}
def make_set(self, data: T) -> None:
# create a new set with x as its member
self.map[data] = DisjointSetTreeNode(data)
def find_set(self, data: T) -> DisjointSetTreeNode[T]:
# find the set x belongs to (with path-compression)
elem_ref = self.map[data]
if elem_ref != elem_ref.parent:
elem_ref.parent = self.find_set(elem_ref.parent.data)
return elem_ref.parent
def link(
self, node1: DisjointSetTreeNode[T], node2: DisjointSetTreeNode[T]
) -> None:
# helper function for union operation
if node1.rank > node2.rank:
node2.parent = node1
else:
node1.parent = node2
if node1.rank == node2.rank:
node2.rank += 1
def union(self, data1: T, data2: T) -> None:
# merge 2 disjoint sets
self.link(self.find_set(data1), self.find_set(data2))
class GraphUndirectedWeighted(Generic[T]):
def __init__(self) -> None:
# connections: map from the node to the neighbouring nodes (with weights)
self.connections: dict[T, dict[T, int]] = {}
def add_node(self, node: T) -> None:
# add a node ONLY if it's not present in the graph
if node not in self.connections:
self.connections[node] = {}
def add_edge(self, node1: T, node2: T, weight: int) -> None:
# add an edge with the given weight
self.add_node(node1)
self.add_node(node2)
self.connections[node1][node2] = weight
self.connections[node2][node1] = weight
def kruskal(self) -> GraphUndirectedWeighted[T]:
# Kruskal's Algorithm to generate a Minimum Spanning Tree (MST) of a graph
"""
Details: https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
Example:
>>> g1 = GraphUndirectedWeighted[int]()
>>> g1.add_edge(1, 2, 1)
>>> g1.add_edge(2, 3, 2)
>>> g1.add_edge(3, 4, 1)
>>> g1.add_edge(3, 5, 100) # Removed in MST
>>> g1.add_edge(4, 5, 5)
>>> assert 5 in g1.connections[3]
>>> mst = g1.kruskal()
>>> assert 5 not in mst.connections[3]
>>> g2 = GraphUndirectedWeighted[str]()
>>> g2.add_edge('A', 'B', 1)
>>> g2.add_edge('B', 'C', 2)
>>> g2.add_edge('C', 'D', 1)
>>> g2.add_edge('C', 'E', 100) # Removed in MST
>>> g2.add_edge('D', 'E', 5)
>>> assert 'E' in g2.connections["C"]
>>> mst = g2.kruskal()
>>> assert 'E' not in mst.connections['C']
"""
# getting the edges in ascending order of weights
edges = []
seen = set()
for start in self.connections:
for end in self.connections[start]:
if (start, end) not in seen:
seen.add((end, start))
edges.append((start, end, self.connections[start][end]))
edges.sort(key=lambda x: x[2])
# creating the disjoint set
disjoint_set = DisjointSetTree[T]()
for node in self.connections:
disjoint_set.make_set(node)
# MST generation
num_edges = 0
index = 0
graph = GraphUndirectedWeighted[T]()
while num_edges < len(self.connections) - 1:
u, v, w = edges[index]
index += 1
parent_u = disjoint_set.find_set(u)
parent_v = disjoint_set.find_set(v)
if parent_u != parent_v:
num_edges += 1
graph.add_edge(u, v, w)
disjoint_set.union(u, v)
return graph
| from __future__ import annotations
from typing import Generic, TypeVar
T = TypeVar("T")
class DisjointSetTreeNode(Generic[T]):
# Disjoint Set Node to store the parent and rank
def __init__(self, data: T) -> None:
self.data = data
self.parent = self
self.rank = 0
class DisjointSetTree(Generic[T]):
# Disjoint Set DataStructure
def __init__(self) -> None:
# map from node name to the node object
self.map: dict[T, DisjointSetTreeNode[T]] = {}
def make_set(self, data: T) -> None:
# create a new set with x as its member
self.map[data] = DisjointSetTreeNode(data)
def find_set(self, data: T) -> DisjointSetTreeNode[T]:
# find the set x belongs to (with path-compression)
elem_ref = self.map[data]
if elem_ref != elem_ref.parent:
elem_ref.parent = self.find_set(elem_ref.parent.data)
return elem_ref.parent
def link(
self, node1: DisjointSetTreeNode[T], node2: DisjointSetTreeNode[T]
) -> None:
# helper function for union operation
if node1.rank > node2.rank:
node2.parent = node1
else:
node1.parent = node2
if node1.rank == node2.rank:
node2.rank += 1
def union(self, data1: T, data2: T) -> None:
# merge 2 disjoint sets
self.link(self.find_set(data1), self.find_set(data2))
class GraphUndirectedWeighted(Generic[T]):
def __init__(self) -> None:
# connections: map from the node to the neighbouring nodes (with weights)
self.connections: dict[T, dict[T, int]] = {}
def add_node(self, node: T) -> None:
# add a node ONLY if it's not present in the graph
if node not in self.connections:
self.connections[node] = {}
def add_edge(self, node1: T, node2: T, weight: int) -> None:
# add an edge with the given weight
self.add_node(node1)
self.add_node(node2)
self.connections[node1][node2] = weight
self.connections[node2][node1] = weight
def kruskal(self) -> GraphUndirectedWeighted[T]:
# Kruskal's Algorithm to generate a Minimum Spanning Tree (MST) of a graph
"""
Details: https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
Example:
>>> g1 = GraphUndirectedWeighted[int]()
>>> g1.add_edge(1, 2, 1)
>>> g1.add_edge(2, 3, 2)
>>> g1.add_edge(3, 4, 1)
>>> g1.add_edge(3, 5, 100) # Removed in MST
>>> g1.add_edge(4, 5, 5)
>>> assert 5 in g1.connections[3]
>>> mst = g1.kruskal()
>>> assert 5 not in mst.connections[3]
>>> g2 = GraphUndirectedWeighted[str]()
>>> g2.add_edge('A', 'B', 1)
>>> g2.add_edge('B', 'C', 2)
>>> g2.add_edge('C', 'D', 1)
>>> g2.add_edge('C', 'E', 100) # Removed in MST
>>> g2.add_edge('D', 'E', 5)
>>> assert 'E' in g2.connections["C"]
>>> mst = g2.kruskal()
>>> assert 'E' not in mst.connections['C']
"""
# getting the edges in ascending order of weights
edges = []
seen = set()
for start in self.connections:
for end in self.connections[start]:
if (start, end) not in seen:
seen.add((end, start))
edges.append((start, end, self.connections[start][end]))
edges.sort(key=lambda x: x[2])
# creating the disjoint set
disjoint_set = DisjointSetTree[T]()
for node in self.connections:
disjoint_set.make_set(node)
# MST generation
num_edges = 0
index = 0
graph = GraphUndirectedWeighted[T]()
while num_edges < len(self.connections) - 1:
u, v, w = edges[index]
index += 1
parent_u = disjoint_set.find_set(u)
parent_v = disjoint_set.find_set(v)
if parent_u != parent_v:
num_edges += 1
graph.add_edge(u, v, w)
disjoint_set.union(u, v)
return graph
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """Author Alexandre De Zotti
Draws Julia sets of quadratic polynomials and exponential maps.
More specifically, this iterates the function a fixed number of times
then plots whether the absolute value of the last iterate is greater than
a fixed threshold (named "escape radius"). For the exponential map this is not
really an escape radius but rather a convenient way to approximate the Julia
set with bounded orbits.
The examples presented here are:
- The Cauliflower Julia set, see e.g.
https://en.wikipedia.org/wiki/File:Julia_z2%2B0,25.png
- Other examples from https://en.wikipedia.org/wiki/Julia_set
- An exponential map Julia set, ambiently homeomorphic to the examples in
http://www.math.univ-toulouse.fr/~cheritat/GalII/galery.html
and
https://ddd.uab.cat/pub/pubmat/02141493v43n1/02141493v43n1p27.pdf
Remark: Some overflow runtime warnings are suppressed. This is because of the
way the iteration loop is implemented, using numpy's efficient computations.
Overflows and infinities are replaced after each step by a large number.
"""
import warnings
from collections.abc import Callable
from typing import Any
import numpy
from matplotlib import pyplot
c_cauliflower = 0.25 + 0.0j
c_polynomial_1 = -0.4 + 0.6j
c_polynomial_2 = -0.1 + 0.651j
c_exponential = -2.0
nb_iterations = 56
window_size = 2.0
nb_pixels = 666
def eval_exponential(c_parameter: complex, z_values: numpy.ndarray) -> numpy.ndarray:
"""
Evaluate $e^z + c$.
>>> eval_exponential(0, 0)
1.0
>>> abs(eval_exponential(1, numpy.pi*1.j)) < 1e-15
True
>>> abs(eval_exponential(1.j, 0)-1-1.j) < 1e-15
True
"""
return numpy.exp(z_values) + c_parameter
def eval_quadratic_polynomial(
c_parameter: complex, z_values: numpy.ndarray
) -> numpy.ndarray:
"""
>>> eval_quadratic_polynomial(0, 2)
4
>>> eval_quadratic_polynomial(-1, 1)
0
>>> round(eval_quadratic_polynomial(1.j, 0).imag)
1
>>> round(eval_quadratic_polynomial(1.j, 0).real)
0
"""
return z_values * z_values + c_parameter
def prepare_grid(window_size: float, nb_pixels: int) -> numpy.ndarray:
"""
Create a grid of complex values of size nb_pixels*nb_pixels with real and
imaginary parts ranging from -window_size to window_size (inclusive).
Returns a numpy array.
>>> prepare_grid(1,3)
array([[-1.-1.j, -1.+0.j, -1.+1.j],
[ 0.-1.j, 0.+0.j, 0.+1.j],
[ 1.-1.j, 1.+0.j, 1.+1.j]])
"""
x = numpy.linspace(-window_size, window_size, nb_pixels)
x = x.reshape((nb_pixels, 1))
y = numpy.linspace(-window_size, window_size, nb_pixels)
y = y.reshape((1, nb_pixels))
return x + 1.0j * y
def iterate_function(
eval_function: Callable[[Any, numpy.ndarray], numpy.ndarray],
function_params: Any,
nb_iterations: int,
z_0: numpy.ndarray,
infinity: float | None = None,
) -> numpy.ndarray:
"""
Iterate the function "eval_function" exactly nb_iterations times.
The first argument of the function is a parameter which is contained in
function_params. The variable z_0 is an array that contains the initial
values to iterate from.
This function returns the final iterates.
>>> iterate_function(eval_quadratic_polynomial, 0, 3, numpy.array([0,1,2])).shape
(3,)
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[0])
0j
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[1])
(1+0j)
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[2])
(256+0j)
"""
z_n = z_0.astype("complex64")
for i in range(nb_iterations):
z_n = eval_function(function_params, z_n)
if infinity is not None:
numpy.nan_to_num(z_n, copy=False, nan=infinity)
z_n[abs(z_n) == numpy.inf] = infinity
return z_n
def show_results(
function_label: str,
function_params: Any,
escape_radius: float,
z_final: numpy.ndarray,
) -> None:
"""
Plots of whether the absolute value of z_final is greater than
the value of escape_radius. Adds the function_label and function_params to
the title.
>>> show_results('80', 0, 1, numpy.array([[0,1,.5],[.4,2,1.1],[.2,1,1.3]]))
"""
abs_z_final = (abs(z_final)).transpose()
abs_z_final[:, :] = abs_z_final[::-1, :]
pyplot.matshow(abs_z_final < escape_radius)
pyplot.title(f"Julia set of ${function_label}$, $c={function_params}$")
pyplot.show()
def ignore_overflow_warnings() -> None:
"""
Ignore some overflow and invalid value warnings.
>>> ignore_overflow_warnings()
"""
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in multiply"
)
warnings.filterwarnings(
"ignore",
category=RuntimeWarning,
message="invalid value encountered in multiply",
)
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in absolute"
)
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in exp"
)
if __name__ == "__main__":
z_0 = prepare_grid(window_size, nb_pixels)
ignore_overflow_warnings() # See file header for explanations
nb_iterations = 24
escape_radius = 2 * abs(c_cauliflower) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_cauliflower,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_cauliflower, escape_radius, z_final)
nb_iterations = 64
escape_radius = 2 * abs(c_polynomial_1) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_polynomial_1,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_polynomial_1, escape_radius, z_final)
nb_iterations = 161
escape_radius = 2 * abs(c_polynomial_2) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_polynomial_2,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_polynomial_2, escape_radius, z_final)
nb_iterations = 12
escape_radius = 10000.0
z_final = iterate_function(
eval_exponential,
c_exponential,
nb_iterations,
z_0 + 2,
infinity=1.0e10,
)
show_results("e^z+c", c_exponential, escape_radius, z_final)
| """Author Alexandre De Zotti
Draws Julia sets of quadratic polynomials and exponential maps.
More specifically, this iterates the function a fixed number of times
then plots whether the absolute value of the last iterate is greater than
a fixed threshold (named "escape radius"). For the exponential map this is not
really an escape radius but rather a convenient way to approximate the Julia
set with bounded orbits.
The examples presented here are:
- The Cauliflower Julia set, see e.g.
https://en.wikipedia.org/wiki/File:Julia_z2%2B0,25.png
- Other examples from https://en.wikipedia.org/wiki/Julia_set
- An exponential map Julia set, ambiently homeomorphic to the examples in
http://www.math.univ-toulouse.fr/~cheritat/GalII/galery.html
and
https://ddd.uab.cat/pub/pubmat/02141493v43n1/02141493v43n1p27.pdf
Remark: Some overflow runtime warnings are suppressed. This is because of the
way the iteration loop is implemented, using numpy's efficient computations.
Overflows and infinities are replaced after each step by a large number.
"""
import warnings
from collections.abc import Callable
from typing import Any
import numpy
from matplotlib import pyplot
c_cauliflower = 0.25 + 0.0j
c_polynomial_1 = -0.4 + 0.6j
c_polynomial_2 = -0.1 + 0.651j
c_exponential = -2.0
nb_iterations = 56
window_size = 2.0
nb_pixels = 666
def eval_exponential(c_parameter: complex, z_values: numpy.ndarray) -> numpy.ndarray:
"""
Evaluate $e^z + c$.
>>> eval_exponential(0, 0)
1.0
>>> abs(eval_exponential(1, numpy.pi*1.j)) < 1e-15
True
>>> abs(eval_exponential(1.j, 0)-1-1.j) < 1e-15
True
"""
return numpy.exp(z_values) + c_parameter
def eval_quadratic_polynomial(
c_parameter: complex, z_values: numpy.ndarray
) -> numpy.ndarray:
"""
>>> eval_quadratic_polynomial(0, 2)
4
>>> eval_quadratic_polynomial(-1, 1)
0
>>> round(eval_quadratic_polynomial(1.j, 0).imag)
1
>>> round(eval_quadratic_polynomial(1.j, 0).real)
0
"""
return z_values * z_values + c_parameter
def prepare_grid(window_size: float, nb_pixels: int) -> numpy.ndarray:
"""
Create a grid of complex values of size nb_pixels*nb_pixels with real and
imaginary parts ranging from -window_size to window_size (inclusive).
Returns a numpy array.
>>> prepare_grid(1,3)
array([[-1.-1.j, -1.+0.j, -1.+1.j],
[ 0.-1.j, 0.+0.j, 0.+1.j],
[ 1.-1.j, 1.+0.j, 1.+1.j]])
"""
x = numpy.linspace(-window_size, window_size, nb_pixels)
x = x.reshape((nb_pixels, 1))
y = numpy.linspace(-window_size, window_size, nb_pixels)
y = y.reshape((1, nb_pixels))
return x + 1.0j * y
def iterate_function(
eval_function: Callable[[Any, numpy.ndarray], numpy.ndarray],
function_params: Any,
nb_iterations: int,
z_0: numpy.ndarray,
infinity: float | None = None,
) -> numpy.ndarray:
"""
Iterate the function "eval_function" exactly nb_iterations times.
The first argument of the function is a parameter which is contained in
function_params. The variable z_0 is an array that contains the initial
values to iterate from.
This function returns the final iterates.
>>> iterate_function(eval_quadratic_polynomial, 0, 3, numpy.array([0,1,2])).shape
(3,)
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[0])
0j
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[1])
(1+0j)
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[2])
(256+0j)
"""
z_n = z_0.astype("complex64")
for i in range(nb_iterations):
z_n = eval_function(function_params, z_n)
if infinity is not None:
numpy.nan_to_num(z_n, copy=False, nan=infinity)
z_n[abs(z_n) == numpy.inf] = infinity
return z_n
def show_results(
function_label: str,
function_params: Any,
escape_radius: float,
z_final: numpy.ndarray,
) -> None:
"""
Plots of whether the absolute value of z_final is greater than
the value of escape_radius. Adds the function_label and function_params to
the title.
>>> show_results('80', 0, 1, numpy.array([[0,1,.5],[.4,2,1.1],[.2,1,1.3]]))
"""
abs_z_final = (abs(z_final)).transpose()
abs_z_final[:, :] = abs_z_final[::-1, :]
pyplot.matshow(abs_z_final < escape_radius)
pyplot.title(f"Julia set of ${function_label}$, $c={function_params}$")
pyplot.show()
def ignore_overflow_warnings() -> None:
"""
Ignore some overflow and invalid value warnings.
>>> ignore_overflow_warnings()
"""
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in multiply"
)
warnings.filterwarnings(
"ignore",
category=RuntimeWarning,
message="invalid value encountered in multiply",
)
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in absolute"
)
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in exp"
)
if __name__ == "__main__":
z_0 = prepare_grid(window_size, nb_pixels)
ignore_overflow_warnings() # See file header for explanations
nb_iterations = 24
escape_radius = 2 * abs(c_cauliflower) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_cauliflower,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_cauliflower, escape_radius, z_final)
nb_iterations = 64
escape_radius = 2 * abs(c_polynomial_1) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_polynomial_1,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_polynomial_1, escape_radius, z_final)
nb_iterations = 161
escape_radius = 2 * abs(c_polynomial_2) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_polynomial_2,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_polynomial_2, escape_radius, z_final)
nb_iterations = 12
escape_radius = 10000.0
z_final = iterate_function(
eval_exponential,
c_exponential,
nb_iterations,
z_0 + 2,
infinity=1.0e10,
)
show_results("e^z+c", c_exponential, escape_radius, z_final)
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Created on Thu Oct 5 16:44:23 2017
@author: Christian Bender
This Python library contains some useful functions to deal with
prime numbers and whole numbers.
Overview:
isPrime(number)
sieveEr(N)
getPrimeNumbers(N)
primeFactorization(number)
greatestPrimeFactor(number)
smallestPrimeFactor(number)
getPrime(n)
getPrimesBetween(pNumber1, pNumber2)
----
isEven(number)
isOdd(number)
gcd(number1, number2) // greatest common divisor
kgV(number1, number2) // least common multiple
getDivisors(number) // all divisors of 'number' inclusive 1, number
isPerfectNumber(number)
NEW-FUNCTIONS
simplifyFraction(numerator, denominator)
factorial (n) // n!
fib (n) // calculate the n-th fibonacci term.
-----
goldbach(number) // Goldbach's assumption
"""
from math import sqrt
def is_prime(number: int) -> bool:
"""
input: positive integer 'number'
returns true if 'number' is prime otherwise false.
"""
# precondition
assert isinstance(number, int) and (
number >= 0
), "'number' must been an int and positive"
status = True
# 0 and 1 are none primes.
if number <= 1:
status = False
for divisor in range(2, int(round(sqrt(number))) + 1):
# if 'number' divisible by 'divisor' then sets 'status'
# of false and break up the loop.
if number % divisor == 0:
status = False
break
# precondition
assert isinstance(status, bool), "'status' must been from type bool"
return status
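# Usage sketch (added for illustration): is_prime relies only on trial division
# up to sqrt(number), e.g. is_prime(2) -> True, is_prime(15) -> False,
# is_prime(97) -> True; is_prime(0) and is_prime(1) -> False.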
# ------------------------------------------
def sieveEr(N):
"""
input: positive integer 'N' > 2
returns a list of prime numbers from 2 up to N.
This function implements the algorithm called
sieve of Eratosthenes.
"""
# precondition
assert isinstance(N, int) and (N > 2), "'N' must been an int and > 2"
# beginList: contains all natural numbers from 2 up to N
beginList = [x for x in range(2, N + 1)]
ans = [] # this list will be returns.
# actual sieve of Eratosthenes
for i in range(len(beginList)):
for j in range(i + 1, len(beginList)):
if (beginList[i] != 0) and (beginList[j] % beginList[i] == 0):
beginList[j] = 0
# filters actual prime numbers.
ans = [x for x in beginList if x != 0]
# precondition
assert isinstance(ans, list), "'ans' must been from type list"
return ans
# --------------------------------
def getPrimeNumbers(N):
"""
input: positive integer 'N' > 2
returns a list of prime numbers from 2 up to N (inclusive)
This function is more efficient than the function 'sieveEr(...)'
"""
# precondition
assert isinstance(N, int) and (N > 2), "'N' must been an int and > 2"
ans = []
# iterates over all numbers between 2 up to N+1
# if a number is prime then appends to list 'ans'
for number in range(2, N + 1):
if is_prime(number):
ans.append(number)
# precondition
assert isinstance(ans, list), "'ans' must been from type list"
return ans
# -----------------------------------------
def primeFactorization(number):
"""
input: positive integer 'number'
returns a list of the prime number factors of 'number'
"""
# precondition
assert isinstance(number, int) and number >= 0, "'number' must been an int and >= 0"
ans = [] # this list will be returns of the function.
# potential prime number factors.
factor = 2
quotient = number
if number == 0 or number == 1:
ans.append(number)
# if 'number' not prime then builds the prime factorization of 'number'
elif not is_prime(number):
while quotient != 1:
if is_prime(factor) and (quotient % factor == 0):
ans.append(factor)
quotient /= factor
else:
factor += 1
else:
ans.append(number)
# precondition
assert isinstance(ans, list), "'ans' must been from type list"
return ans
# -----------------------------------------
def greatestPrimeFactor(number):
"""
input: positive integer 'number' >= 0
returns the greatest prime number factor of 'number'
"""
# precondition
assert isinstance(number, int) and (
number >= 0
), "'number' bust been an int and >= 0"
ans = 0
# prime factorization of 'number'
primeFactors = primeFactorization(number)
ans = max(primeFactors)
# precondition
assert isinstance(ans, int), "'ans' must been from type int"
return ans
# ----------------------------------------------
def smallestPrimeFactor(number):
"""
input: integer 'number' >= 0
returns the smallest prime number factor of 'number'
"""
# precondition
assert isinstance(number, int) and (
number >= 0
), "'number' bust been an int and >= 0"
ans = 0
# prime factorization of 'number'
primeFactors = primeFactorization(number)
ans = min(primeFactors)
# precondition
assert isinstance(ans, int), "'ans' must been from type int"
return ans
# ----------------------
def isEven(number):
"""
input: integer 'number'
returns true if 'number' is even, otherwise false.
"""
# precondition
assert isinstance(number, int), "'number' must been an int"
assert isinstance(number % 2 == 0, bool), "compare bust been from type bool"
return number % 2 == 0
# ------------------------
def isOdd(number):
"""
input: integer 'number'
returns true if 'number' is odd, otherwise false.
"""
# precondition
assert isinstance(number, int), "'number' must been an int"
assert isinstance(number % 2 != 0, bool), "compare bust been from type bool"
return number % 2 != 0
# ------------------------
def goldbach(number):
"""
Goldbach's assumption
input: an even positive integer 'number' > 2
returns a list of two prime numbers whose sum is equal to 'number'
"""
# precondition
assert (
isinstance(number, int) and (number > 2) and isEven(number)
), "'number' must been an int, even and > 2"
ans = [] # this list will returned
# creates a list of prime numbers between 2 up to 'number'
primeNumbers = getPrimeNumbers(number)
lenPN = len(primeNumbers)
# run variable for while-loops.
i = 0
j = None
# exit variable. for break up the loops
loop = True
while i < lenPN and loop:
j = i + 1
while j < lenPN and loop:
if primeNumbers[i] + primeNumbers[j] == number:
loop = False
ans.append(primeNumbers[i])
ans.append(primeNumbers[j])
j += 1
i += 1
# precondition
assert (
isinstance(ans, list)
and (len(ans) == 2)
and (ans[0] + ans[1] == number)
and is_prime(ans[0])
and is_prime(ans[1])
), "'ans' must contains two primes. And sum of elements must been eq 'number'"
return ans
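# Example (added for illustration): goldbach(28) returns [5, 23], the first pair
# of primes, in the search order used above, whose sum is 28.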
# ----------------------------------------------
def gcd(number1, number2):
"""
Greatest common divisor
input: two positive integer 'number1' and 'number2'
returns the greatest common divisor of 'number1' and 'number2'
"""
# precondition
assert (
isinstance(number1, int)
and isinstance(number2, int)
and (number1 >= 0)
and (number2 >= 0)
), "'number1' and 'number2' must been positive integer."
rest = 0
while number2 != 0:
rest = number1 % number2
number1 = number2
number2 = rest
# precondition
assert isinstance(number1, int) and (
number1 >= 0
), "'number' must been from type int and positive"
return number1
# ----------------------------------------------------
def kgV(number1, number2):
"""
Least common multiple
input: two positive integer 'number1' and 'number2'
returns the least common multiple of 'number1' and 'number2'
"""
# precondition
assert (
isinstance(number1, int)
and isinstance(number2, int)
and (number1 >= 1)
and (number2 >= 1)
), "'number1' and 'number2' must been positive integer."
ans = 1 # actual answer that will be return.
# for kgV (x,1)
if number1 > 1 and number2 > 1:
# builds the prime factorization of 'number1' and 'number2'
primeFac1 = primeFactorization(number1)
primeFac2 = primeFactorization(number2)
elif number1 == 1 or number2 == 1:
primeFac1 = []
primeFac2 = []
ans = max(number1, number2)
count1 = 0
count2 = 0
done = [] # captured numbers int both 'primeFac1' and 'primeFac2'
# iterates through primeFac1
for n in primeFac1:
if n not in done:
if n in primeFac2:
count1 = primeFac1.count(n)
count2 = primeFac2.count(n)
for i in range(max(count1, count2)):
ans *= n
else:
count1 = primeFac1.count(n)
for i in range(count1):
ans *= n
done.append(n)
# iterates through primeFac2
for n in primeFac2:
if n not in done:
count2 = primeFac2.count(n)
for i in range(count2):
ans *= n
done.append(n)
# precondition
assert isinstance(ans, int) and (
ans >= 0
), "'ans' must been from type int and positive"
return ans
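# Sanity check (added for illustration): for positive integers the identity
# kgV(a, b) * gcd(a, b) == a * b holds, e.g. kgV(12, 18) == 36 and gcd(12, 18) == 6.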
# ----------------------------------
def getPrime(n):
"""
Gets the n-th prime number.
input: positive integer 'n' >= 0
returns the n-th prime number, beginning at index 0
"""
# precondition
assert isinstance(n, int) and (n >= 0), "'number' must been a positive int"
index = 0
ans = 2 # this variable holds the answer
while index < n:
index += 1
ans += 1 # counts to the next number
# if ans not prime then
# runs to the next prime number.
while not is_prime(ans):
ans += 1
# precondition
assert isinstance(ans, int) and is_prime(
ans
), "'ans' must been a prime number and from type int"
return ans
# ---------------------------------------------------
def getPrimesBetween(pNumber1, pNumber2):
"""
input: prime numbers 'pNumber1' and 'pNumber2'
pNumber1 < pNumber2
returns a list of all prime numbers between 'pNumber1' (exclusive)
and 'pNumber2' (exclusive)
"""
# precondition
assert (
is_prime(pNumber1) and is_prime(pNumber2) and (pNumber1 < pNumber2)
), "The arguments must been prime numbers and 'pNumber1' < 'pNumber2'"
number = pNumber1 + 1 # jump to the next number
ans = [] # this list will be returns.
# if number is not prime then
# fetch the next prime number.
while not is_prime(number):
number += 1
while number < pNumber2:
ans.append(number)
number += 1
# fetch the next prime number.
while not is_prime(number):
number += 1
# precondition
assert (
isinstance(ans, list) and ans[0] != pNumber1 and ans[len(ans) - 1] != pNumber2
), "'ans' must been a list without the arguments"
# 'ans' contains not 'pNumber1' and 'pNumber2' !
return ans
# ----------------------------------------------------
def getDivisors(n):
"""
input: positive integer 'n' >= 1
returns all divisors of n (inclusive 1 and 'n')
"""
# precondition
assert isinstance(n, int) and (n >= 1), "'n' must been int and >= 1"
ans = [] # will be returned.
for divisor in range(1, n + 1):
if n % divisor == 0:
ans.append(divisor)
# precondition
assert ans[0] == 1 and ans[len(ans) - 1] == n, "Error in function getDivisiors(...)"
return ans
# ----------------------------------------------------
def isPerfectNumber(number):
"""
input: positive integer 'number' > 1
returns true if 'number' is a perfect number otherwise false.
"""
# precondition
assert isinstance(number, int) and (
number > 1
), "'number' must been an int and >= 1"
divisors = getDivisors(number)
# precondition
assert (
isinstance(divisors, list)
and (divisors[0] == 1)
and (divisors[len(divisors) - 1] == number)
), "Error in help-function getDivisiors(...)"
# summed all divisors up to 'number' (exclusive), hence [:-1]
return sum(divisors[:-1]) == number
# ------------------------------------------------------------
def simplifyFraction(numerator, denominator):
"""
input: two integer 'numerator' and 'denominator'
assumes: 'denominator' != 0
returns: a tuple with simplify numerator and denominator.
"""
# precondition
assert (
isinstance(numerator, int)
and isinstance(denominator, int)
and (denominator != 0)
), "The arguments must been from type int and 'denominator' != 0"
# build the greatest common divisor of numerator and denominator.
gcdOfFraction = gcd(abs(numerator), abs(denominator))
# precondition
assert (
isinstance(gcdOfFraction, int)
and (numerator % gcdOfFraction == 0)
and (denominator % gcdOfFraction == 0)
), "Error in function gcd(...,...)"
return (numerator // gcdOfFraction, denominator // gcdOfFraction)
# -----------------------------------------------------------------
def factorial(n):
"""
input: positive integer 'n'
returns the factorial of 'n' (n!)
"""
# precondition
assert isinstance(n, int) and (n >= 0), "'n' must been a int and >= 0"
ans = 1 # this will be return.
for factor in range(1, n + 1):
ans *= factor
return ans
# -------------------------------------------------------------------
def fib(n):
"""
input: positive integer 'n'
returns the n-th fibonacci term , indexing by 0
"""
# precondition
assert isinstance(n, int) and (n >= 0), "'n' must been an int and >= 0"
tmp = 0
fib1 = 1
ans = 1 # this will be return
for i in range(n - 1):
tmp = ans
ans += fib1
fib1 = tmp
return ans
| """
Created on Thu Oct 5 16:44:23 2017
@author: Christian Bender
This Python library contains some useful functions to deal with
prime numbers and whole numbers.
Overview:
isPrime(number)
sieveEr(N)
getPrimeNumbers(N)
primeFactorization(number)
greatestPrimeFactor(number)
smallestPrimeFactor(number)
getPrime(n)
getPrimesBetween(pNumber1, pNumber2)
----
isEven(number)
isOdd(number)
gcd(number1, number2) // greatest common divisor
kgV(number1, number2) // least common multiple
getDivisors(number) // all divisors of 'number' inclusive 1, number
isPerfectNumber(number)
NEW-FUNCTIONS
simplifyFraction(numerator, denominator)
factorial (n) // n!
fib (n) // calculate the n-th fibonacci term.
-----
goldbach(number) // Goldbach's assumption
"""
from math import sqrt
def is_prime(number: int) -> bool:
"""
input: positive integer 'number'
returns true if 'number' is prime otherwise false.
"""
# precondition
assert isinstance(number, int) and (
number >= 0
), "'number' must been an int and positive"
status = True
# 0 and 1 are none primes.
if number <= 1:
status = False
for divisor in range(2, int(round(sqrt(number))) + 1):
# if 'number' divisible by 'divisor' then sets 'status'
# of false and break up the loop.
if number % divisor == 0:
status = False
break
# precondition
assert isinstance(status, bool), "'status' must been from type bool"
return status
# ------------------------------------------
def sieveEr(N):
"""
input: positive integer 'N' > 2
returns a list of prime numbers from 2 up to N.
This function implements the algorithm called
sieve of Eratosthenes.
"""
# precondition
assert isinstance(N, int) and (N > 2), "'N' must been an int and > 2"
# beginList: contains all natural numbers from 2 up to N
beginList = [x for x in range(2, N + 1)]
ans = [] # this list will be returns.
# actual sieve of Eratosthenes
for i in range(len(beginList)):
for j in range(i + 1, len(beginList)):
if (beginList[i] != 0) and (beginList[j] % beginList[i] == 0):
beginList[j] = 0
# filters actual prime numbers.
ans = [x for x in beginList if x != 0]
# precondition
assert isinstance(ans, list), "'ans' must been from type list"
return ans
# --------------------------------
def getPrimeNumbers(N):
"""
input: positive integer 'N' > 2
returns a list of prime numbers from 2 up to N (inclusive)
This function is more efficient than the function 'sieveEr(...)'
"""
# precondition
assert isinstance(N, int) and (N > 2), "'N' must been an int and > 2"
ans = []
# iterates over all numbers between 2 up to N+1
# if a number is prime then appends to list 'ans'
for number in range(2, N + 1):
if is_prime(number):
ans.append(number)
# precondition
assert isinstance(ans, list), "'ans' must been from type list"
return ans
# -----------------------------------------
def primeFactorization(number):
"""
input: positive integer 'number'
returns a list of the prime number factors of 'number'
"""
# precondition
assert isinstance(number, int) and number >= 0, "'number' must been an int and >= 0"
ans = [] # this list will be returns of the function.
# potential prime number factors.
factor = 2
quotient = number
if number == 0 or number == 1:
ans.append(number)
# if 'number' not prime then builds the prime factorization of 'number'
elif not is_prime(number):
while quotient != 1:
if is_prime(factor) and (quotient % factor == 0):
ans.append(factor)
quotient /= factor
else:
factor += 1
else:
ans.append(number)
# precondition
assert isinstance(ans, list), "'ans' must been from type list"
return ans
# -----------------------------------------
def greatestPrimeFactor(number):
"""
input: positive integer 'number' >= 0
returns the greatest prime number factor of 'number'
"""
# precondition
assert isinstance(number, int) and (
number >= 0
), "'number' bust been an int and >= 0"
ans = 0
# prime factorization of 'number'
primeFactors = primeFactorization(number)
ans = max(primeFactors)
# precondition
assert isinstance(ans, int), "'ans' must been from type int"
return ans
# ----------------------------------------------
def smallestPrimeFactor(number):
"""
input: integer 'number' >= 0
returns the smallest prime number factor of 'number'
"""
# precondition
assert isinstance(number, int) and (
number >= 0
), "'number' bust been an int and >= 0"
ans = 0
# prime factorization of 'number'
primeFactors = primeFactorization(number)
ans = min(primeFactors)
# precondition
assert isinstance(ans, int), "'ans' must been from type int"
return ans
# ----------------------
def isEven(number):
"""
input: integer 'number'
returns true if 'number' is even, otherwise false.
"""
# precondition
assert isinstance(number, int), "'number' must been an int"
assert isinstance(number % 2 == 0, bool), "compare bust been from type bool"
return number % 2 == 0
# ------------------------
def isOdd(number):
"""
input: integer 'number'
returns true if 'number' is odd, otherwise false.
"""
# precondition
assert isinstance(number, int), "'number' must been an int"
assert isinstance(number % 2 != 0, bool), "compare bust been from type bool"
return number % 2 != 0
# ------------------------
def goldbach(number):
"""
Goldbach's assumption
input: an even positive integer 'number' > 2
returns a list of two prime numbers whose sum is equal to 'number'
"""
# precondition
assert (
isinstance(number, int) and (number > 2) and isEven(number)
), "'number' must been an int, even and > 2"
ans = [] # this list will returned
# creates a list of prime numbers between 2 up to 'number'
primeNumbers = getPrimeNumbers(number)
lenPN = len(primeNumbers)
# run variable for while-loops.
i = 0
j = None
# exit variable. for break up the loops
loop = True
while i < lenPN and loop:
j = i + 1
while j < lenPN and loop:
if primeNumbers[i] + primeNumbers[j] == number:
loop = False
ans.append(primeNumbers[i])
ans.append(primeNumbers[j])
j += 1
i += 1
# precondition
assert (
isinstance(ans, list)
and (len(ans) == 2)
and (ans[0] + ans[1] == number)
and is_prime(ans[0])
and is_prime(ans[1])
), "'ans' must contains two primes. And sum of elements must been eq 'number'"
return ans
# ----------------------------------------------
def gcd(number1, number2):
"""
Greatest common divisor
input: two positive integer 'number1' and 'number2'
returns the greatest common divisor of 'number1' and 'number2'
"""
# precondition
assert (
isinstance(number1, int)
and isinstance(number2, int)
and (number1 >= 0)
and (number2 >= 0)
), "'number1' and 'number2' must been positive integer."
rest = 0
while number2 != 0:
rest = number1 % number2
number1 = number2
number2 = rest
# precondition
assert isinstance(number1, int) and (
number1 >= 0
), "'number' must been from type int and positive"
return number1
# ----------------------------------------------------
def kgV(number1, number2):
"""
Least common multiple
input: two positive integer 'number1' and 'number2'
returns the least common multiple of 'number1' and 'number2'
"""
# precondition
assert (
isinstance(number1, int)
and isinstance(number2, int)
and (number1 >= 1)
and (number2 >= 1)
), "'number1' and 'number2' must been positive integer."
ans = 1 # actual answer that will be return.
# for kgV (x,1)
if number1 > 1 and number2 > 1:
# builds the prime factorization of 'number1' and 'number2'
primeFac1 = primeFactorization(number1)
primeFac2 = primeFactorization(number2)
elif number1 == 1 or number2 == 1:
primeFac1 = []
primeFac2 = []
ans = max(number1, number2)
count1 = 0
count2 = 0
done = [] # captured numbers int both 'primeFac1' and 'primeFac2'
# iterates through primeFac1
for n in primeFac1:
if n not in done:
if n in primeFac2:
count1 = primeFac1.count(n)
count2 = primeFac2.count(n)
for i in range(max(count1, count2)):
ans *= n
else:
count1 = primeFac1.count(n)
for i in range(count1):
ans *= n
done.append(n)
# iterates through primeFac2
for n in primeFac2:
if n not in done:
count2 = primeFac2.count(n)
for i in range(count2):
ans *= n
done.append(n)
# precondition
assert isinstance(ans, int) and (
ans >= 0
), "'ans' must been from type int and positive"
return ans
# ----------------------------------
def getPrime(n):
"""
Gets the n-th prime number.
input: positive integer 'n' >= 0
returns the n-th prime number, beginning at index 0
"""
# precondition
assert isinstance(n, int) and (n >= 0), "'number' must been a positive int"
index = 0
ans = 2 # this variable holds the answer
while index < n:
index += 1
ans += 1 # counts to the next number
# if ans not prime then
# runs to the next prime number.
while not is_prime(ans):
ans += 1
# precondition
assert isinstance(ans, int) and is_prime(
ans
), "'ans' must been a prime number and from type int"
return ans
# ---------------------------------------------------
def getPrimesBetween(pNumber1, pNumber2):
"""
input: prime numbers 'pNumber1' and 'pNumber2'
pNumber1 < pNumber2
returns a list of all prime numbers between 'pNumber1' (exclusive)
and 'pNumber2' (exclusive)
"""
# precondition
assert (
is_prime(pNumber1) and is_prime(pNumber2) and (pNumber1 < pNumber2)
), "The arguments must been prime numbers and 'pNumber1' < 'pNumber2'"
number = pNumber1 + 1 # jump to the next number
ans = [] # this list will be returns.
# if number is not prime then
# fetch the next prime number.
while not is_prime(number):
number += 1
while number < pNumber2:
ans.append(number)
number += 1
# fetch the next prime number.
while not is_prime(number):
number += 1
# precondition
assert (
isinstance(ans, list) and ans[0] != pNumber1 and ans[len(ans) - 1] != pNumber2
), "'ans' must been a list without the arguments"
# 'ans' contains not 'pNumber1' and 'pNumber2' !
return ans
# ----------------------------------------------------
def getDivisors(n):
"""
input: positive integer 'n' >= 1
returns all divisors of n (inclusive 1 and 'n')
"""
# precondition
assert isinstance(n, int) and (n >= 1), "'n' must been int and >= 1"
ans = [] # will be returned.
for divisor in range(1, n + 1):
if n % divisor == 0:
ans.append(divisor)
# precondition
assert ans[0] == 1 and ans[len(ans) - 1] == n, "Error in function getDivisiors(...)"
return ans
# ----------------------------------------------------
def isPerfectNumber(number):
"""
input: positive integer 'number' > 1
returns true if 'number' is a perfect number otherwise false.
"""
# precondition
assert isinstance(number, int) and (
number > 1
), "'number' must been an int and >= 1"
divisors = getDivisors(number)
# precondition
assert (
isinstance(divisors, list)
and (divisors[0] == 1)
and (divisors[len(divisors) - 1] == number)
), "Error in help-function getDivisiors(...)"
# summed all divisors up to 'number' (exclusive), hence [:-1]
return sum(divisors[:-1]) == number
# ------------------------------------------------------------
def simplifyFraction(numerator, denominator):
"""
input: two integer 'numerator' and 'denominator'
assumes: 'denominator' != 0
returns: a tuple with simplify numerator and denominator.
"""
# precondition
assert (
isinstance(numerator, int)
and isinstance(denominator, int)
and (denominator != 0)
), "The arguments must been from type int and 'denominator' != 0"
# build the greatest common divisor of numerator and denominator.
gcdOfFraction = gcd(abs(numerator), abs(denominator))
# precondition
assert (
isinstance(gcdOfFraction, int)
and (numerator % gcdOfFraction == 0)
and (denominator % gcdOfFraction == 0)
), "Error in function gcd(...,...)"
return (numerator // gcdOfFraction, denominator // gcdOfFraction)
# -----------------------------------------------------------------
def factorial(n):
"""
input: positive integer 'n'
returns the factorial of 'n' (n!)
"""
# precondition
assert isinstance(n, int) and (n >= 0), "'n' must been a int and >= 0"
ans = 1 # this will be return.
for factor in range(1, n + 1):
ans *= factor
return ans
# -------------------------------------------------------------------
def fib(n):
"""
input: positive integer 'n'
returns the n-th fibonacci term , indexing by 0
"""
# precondition
assert isinstance(n, int) and (n >= 0), "'n' must been an int and >= 0"
tmp = 0
fib1 = 1
ans = 1 # this will be return
for i in range(n - 1):
tmp = ans
ans += fib1
fib1 = tmp
return ans
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """Prime Check."""
import math
import unittest
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> is_prime(0)
False
>>> is_prime(1)
False
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(87)
False
>>> is_prime(563)
True
>>> is_prime(2999)
True
>>> is_prime(67483)
False
"""
# precondition
assert isinstance(number, int) and (
number >= 0
), "'number' must been an int and positive"
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes greater than 3 are of the form 6k +/- 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
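# Why the 6k +/- 1 step works (added note): every integer is 6k + r with
# r in {0, 1, 2, 3, 4, 5}; r = 0, 2, 4 gives a multiple of 2 and r = 3 a multiple
# of 3, so after the explicit 2- and 3-checks above only candidate divisors of the
# form 6k - 1 and 6k + 1 (i.e. i and i + 2 for i = 5, 11, 17, ...) remain.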
class Test(unittest.TestCase):
def test_primes(self):
self.assertTrue(is_prime(2))
self.assertTrue(is_prime(3))
self.assertTrue(is_prime(5))
self.assertTrue(is_prime(7))
self.assertTrue(is_prime(11))
self.assertTrue(is_prime(13))
self.assertTrue(is_prime(17))
self.assertTrue(is_prime(19))
self.assertTrue(is_prime(23))
self.assertTrue(is_prime(29))
def test_not_primes(self):
with self.assertRaises(AssertionError):
is_prime(-19)
self.assertFalse(
is_prime(0),
"Zero doesn't have any positive factors, primes must have exactly two.",
)
self.assertFalse(
is_prime(1),
"One only has 1 positive factor, primes must have exactly two.",
)
self.assertFalse(is_prime(2 * 2))
self.assertFalse(is_prime(2 * 3))
self.assertFalse(is_prime(3 * 3))
self.assertFalse(is_prime(3 * 5))
self.assertFalse(is_prime(3 * 5 * 7))
if __name__ == "__main__":
unittest.main()
| """Prime Check."""
import math
import unittest
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> is_prime(0)
False
>>> is_prime(1)
False
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(87)
False
>>> is_prime(563)
True
>>> is_prime(2999)
True
>>> is_prime(67483)
False
"""
# precondition
assert isinstance(number, int) and (
number >= 0
), "'number' must been an int and positive"
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes greater than 3 are of the form 6k +/- 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
class Test(unittest.TestCase):
def test_primes(self):
self.assertTrue(is_prime(2))
self.assertTrue(is_prime(3))
self.assertTrue(is_prime(5))
self.assertTrue(is_prime(7))
self.assertTrue(is_prime(11))
self.assertTrue(is_prime(13))
self.assertTrue(is_prime(17))
self.assertTrue(is_prime(19))
self.assertTrue(is_prime(23))
self.assertTrue(is_prime(29))
def test_not_primes(self):
with self.assertRaises(AssertionError):
is_prime(-19)
self.assertFalse(
is_prime(0),
"Zero doesn't have any positive factors, primes must have exactly two.",
)
self.assertFalse(
is_prime(1),
"One only has 1 positive factor, primes must have exactly two.",
)
self.assertFalse(is_prime(2 * 2))
self.assertFalse(is_prime(2 * 3))
self.assertFalse(is_prime(3 * 3))
self.assertFalse(is_prime(3 * 5))
self.assertFalse(is_prime(3 * 5 * 7))
if __name__ == "__main__":
unittest.main()
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
This is a Python implementation of the levenshtein distance.
Levenshtein distance is a string metric for measuring the
difference between two sequences.
For doctests run the following command:
python -m doctest -v levenshtein-distance.py
or
python3 -m doctest -v levenshtein-distance.py
For manual testing run:
python levenshtein-distance.py
"""
def levenshtein_distance(first_word: str, second_word: str) -> int:
"""Implementation of the levenshtein distance in Python.
:param first_word: the first word to measure the difference.
:param second_word: the second word to measure the difference.
:return: the levenshtein distance between the two words.
Examples:
>>> levenshtein_distance("planet", "planetary")
3
>>> levenshtein_distance("", "test")
4
>>> levenshtein_distance("book", "back")
2
>>> levenshtein_distance("book", "book")
0
>>> levenshtein_distance("test", "")
4
>>> levenshtein_distance("", "")
0
>>> levenshtein_distance("orchestration", "container")
10
"""
# The longer word should come first
if len(first_word) < len(second_word):
return levenshtein_distance(second_word, first_word)
if len(second_word) == 0:
return len(first_word)
previous_row = list(range(len(second_word) + 1))
for i, c1 in enumerate(first_word):
current_row = [i + 1]
for j, c2 in enumerate(second_word):
# Calculate insertions, deletions and substitutions
insertions = previous_row[j + 1] + 1
deletions = current_row[j] + 1
substitutions = previous_row[j] + (c1 != c2)
# Get the minimum to append to the current row
current_row.append(min(insertions, deletions, substitutions))
# Store the previous row
previous_row = current_row
# Returns the last element (distance)
return previous_row[-1]
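# Worked example (added for illustration): for levenshtein_distance("book", "back")
# the DP rows evolve as
# [0, 1, 2, 3, 4] -> [1, 0, 1, 2, 3] -> [2, 1, 1, 2, 3] -> [3, 2, 2, 2, 3]
# -> [4, 3, 3, 3, 2], and the last entry, 2, is the returned distance
# (substitute 'o' -> 'a' and 'o' -> 'c').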
if __name__ == "__main__":
first_word = input("Enter the first word:\n").strip()
second_word = input("Enter the second word:\n").strip()
result = levenshtein_distance(first_word, second_word)
print(f"Levenshtein distance between {first_word} and {second_word} is {result}")
| """
This is a Python implementation of the levenshtein distance.
Levenshtein distance is a string metric for measuring the
difference between two sequences.
For doctests run the following command:
python -m doctest -v levenshtein-distance.py
or
python3 -m doctest -v levenshtein-distance.py
For manual testing run:
python levenshtein-distance.py
"""
def levenshtein_distance(first_word: str, second_word: str) -> int:
"""Implementation of the levenshtein distance in Python.
:param first_word: the first word to measure the difference.
:param second_word: the second word to measure the difference.
:return: the levenshtein distance between the two words.
Examples:
>>> levenshtein_distance("planet", "planetary")
3
>>> levenshtein_distance("", "test")
4
>>> levenshtein_distance("book", "back")
2
>>> levenshtein_distance("book", "book")
0
>>> levenshtein_distance("test", "")
4
>>> levenshtein_distance("", "")
0
>>> levenshtein_distance("orchestration", "container")
10
"""
# The longer word should come first
if len(first_word) < len(second_word):
return levenshtein_distance(second_word, first_word)
if len(second_word) == 0:
return len(first_word)
previous_row = list(range(len(second_word) + 1))
for i, c1 in enumerate(first_word):
current_row = [i + 1]
for j, c2 in enumerate(second_word):
# Calculate insertions, deletions and substitutions
insertions = previous_row[j + 1] + 1
deletions = current_row[j] + 1
substitutions = previous_row[j] + (c1 != c2)
# Get the minimum to append to the current row
current_row.append(min(insertions, deletions, substitutions))
# Store the previous row
previous_row = current_row
# Returns the last element (distance)
return previous_row[-1]
if __name__ == "__main__":
first_word = input("Enter the first word:\n").strip()
second_word = input("Enter the second word:\n").strip()
result = levenshtein_distance(first_word, second_word)
print(f"Levenshtein distance between {first_word} and {second_word} is {result}")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | from copy import deepcopy
class FenwickTree:
"""
Fenwick Tree
More info: https://en.wikipedia.org/wiki/Fenwick_tree
"""
def __init__(self, arr: list[int] | None = None, size: int | None = None) -> None:
"""
Constructor for the Fenwick tree
Parameters:
arr (list): list of elements to initialize the tree with (optional)
size (int): size of the Fenwick tree (if arr is None)
"""
if arr is None and size is not None:
self.size = size
self.tree = [0] * size
elif arr is not None:
self.init(arr)
else:
raise ValueError("Either arr or size must be specified")
def init(self, arr: list[int]) -> None:
"""
Initialize the Fenwick tree with arr in O(N)
Parameters:
arr (list): list of elements to initialize the tree with
Returns:
None
>>> a = [1, 2, 3, 4, 5]
>>> f1 = FenwickTree(a)
>>> f2 = FenwickTree(size=len(a))
>>> for index, value in enumerate(a):
... f2.add(index, value)
>>> f1.tree == f2.tree
True
"""
self.size = len(arr)
self.tree = deepcopy(arr)
for i in range(1, self.size):
j = self.next(i)
if j < self.size:
self.tree[j] += self.tree[i]
def get_array(self) -> list[int]:
"""
Get the Normal Array of the Fenwick tree in O(N)
Returns:
list: Normal Array of the Fenwick tree
>>> a = [i for i in range(128)]
>>> f = FenwickTree(a)
>>> f.get_array() == a
True
"""
arr = self.tree[:]
for i in range(self.size - 1, 0, -1):
j = self.next(i)
if j < self.size:
arr[j] -= arr[i]
return arr
@staticmethod
def next(index: int) -> int:
return index + (index & (-index))
@staticmethod
def prev(index: int) -> int:
return index - (index & (-index))
def add(self, index: int, value: int) -> None:
"""
Add a value to index in O(lg N)
Parameters:
index (int): index to add value to
value (int): value to add to index
Returns:
None
>>> f = FenwickTree([1, 2, 3, 4, 5])
>>> f.add(0, 1)
>>> f.add(1, 2)
>>> f.add(2, 3)
>>> f.add(3, 4)
>>> f.add(4, 5)
>>> f.get_array()
[2, 4, 6, 8, 10]
"""
if index == 0:
self.tree[0] += value
return
while index < self.size:
self.tree[index] += value
index = self.next(index)
def update(self, index: int, value: int) -> None:
"""
Set the value of index in O(lg N)
Parameters:
index (int): index to set value to
value (int): value to set in index
Returns:
None
>>> f = FenwickTree([5, 4, 3, 2, 1])
>>> f.update(0, 1)
>>> f.update(1, 2)
>>> f.update(2, 3)
>>> f.update(3, 4)
>>> f.update(4, 5)
>>> f.get_array()
[1, 2, 3, 4, 5]
"""
self.add(index, value - self.get(index))
def prefix(self, right: int) -> int:
"""
Prefix sum of all elements in [0, right) in O(lg N)
Parameters:
right (int): right bound of the query (exclusive)
Returns:
int: sum of all elements in [0, right)
>>> a = [i for i in range(128)]
>>> f = FenwickTree(a)
>>> res = True
>>> for i in range(len(a)):
... res = res and f.prefix(i) == sum(a[:i])
>>> res
True
"""
if right == 0:
return 0
result = self.tree[0]
right -= 1 # make right inclusive
while right > 0:
result += self.tree[right]
right = self.prev(right)
return result
def query(self, left: int, right: int) -> int:
"""
Query the sum of all elements in [left, right) in O(lg N)
Parameters:
left (int): left bound of the query (inclusive)
right (int): right bound of the query (exclusive)
Returns:
int: sum of all elements in [left, right)
>>> a = [i for i in range(128)]
>>> f = FenwickTree(a)
>>> res = True
>>> for i in range(len(a)):
... for j in range(i + 1, len(a)):
... res = res and f.query(i, j) == sum(a[i:j])
>>> res
True
"""
return self.prefix(right) - self.prefix(left)
def get(self, index: int) -> int:
"""
Get value at index in O(lg N)
Parameters:
index (int): index to get the value
Returns:
int: Value of element at index
>>> a = [i for i in range(128)]
>>> f = FenwickTree(a)
>>> res = True
>>> for i in range(len(a)):
... res = res and f.get(i) == a[i]
>>> res
True
"""
return self.query(index, index + 1)
def rank_query(self, value: int) -> int:
"""
Find the largest index with prefix(i) <= value in O(lg N)
NOTE: Requires that all values are non-negative!
Parameters:
value (int): value to find the largest index of
Returns:
-1: if value is smaller than all elements in prefix sum
int: largest index with prefix(i) <= value
>>> f = FenwickTree([1, 2, 0, 3, 0, 5])
>>> f.rank_query(0)
-1
>>> f.rank_query(2)
0
>>> f.rank_query(1)
0
>>> f.rank_query(3)
2
>>> f.rank_query(5)
2
>>> f.rank_query(6)
4
>>> f.rank_query(11)
5
"""
value -= self.tree[0]
if value < 0:
return -1
j = 1 # Largest power of 2 <= size
while j * 2 < self.size:
j *= 2
i = 0
while j > 0:
if i + j < self.size and self.tree[i + j] <= value:
value -= self.tree[i + j]
i += j
j //= 2
return i
if __name__ == "__main__":
import doctest
doctest.testmod()
| from copy import deepcopy
class FenwickTree:
"""
Fenwick Tree
More info: https://en.wikipedia.org/wiki/Fenwick_tree
"""
def __init__(self, arr: list[int] | None = None, size: int | None = None) -> None:
"""
Constructor for the Fenwick tree
Parameters:
arr (list): list of elements to initialize the tree with (optional)
size (int): size of the Fenwick tree (if arr is None)
"""
if arr is None and size is not None:
self.size = size
self.tree = [0] * size
elif arr is not None:
self.init(arr)
else:
raise ValueError("Either arr or size must be specified")
def init(self, arr: list[int]) -> None:
"""
Initialize the Fenwick tree with arr in O(N)
Parameters:
arr (list): list of elements to initialize the tree with
Returns:
None
>>> a = [1, 2, 3, 4, 5]
>>> f1 = FenwickTree(a)
>>> f2 = FenwickTree(size=len(a))
>>> for index, value in enumerate(a):
... f2.add(index, value)
>>> f1.tree == f2.tree
True
"""
self.size = len(arr)
self.tree = deepcopy(arr)
for i in range(1, self.size):
j = self.next(i)
if j < self.size:
self.tree[j] += self.tree[i]
def get_array(self) -> list[int]:
"""
Get the Normal Array of the Fenwick tree in O(N)
Returns:
list: Normal Array of the Fenwick tree
>>> a = [i for i in range(128)]
>>> f = FenwickTree(a)
>>> f.get_array() == a
True
"""
arr = self.tree[:]
for i in range(self.size - 1, 0, -1):
j = self.next(i)
if j < self.size:
arr[j] -= arr[i]
return arr
@staticmethod
def next(index: int) -> int:
return index + (index & (-index))
@staticmethod
def prev(index: int) -> int:
return index - (index & (-index))
def add(self, index: int, value: int) -> None:
"""
Add a value to index in O(lg N)
Parameters:
index (int): index to add value to
value (int): value to add to index
Returns:
None
>>> f = FenwickTree([1, 2, 3, 4, 5])
>>> f.add(0, 1)
>>> f.add(1, 2)
>>> f.add(2, 3)
>>> f.add(3, 4)
>>> f.add(4, 5)
>>> f.get_array()
[2, 4, 6, 8, 10]
"""
if index == 0:
self.tree[0] += value
return
while index < self.size:
self.tree[index] += value
index = self.next(index)
def update(self, index: int, value: int) -> None:
"""
Set the value of index in O(lg N)
Parameters:
index (int): index to set value to
value (int): value to set in index
Returns:
None
>>> f = FenwickTree([5, 4, 3, 2, 1])
>>> f.update(0, 1)
>>> f.update(1, 2)
>>> f.update(2, 3)
>>> f.update(3, 4)
>>> f.update(4, 5)
>>> f.get_array()
[1, 2, 3, 4, 5]
"""
self.add(index, value - self.get(index))
def prefix(self, right: int) -> int:
"""
Prefix sum of all elements in [0, right) in O(lg N)
Parameters:
right (int): right bound of the query (exclusive)
Returns:
int: sum of all elements in [0, right)
>>> a = [i for i in range(128)]
>>> f = FenwickTree(a)
>>> res = True
>>> for i in range(len(a)):
... res = res and f.prefix(i) == sum(a[:i])
>>> res
True
"""
if right == 0:
return 0
result = self.tree[0]
right -= 1 # make right inclusive
while right > 0:
result += self.tree[right]
right = self.prev(right)
return result
def query(self, left: int, right: int) -> int:
"""
Query the sum of all elements in [left, right) in O(lg N)
Parameters:
left (int): left bound of the query (inclusive)
right (int): right bound of the query (exclusive)
Returns:
int: sum of all elements in [left, right)
>>> a = [i for i in range(128)]
>>> f = FenwickTree(a)
>>> res = True
>>> for i in range(len(a)):
... for j in range(i + 1, len(a)):
... res = res and f.query(i, j) == sum(a[i:j])
>>> res
True
"""
return self.prefix(right) - self.prefix(left)
def get(self, index: int) -> int:
"""
Get value at index in O(lg N)
Parameters:
index (int): index to get the value
Returns:
int: Value of element at index
>>> a = [i for i in range(128)]
>>> f = FenwickTree(a)
>>> res = True
>>> for i in range(len(a)):
... res = res and f.get(i) == a[i]
>>> res
True
"""
return self.query(index, index + 1)
def rank_query(self, value: int) -> int:
"""
Find the largest index with prefix(i) <= value in O(lg N)
NOTE: Requires that all values are non-negative!
Parameters:
value (int): value to find the largest index of
Returns:
-1: if value is smaller than all elements in prefix sum
int: largest index with prefix(i) <= value
>>> f = FenwickTree([1, 2, 0, 3, 0, 5])
>>> f.rank_query(0)
-1
>>> f.rank_query(2)
0
>>> f.rank_query(1)
0
>>> f.rank_query(3)
2
>>> f.rank_query(5)
2
>>> f.rank_query(6)
4
>>> f.rank_query(11)
5
"""
value -= self.tree[0]
if value < 0:
return -1
j = 1 # Largest power of 2 <= size
while j * 2 < self.size:
j *= 2
i = 0
while j > 0:
if i + j < self.size and self.tree[i + j] <= value:
value -= self.tree[i + j]
i += j
j //= 2
return i
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
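Both `next` and `prev` in the Fenwick tree code above use `index & (-index)`, which in two's-complement arithmetic isolates the lowest set bit of `index`; that bit is the step size by which updates and queries jump through the tree. A minimal, self-contained sketch of the same idea for the classic 1-indexed formulation (illustrative names only, independent of the class above):
```python
# Classic 1-indexed Fenwick tree: point update + prefix-sum query, both O(log n).
def low_bit(i: int) -> int:
    return i & (-i)  # lowest set bit of i


def build(values: list[int]) -> list[int]:
    tree = [0] * (len(values) + 1)
    for pos, val in enumerate(values, start=1):
        while pos <= len(values):
            tree[pos] += val        # every node covering `pos` absorbs the value
            pos += low_bit(pos)
    return tree


def prefix_sum(tree: list[int], pos: int) -> int:
    total = 0
    while pos > 0:
        total += tree[pos]
        pos -= low_bit(pos)         # drop the lowest set bit to reach the parent range
    return total


tree = build([1, 2, 3, 4, 5])
assert [prefix_sum(tree, i) for i in range(6)] == [0, 1, 3, 6, 10, 15]
```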
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | -1 |
||
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """README, Author - Anurag Kumar(mailto:[email protected])
Requirements:
- sklearn
- numpy
- matplotlib
Python:
- 3.5
Inputs:
- X , a 2D numpy array of features.
- k , number of clusters to create.
- initial_centroids , initial centroid values generated by utility function(mentioned
in usage).
- maxiter , maximum number of iterations to process.
- heterogeneity , empty list that will be filled with heterogeneity values if passed
to kmeans func.
Usage:
1. define 'k' value, 'X' features array and 'heterogeneity' empty list
2. create initial_centroids,
initial_centroids = get_initial_centroids(
X,
k,
seed=0 # seed value for initial centroid generation,
# None for randomness(default=None)
)
3. find centroids and clusters using kmeans function.
centroids, cluster_assignment = kmeans(
X,
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True # whether to print logs in console or not.(default=False)
)
4. Plot the loss function, heterogeneity values for every iteration saved in
heterogeneity list.
plot_heterogeneity(
heterogeneity,
k
)
5. Transfer the DataFrame into excel format; it must have a feature called
'Clust' with the k-means cluster numbers in it.
"""
import warnings
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import pairwise_distances
warnings.filterwarnings("ignore")
TAG = "K-MEANS-CLUST/ "
def get_initial_centroids(data, k, seed=None):
"""Randomly choose k data points as initial centroids"""
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
n = data.shape[0] # number of data points
# Pick K indices from range [0, N).
rand_indices = np.random.randint(0, n, k)
# Keep centroids as dense format, as many entries will be nonzero due to averaging.
# As long as at least one document in a cluster contains a word,
# it will carry a nonzero weight in the TF-IDF vector of the centroid.
centroids = data[rand_indices, :]
return centroids
def centroid_pairwise_dist(X, centroids):
return pairwise_distances(X, centroids, metric="euclidean")
def assign_clusters(data, centroids):
# Compute distances between each data point and the set of centroids:
# Fill in the blank (RHS only)
distances_from_centroids = centroid_pairwise_dist(data, centroids)
# Compute cluster assignments for each data point:
# Fill in the blank (RHS only)
cluster_assignment = np.argmin(distances_from_centroids, axis=1)
return cluster_assignment
def revise_centroids(data, k, cluster_assignment):
new_centroids = []
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i]
# Compute the mean of the data points. Fill in the blank (RHS only)
centroid = member_data_points.mean(axis=0)
new_centroids.append(centroid)
new_centroids = np.array(new_centroids)
return new_centroids
def compute_heterogeneity(data, k, centroids, cluster_assignment):
heterogeneity = 0.0
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i, :]
if member_data_points.shape[0] > 0: # check if i-th cluster is non-empty
# Compute distances from centroid to data points (RHS only)
distances = pairwise_distances(
member_data_points, [centroids[i]], metric="euclidean"
)
squared_distances = distances**2
heterogeneity += np.sum(squared_distances)
return heterogeneity
def plot_heterogeneity(heterogeneity, k):
plt.figure(figsize=(7, 4))
plt.plot(heterogeneity, linewidth=4)
plt.xlabel("# Iterations")
plt.ylabel("Heterogeneity")
plt.title(f"Heterogeneity of clustering over time, K={k:d}")
plt.rcParams.update({"font.size": 16})
plt.show()
def kmeans(
data, k, initial_centroids, maxiter=500, record_heterogeneity=None, verbose=False
):
"""This function runs k-means on given data and initial set of centroids.
maxiter: maximum number of iterations to run.(default=500)
record_heterogeneity: (optional) a list, to store the history of heterogeneity
as function of iterations
if None, do not store the history.
verbose: if True, print how many data points changed their cluster labels in
each iteration"""
centroids = initial_centroids[:]
prev_cluster_assignment = None
for itr in range(maxiter):
if verbose:
print(itr, end="")
# 1. Make cluster assignments using nearest centroids
cluster_assignment = assign_clusters(data, centroids)
# 2. Compute a new centroid for each of the k clusters, averaging all data
# points assigned to that cluster.
centroids = revise_centroids(data, k, cluster_assignment)
# Check for convergence: if none of the assignments changed, stop
if (
prev_cluster_assignment is not None
and (prev_cluster_assignment == cluster_assignment).all()
):
break
# Print number of new assignments
if prev_cluster_assignment is not None:
num_changed = np.sum(prev_cluster_assignment != cluster_assignment)
if verbose:
print(
f" {num_changed:5d} elements changed their cluster assignment."
)
# Record heterogeneity convergence metric
if record_heterogeneity is not None:
# YOUR CODE HERE
score = compute_heterogeneity(data, k, centroids, cluster_assignment)
record_heterogeneity.append(score)
prev_cluster_assignment = cluster_assignment[:]
return centroids, cluster_assignment
# Mock test below
if False: # change to true to run this test case.
from sklearn import datasets as ds
dataset = ds.load_iris()
k = 3
heterogeneity = []
initial_centroids = get_initial_centroids(dataset["data"], k, seed=0)
centroids, cluster_assignment = kmeans(
dataset["data"],
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True,
)
plot_heterogeneity(heterogeneity, k)
def ReportGenerator(
df: pd.DataFrame, ClusteringVariables: np.ndarray, FillMissingReport=None
) -> pd.DataFrame:
"""
Function generates an easy-reading clustering report. It takes 3 arguments as an input:
DataFrame - dataframe with the predicted cluster column;
ClusteringVariables - array of column names used for clustering (flagged in the 'Mark' column);
FillMissingReport - dictionary of rules on how to fill missing
values for the final report generation (not included in modeling);
in order to run the function, the following libraries must be imported:
import pandas as pd
import numpy as np
>>> data = pd.DataFrame()
>>> data['numbers'] = [1, 2, 3]
>>> data['col1'] = [0.5, 2.5, 4.5]
>>> data['col2'] = [100, 200, 300]
>>> data['col3'] = [10, 20, 30]
>>> data['Cluster'] = [1, 1, 2]
>>> ReportGenerator(data, ['col1', 'col2'], 0)
Features Type Mark 1 2
0 # of Customers ClusterSize False 2.000000 1.000000
1 % of Customers ClusterProportion False 0.666667 0.333333
2 col1 mean_with_zeros True 1.500000 4.500000
3 col2 mean_with_zeros True 150.000000 300.000000
4 numbers mean_with_zeros False 1.500000 3.000000
.. ... ... ... ... ...
99 dummy 5% False 1.000000 1.000000
100 dummy 95% False 1.000000 1.000000
101 dummy stdev False 0.000000 NaN
102 dummy mode False 1.000000 1.000000
103 dummy median False 1.000000 1.000000
<BLANKLINE>
[104 rows x 5 columns]
"""
# Fill missing values with given rules
if FillMissingReport:
df.fillna(value=FillMissingReport, inplace=True)
df["dummy"] = 1
numeric_cols = df.select_dtypes(np.number).columns
report = (
df.groupby(["Cluster"])[ # construct report dataframe
numeric_cols
] # group by cluster number
.agg(
[
("sum", np.sum),
("mean_with_zeros", lambda x: np.mean(np.nan_to_num(x))),
("mean_without_zeros", lambda x: x.replace(0, np.NaN).mean()),
(
"mean_25-75",
lambda x: np.mean(
np.nan_to_num(
sorted(x)[
round(len(x) * 25 / 100) : round(len(x) * 75 / 100)
]
)
),
),
("mean_with_na", np.mean),
("min", lambda x: x.min()),
("5%", lambda x: x.quantile(0.05)),
("25%", lambda x: x.quantile(0.25)),
("50%", lambda x: x.quantile(0.50)),
("75%", lambda x: x.quantile(0.75)),
("95%", lambda x: x.quantile(0.95)),
("max", lambda x: x.max()),
("count", lambda x: x.count()),
("stdev", lambda x: x.std()),
("mode", lambda x: x.mode()[0]),
("median", lambda x: x.median()),
("# > 0", lambda x: (x > 0).sum()),
]
)
.T.reset_index()
.rename(index=str, columns={"level_0": "Features", "level_1": "Type"})
) # rename columns
# calculate the size of cluster(count of clientID's)
clustersize = report[
(report["Features"] == "dummy") & (report["Type"] == "count")
].copy() # avoid SettingWithCopyWarning
clustersize.Type = (
"ClusterSize" # rename created cluster df to match report column names
)
clustersize.Features = "# of Customers"
clusterproportion = pd.DataFrame(
clustersize.iloc[:, 2:].values
/ clustersize.iloc[:, 2:].values.sum() # calculating the proportion of cluster
)
clusterproportion[
"Type"
] = "% of Customers" # rename created cluster df to match report column names
clusterproportion["Features"] = "ClusterProportion"
cols = clusterproportion.columns.tolist()
cols = cols[-2:] + cols[:-2]
clusterproportion = clusterproportion[cols] # rearrange columns to match report
clusterproportion.columns = report.columns
a = pd.DataFrame(
abs(
report[report["Type"] == "count"].iloc[:, 2:].values
- clustersize.iloc[:, 2:].values
)
) # generating df with count of nan values
a["Features"] = 0
a["Type"] = "# of nan"
a.Features = report[
report["Type"] == "count"
].Features.tolist() # filling values in order to match report
cols = a.columns.tolist()
cols = cols[-2:] + cols[:-2]
a = a[cols] # rearrange columns to match report
a.columns = report.columns # rename columns to match report
report = report.drop(
report[report.Type == "count"].index
) # drop count values except cluster size
report = pd.concat(
[report, a, clustersize, clusterproportion], axis=0
) # concat report with cluster size and nan values
report["Mark"] = report["Features"].isin(ClusteringVariables)
cols = report.columns.tolist()
cols = cols[0:2] + cols[-1:] + cols[2:-1]
report = report[cols]
sorter1 = {
"ClusterSize": 9,
"ClusterProportion": 8,
"mean_with_zeros": 7,
"mean_with_na": 6,
"max": 5,
"50%": 4,
"min": 3,
"25%": 2,
"75%": 1,
"# of nan": 0,
"# > 0": -1,
"sum_with_na": -2,
}
report = (
report.assign(
Sorter1=lambda x: x.Type.map(sorter1),
Sorter2=lambda x: list(reversed(range(len(x)))),
)
.sort_values(["Sorter1", "Mark", "Sorter2"], ascending=False)
.drop(["Sorter1", "Sorter2"], axis=1)
)
report.columns.name = ""
report = report.reset_index()
report.drop(columns=["index"], inplace=True)
return report
if __name__ == "__main__":
import doctest
doctest.testmod()
| """README, Author - Anurag Kumar(mailto:[email protected])
Requirements:
- sklearn
- numpy
- matplotlib
Python:
- 3.5
Inputs:
- X , a 2D numpy array of features.
- k , number of clusters to create.
- initial_centroids , initial centroid values generated by utility function(mentioned
in usage).
- maxiter , maximum number of iterations to process.
- heterogeneity , empty list that will be filled with heterogeneity values if passed
to kmeans func.
Usage:
1. define 'k' value, 'X' features array and 'heterogeneity' empty list
2. create initial_centroids,
initial_centroids = get_initial_centroids(
X,
k,
seed=0 # seed value for initial centroid generation,
# None for randomness(default=None)
)
3. find centroids and clusters using kmeans function.
centroids, cluster_assignment = kmeans(
X,
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True # whether to print logs in console or not.(default=False)
)
4. Plot the loss function, heterogeneity values for every iteration saved in
heterogeneity list.
plot_heterogeneity(
heterogeneity,
k
)
5. Transfer the DataFrame into excel format; it must have a feature called
'Clust' with the k-means cluster numbers in it.
"""
import warnings
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import pairwise_distances
warnings.filterwarnings("ignore")
TAG = "K-MEANS-CLUST/ "
def get_initial_centroids(data, k, seed=None):
"""Randomly choose k data points as initial centroids"""
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
n = data.shape[0] # number of data points
# Pick K indices from range [0, N).
rand_indices = np.random.randint(0, n, k)
# Keep centroids as dense format, as many entries will be nonzero due to averaging.
# As long as at least one document in a cluster contains a word,
# it will carry a nonzero weight in the TF-IDF vector of the centroid.
centroids = data[rand_indices, :]
return centroids
def centroid_pairwise_dist(X, centroids):
return pairwise_distances(X, centroids, metric="euclidean")
def assign_clusters(data, centroids):
# Compute distances between each data point and the set of centroids:
# Fill in the blank (RHS only)
distances_from_centroids = centroid_pairwise_dist(data, centroids)
# Compute cluster assignments for each data point:
# Fill in the blank (RHS only)
cluster_assignment = np.argmin(distances_from_centroids, axis=1)
return cluster_assignment
def revise_centroids(data, k, cluster_assignment):
new_centroids = []
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i]
# Compute the mean of the data points. Fill in the blank (RHS only)
centroid = member_data_points.mean(axis=0)
new_centroids.append(centroid)
new_centroids = np.array(new_centroids)
return new_centroids
def compute_heterogeneity(data, k, centroids, cluster_assignment):
heterogeneity = 0.0
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i, :]
if member_data_points.shape[0] > 0: # check if i-th cluster is non-empty
# Compute distances from centroid to data points (RHS only)
distances = pairwise_distances(
member_data_points, [centroids[i]], metric="euclidean"
)
squared_distances = distances**2
heterogeneity += np.sum(squared_distances)
return heterogeneity
def plot_heterogeneity(heterogeneity, k):
plt.figure(figsize=(7, 4))
plt.plot(heterogeneity, linewidth=4)
plt.xlabel("# Iterations")
plt.ylabel("Heterogeneity")
plt.title(f"Heterogeneity of clustering over time, K={k:d}")
plt.rcParams.update({"font.size": 16})
plt.show()
def kmeans(
data, k, initial_centroids, maxiter=500, record_heterogeneity=None, verbose=False
):
"""This function runs k-means on given data and initial set of centroids.
maxiter: maximum number of iterations to run.(default=500)
record_heterogeneity: (optional) a list, to store the history of heterogeneity
as function of iterations
if None, do not store the history.
verbose: if True, print how many data points changed their cluster labels in
each iteration"""
centroids = initial_centroids[:]
prev_cluster_assignment = None
for itr in range(maxiter):
if verbose:
print(itr, end="")
# 1. Make cluster assignments using nearest centroids
cluster_assignment = assign_clusters(data, centroids)
# 2. Compute a new centroid for each of the k clusters, averaging all data
# points assigned to that cluster.
centroids = revise_centroids(data, k, cluster_assignment)
# Check for convergence: if none of the assignments changed, stop
if (
prev_cluster_assignment is not None
and (prev_cluster_assignment == cluster_assignment).all()
):
break
# Print number of new assignments
if prev_cluster_assignment is not None:
num_changed = np.sum(prev_cluster_assignment != cluster_assignment)
if verbose:
print(
f" {num_changed:5d} elements changed their cluster assignment."
)
# Record heterogeneity convergence metric
if record_heterogeneity is not None:
# YOUR CODE HERE
score = compute_heterogeneity(data, k, centroids, cluster_assignment)
record_heterogeneity.append(score)
prev_cluster_assignment = cluster_assignment[:]
return centroids, cluster_assignment
# Mock test below
if False: # change to true to run this test case.
from sklearn import datasets as ds
dataset = ds.load_iris()
k = 3
heterogeneity = []
initial_centroids = get_initial_centroids(dataset["data"], k, seed=0)
centroids, cluster_assignment = kmeans(
dataset["data"],
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True,
)
plot_heterogeneity(heterogeneity, k)
def ReportGenerator(
df: pd.DataFrame, ClusteringVariables: np.ndarray, FillMissingReport=None
) -> pd.DataFrame:
"""
Function generates an easy-reading clustering report. It takes 3 arguments as an input:
DataFrame - dataframe with the predicted cluster column;
ClusteringVariables - array of column names used for clustering (flagged in the 'Mark' column);
FillMissingReport - dictionary of rules on how to fill missing
values for the final report generation (not included in modeling);
in order to run the function, the following libraries must be imported:
import pandas as pd
import numpy as np
>>> data = pd.DataFrame()
>>> data['numbers'] = [1, 2, 3]
>>> data['col1'] = [0.5, 2.5, 4.5]
>>> data['col2'] = [100, 200, 300]
>>> data['col3'] = [10, 20, 30]
>>> data['Cluster'] = [1, 1, 2]
>>> ReportGenerator(data, ['col1', 'col2'], 0)
Features Type Mark 1 2
0 # of Customers ClusterSize False 2.000000 1.000000
1 % of Customers ClusterProportion False 0.666667 0.333333
2 col1 mean_with_zeros True 1.500000 4.500000
3 col2 mean_with_zeros True 150.000000 300.000000
4 numbers mean_with_zeros False 1.500000 3.000000
.. ... ... ... ... ...
99 dummy 5% False 1.000000 1.000000
100 dummy 95% False 1.000000 1.000000
101 dummy stdev False 0.000000 NaN
102 dummy mode False 1.000000 1.000000
103 dummy median False 1.000000 1.000000
<BLANKLINE>
[104 rows x 5 columns]
"""
# Fill missing values with given rules
if FillMissingReport:
df.fillna(value=FillMissingReport, inplace=True)
df["dummy"] = 1
numeric_cols = df.select_dtypes(np.number).columns
report = (
df.groupby(["Cluster"])[ # construct report dataframe
numeric_cols
] # group by cluster number
.agg(
[
("sum", np.sum),
("mean_with_zeros", lambda x: np.mean(np.nan_to_num(x))),
("mean_without_zeros", lambda x: x.replace(0, np.NaN).mean()),
(
"mean_25-75",
lambda x: np.mean(
np.nan_to_num(
sorted(x)[
round(len(x) * 25 / 100) : round(len(x) * 75 / 100)
]
)
),
),
("mean_with_na", np.mean),
("min", lambda x: x.min()),
("5%", lambda x: x.quantile(0.05)),
("25%", lambda x: x.quantile(0.25)),
("50%", lambda x: x.quantile(0.50)),
("75%", lambda x: x.quantile(0.75)),
("95%", lambda x: x.quantile(0.95)),
("max", lambda x: x.max()),
("count", lambda x: x.count()),
("stdev", lambda x: x.std()),
("mode", lambda x: x.mode()[0]),
("median", lambda x: x.median()),
("# > 0", lambda x: (x > 0).sum()),
]
)
.T.reset_index()
.rename(index=str, columns={"level_0": "Features", "level_1": "Type"})
) # rename columns
# calculate the size of cluster(count of clientID's)
clustersize = report[
(report["Features"] == "dummy") & (report["Type"] == "count")
].copy() # avoid SettingWithCopyWarning
clustersize.Type = (
"ClusterSize" # rename created cluster df to match report column names
)
clustersize.Features = "# of Customers"
clusterproportion = pd.DataFrame(
clustersize.iloc[:, 2:].values
/ clustersize.iloc[:, 2:].values.sum() # calculating the proportion of cluster
)
clusterproportion[
"Type"
] = "% of Customers" # rename created cluster df to match report column names
clusterproportion["Features"] = "ClusterProportion"
cols = clusterproportion.columns.tolist()
cols = cols[-2:] + cols[:-2]
clusterproportion = clusterproportion[cols] # rearrange columns to match report
clusterproportion.columns = report.columns
a = pd.DataFrame(
abs(
report[report["Type"] == "count"].iloc[:, 2:].values
- clustersize.iloc[:, 2:].values
)
) # generating df with count of nan values
a["Features"] = 0
a["Type"] = "# of nan"
a.Features = report[
report["Type"] == "count"
].Features.tolist() # filling values in order to match report
cols = a.columns.tolist()
cols = cols[-2:] + cols[:-2]
a = a[cols] # rearrange columns to match report
a.columns = report.columns # rename columns to match report
report = report.drop(
report[report.Type == "count"].index
) # drop count values except cluster size
report = pd.concat(
[report, a, clustersize, clusterproportion], axis=0
) # concat report with cluster size and nan values
report["Mark"] = report["Features"].isin(ClusteringVariables)
cols = report.columns.tolist()
cols = cols[0:2] + cols[-1:] + cols[2:-1]
report = report[cols]
sorter1 = {
"ClusterSize": 9,
"ClusterProportion": 8,
"mean_with_zeros": 7,
"mean_with_na": 6,
"max": 5,
"50%": 4,
"min": 3,
"25%": 2,
"75%": 1,
"# of nan": 0,
"# > 0": -1,
"sum_with_na": -2,
}
report = (
report.assign(
Sorter1=lambda x: x.Type.map(sorter1),
Sorter2=lambda x: list(reversed(range(len(x)))),
)
.sort_values(["Sorter1", "Mark", "Sorter2"], ascending=False)
.drop(["Sorter1", "Sorter2"], axis=1)
)
report.columns.name = ""
report = report.reset_index()
report.drop(columns=["index"], inplace=True)
return report
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
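The kmeans loop above alternates two steps until the assignments stop changing: assign every point to its nearest centroid, then move each centroid to the mean of its assigned points. A small self-contained numpy sketch of a single such iteration (random illustrative data, not the module's API):
```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(20, 2))   # 20 illustrative points in 2-D
centroids = points[:3].copy()       # 3 initial centroids taken from the data

# Assignment step: index of the nearest centroid for every point.
distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
assignment = distances.argmin(axis=1)

# Update step: each centroid moves to the mean of the points assigned to it.
new_centroids = np.array(
    [points[assignment == c].mean(axis=0) for c in range(len(centroids))]
)
print(new_centroids.shape)  # (3, 2)
```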
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | def stooge_sort(arr):
"""
Examples:
>>> stooge_sort([18.1, 0, -7.1, -1, 2, 2])
[-7.1, -1, 0, 2, 2, 18.1]
>>> stooge_sort([])
[]
"""
stooge(arr, 0, len(arr) - 1)
return arr
def stooge(arr, i, h):
if i >= h:
return
# If the first element is larger than the last, swap them
if arr[i] > arr[h]:
arr[i], arr[h] = arr[h], arr[i]
# If there are more than 2 elements in the array
if h - i + 1 > 2:
t = (int)((h - i + 1) / 3)
# Recursively sort first 2/3 elements
stooge(arr, i, (h - t))
# Recursively sort last 2/3 elements
stooge(arr, i + t, (h))
# Recursively sort first 2/3 elements
stooge(arr, i, (h - t))
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(stooge_sort(unsorted))
| def stooge_sort(arr):
"""
Examples:
>>> stooge_sort([18.1, 0, -7.1, -1, 2, 2])
[-7.1, -1, 0, 2, 2, 18.1]
>>> stooge_sort([])
[]
"""
stooge(arr, 0, len(arr) - 1)
return arr
def stooge(arr, i, h):
if i >= h:
return
# If the first element is larger than the last, swap them
if arr[i] > arr[h]:
arr[i], arr[h] = arr[h], arr[i]
# If there are more than 2 elements in the array
if h - i + 1 > 2:
t = (int)((h - i + 1) / 3)
# Recursively sort first 2/3 elements
stooge(arr, i, (h - t))
# Recursively sort last 2/3 elements
stooge(arr, i + t, (h))
# Recursively sort first 2/3 elements
stooge(arr, i, (h - t))
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(stooge_sort(unsorted))
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
In this problem, we want to determine all possible combinations of k
numbers out of 1 ... n. We use backtracking to solve this problem.
Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!)))
"""
from __future__ import annotations
def generate_all_combinations(n: int, k: int) -> list[list[int]]:
"""
>>> generate_all_combinations(n=4, k=2)
[[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
"""
result: list[list[int]] = []
create_all_state(1, n, k, [], result)
return result
def create_all_state(
increment: int,
total_number: int,
level: int,
current_list: list[int],
total_list: list[list[int]],
) -> None:
if level == 0:
total_list.append(current_list[:])
return
for i in range(increment, total_number - level + 2):
current_list.append(i)
create_all_state(i + 1, total_number, level - 1, current_list, total_list)
current_list.pop()
def print_all_state(total_list: list[list[int]]) -> None:
for i in total_list:
print(*i)
if __name__ == "__main__":
n = 4
k = 2
total_list = generate_all_combinations(n, k)
print_all_state(total_list)
| """
In this problem, we want to determine all possible combinations of k
numbers out of 1 ... n. We use backtracking to solve this problem.
Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!)))
"""
from __future__ import annotations
def generate_all_combinations(n: int, k: int) -> list[list[int]]:
"""
>>> generate_all_combinations(n=4, k=2)
[[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
"""
result: list[list[int]] = []
create_all_state(1, n, k, [], result)
return result
def create_all_state(
increment: int,
total_number: int,
level: int,
current_list: list[int],
total_list: list[list[int]],
) -> None:
if level == 0:
total_list.append(current_list[:])
return
for i in range(increment, total_number - level + 2):
current_list.append(i)
create_all_state(i + 1, total_number, level - 1, current_list, total_list)
current_list.pop()
def print_all_state(total_list: list[list[int]]) -> None:
for i in total_list:
print(*i)
if __name__ == "__main__":
n = 4
k = 2
total_list = generate_all_combinations(n, k)
print_all_state(total_list)
| -1 |
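The backtracking above emits the k-subsets of 1..n in lexicographic order, which is the same sequence the standard library's itertools.combinations produces; that makes a quick cross-check easy (a sketch for the n=4, k=2 case from the doctest):
```python
from itertools import combinations

n, k = 4, 2
subsets = [list(c) for c in combinations(range(1, n + 1), k)]
print(subsets)  # [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
```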
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Given an array of integers, return indices of the two numbers such that they add up to
a specific target.
You may assume that each input would have exactly one solution, and you may not use the
same element twice.
Example:
Given nums = [2, 7, 11, 15], target = 9,
Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].
"""
from __future__ import annotations
def two_sum(nums: list[int], target: int) -> list[int]:
"""
>>> two_sum([2, 7, 11, 15], 9)
[0, 1]
>>> two_sum([15, 2, 11, 7], 13)
[1, 2]
>>> two_sum([2, 7, 11, 15], 17)
[0, 3]
>>> two_sum([7, 15, 11, 2], 18)
[0, 2]
>>> two_sum([2, 7, 11, 15], 26)
[2, 3]
>>> two_sum([2, 7, 11, 15], 8)
[]
>>> two_sum([3 * i for i in range(10)], 19)
[]
"""
chk_map: dict[int, int] = {}
for index, val in enumerate(nums):
compl = target - val
if compl in chk_map:
return [chk_map[compl], index]
chk_map[val] = index
return []
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{two_sum([2, 7, 11, 15], 9) = }")
| """
Given an array of integers, return indices of the two numbers such that they add up to
a specific target.
You may assume that each input would have exactly one solution, and you may not use the
same element twice.
Example:
Given nums = [2, 7, 11, 15], target = 9,
Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].
"""
from __future__ import annotations
def two_sum(nums: list[int], target: int) -> list[int]:
"""
>>> two_sum([2, 7, 11, 15], 9)
[0, 1]
>>> two_sum([15, 2, 11, 7], 13)
[1, 2]
>>> two_sum([2, 7, 11, 15], 17)
[0, 3]
>>> two_sum([7, 15, 11, 2], 18)
[0, 2]
>>> two_sum([2, 7, 11, 15], 26)
[2, 3]
>>> two_sum([2, 7, 11, 15], 8)
[]
>>> two_sum([3 * i for i in range(10)], 19)
[]
"""
chk_map: dict[int, int] = {}
for index, val in enumerate(nums):
compl = target - val
if compl in chk_map:
return [chk_map[compl], index]
chk_map[val] = index
return []
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{two_sum([2, 7, 11, 15], 9) = }")
| -1 |
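The dictionary in two_sum above makes the complement lookup an average O(1) operation, so the whole scan is O(n), versus O(n^2) for checking every pair. A short self-contained comparison sketch (illustrative only, not part of the file above):
```python
def two_sum_brute_force(nums: list[int], target: int) -> list[int]:
    # O(n^2): try every pair of indices.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []


def two_sum_hash_map(nums: list[int], target: int) -> list[int]:
    # O(n): remember each value's index and look up the complement.
    seen: dict[int, int] = {}
    for index, value in enumerate(nums):
        if target - value in seen:
            return [seen[target - value], index]
        seen[value] = index
    return []


assert two_sum_brute_force([2, 7, 11, 15], 9) == two_sum_hash_map([2, 7, 11, 15], 9) == [0, 1]
```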
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Counting Summations
Problem 76: https://projecteuler.net/problem=76
It is possible to write five as a sum in exactly six different ways:
4 + 1
3 + 2
3 + 1 + 1
2 + 2 + 1
2 + 1 + 1 + 1
1 + 1 + 1 + 1 + 1
How many different ways can one hundred be written as a sum of at least two
positive integers?
"""
def solution(m: int = 100) -> int:
"""
Returns the number of different ways the number m can be written as a
sum of at least two positive integers.
>>> solution(100)
190569291
>>> solution(50)
204225
>>> solution(30)
5603
>>> solution(10)
41
>>> solution(5)
6
>>> solution(3)
2
>>> solution(2)
1
>>> solution(1)
0
"""
memo = [[0 for _ in range(m)] for _ in range(m + 1)]
for i in range(m + 1):
memo[i][0] = 1
for n in range(m + 1):
for k in range(1, m):
memo[n][k] += memo[n][k - 1]
if n > k:
memo[n][k] += memo[n - k - 1][k]
return memo[m][m - 1] - 1
if __name__ == "__main__":
print(solution(int(input("Enter a number: ").strip())))
| """
Counting Summations
Problem 76: https://projecteuler.net/problem=76
It is possible to write five as a sum in exactly six different ways:
4 + 1
3 + 2
3 + 1 + 1
2 + 2 + 1
2 + 1 + 1 + 1
1 + 1 + 1 + 1 + 1
How many different ways can one hundred be written as a sum of at least two
positive integers?
"""
def solution(m: int = 100) -> int:
"""
Returns the number of different ways the number m can be written as a
sum of at least two positive integers.
>>> solution(100)
190569291
>>> solution(50)
204225
>>> solution(30)
5603
>>> solution(10)
41
>>> solution(5)
6
>>> solution(3)
2
>>> solution(2)
1
>>> solution(1)
0
"""
memo = [[0 for _ in range(m)] for _ in range(m + 1)]
for i in range(m + 1):
memo[i][0] = 1
for n in range(m + 1):
for k in range(1, m):
memo[n][k] += memo[n][k - 1]
if n > k:
memo[n][k] += memo[n - k - 1][k]
return memo[m][m - 1] - 1
if __name__ == "__main__":
print(solution(int(input("Enter a number: ").strip())))
| -1 |
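The 2-D memo above is the standard partition-counting recurrence. The same answer can be reached with a 1-D table over part sizes 1..m-1; restricting parts to less than m excludes the single-term sum m itself, which is exactly the "at least two positive integers" condition (a compact sketch, not the solution used above):
```python
def count_sums(m: int) -> int:
    # ways[s] = number of ways to write s using parts smaller than m
    ways = [1] + [0] * m
    for part in range(1, m):
        for total in range(part, m + 1):
            ways[total] += ways[total - part]
    return ways[m]


print(count_sums(5))    # 6, the six sums listed in the problem statement
print(count_sums(100))  # 190569291, matching solution(100)
```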
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | -1 |
||
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | # fibonacci.py
"""
Calculates the Fibonacci sequence using iteration, recursion, memoization,
and a simplified form of Binet's formula
NOTE 1: the iterative, recursive, memoization functions are more accurate than
the Binet's formula function because the Binet formula function uses floats
NOTE 2: the Binet's formula function is much more limited in the size of inputs
that it can handle due to the size limitations of Python floats
RESULTS: (n = 20)
fib_iterative runtime: 0.0055 ms
fib_recursive runtime: 6.5627 ms
fib_memoization runtime: 0.0107 ms
fib_binet runtime: 0.0174 ms
"""
from math import sqrt
from time import time
def time_func(func, *args, **kwargs):
"""
Times the execution of a function with parameters
"""
start = time()
output = func(*args, **kwargs)
end = time()
if int(end - start) > 0:
print(f"{func.__name__} runtime: {(end - start):0.4f} s")
else:
print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms")
return output
def fib_iterative(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using iteration
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
if n == 0:
return [0]
fib = [0, 1]
for _ in range(n - 1):
fib.append(fib[-1] + fib[-2])
return fib
def fib_recursive(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
>>> fib_recursive(0)
[0]
>>> fib_recursive(1)
[0, 1]
>>> fib_recursive(5)
[0, 1, 1, 2, 3, 5]
>>> fib_recursive(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_recursive(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise Exception("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise Exception("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_memoization(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using memoization
>>> fib_memoization(0)
[0]
>>> fib_memoization(1)
[0, 1]
>>> fib_memoization(5)
[0, 1, 1, 2, 3, 5]
>>> fib_memoization(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_memoization(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
# The cache must be outside the recursive function;
# otherwise it will be reset every time the function calls itself.
cache: dict[int, int] = {0: 0, 1: 1, 2: 1} # Prefilled cache
def rec_fn_memoized(num: int) -> int:
if num in cache:
return cache[num]
value = rec_fn_memoized(num - 1) + rec_fn_memoized(num - 2)
cache[num] = value
return value
return [rec_fn_memoized(i) for i in range(n + 1)]
def fib_binet(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using a simplified form
of Binet's formula:
https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding
NOTE 1: this function diverges from fib_iterative at around n = 71, likely
due to compounding floating-point arithmetic errors
NOTE 2: this function doesn't accept n >= 1475 because it overflows
thereafter due to the size limitations of Python floats
>>> fib_binet(0)
[0]
>>> fib_binet(1)
[0, 1]
>>> fib_binet(5)
[0, 1, 1, 2, 3, 5]
>>> fib_binet(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_binet(-1)
Traceback (most recent call last):
...
Exception: n is negative
>>> fib_binet(1475)
Traceback (most recent call last):
...
Exception: n is too large
"""
if n < 0:
raise Exception("n is negative")
if n >= 1475:
raise Exception("n is too large")
sqrt_5 = sqrt(5)
phi = (1 + sqrt_5) / 2
return [round(phi**i / sqrt_5) for i in range(n + 1)]
if __name__ == "__main__":
num = 20
time_func(fib_iterative, num)
time_func(fib_recursive, num)
time_func(fib_memoization, num)
time_func(fib_binet, num)
| # fibonacci.py
"""
Calculates the Fibonacci sequence using iteration, recursion, memoization,
and a simplified form of Binet's formula
NOTE 1: the iterative, recursive, memoization functions are more accurate than
the Binet's formula function because the Binet formula function uses floats
NOTE 2: the Binet's formula function is much more limited in the size of inputs
that it can handle due to the size limitations of Python floats
RESULTS: (n = 20)
fib_iterative runtime: 0.0055 ms
fib_recursive runtime: 6.5627 ms
fib_memoization runtime: 0.0107 ms
fib_binet runtime: 0.0174 ms
"""
from math import sqrt
from time import time
def time_func(func, *args, **kwargs):
"""
Times the execution of a function with parameters
"""
start = time()
output = func(*args, **kwargs)
end = time()
if int(end - start) > 0:
print(f"{func.__name__} runtime: {(end - start):0.4f} s")
else:
print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms")
return output
def fib_iterative(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using iteration
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
if n == 0:
return [0]
fib = [0, 1]
for _ in range(n - 1):
fib.append(fib[-1] + fib[-2])
return fib
def fib_recursive(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
>>> fib_recursive(0)
[0]
>>> fib_recursive(1)
[0, 1]
>>> fib_recursive(5)
[0, 1, 1, 2, 3, 5]
>>> fib_recursive(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_recursive(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise Exception("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise Exception("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_memoization(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using memoization
>>> fib_memoization(0)
[0]
>>> fib_memoization(1)
[0, 1]
>>> fib_memoization(5)
[0, 1, 1, 2, 3, 5]
>>> fib_memoization(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_memoization(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
# The cache must be outside the recursive function;
# otherwise it will be reset every time the function calls itself.
cache: dict[int, int] = {0: 0, 1: 1, 2: 1} # Prefilled cache
def rec_fn_memoized(num: int) -> int:
if num in cache:
return cache[num]
value = rec_fn_memoized(num - 1) + rec_fn_memoized(num - 2)
cache[num] = value
return value
return [rec_fn_memoized(i) for i in range(n + 1)]
def fib_binet(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using a simplified form
of Binet's formula:
https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding
NOTE 1: this function diverges from fib_iterative at around n = 71, likely
due to compounding floating-point arithmetic errors
NOTE 2: this function doesn't accept n >= 1475 because it overflows
thereafter due to the size limitations of Python floats
>>> fib_binet(0)
[0]
>>> fib_binet(1)
[0, 1]
>>> fib_binet(5)
[0, 1, 1, 2, 3, 5]
>>> fib_binet(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_binet(-1)
Traceback (most recent call last):
...
Exception: n is negative
>>> fib_binet(1475)
Traceback (most recent call last):
...
Exception: n is too large
"""
if n < 0:
raise Exception("n is negative")
if n >= 1475:
raise Exception("n is too large")
sqrt_5 = sqrt(5)
phi = (1 + sqrt_5) / 2
return [round(phi**i / sqrt_5) for i in range(n + 1)]
if __name__ == "__main__":
num = 20
time_func(fib_iterative, num)
time_func(fib_recursive, num)
time_func(fib_memoization, num)
time_func(fib_binet, num)
| -1 |
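NOTE 1 in fibonacci.py above says the Binet-style rounding drifts away from the exact sequence around n = 71 because of floating-point error. A small self-contained check of that claim, recomputing both sequences locally instead of importing the module (the exact crossover index may vary slightly with the platform's float behaviour):
```python
from math import sqrt


def first_divergence(limit: int = 100) -> int:
    sqrt_5 = sqrt(5)
    phi = (1 + sqrt_5) / 2
    exact, nxt = 0, 1
    for n in range(limit):
        if round(phi**n / sqrt_5) != exact:  # Binet estimate vs exact integer value
            return n
        exact, nxt = nxt, exact + nxt
    return -1


print(first_divergence())  # expected to be around n = 71 on IEEE-754 doubles
```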
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | from typing import Any
class Node:
def __init__(self, data: Any):
"""
Create and initialize Node class instance.
>>> Node(20)
Node(20)
>>> Node("Hello, world!")
Node(Hello, world!)
>>> Node(None)
Node(None)
>>> Node(True)
Node(True)
"""
self.data = data
self.next = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
"""
return f"Node({self.data})"
class LinkedList:
def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
"""
self.head = None
def __iter__(self) -> Any:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list.insert_tail("tail_1")
>>> linked_list.insert_tail("tail_2")
>>> for node in linked_list: # __iter__ used here.
... node
'tail'
'tail_1'
'tail_2'
"""
node = self.head
while node:
yield node.data
node = node.next
def __len__(self) -> int:
"""
Return length of linked list i.e. number of nodes
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.insert_tail("tail")
>>> len(linked_list)
1
>>> linked_list.insert_head("head")
>>> len(linked_list)
2
>>> _ = linked_list.delete_tail()
>>> len(linked_list)
1
>>> _ = linked_list.delete_head()
>>> len(linked_list)
0
"""
return len(tuple(iter(self)))
def __repr__(self) -> str:
"""
        String representation/visualization of a linked list
>>> linked_list = LinkedList()
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
'1->3'
"""
return "->".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
        Indexing Support. Used to get the data of the node at a particular position
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> all(str(linked_list[i]) == str(i) for i in range(0, 10))
True
>>> linked_list[-10]
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)]
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
for i, node in enumerate(self):
if i == index:
return node
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
"""
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> linked_list[0] = 666
>>> linked_list[0]
666
>>> linked_list[5] = -666
>>> linked_list[5]
-666
>>> linked_list[-10] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
current = self.head
for i in range(index):
current = current.next
current.data = data
def insert_tail(self, data: Any) -> None:
"""
Insert data to the end of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
tail->tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
tail->tail_2->tail_3
"""
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
"""
Insert data to the beginning of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_head("head")
>>> linked_list
head
>>> linked_list.insert_head("head_2")
>>> linked_list
head_2->head
>>> linked_list.insert_head("head_3")
>>> linked_list
head_3->head_2->head
"""
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
"""
Insert data at given index.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
first->fourth->second->third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
first->fourth->second->fifth->third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = new_node
elif index == 0:
new_node.next = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
def print_list(self) -> None: # print every node data
"""
This method prints every node data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
"""
print(self)
def delete_head(self) -> Any:
"""
Delete the first node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_head()
'first'
>>> linked_list
second->third
>>> linked_list.delete_head()
'second'
>>> linked_list
third
>>> linked_list.delete_head()
'third'
>>> linked_list.delete_head()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(0)
def delete_tail(self) -> Any: # delete from tail
"""
Delete the tail end node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_tail()
'third'
>>> linked_list
first->second
>>> linked_list.delete_tail()
'second'
>>> linked_list
first
>>> linked_list.delete_tail()
'first'
>>> linked_list.delete_tail()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
"""
Delete node at given index and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
first->third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
>>> linked_list.delete_nth(-1) # this also raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
if not 0 <= index <= len(self) - 1: # test if index is valid
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
return delete_node.data
def is_empty(self) -> bool:
"""
Check if linked list is empty.
>>> linked_list = LinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_head("first")
>>> linked_list.is_empty()
False
"""
return self.head is None
def reverse(self) -> None:
"""
This reverses the linked list order.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.reverse()
>>> linked_list
third->second->first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
next_node = current.next
# Make the current node's next point backwards
current.next = prev
# Make the previous node be the current node
prev = current
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
def test_singly_linked_list() -> None:
"""
>>> test_singly_linked_list()
"""
linked_list = LinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
for i in range(0, 9):
linked_list[i] = -i
assert all(linked_list[i] == -i for i in range(0, 9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
"""
    This section of the test uses varying data types for input.
>>> test_singly_linked_list_2()
"""
input = [
-9,
100,
Node(77345112),
"dlrow olleH",
7,
5555,
0,
-192.55555,
"Hello, world!",
77.9,
Node(10),
None,
None,
12.20,
]
linked_list = LinkedList()
for i in input:
linked_list.insert_tail(i)
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
"-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
== "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
"7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
)
def main():
from doctest import testmod
testmod()
linked_list = LinkedList()
linked_list.insert_head(input("Inserting 1st at head ").strip())
linked_list.insert_head(input("Inserting 2nd at head ").strip())
print("\nPrint list:")
linked_list.print_list()
linked_list.insert_tail(input("\nInserting 1st at tail ").strip())
linked_list.insert_tail(input("Inserting 2nd at tail ").strip())
print("\nPrint list:")
linked_list.print_list()
print("\nDelete head")
linked_list.delete_head()
print("Delete tail")
linked_list.delete_tail()
print("\nPrint list:")
linked_list.print_list()
print("\nReverse linked list")
linked_list.reverse()
print("\nPrint list:")
linked_list.print_list()
print("\nString representation of linked list:")
print(linked_list)
print("\nReading/changing Node data using indexing:")
print(f"Element at Position 1: {linked_list[1]}")
linked_list[1] = input("Enter New Value: ").strip()
print("New list:")
print(linked_list)
print(f"length of linked_list is : {len(linked_list)}")
if __name__ == "__main__":
main()
| from typing import Any
class Node:
def __init__(self, data: Any):
"""
Create and initialize Node class instance.
>>> Node(20)
Node(20)
>>> Node("Hello, world!")
Node(Hello, world!)
>>> Node(None)
Node(None)
>>> Node(True)
Node(True)
"""
self.data = data
self.next = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
"""
return f"Node({self.data})"
class LinkedList:
def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
"""
self.head = None
def __iter__(self) -> Any:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list.insert_tail("tail_1")
>>> linked_list.insert_tail("tail_2")
>>> for node in linked_list: # __iter__ used here.
... node
'tail'
'tail_1'
'tail_2'
"""
node = self.head
while node:
yield node.data
node = node.next
def __len__(self) -> int:
"""
Return length of linked list i.e. number of nodes
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.insert_tail("tail")
>>> len(linked_list)
1
>>> linked_list.insert_head("head")
>>> len(linked_list)
2
>>> _ = linked_list.delete_tail()
>>> len(linked_list)
1
>>> _ = linked_list.delete_head()
>>> len(linked_list)
0
"""
return len(tuple(iter(self)))
def __repr__(self) -> str:
"""
        String representation/visualization of a linked list
>>> linked_list = LinkedList()
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
'1->3'
"""
return "->".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
        Indexing Support. Used to get the data of the node at a particular position
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> all(str(linked_list[i]) == str(i) for i in range(0, 10))
True
>>> linked_list[-10]
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)]
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
for i, node in enumerate(self):
if i == index:
return node
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
"""
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> linked_list[0] = 666
>>> linked_list[0]
666
>>> linked_list[5] = -666
>>> linked_list[5]
-666
>>> linked_list[-10] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
current = self.head
for i in range(index):
current = current.next
current.data = data
def insert_tail(self, data: Any) -> None:
"""
Insert data to the end of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
tail->tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
tail->tail_2->tail_3
"""
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
"""
Insert data to the beginning of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_head("head")
>>> linked_list
head
>>> linked_list.insert_head("head_2")
>>> linked_list
head_2->head
>>> linked_list.insert_head("head_3")
>>> linked_list
head_3->head_2->head
"""
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
"""
Insert data at given index.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
first->fourth->second->third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
first->fourth->second->fifth->third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = new_node
elif index == 0:
new_node.next = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
def print_list(self) -> None: # print every node data
"""
This method prints every node data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
"""
print(self)
def delete_head(self) -> Any:
"""
Delete the first node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_head()
'first'
>>> linked_list
second->third
>>> linked_list.delete_head()
'second'
>>> linked_list
third
>>> linked_list.delete_head()
'third'
>>> linked_list.delete_head()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(0)
def delete_tail(self) -> Any: # delete from tail
"""
Delete the tail end node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_tail()
'third'
>>> linked_list
first->second
>>> linked_list.delete_tail()
'second'
>>> linked_list
first
>>> linked_list.delete_tail()
'first'
>>> linked_list.delete_tail()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
"""
Delete node at given index and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
first->third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
>>> linked_list.delete_nth(-1) # this also raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
if not 0 <= index <= len(self) - 1: # test if index is valid
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
return delete_node.data
def is_empty(self) -> bool:
"""
Check if linked list is empty.
>>> linked_list = LinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_head("first")
>>> linked_list.is_empty()
False
"""
return self.head is None
def reverse(self) -> None:
"""
This reverses the linked list order.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.reverse()
>>> linked_list
third->second->first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
next_node = current.next
# Make the current node's next point backwards
current.next = prev
# Make the previous node be the current node
prev = current
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
def test_singly_linked_list() -> None:
"""
>>> test_singly_linked_list()
"""
linked_list = LinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
for i in range(0, 9):
linked_list[i] = -i
assert all(linked_list[i] == -i for i in range(0, 9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
"""
    This section of the test uses varying data types for input.
>>> test_singly_linked_list_2()
"""
input = [
-9,
100,
Node(77345112),
"dlrow olleH",
7,
5555,
0,
-192.55555,
"Hello, world!",
77.9,
Node(10),
None,
None,
12.20,
]
linked_list = LinkedList()
for i in input:
linked_list.insert_tail(i)
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
"-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
== "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
"7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
)
def main():
from doctest import testmod
testmod()
linked_list = LinkedList()
linked_list.insert_head(input("Inserting 1st at head ").strip())
linked_list.insert_head(input("Inserting 2nd at head ").strip())
print("\nPrint list:")
linked_list.print_list()
linked_list.insert_tail(input("\nInserting 1st at tail ").strip())
linked_list.insert_tail(input("Inserting 2nd at tail ").strip())
print("\nPrint list:")
linked_list.print_list()
print("\nDelete head")
linked_list.delete_head()
print("Delete tail")
linked_list.delete_tail()
print("\nPrint list:")
linked_list.print_list()
print("\nReverse linked list")
linked_list.reverse()
print("\nPrint list:")
linked_list.print_list()
print("\nString representation of linked list:")
print(linked_list)
print("\nReading/changing Node data using indexing:")
print(f"Element at Position 1: {linked_list[1]}")
linked_list[1] = input("Enter New Value: ").strip()
print("New list:")
print(linked_list)
print(f"length of linked_list is : {len(linked_list)}")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Problem 28
Url: https://projecteuler.net/problem=28
Statement:
Starting with the number 1 and moving to the right in a clockwise direction a 5
by 5 spiral is formed as follows:
21 22 23 24 25
20 7 8 9 10
19 6 1 2 11
18 5 4 3 12
17 16 15 14 13
It can be verified that the sum of the numbers on the diagonals is 101.
What is the sum of the numbers on the diagonals in a 1001 by 1001 spiral formed
in the same way?
"""
from math import ceil
def solution(n: int = 1001) -> int:
"""Returns the sum of the numbers on the diagonals in a n by n spiral
formed in the same way.
>>> solution(1001)
669171001
>>> solution(500)
82959497
>>> solution(100)
651897
>>> solution(50)
79697
>>> solution(10)
537
"""
total = 1
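    # Ring i has side length odd = 2 * i + 1; its four corners are odd**2,
    # odd**2 - (odd - 1), odd**2 - 2 * (odd - 1) and odd**2 - 3 * (odd - 1),
    # which sum to 4 * odd**2 - 6 * even because even = odd - 1 = 2 * i.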
for i in range(1, int(ceil(n / 2.0))):
odd = 2 * i + 1
even = 2 * i
total = total + 4 * odd**2 - 6 * even
return total
if __name__ == "__main__":
import sys
if len(sys.argv) == 1:
print(solution())
else:
try:
n = int(sys.argv[1])
print(solution(n))
except ValueError:
print("Invalid entry - please enter a number")
| """
Problem 28
Url: https://projecteuler.net/problem=28
Statement:
Starting with the number 1 and moving to the right in a clockwise direction a 5
by 5 spiral is formed as follows:
21 22 23 24 25
20 7 8 9 10
19 6 1 2 11
18 5 4 3 12
17 16 15 14 13
It can be verified that the sum of the numbers on the diagonals is 101.
What is the sum of the numbers on the diagonals in a 1001 by 1001 spiral formed
in the same way?
"""
from math import ceil
def solution(n: int = 1001) -> int:
"""Returns the sum of the numbers on the diagonals in a n by n spiral
formed in the same way.
>>> solution(1001)
669171001
>>> solution(500)
82959497
>>> solution(100)
651897
>>> solution(50)
79697
>>> solution(10)
537
"""
total = 1
for i in range(1, int(ceil(n / 2.0))):
odd = 2 * i + 1
even = 2 * i
total = total + 4 * odd**2 - 6 * even
return total
if __name__ == "__main__":
import sys
if len(sys.argv) == 1:
print(solution())
else:
try:
n = int(sys.argv[1])
print(solution(n))
except ValueError:
print("Invalid entry - please enter a number")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | def actual_power(a: int, b: int):
"""
Function using divide and conquer to calculate a^b.
It only works for integer a,b.
"""
if b == 0:
return 1
if (b % 2) == 0:
return actual_power(a, int(b / 2)) * actual_power(a, int(b / 2))
else:
return a * actual_power(a, int(b / 2)) * actual_power(a, int(b / 2))
def power(a: int, b: int) -> float:
"""
>>> power(4,6)
4096
>>> power(2,3)
8
>>> power(-2,3)
-8
>>> power(2,-3)
0.125
>>> power(-2,-3)
-0.125
"""
if b < 0:
return 1 / actual_power(a, b)
return actual_power(a, b)
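# A minimal alternative sketch (not part of the original file): computing the half
# power once keeps the number of recursive calls logarithmic in b. It assumes a
# non-negative integer exponent, and the function name is illustrative only.
def actual_power_logarithmic(a: int, b: int) -> int:
    if b == 0:
        return 1
    half = actual_power_logarithmic(a, b // 2)  # compute the half power only once
    return half * half if b % 2 == 0 else a * half * half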
if __name__ == "__main__":
print(power(-2, -3))
| def actual_power(a: int, b: int):
"""
Function using divide and conquer to calculate a^b.
It only works for integer a,b.
"""
if b == 0:
return 1
if (b % 2) == 0:
return actual_power(a, int(b / 2)) * actual_power(a, int(b / 2))
else:
return a * actual_power(a, int(b / 2)) * actual_power(a, int(b / 2))
def power(a: int, b: int) -> float:
"""
>>> power(4,6)
4096
>>> power(2,3)
8
>>> power(-2,3)
-8
>>> power(2,-3)
0.125
>>> power(-2,-3)
-0.125
"""
if b < 0:
return 1 / actual_power(a, b)
return actual_power(a, b)
if __name__ == "__main__":
print(power(-2, -3))
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Get book and author data from https://openlibrary.org
ISBN: https://en.wikipedia.org/wiki/International_Standard_Book_Number
"""
from json import JSONDecodeError # Workaround for requests.exceptions.JSONDecodeError
import requests
def get_openlibrary_data(olid: str = "isbn/0140328726") -> dict:
"""
Given an 'isbn/0140328726', return book data from Open Library as a Python dict.
Given an '/authors/OL34184A', return authors data as a Python dict.
This code must work for olids with or without a leading slash ('/').
# Comment out doctests if they take too long or have results that may change
# >>> get_openlibrary_data(olid='isbn/0140328726') # doctest: +ELLIPSIS
{'publishers': ['Puffin'], 'number_of_pages': 96, 'isbn_10': ['0140328726'], ...
# >>> get_openlibrary_data(olid='/authors/OL7353617A') # doctest: +ELLIPSIS
{'name': 'Adrian Brisku', 'created': {'type': '/type/datetime', ...
>>> pass # Placate https://github.com/apps/algorithms-keeper
"""
new_olid = olid.strip().strip("/") # Remove leading/trailing whitespace & slashes
if new_olid.count("/") != 1:
raise ValueError(f"{olid} is not a valid Open Library olid")
return requests.get(f"https://openlibrary.org/{new_olid}.json").json()
def summarize_book(ol_book_data: dict) -> dict:
"""
Given Open Library book data, return a summary as a Python dict.
>>> pass # Placate https://github.com/apps/algorithms-keeper
"""
desired_keys = {
"title": "Title",
"publish_date": "Publish date",
"authors": "Authors",
"number_of_pages": "Number of pages:",
"first_sentence": "First sentence",
"isbn_10": "ISBN (10)",
"isbn_13": "ISBN (13)",
}
data = {better_key: ol_book_data[key] for key, better_key in desired_keys.items()}
data["Authors"] = [
get_openlibrary_data(author["key"])["name"] for author in data["Authors"]
]
data["First sentence"] = data["First sentence"]["value"]
for key, value in data.items():
if isinstance(value, list):
data[key] = ", ".join(value)
return data
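# Sketch of how these helpers are typically chained (requires network access, so it
# is shown as a comment rather than a doctest):
#     book = summarize_book(get_openlibrary_data("isbn/0140328726"))
#     print("\n".join(f"{key}: {value}" for key, value in book.items()))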
if __name__ == "__main__":
import doctest
doctest.testmod()
while True:
isbn = input("\nEnter the ISBN code to search (or 'quit' to stop): ").strip()
if isbn.lower() in ("", "q", "quit", "exit", "stop"):
break
if len(isbn) not in (10, 13) or not isbn.isdigit():
print(f"Sorry, {isbn} is not a valid ISBN. Please, input a valid ISBN.")
continue
print(f"\nSearching Open Library for ISBN: {isbn}...\n")
try:
book_summary = summarize_book(get_openlibrary_data(f"isbn/{isbn}"))
print("\n".join(f"{key}: {value}" for key, value in book_summary.items()))
except JSONDecodeError: # Workaround for requests.exceptions.RequestException:
print(f"Sorry, there are no results for ISBN: {isbn}.")
| """
Get book and author data from https://openlibrary.org
ISBN: https://en.wikipedia.org/wiki/International_Standard_Book_Number
"""
from json import JSONDecodeError # Workaround for requests.exceptions.JSONDecodeError
import requests
def get_openlibrary_data(olid: str = "isbn/0140328726") -> dict:
"""
Given an 'isbn/0140328726', return book data from Open Library as a Python dict.
Given an '/authors/OL34184A', return authors data as a Python dict.
This code must work for olids with or without a leading slash ('/').
# Comment out doctests if they take too long or have results that may change
# >>> get_openlibrary_data(olid='isbn/0140328726') # doctest: +ELLIPSIS
{'publishers': ['Puffin'], 'number_of_pages': 96, 'isbn_10': ['0140328726'], ...
# >>> get_openlibrary_data(olid='/authors/OL7353617A') # doctest: +ELLIPSIS
{'name': 'Adrian Brisku', 'created': {'type': '/type/datetime', ...
>>> pass # Placate https://github.com/apps/algorithms-keeper
"""
new_olid = olid.strip().strip("/") # Remove leading/trailing whitespace & slashes
if new_olid.count("/") != 1:
raise ValueError(f"{olid} is not a valid Open Library olid")
return requests.get(f"https://openlibrary.org/{new_olid}.json").json()
def summarize_book(ol_book_data: dict) -> dict:
"""
Given Open Library book data, return a summary as a Python dict.
>>> pass # Placate https://github.com/apps/algorithms-keeper
"""
desired_keys = {
"title": "Title",
"publish_date": "Publish date",
"authors": "Authors",
"number_of_pages": "Number of pages:",
"first_sentence": "First sentence",
"isbn_10": "ISBN (10)",
"isbn_13": "ISBN (13)",
}
data = {better_key: ol_book_data[key] for key, better_key in desired_keys.items()}
data["Authors"] = [
get_openlibrary_data(author["key"])["name"] for author in data["Authors"]
]
data["First sentence"] = data["First sentence"]["value"]
for key, value in data.items():
if isinstance(value, list):
data[key] = ", ".join(value)
return data
if __name__ == "__main__":
import doctest
doctest.testmod()
while True:
isbn = input("\nEnter the ISBN code to search (or 'quit' to stop): ").strip()
if isbn.lower() in ("", "q", "quit", "exit", "stop"):
break
if len(isbn) not in (10, 13) or not isbn.isdigit():
print(f"Sorry, {isbn} is not a valid ISBN. Please, input a valid ISBN.")
continue
print(f"\nSearching Open Library for ISBN: {isbn}...\n")
try:
book_summary = summarize_book(get_openlibrary_data(f"isbn/{isbn}"))
print("\n".join(f"{key}: {value}" for key, value in book_summary.items()))
except JSONDecodeError: # Workaround for requests.exceptions.RequestException:
print(f"Sorry, there are no results for ISBN: {isbn}.")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | # @Author : lightXu
# @File : convolve.py
# @Time : 2019/7/8 0008 4:13 PM
from cv2 import COLOR_BGR2GRAY, cvtColor, imread, imshow, waitKey
from numpy import array, dot, pad, ravel, uint8, zeros
def im2col(image, block_size):
rows, cols = image.shape
    # Number of valid window positions: height follows rows, width follows cols.
    dst_height = rows - block_size[0] + 1
    dst_width = cols - block_size[1] + 1
image_array = zeros((dst_height * dst_width, block_size[1] * block_size[0]))
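    # Each row of image_array will hold one flattened block_size window of the image,
    # so a convolution can later be computed as a single matrix-vector product.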
row = 0
for i in range(0, dst_height):
for j in range(0, dst_width):
window = ravel(image[i : i + block_size[0], j : j + block_size[1]])
image_array[row, :] = window
row += 1
return image_array
def img_convolve(image, filter_kernel):
height, width = image.shape[0], image.shape[1]
k_size = filter_kernel.shape[0]
pad_size = k_size // 2
# Pads image with the edge values of array.
image_tmp = pad(image, pad_size, mode="edge")
# im2col, turn the k_size*k_size pixels into a row and np.vstack all rows
image_array = im2col(image_tmp, (k_size, k_size))
# turn the kernel into shape(k*k, 1)
kernel_array = ravel(filter_kernel)
# reshape and get the dst image
dst = dot(image_array, kernel_array).reshape(height, width)
return dst
if __name__ == "__main__":
# read original image
img = imread(r"../image_data/lena.jpg")
    # convert the image to gray scale
gray = cvtColor(img, COLOR_BGR2GRAY)
# Laplace operator
Laplace_kernel = array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
out = img_convolve(gray, Laplace_kernel).astype(uint8)
imshow("Laplacian", out)
waitKey(0)
| # @Author : lightXu
# @File : convolve.py
# @Time : 2019/7/8 0008 4:13 PM
from cv2 import COLOR_BGR2GRAY, cvtColor, imread, imshow, waitKey
from numpy import array, dot, pad, ravel, uint8, zeros
def im2col(image, block_size):
rows, cols = image.shape
    # Number of valid window positions: height follows rows, width follows cols.
    dst_height = rows - block_size[0] + 1
    dst_width = cols - block_size[1] + 1
image_array = zeros((dst_height * dst_width, block_size[1] * block_size[0]))
row = 0
for i in range(0, dst_height):
for j in range(0, dst_width):
window = ravel(image[i : i + block_size[0], j : j + block_size[1]])
image_array[row, :] = window
row += 1
return image_array
def img_convolve(image, filter_kernel):
height, width = image.shape[0], image.shape[1]
k_size = filter_kernel.shape[0]
pad_size = k_size // 2
# Pads image with the edge values of array.
image_tmp = pad(image, pad_size, mode="edge")
# im2col, turn the k_size*k_size pixels into a row and np.vstack all rows
image_array = im2col(image_tmp, (k_size, k_size))
# turn the kernel into shape(k*k, 1)
kernel_array = ravel(filter_kernel)
# reshape and get the dst image
dst = dot(image_array, kernel_array).reshape(height, width)
return dst
if __name__ == "__main__":
# read original image
img = imread(r"../image_data/lena.jpg")
    # convert the image to gray scale
gray = cvtColor(img, COLOR_BGR2GRAY)
# Laplace operator
Laplace_kernel = array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
out = img_convolve(gray, Laplace_kernel).astype(uint8)
imshow("Laplacian", out)
waitKey(0)
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
This is a pure Python implementation of the pancake sort algorithm
For doctests run following command:
python3 -m doctest -v pancake_sort.py
or
python -m doctest -v pancake_sort.py
For manual testing run:
python pancake_sort.py
"""
def pancake_sort(arr):
"""Sort Array with Pancake Sort.
:param arr: Collection containing comparable items
:return: Collection ordered in ascending order of items
Examples:
>>> pancake_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> pancake_sort([])
[]
>>> pancake_sort([-2, -5, -45])
[-45, -5, -2]
"""
cur = len(arr)
while cur > 1:
# Find the maximum number in arr
mi = arr.index(max(arr[0:cur]))
# Reverse from 0 to mi
arr = arr[mi::-1] + arr[mi + 1 : len(arr)]
# Reverse whole list
arr = arr[cur - 1 :: -1] + arr[cur : len(arr)]
cur -= 1
return arr
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(pancake_sort(unsorted))
| """
This is a pure Python implementation of the pancake sort algorithm
For doctests run following command:
python3 -m doctest -v pancake_sort.py
or
python -m doctest -v pancake_sort.py
For manual testing run:
python pancake_sort.py
"""
def pancake_sort(arr):
"""Sort Array with Pancake Sort.
:param arr: Collection containing comparable items
:return: Collection ordered in ascending order of items
Examples:
>>> pancake_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> pancake_sort([])
[]
>>> pancake_sort([-2, -5, -45])
[-45, -5, -2]
"""
cur = len(arr)
while cur > 1:
# Find the maximum number in arr
mi = arr.index(max(arr[0:cur]))
# Reverse from 0 to mi
arr = arr[mi::-1] + arr[mi + 1 : len(arr)]
# Reverse whole list
arr = arr[cur - 1 :: -1] + arr[cur : len(arr)]
cur -= 1
return arr
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(pancake_sort(unsorted))
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | -1 |
||
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Linear Discriminant Analysis
Assumptions About Data :
        1. The input variables have a gaussian distribution.
        2. The variance calculated for each input variable by class grouping is the
same.
3. The mix of classes in your training set is representative of the problem.
Learning The Model :
The LDA model requires the estimation of statistics from the training data :
1. Mean of each input value for each class.
            2. Probability of an instance belonging to each class.
3. Covariance for the input data for each class
Calculate the class means :
mean(x) = 1/n ( for i = 1 to i = n --> sum(xi))
Calculate the class probabilities :
P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))
Calculate the variance :
We can calculate the variance for dataset in two steps :
1. Calculate the squared difference for each input variable from the
group mean.
2. Calculate the mean of the squared difference.
------------------------------------------------
Squared_Difference = (x - mean(k)) ** 2
Variance = (1 / (count(x) - count(classes))) *
(for i = 1 to i = n --> sum(Squared_Difference(xi)))
Making Predictions :
discriminant(x) = x * (mean / variance) -
((mean ** 2) / (2 * variance)) + Ln(probability)
---------------------------------------------------------------------------
After calculating the discriminant value for each class, the class with the
largest discriminant value is taken as the prediction.
Author: @EverLookNeverSee
"""
from collections.abc import Callable
from math import log
from os import name, system
from random import gauss, seed
from typing import TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
Generate gaussian distribution instances based-on given mean and standard deviation
:param mean: mean value of class
    :param std_dev: value of standard deviation entered by user or its default value
:param instance_count: instance number of class
:return: a list containing generated values based-on given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags to detecting classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
:param items: items that related to specific class(data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
    Calculate the probability that a given instance belongs to a given class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y:a list containing initial Y values generated by 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str = None,
) -> num:
"""
Ask for a user value and validate that it fulfills a condition.
:input_type: user input expected type of value
:input_msg: message to show user in the screen
:err_msg: message to show in the screen in case of error
:condition: function that represents the condition that user input is valid.
:default: Default value in case the user does not type anything
:return: user's input
"""
while True:
try:
user_input = input_type(input(input_msg).strip() or default)
if condition(user_input):
return user_input
else:
print(f"{user_input}: {err_msg}")
continue
except ValueError:
print(
f"{user_input}: Incorrect input type, expected {input_type.__name__!r}"
)
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
# Trying to get the number of instances in each class and their means to generate
# the dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in each class on a separate line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out the mean value of each class on a separate line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detect corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over number of elements in 'actual_means' list and prints
# them out on separate lines
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over number of elements in 'probabilities' list and prints
# them out on separate lines
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the values of variance for each class
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear")
if __name__ == "__main__":
main()
| """
Linear Discriminant Analysis
Assumptions About Data :
1. The input variables have a gaussian distribution.
2. The variance calculated for each input variable by class grouping is the
same.
3. The mix of classes in your training set is representative of the problem.
Learning The Model :
The LDA model requires the estimation of statistics from the training data :
1. Mean of each input value for each class.
2. Probability of an instance belonging to each class.
3. Covariance for the input data for each class
Calculate the class means :
mean(x) = 1/n ( for i = 1 to i = n --> sum(xi))
Calculate the class probabilities :
P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))
Calculate the variance :
We can calculate the variance for dataset in two steps :
1. Calculate the squared difference for each input variable from the
group mean.
2. Calculate the mean of the squared difference.
------------------------------------------------
Squared_Difference = (x - mean(k)) ** 2
Variance = (1 / (count(x) - count(classes))) *
(for i = 1 to i = n --> sum(Squared_Difference(xi)))
Making Predictions :
discriminant(x) = x * (mean / variance) -
((mean ** 2) / (2 * variance)) + Ln(probability)
---------------------------------------------------------------------------
After calculating the discriminant value for each class, the class with the
largest discriminant value is taken as the prediction.
Author: @EverLookNeverSee
"""
from collections.abc import Callable
from math import log
from os import name, system
from random import gauss, seed
from typing import TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
Generate gaussian distribution instances based-on given mean and standard deviation
:param mean: mean value of class
:param std_dev: value of standard deviation entered by user or its default value
:param instance_count: instance number of class
:return: a list containing generated values based-on given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags to detecting classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
:param items: items related to a specific class (data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
Calculate the probability that a given instance belongs to a given class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y:a list containing initial Y values generated by 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str = None,
) -> num:
"""
Ask for a user value and validate that it fulfills a condition.
:input_type: user input expected type of value
:input_msg: message to show user in the screen
:err_msg: message to show in the screen in case of error
:condition: function that represents the condition that user input is valid.
:default: Default value in case the user does not type anything
:return: user's input
"""
while True:
try:
user_input = input_type(input(input_msg).strip() or default)
if condition(user_input):
return user_input
else:
print(f"{user_input}: {err_msg}")
continue
except ValueError:
print(
f"{user_input}: Incorrect input type, expected {input_type.__name__!r}"
)
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
# Trying to get the number of instances in each class and their means to generate
# the dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in each class on a separate line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out the mean value of each class on a separate line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detect corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over number of elements in 'actual_means' list and prints
# them out on separate lines
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over number of elements in 'probabilities' list and prints
# them out on separate lines
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the values of variance for each class
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Project Euler Problem 104 : https://projecteuler.net/problem=104
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1.
It turns out that F541, which contains 113 digits, is the first Fibonacci number
for which the last nine digits are 1-9 pandigital (contain all the digits 1 to 9,
but not necessarily in order). And F2749, which contains 575 digits, is the first
Fibonacci number for which the first nine digits are 1-9 pandigital.
Given that Fk is the first Fibonacci number for which the first nine digits AND
the last nine digits are 1-9 pandigital, find k.
"""
def check(number: int) -> bool:
"""
Takes a number and checks if it is pandigital both from start and end
>>> check(123456789987654321)
True
>>> check(120000987654321)
False
>>> check(1234567895765677987654321)
True
"""
check_last = [0] * 11
check_front = [0] * 11
# mark last 9 numbers
for x in range(9):
check_last[int(number % 10)] = 1
number = number // 10
# flag
f = True
# check last 9 numbers for pandigitality
for x in range(9):
if not check_last[x + 1]:
f = False
if not f:
return f
# mark first 9 numbers
number = int(str(number)[:9])
for x in range(9):
check_front[int(number % 10)] = 1
number = number // 10
# check first 9 numbers for pandigitality
for x in range(9):
if not check_front[x + 1]:
f = False
return f
def check1(number: int) -> bool:
"""
Takes a number and checks if it is pandigital from END
>>> check1(123456789987654321)
True
>>> check1(120000987654321)
True
>>> check1(12345678957656779870004321)
False
"""
check_last = [0] * 11
# mark last 9 numbers
for x in range(9):
check_last[int(number % 10)] = 1
number = number // 10
# flag
f = True
# check last 9 numbers for pandigitality
for x in range(9):
if not check_last[x + 1]:
f = False
return f
def solution() -> int:
"""
Outputs the answer: the index of the first Fibonacci number that is pandigital from both sides.
>>> solution()
329468
"""
a = 1
b = 1
c = 2
# temporary Fibonacci numbers
a1 = 1
b1 = 1
c1 = 2
# temporary Fibonacci numbers mod 1e9
# mod m=1e9, done for fast optimisation
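# Note: working modulo 10**9 keeps only the last nine digits of each Fibonacci
# number, which is all that check1() inspects; the full (very large) Fibonacci
# values are only built in the second loop, and check() runs solely for the
# indexes that pass this cheap end-pandigital filter.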
tocheck = [0] * 1000000
m = 1000000000
for x in range(1000000):
c1 = (a1 + b1) % m
a1 = b1 % m
b1 = c1 % m
if check1(b1):
tocheck[x + 3] = 1
for x in range(1000000):
c = a + b
a = b
b = c
# perform check only if in tocheck
if tocheck[x + 3] and check(b):
return x + 3 # first 2 already done
return -1
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 104 : https://projecteuler.net/problem=104
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1.
It turns out that F541, which contains 113 digits, is the first Fibonacci number
for which the last nine digits are 1-9 pandigital (contain all the digits 1 to 9,
but not necessarily in order). And F2749, which contains 575 digits, is the first
Fibonacci number for which the first nine digits are 1-9 pandigital.
Given that Fk is the first Fibonacci number for which the first nine digits AND
the last nine digits are 1-9 pandigital, find k.
"""
def check(number: int) -> bool:
"""
Takes a number and checks if it is pandigital both from start and end
>>> check(123456789987654321)
True
>>> check(120000987654321)
False
>>> check(1234567895765677987654321)
True
"""
check_last = [0] * 11
check_front = [0] * 11
# mark last 9 numbers
for x in range(9):
check_last[int(number % 10)] = 1
number = number // 10
# flag
f = True
# check last 9 numbers for pandigitality
for x in range(9):
if not check_last[x + 1]:
f = False
if not f:
return f
# mark first 9 numbers
number = int(str(number)[:9])
for x in range(9):
check_front[int(number % 10)] = 1
number = number // 10
# check first 9 numbers for pandigitality
for x in range(9):
if not check_front[x + 1]:
f = False
return f
def check1(number: int) -> bool:
"""
Takes a number and checks if it is pandigital from END
>>> check1(123456789987654321)
True
>>> check1(120000987654321)
True
>>> check1(12345678957656779870004321)
False
"""
check_last = [0] * 11
# mark last 9 numbers
for x in range(9):
check_last[int(number % 10)] = 1
number = number // 10
# flag
f = True
# check last 9 numbers for pandigitality
for x in range(9):
if not check_last[x + 1]:
f = False
return f
def solution() -> int:
"""
Outputs the answer: the index of the first Fibonacci number that is pandigital from both sides.
>>> solution()
329468
"""
a = 1
b = 1
c = 2
# temporary Fibonacci numbers
a1 = 1
b1 = 1
c1 = 2
# temporary Fibonacci numbers mod 1e9
# mod m=1e9, done for fast optimisation
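# Note: working modulo 10**9 keeps only the last nine digits of each Fibonacci
# number, which is all that check1() inspects; the full (very large) Fibonacci
# values are only built in the second loop, and check() runs solely for the
# indexes that pass this cheap end-pandigital filter.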
tocheck = [0] * 1000000
m = 1000000000
for x in range(1000000):
c1 = (a1 + b1) % m
a1 = b1 % m
b1 = c1 % m
if check1(b1):
tocheck[x + 3] = 1
for x in range(1000000):
c = a + b
a = b
b = c
# perform check only if in tocheck
if tocheck[x + 3] and check(b):
return x + 3 # first 2 already done
return -1
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | from __future__ import annotations
import math
from collections.abc import Callable
def line_length(
fnc: Callable[[int | float], int | float],
x_start: int | float,
x_end: int | float,
steps: int = 100,
) -> float:
"""
Approximates the arc length of a line segment by treating the curve as a
sequence of linear lines and summing their lengths
:param fnc: a function which defines a curve
:param x_start: left end point to indicate the start of line segment
:param x_end: right end point to indicate end of line segment
:param steps: an accuracy gauge; more steps increases accuracy
:return: a float representing the length of the curve
>>> def f(x):
... return x
>>> f"{line_length(f, 0, 1, 10):.6f}"
'1.414214'
>>> def f(x):
... return 1
>>> f"{line_length(f, -5.5, 4.5):.6f}"
'10.000000'
>>> def f(x):
... return math.sin(5 * x) + math.cos(10 * x) + x * x/10
>>> f"{line_length(f, 0.0, 10.0, 10000):.6f}"
'69.534930'
"""
x1 = x_start
fx1 = fnc(x_start)
length = 0.0
for i in range(steps):
# Approximates curve as a sequence of linear lines and sums their length
x2 = (x_end - x_start) / steps + x1
fx2 = fnc(x2)
length += math.hypot(x2 - x1, fx2 - fx1)
# Increment step
x1 = x2
fx1 = fx2
return length
if __name__ == "__main__":
def f(x):
return math.sin(10 * x)
print("f(x) = sin(10 * x)")
print("The length of the curve from x = -10 to x = 10 is:")
i = 10
while i <= 100000:
print(f"With {i} steps: {line_length(f, -10, 10, i)}")
i *= 10
| from __future__ import annotations
import math
from collections.abc import Callable
def line_length(
fnc: Callable[[int | float], int | float],
x_start: int | float,
x_end: int | float,
steps: int = 100,
) -> float:
"""
Approximates the arc length of a line segment by treating the curve as a
sequence of linear lines and summing their lengths
:param fnc: a function which defines a curve
:param x_start: left end point to indicate the start of line segment
:param x_end: right end point to indicate end of line segment
:param steps: an accuracy gauge; more steps increases accuracy
:return: a float representing the length of the curve
>>> def f(x):
... return x
>>> f"{line_length(f, 0, 1, 10):.6f}"
'1.414214'
>>> def f(x):
... return 1
>>> f"{line_length(f, -5.5, 4.5):.6f}"
'10.000000'
>>> def f(x):
... return math.sin(5 * x) + math.cos(10 * x) + x * x/10
>>> f"{line_length(f, 0.0, 10.0, 10000):.6f}"
'69.534930'
"""
x1 = x_start
fx1 = fnc(x_start)
length = 0.0
for i in range(steps):
# Approximates curve as a sequence of linear lines and sums their length
x2 = (x_end - x_start) / steps + x1
fx2 = fnc(x2)
length += math.hypot(x2 - x1, fx2 - fx1)
# Increment step
x1 = x2
fx1 = fx2
return length
if __name__ == "__main__":
def f(x):
return math.sin(10 * x)
print("f(x) = sin(10 * x)")
print("The length of the curve from x = -10 to x = 10 is:")
i = 10
while i <= 100000:
print(f"With {i} steps: {line_length(f, -10, 10, i)}")
i *= 10
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
In laser physics, a "white cell" is a mirror system that acts as a delay line for the
laser beam. The beam enters the cell, bounces around on the mirrors, and eventually
works its way back out.
The specific white cell we will be considering is an ellipse with the equation
4x^2 + y^2 = 100
The section corresponding to −0.01 ≤ x ≤ +0.01 at the top is missing, allowing the
light to enter and exit through the hole.

The light beam in this problem starts at the point (0.0,10.1) just outside the white
cell, and the beam first impacts the mirror at (1.4,-9.6).
Each time the laser beam hits the surface of the ellipse, it follows the usual law of
reflection "angle of incidence equals angle of reflection." That is, both the incident
and reflected beams make the same angle with the normal line at the point of incidence.
In the figure on the left, the red line shows the first two points of contact between
the laser beam and the wall of the white cell; the blue line shows the line tangent to
the ellipse at the point of incidence of the first bounce.
The slope m of the tangent line at any point (x,y) of the given ellipse is: m = −4x/y
The normal line is perpendicular to this tangent line at the point of incidence.
The animation on the right shows the first 10 reflections of the beam.
How many times does the beam hit the internal surface of the white cell before exiting?
"""
from math import isclose, sqrt
def next_point(
point_x: float, point_y: float, incoming_gradient: float
) -> tuple[float, float, float]:
"""
Given that a laser beam hits the interior of the white cell at point
(point_x, point_y) with gradient incoming_gradient, return a tuple (x,y,m1)
where the next point of contact with the interior is (x,y) with gradient m1.
>>> next_point(5.0, 0.0, 0.0)
(-5.0, 0.0, 0.0)
>>> next_point(5.0, 0.0, -2.0)
(0.0, -10.0, 2.0)
"""
# normal_gradient = gradient of line through which the beam is reflected
# outgoing_gradient = gradient of reflected line
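# Reflection note: if the normal makes angle theta with the x-axis
# (tan(theta) = normal_gradient) and the incoming beam makes angle phi
# (tan(phi) = incoming_gradient), the reflected beam leaves at angle
# 2*theta - phi. The s2 and c2 terms below are sin(2*theta) and cos(2*theta)
# written in terms of tan(theta), giving
# outgoing_gradient = (s2 - c2 * incoming_gradient) / (c2 + s2 * incoming_gradient)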
normal_gradient = point_y / 4 / point_x
s2 = 2 * normal_gradient / (1 + normal_gradient * normal_gradient)
c2 = (1 - normal_gradient * normal_gradient) / (
1 + normal_gradient * normal_gradient
)
outgoing_gradient = (s2 - c2 * incoming_gradient) / (c2 + s2 * incoming_gradient)
# to find the next point, solve the simultaneous equations:
# y^2 + 4x^2 = 100
# y - b = m * (x - a)
# ==> A x^2 + B x + C = 0
quadratic_term = outgoing_gradient**2 + 4
linear_term = 2 * outgoing_gradient * (point_y - outgoing_gradient * point_x)
constant_term = (point_y - outgoing_gradient * point_x) ** 2 - 100
x_minus = (
-linear_term - sqrt(linear_term**2 - 4 * quadratic_term * constant_term)
) / (2 * quadratic_term)
x_plus = (
-linear_term + sqrt(linear_term**2 - 4 * quadratic_term * constant_term)
) / (2 * quadratic_term)
# two solutions, one of which is our input point
next_x = x_minus if isclose(x_plus, point_x) else x_plus
next_y = point_y + outgoing_gradient * (next_x - point_x)
return next_x, next_y, outgoing_gradient
def solution(first_x_coord: float = 1.4, first_y_coord: float = -9.6) -> int:
"""
Return the number of times that the beam hits the interior wall of the
cell before exiting.
>>> solution(0.00001,-10)
1
>>> solution(5, 0)
287
"""
num_reflections: int = 0
point_x: float = first_x_coord
point_y: float = first_y_coord
gradient: float = (10.1 - point_y) / (0.0 - point_x)
while not (-0.01 <= point_x <= 0.01 and point_y > 0):
point_x, point_y, gradient = next_point(point_x, point_y, gradient)
num_reflections += 1
return num_reflections
if __name__ == "__main__":
print(f"{solution() = }")
| """
In laser physics, a "white cell" is a mirror system that acts as a delay line for the
laser beam. The beam enters the cell, bounces around on the mirrors, and eventually
works its way back out.
The specific white cell we will be considering is an ellipse with the equation
4x^2 + y^2 = 100
The section corresponding to −0.01 ≤ x ≤ +0.01 at the top is missing, allowing the
light to enter and exit through the hole.

The light beam in this problem starts at the point (0.0,10.1) just outside the white
cell, and the beam first impacts the mirror at (1.4,-9.6).
Each time the laser beam hits the surface of the ellipse, it follows the usual law of
reflection "angle of incidence equals angle of reflection." That is, both the incident
and reflected beams make the same angle with the normal line at the point of incidence.
In the figure on the left, the red line shows the first two points of contact between
the laser beam and the wall of the white cell; the blue line shows the line tangent to
the ellipse at the point of incidence of the first bounce.
The slope m of the tangent line at any point (x,y) of the given ellipse is: m = −4x/y
The normal line is perpendicular to this tangent line at the point of incidence.
The animation on the right shows the first 10 reflections of the beam.
How many times does the beam hit the internal surface of the white cell before exiting?
"""
from math import isclose, sqrt
def next_point(
point_x: float, point_y: float, incoming_gradient: float
) -> tuple[float, float, float]:
"""
Given that a laser beam hits the interior of the white cell at point
(point_x, point_y) with gradient incoming_gradient, return a tuple (x,y,m1)
where the next point of contact with the interior is (x,y) with gradient m1.
>>> next_point(5.0, 0.0, 0.0)
(-5.0, 0.0, 0.0)
>>> next_point(5.0, 0.0, -2.0)
(0.0, -10.0, 2.0)
"""
# normal_gradient = gradient of line through which the beam is reflected
# outgoing_gradient = gradient of reflected line
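# Reflection note: if the normal makes angle theta with the x-axis
# (tan(theta) = normal_gradient) and the incoming beam makes angle phi
# (tan(phi) = incoming_gradient), the reflected beam leaves at angle
# 2*theta - phi. The s2 and c2 terms below are sin(2*theta) and cos(2*theta)
# written in terms of tan(theta), giving
# outgoing_gradient = (s2 - c2 * incoming_gradient) / (c2 + s2 * incoming_gradient)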
normal_gradient = point_y / 4 / point_x
s2 = 2 * normal_gradient / (1 + normal_gradient * normal_gradient)
c2 = (1 - normal_gradient * normal_gradient) / (
1 + normal_gradient * normal_gradient
)
outgoing_gradient = (s2 - c2 * incoming_gradient) / (c2 + s2 * incoming_gradient)
# to find the next point, solve the simultaneous equations:
# y^2 + 4x^2 = 100
# y - b = m * (x - a)
# ==> A x^2 + B x + C = 0
quadratic_term = outgoing_gradient**2 + 4
linear_term = 2 * outgoing_gradient * (point_y - outgoing_gradient * point_x)
constant_term = (point_y - outgoing_gradient * point_x) ** 2 - 100
x_minus = (
-linear_term - sqrt(linear_term**2 - 4 * quadratic_term * constant_term)
) / (2 * quadratic_term)
x_plus = (
-linear_term + sqrt(linear_term**2 - 4 * quadratic_term * constant_term)
) / (2 * quadratic_term)
# two solutions, one of which is our input point
next_x = x_minus if isclose(x_plus, point_x) else x_plus
next_y = point_y + outgoing_gradient * (next_x - point_x)
return next_x, next_y, outgoing_gradient
def solution(first_x_coord: float = 1.4, first_y_coord: float = -9.6) -> int:
"""
Return the number of times that the beam hits the interior wall of the
cell before exiting.
>>> solution(0.00001,-10)
1
>>> solution(5, 0)
287
"""
num_reflections: int = 0
point_x: float = first_x_coord
point_y: float = first_y_coord
gradient: float = (10.1 - point_y) / (0.0 - point_x)
while not (-0.01 <= point_x <= 0.01 and point_y > 0):
point_x, point_y, gradient = next_point(point_x, point_y, gradient)
num_reflections += 1
return num_reflections
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
The algorithm finds the pattern in the given text using the following rule.
The bad-character rule considers the mismatched character in Text.
The next occurrence of that character to the left in Pattern is found,
If the mismatched character occurs to the left in Pattern,
a shift is proposed that aligns text block and pattern.
If the mismatched character does not occur to the left in Pattern,
a shift is proposed that moves the entirety of Pattern past
the point of mismatch in the text.
If there is no mismatch, then the pattern matches the text block.
Time Complexity : O(n/m)
n=length of main string
m=length of pattern string
"""
from __future__ import annotations
class BoyerMooreSearch:
def __init__(self, text: str, pattern: str):
self.text, self.pattern = text, pattern
self.textLen, self.patLen = len(text), len(pattern)
def match_in_pattern(self, char: str) -> int:
"""finds the index of char in pattern in reverse order
Parameters :
char (chr): character to be searched
Returns :
i (int): index of char from last in pattern
-1 (int): if char is not found in pattern
"""
for i in range(self.patLen - 1, -1, -1):
if char == self.pattern[i]:
return i
return -1
def mismatch_in_text(self, currentPos: int) -> int:
"""
find the index of mis-matched character in text when compared with pattern
from last
Parameters :
currentPos (int): current index position of text
Returns :
i (int): index of mismatched char from last in text
-1 (int): if there is no mismatch between pattern and text block
"""
for i in range(self.patLen - 1, -1, -1):
if self.pattern[i] != self.text[currentPos + i]:
return currentPos + i
return -1
def bad_character_heuristic(self) -> list[int]:
# searches pattern in text and returns index positions
positions = []
for i in range(self.textLen - self.patLen + 1):
mismatch_index = self.mismatch_in_text(i)
if mismatch_index == -1:
positions.append(i)
else:
match_index = self.match_in_pattern(self.text[mismatch_index])
i = (
mismatch_index - match_index
) # shifting index lgtm [py/multiple-definition]
return positions
text = "ABAABA"
pattern = "AB"
bms = BoyerMooreSearch(text, pattern)
positions = bms.bad_character_heuristic()
if len(positions) == 0:
print("No match found")
else:
print("Pattern found in following positions: ")
print(positions)
| """
The algorithm finds the pattern in the given text using the following rule.
The bad-character rule considers the mismatched character in Text.
The next occurrence of that character to the left in Pattern is found,
If the mismatched character occurs to the left in Pattern,
a shift is proposed that aligns text block and pattern.
If the mismatched character does not occur to the left in Pattern,
a shift is proposed that moves the entirety of Pattern past
the point of mismatch in the text.
If there is no mismatch, then the pattern matches the text block.
Time Complexity : O(n/m)
n=length of main string
m=length of pattern string
"""
from __future__ import annotations
class BoyerMooreSearch:
def __init__(self, text: str, pattern: str):
self.text, self.pattern = text, pattern
self.textLen, self.patLen = len(text), len(pattern)
def match_in_pattern(self, char: str) -> int:
"""finds the index of char in pattern in reverse order
Parameters :
char (chr): character to be searched
Returns :
i (int): index of char from last in pattern
-1 (int): if char is not found in pattern
"""
for i in range(self.patLen - 1, -1, -1):
if char == self.pattern[i]:
return i
return -1
def mismatch_in_text(self, currentPos: int) -> int:
"""
find the index of mis-matched character in text when compared with pattern
from last
Parameters :
currentPos (int): current index position of text
Returns :
i (int): index of mismatched char from last in text
-1 (int): if there is no mismatch between pattern and text block
"""
for i in range(self.patLen - 1, -1, -1):
if self.pattern[i] != self.text[currentPos + i]:
return currentPos + i
return -1
def bad_character_heuristic(self) -> list[int]:
# searches pattern in text and returns index positions
positions = []
for i in range(self.textLen - self.patLen + 1):
mismatch_index = self.mismatch_in_text(i)
if mismatch_index == -1:
positions.append(i)
else:
match_index = self.match_in_pattern(self.text[mismatch_index])
i = (
mismatch_index - match_index
) # shifting index lgtm [py/multiple-definition]
return positions
text = "ABAABA"
pattern = "AB"
bms = BoyerMooreSearch(text, pattern)
positions = bms.bad_character_heuristic()
if len(positions) == 0:
print("No match found")
else:
print("Pattern found in following positions: ")
print(positions)
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Project Euler Problem 1: https://projecteuler.net/problem=1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5,
we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
def solution(n: int = 1000) -> int:
"""
Returns the sum of all the multiples of 3 or 5 below n.
>>> solution(3)
0
>>> solution(4)
3
>>> solution(10)
23
>>> solution(600)
83700
"""
a = 3
result = 0
while a < n:
if a % 3 == 0 or a % 5 == 0:
result += a
elif a % 15 == 0:
result -= a
a += 1
return result
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 1: https://projecteuler.net/problem=1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5,
we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
def solution(n: int = 1000) -> int:
"""
Returns the sum of all the multiples of 3 or 5 below n.
>>> solution(3)
0
>>> solution(4)
3
>>> solution(10)
23
>>> solution(600)
83700
"""
a = 3
result = 0
while a < n:
if a % 3 == 0 or a % 5 == 0:
result += a
elif a % 15 == 0:
result -= a
a += 1
return result
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | from binascii import hexlify
from hashlib import sha256
from os import urandom
# RFC 3526 - More Modular Exponential (MODP) Diffie-Hellman groups for
# Internet Key Exchange (IKE) https://tools.ietf.org/html/rfc3526
primes = {
# 1536-bit
5: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 2048-bit
14: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AACAA68FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 3072-bit
15: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 4096-bit
16: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
+ "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
+ "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
+ "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
+ "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
+ "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199"
+ "FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 6144-bit
17: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
+ "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
+ "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
+ "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
+ "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
+ "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
+ "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
+ "3995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D"
+ "04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7D"
+ "B3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D226"
+ "1AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFC"
+ "E0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B26"
+ "99C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB"
+ "04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2"
+ "233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127"
+ "D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
+ "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406"
+ "AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918"
+ "DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B33205151"
+ "2BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03"
+ "F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97F"
+ "BEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
+ "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58B"
+ "B7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632"
+ "387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E"
+ "6DCC4024FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 8192-bit
18: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
+ "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
+ "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
+ "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
+ "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
+ "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
+ "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BD"
+ "F8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831"
+ "179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1B"
+ "DB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF"
+ "5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6"
+ "D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F3"
+ "23A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
+ "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE328"
+ "06A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55C"
+ "DA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE"
+ "12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E4"
+ "38777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300"
+ "741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F568"
+ "3423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD9"
+ "22222E04A4037C0713EB57A81A23F0C73473FC646CEA306B"
+ "4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A"
+ "062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A36"
+ "4597E899A0255DC164F31CC50846851DF9AB48195DED7EA1"
+ "B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F92"
+ "4009438B481C6CD7889A002ED5EE382BC9190DA6FC026E47"
+ "9558E4475677E9AA9E3050E2765694DFC81F56E880B96E71"
+ "60C980DD98EDD3DFFFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
}
class DiffieHellman:
"""
Class to represent the Diffie-Hellman key exchange protocol
>>> alice = DiffieHellman()
>>> bob = DiffieHellman()
>>> alice_private = alice.get_private_key()
>>> alice_public = alice.generate_public_key()
>>> bob_private = bob.get_private_key()
>>> bob_public = bob.generate_public_key()
>>> # generating shared key using the DH object
>>> alice_shared = alice.generate_shared_key(bob_public)
>>> bob_shared = bob.generate_shared_key(alice_public)
>>> assert alice_shared == bob_shared
>>> # generating shared key using static methods
>>> alice_shared = DiffieHellman.generate_shared_key_static(
... alice_private, bob_public
... )
>>> bob_shared = DiffieHellman.generate_shared_key_static(
... bob_private, alice_public
... )
>>> assert alice_shared == bob_shared
"""
# Current minimum recommendation is 2048 bit (group 14)
def __init__(self, group: int = 14) -> None:
if group not in primes:
raise ValueError("Unsupported Group")
self.prime = primes[group]["prime"]
self.generator = primes[group]["generator"]
self.__private_key = int(hexlify(urandom(32)), base=16)
def get_private_key(self) -> str:
return hex(self.__private_key)[2:]
def generate_public_key(self) -> str:
public_key = pow(self.generator, self.__private_key, self.prime)
return hex(public_key)[2:]
def is_valid_public_key(self, key: int) -> bool:
# check if the other public key is valid based on NIST SP800-56
if 2 <= key <= self.prime - 2:
if pow(key, (self.prime - 1) // 2, self.prime) == 1:
return True
return False
def generate_shared_key(self, other_key_str: str) -> str:
other_key = int(other_key_str, base=16)
if not self.is_valid_public_key(other_key):
raise ValueError("Invalid public key")
shared_key = pow(other_key, self.__private_key, self.prime)
return sha256(str(shared_key).encode()).hexdigest()
@staticmethod
def is_valid_public_key_static(remote_public_key_str: int, prime: int) -> bool:
# check if the other public key is valid based on NIST SP800-56
if 2 <= remote_public_key_str <= prime - 2:
if pow(remote_public_key_str, (prime - 1) // 2, prime) == 1:
return True
return False
@staticmethod
def generate_shared_key_static(
local_private_key_str: str, remote_public_key_str: str, group: int = 14
) -> str:
local_private_key = int(local_private_key_str, base=16)
remote_public_key = int(remote_public_key_str, base=16)
prime = primes[group]["prime"]
if not DiffieHellman.is_valid_public_key_static(remote_public_key, prime):
raise ValueError("Invalid public key")
shared_key = pow(remote_public_key, local_private_key, prime)
return sha256(str(shared_key).encode()).hexdigest()
if __name__ == "__main__":
import doctest
doctest.testmod()
| from binascii import hexlify
from hashlib import sha256
from os import urandom
# RFC 3526 - More Modular Exponential (MODP) Diffie-Hellman groups for
# Internet Key Exchange (IKE) https://tools.ietf.org/html/rfc3526
primes = {
# 1536-bit
5: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 2048-bit
14: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AACAA68FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 3072-bit
15: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 4096-bit
16: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
+ "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
+ "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
+ "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
+ "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
+ "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199"
+ "FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 6144-bit
17: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
+ "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
+ "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
+ "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
+ "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
+ "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
+ "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
+ "3995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D"
+ "04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7D"
+ "B3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D226"
+ "1AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFC"
+ "E0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B26"
+ "99C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB"
+ "04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2"
+ "233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127"
+ "D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
+ "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406"
+ "AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918"
+ "DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B33205151"
+ "2BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03"
+ "F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97F"
+ "BEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
+ "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58B"
+ "B7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632"
+ "387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E"
+ "6DCC4024FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
# 8192-bit
18: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
+ "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
+ "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
+ "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
+ "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
+ "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
+ "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BD"
+ "F8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831"
+ "179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1B"
+ "DB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF"
+ "5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6"
+ "D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F3"
+ "23A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
+ "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE328"
+ "06A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55C"
+ "DA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE"
+ "12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E4"
+ "38777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300"
+ "741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F568"
+ "3423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD9"
+ "22222E04A4037C0713EB57A81A23F0C73473FC646CEA306B"
+ "4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A"
+ "062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A36"
+ "4597E899A0255DC164F31CC50846851DF9AB48195DED7EA1"
+ "B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F92"
+ "4009438B481C6CD7889A002ED5EE382BC9190DA6FC026E47"
+ "9558E4475677E9AA9E3050E2765694DFC81F56E880B96E71"
+ "60C980DD98EDD3DFFFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
},
}
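# The hex constants above are the MODP primes from RFC 3526 (sections 2-7);
# a quick sanity check is that each bit length matches the group's name,
# e.g. primes[14]["prime"].bit_length() == 2048.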
class DiffieHellman:
"""
Class to represent the Diffie-Hellman key exchange protocol
>>> alice = DiffieHellman()
>>> bob = DiffieHellman()
>>> alice_private = alice.get_private_key()
>>> alice_public = alice.generate_public_key()
>>> bob_private = bob.get_private_key()
>>> bob_public = bob.generate_public_key()
>>> # generating shared key using the DH object
>>> alice_shared = alice.generate_shared_key(bob_public)
>>> bob_shared = bob.generate_shared_key(alice_public)
>>> assert alice_shared == bob_shared
>>> # generating shared key using static methods
>>> alice_shared = DiffieHellman.generate_shared_key_static(
... alice_private, bob_public
... )
>>> bob_shared = DiffieHellman.generate_shared_key_static(
... bob_private, alice_public
... )
>>> assert alice_shared == bob_shared
"""
# Current minimum recommendation is 2048 bit (group 14)
def __init__(self, group: int = 14) -> None:
if group not in primes:
raise ValueError("Unsupported Group")
self.prime = primes[group]["prime"]
self.generator = primes[group]["generator"]
self.__private_key = int(hexlify(urandom(32)), base=16)
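# urandom(32) gives 32 random bytes, i.e. a 256-bit private exponent, a
# commonly used size for ephemeral Diffie-Hellman keys even with the much
# larger MODP primes above.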
def get_private_key(self) -> str:
return hex(self.__private_key)[2:]
def generate_public_key(self) -> str:
public_key = pow(self.generator, self.__private_key, self.prime)
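# public key = generator ** private_key mod prime; the [2:] slice below
# strips Python's "0x" prefix so only the bare hex digits are returned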
return hex(public_key)[2:]
def is_valid_public_key(self, key: int) -> bool:
# check if the other public key is valid based on NIST SP800-56
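# The range check rejects the degenerate values 0, 1 and prime - 1, and the
# pow(key, (prime - 1) // 2, prime) == 1 test (a Legendre-symbol check)
# accepts only keys lying in the prime-order subgroup of the safe-prime group.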
if 2 <= key <= self.prime - 2:
if pow(key, (self.prime - 1) // 2, self.prime) == 1:
return True
return False
def generate_shared_key(self, other_key_str: str) -> str:
other_key = int(other_key_str, base=16)
if not self.is_valid_public_key(other_key):
raise ValueError("Invalid public key")
shared_key = pow(other_key, self.__private_key, self.prime)
return sha256(str(shared_key).encode()).hexdigest()
@staticmethod
def is_valid_public_key_static(remote_public_key_str: int, prime: int) -> bool:
# check if the other public key is valid based on NIST SP800-56
if 2 <= remote_public_key_str <= prime - 2:
if pow(remote_public_key_str, (prime - 1) // 2, prime) == 1:
return True
return False
@staticmethod
def generate_shared_key_static(
local_private_key_str: str, remote_public_key_str: str, group: int = 14
) -> str:
local_private_key = int(local_private_key_str, base=16)
remote_public_key = int(remote_public_key_str, base=16)
prime = primes[group]["prime"]
if not DiffieHellman.is_valid_public_key_static(remote_public_key, prime):
raise ValueError("Invalid public key")
shared_key = pow(remote_public_key, local_private_key, prime)
return sha256(str(shared_key).encode()).hexdigest()
if __name__ == "__main__":
import doctest
doctest.testmod()
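# A minimal usage sketch beyond the doctests above: any group key from
# `primes` can be selected explicitly, e.g.
#     alice = DiffieHellman(group=16)  # 4096-bit MODP group
#     bob = DiffieHellman(group=16)
#     assert alice.generate_shared_key(bob.generate_public_key()) == \
#         bob.generate_shared_key(alice.generate_public_key())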
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
This is a divide-and-conquer algorithm that splits the search space into
3 parts and homes in on the target value by exploiting a property of the
array or list (usually that it is sorted, i.e. monotonic).
Time Complexity : O(log3 N)
Space Complexity : O(1)
"""
from __future__ import annotations
# This is the precision for this function which can be altered.
# It is recommended for users to keep this number greater than or equal to 10.
precision = 10
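# Once an interval is narrower than this, both searches below hand the final
# stretch to lin_search instead of splitting it further.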
# This is the linear search that will occur after the search space has become smaller.
def lin_search(left: int, right: int, array: list[int], target: int) -> int:
"""Perform linear search in list. Returns -1 if element is not found.
Parameters
----------
left : int
left index bound.
right : int
right index bound.
array : List[int]
List of elements to be searched on
target : int
Element that is searched
Returns
-------
int
index of element that is looked for.
Examples
--------
>>> lin_search(0, 4, [4, 5, 6, 7], 7)
3
>>> lin_search(0, 3, [4, 5, 6, 7], 7)
-1
>>> lin_search(0, 2, [-18, 2], -18)
0
>>> lin_search(0, 1, [5], 5)
0
>>> lin_search(0, 3, ['a', 'c', 'd'], 'c')
1
>>> lin_search(0, 3, [.1, .4 , -.1], .1)
0
>>> lin_search(0, 3, [.1, .4 , -.1], -.1)
2
"""
for i in range(left, right):
if array[i] == target:
return i
return -1
def ite_ternary_search(array: list[int], target: int) -> int:
"""Iterative method of the ternary search algorithm.
>>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
>>> ite_ternary_search(test_list, 3)
-1
>>> ite_ternary_search(test_list, 13)
4
>>> ite_ternary_search([4, 5, 6, 7], 4)
0
>>> ite_ternary_search([4, 5, 6, 7], -10)
-1
>>> ite_ternary_search([-18, 2], -18)
0
>>> ite_ternary_search([5], 5)
0
>>> ite_ternary_search(['a', 'c', 'd'], 'c')
1
>>> ite_ternary_search(['a', 'c', 'd'], 'f')
-1
>>> ite_ternary_search([], 1)
-1
>>> ite_ternary_search([.1, .4 , -.1], .1)
0
"""
left = 0
right = len(array)
while left <= right:
if right - left < precision:
return lin_search(left, right, array, target)
one_third = (left + right) // 3 + 1
two_third = 2 * (left + right) // 3 + 1
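# e.g. with left=0 and right=9 the probes land at indices 4 and 7, cutting
# the range into three roughly equal pieces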
if array[one_third] == target:
return one_third
elif array[two_third] == target:
return two_third
elif target < array[one_third]:
right = one_third - 1
elif array[two_third] < target:
left = two_third + 1
else:
left = one_third + 1
right = two_third - 1
else:
return -1
def rec_ternary_search(left: int, right: int, array: list[int], target: int) -> int:
"""Recursive method of the ternary search algorithm.
>>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
>>> rec_ternary_search(0, len(test_list), test_list, 3)
-1
>>> rec_ternary_search(4, len(test_list), test_list, 42)
8
>>> rec_ternary_search(0, 2, [4, 5, 6, 7], 4)
0
>>> rec_ternary_search(0, 3, [4, 5, 6, 7], -10)
-1
>>> rec_ternary_search(0, 1, [-18, 2], -18)
0
>>> rec_ternary_search(0, 1, [5], 5)
0
>>> rec_ternary_search(0, 2, ['a', 'c', 'd'], 'c')
1
>>> rec_ternary_search(0, 2, ['a', 'c', 'd'], 'f')
-1
>>> rec_ternary_search(0, 0, [], 1)
-1
>>> rec_ternary_search(0, 3, [.1, .4 , -.1], .1)
0
"""
if left < right:
if right - left < precision:
return lin_search(left, right, array, target)
one_third = (left + right) // 3 + 1
two_third = 2 * (left + right) // 3 + 1
if array[one_third] == target:
return one_third
elif array[two_third] == target:
return two_third
elif target < array[one_third]:
return rec_ternary_search(left, one_third - 1, array, target)
elif array[two_third] < target:
return rec_ternary_search(two_third + 1, right, array, target)
else:
return rec_ternary_search(one_third + 1, two_third - 1, array, target)
else:
return -1
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by comma:\n").strip()
collection = [int(item.strip()) for item in user_input.split(",")]
assert collection == sorted(collection), f"List must be ordered.\n{collection}."
target = int(input("Enter the number to be found in the list:\n").strip())
result1 = ite_ternary_search(collection, target)
result2 = rec_ternary_search(0, len(collection) - 1, collection, target)
if result2 != -1:
print(f"Iterative search: {target} found at positions: {result1}")
print(f"Recursive search: {target} found at positions: {result2}")
else:
print("Not found")
| """
This is a divide-and-conquer algorithm that splits the search space into
3 parts and homes in on the target value by exploiting a property of the
array or list (usually that it is sorted, i.e. monotonic).
Time Complexity : O(log3 N)
Space Complexity : O(1)
"""
from __future__ import annotations
# This is the precision for this function which can be altered.
# It is recommended for users to keep this number greater than or equal to 10.
precision = 10
# This is the linear search that will occur after the search space has become smaller.
def lin_search(left: int, right: int, array: list[int], target: int) -> int:
"""Perform linear search in list. Returns -1 if element is not found.
Parameters
----------
left : int
left index bound.
right : int
right index bound.
array : List[int]
List of elements to be searched on
target : int
Element that is searched
Returns
-------
int
index of element that is looked for.
Examples
--------
>>> lin_search(0, 4, [4, 5, 6, 7], 7)
3
>>> lin_search(0, 3, [4, 5, 6, 7], 7)
-1
>>> lin_search(0, 2, [-18, 2], -18)
0
>>> lin_search(0, 1, [5], 5)
0
>>> lin_search(0, 3, ['a', 'c', 'd'], 'c')
1
>>> lin_search(0, 3, [.1, .4 , -.1], .1)
0
>>> lin_search(0, 3, [.1, .4 , -.1], -.1)
2
"""
for i in range(left, right):
if array[i] == target:
return i
return -1
def ite_ternary_search(array: list[int], target: int) -> int:
"""Iterative method of the ternary search algorithm.
>>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
>>> ite_ternary_search(test_list, 3)
-1
>>> ite_ternary_search(test_list, 13)
4
>>> ite_ternary_search([4, 5, 6, 7], 4)
0
>>> ite_ternary_search([4, 5, 6, 7], -10)
-1
>>> ite_ternary_search([-18, 2], -18)
0
>>> ite_ternary_search([5], 5)
0
>>> ite_ternary_search(['a', 'c', 'd'], 'c')
1
>>> ite_ternary_search(['a', 'c', 'd'], 'f')
-1
>>> ite_ternary_search([], 1)
-1
>>> ite_ternary_search([.1, .4 , -.1], .1)
0
"""
left = 0
right = len(array)
while left <= right:
if right - left < precision:
return lin_search(left, right, array, target)
one_third = (left + right) // 3 + 1
two_third = 2 * (left + right) // 3 + 1
if array[one_third] == target:
return one_third
elif array[two_third] == target:
return two_third
elif target < array[one_third]:
right = one_third - 1
elif array[two_third] < target:
left = two_third + 1
else:
left = one_third + 1
right = two_third - 1
else:
return -1
def rec_ternary_search(left: int, right: int, array: list[int], target: int) -> int:
"""Recursive method of the ternary search algorithm.
>>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
>>> rec_ternary_search(0, len(test_list), test_list, 3)
-1
>>> rec_ternary_search(4, len(test_list), test_list, 42)
8
>>> rec_ternary_search(0, 2, [4, 5, 6, 7], 4)
0
>>> rec_ternary_search(0, 3, [4, 5, 6, 7], -10)
-1
>>> rec_ternary_search(0, 1, [-18, 2], -18)
0
>>> rec_ternary_search(0, 1, [5], 5)
0
>>> rec_ternary_search(0, 2, ['a', 'c', 'd'], 'c')
1
>>> rec_ternary_search(0, 2, ['a', 'c', 'd'], 'f')
-1
>>> rec_ternary_search(0, 0, [], 1)
-1
>>> rec_ternary_search(0, 3, [.1, .4 , -.1], .1)
0
"""
if left < right:
if right - left < precision:
return lin_search(left, right, array, target)
one_third = (left + right) // 3 + 1
two_third = 2 * (left + right) // 3 + 1
if array[one_third] == target:
return one_third
elif array[two_third] == target:
return two_third
elif target < array[one_third]:
return rec_ternary_search(left, one_third - 1, array, target)
elif array[two_third] < target:
return rec_ternary_search(two_third + 1, right, array, target)
else:
return rec_ternary_search(one_third + 1, two_third - 1, array, target)
else:
return -1
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by comma:\n").strip()
collection = [int(item.strip()) for item in user_input.split(",")]
assert collection == sorted(collection), f"List must be ordered.\n{collection}."
target = int(input("Enter the number to be found in the list:\n").strip())
result1 = ite_ternary_search(collection, target)
result2 = rec_ternary_search(0, len(collection) - 1, collection, target)
if result2 != -1:
print(f"Iterative search: {target} found at positions: {result1}")
print(f"Recursive search: {target} found at positions: {result2}")
else:
print("Not found")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | # Created by sarathkaul on 12/11/19
import requests
_NEWS_API = "https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey="
def fetch_bbc_news(bbc_news_api_key: str) -> None:
# fetching a list of articles in json format
bbc_news_page = requests.get(_NEWS_API + bbc_news_api_key).json()
# each article in the list is a dict
for i, article in enumerate(bbc_news_page["articles"], 1):
print(f"{i}.) {article['title']}")
if __name__ == "__main__":
fetch_bbc_news(bbc_news_api_key="<Your BBC News API key goes here>")
| # Created by sarathkaul on 12/11/19
import requests
_NEWS_API = "https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey="
def fetch_bbc_news(bbc_news_api_key: str) -> None:
# fetching a list of articles in json format
bbc_news_page = requests.get(_NEWS_API + bbc_news_api_key).json()
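# In real use it is worth passing a timeout so a stalled request cannot hang
# the script, e.g. requests.get(_NEWS_API + bbc_news_api_key, timeout=10)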
# each article in the list is a dict
for i, article in enumerate(bbc_news_page["articles"], 1):
print(f"{i}.) {article['title']}")
if __name__ == "__main__":
fetch_bbc_news(bbc_news_api_key="<Your BBC News API key goes here>")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | """
Project Euler Problem 8: https://projecteuler.net/problem=8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest
product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the
greatest product. What is the value of this product?
"""
import sys
N = """73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450"""
def str_eval(s: str) -> int:
"""
Returns the product of the digits in the given string s
>>> str_eval("987654321")
362880
>>> str_eval("22222222")
256
"""
product = 1
for digit in s:
product *= int(digit)
return product
def solution(n: str = N) -> int:
"""
Finds the thirteen adjacent digits in the 1000-digit number n that have
the greatest product and returns that product.
"""
largest_product = -sys.maxsize - 1
substr = n[:13]
cur_index = 13
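# The loop slides the 13-digit window one place at a time while each incoming
# digit is at least as large as the digit about to drop out; otherwise it
# scores the current window and restarts a fresh window 13 digits further on.
# This is a pruning shortcut rather than an exhaustive scan of every window:
# the window still held when the loop ends is never scored on its own.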
while cur_index < len(n) - 13:
if int(n[cur_index]) >= int(substr[0]):
substr = substr[1:] + n[cur_index]
cur_index += 1
else:
largest_product = max(largest_product, str_eval(substr))
substr = n[cur_index : cur_index + 13]
cur_index += 13
return largest_product
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 8: https://projecteuler.net/problem=8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest
product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the
greatest product. What is the value of this product?
"""
import sys
N = """73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450"""
def str_eval(s: str) -> int:
"""
Returns the product of the digits in the given string s
>>> str_eval("987654321")
362880
>>> str_eval("22222222")
256
"""
product = 1
for digit in s:
product *= int(digit)
return product
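# (Equivalently, math.prod(int(digit) for digit in s) from the standard
# library computes the same value on Python 3.8+.)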
def solution(n: str = N) -> int:
"""
Finds the thirteen adjacent digits in the 1000-digit number n that have
the greatest product and returns that product.
"""
largest_product = -sys.maxsize - 1
substr = n[:13]
cur_index = 13
while cur_index < len(n) - 13:
if int(n[cur_index]) >= int(substr[0]):
substr = substr[1:] + n[cur_index]
cur_index += 1
else:
largest_product = max(largest_product, str_eval(substr))
substr = n[cur_index : cur_index + 13]
cur_index += 13
return largest_product
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,629 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2022-10-03T19:21:25Z" | "2022-10-03T20:00:46Z" | e9862adafce9eb682cabcf8ac502893e0272ae65 | 756bb268eb22199534fc8d6478cf0e006f02b56b | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/psf/black: 22.6.0 → 22.8.0](https://github.com/psf/black/compare/22.6.0...22.8.0)
- [github.com/asottile/pyupgrade: v2.37.0 → v2.38.2](https://github.com/asottile/pyupgrade/compare/v2.37.0...v2.38.2)
- https://gitlab.com/pycqa/flake8 → https://github.com/PyCQA/flake8
- [github.com/PyCQA/flake8: 3.9.2 → 5.0.4](https://github.com/PyCQA/flake8/compare/3.9.2...5.0.4)
- [github.com/pre-commit/mirrors-mypy: v0.961 → v0.981](https://github.com/pre-commit/mirrors-mypy/compare/v0.961...v0.981)
- [github.com/codespell-project/codespell: v2.1.0 → v2.2.1](https://github.com/codespell-project/codespell/compare/v2.1.0...v2.2.1)
<!--pre-commit.ci end--> |