repo_name
stringclasses 1
value | pr_number
int64 4.12k
11.2k
| pr_title
stringlengths 9
107
| pr_description
stringlengths 107
5.48k
| author
stringlengths 4
18
| date_created
unknown | date_merged
unknown | previous_commit
stringlengths 40
40
| pr_commit
stringlengths 40
40
| query
stringlengths 118
5.52k
| before_content
stringlengths 0
7.93M
| after_content
stringlengths 0
7.93M
| label
int64 -1
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from typing import Callable, Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class DoubleLinkedListNode(Generic[T, U]):
"""
Double Linked List Node built specifically for LRU Cache
>>> DoubleLinkedListNode(1,1)
Node: key: 1, val: 1, has next: False, has prev: False
"""
def __init__(self, key: T | None, val: U | None):
self.key = key
self.val = val
self.next: DoubleLinkedListNode[T, U] | None = None
self.prev: DoubleLinkedListNode[T, U] | None = None
def __repr__(self) -> str:
return "Node: key: {}, val: {}, has next: {}, has prev: {}".format(
self.key, self.val, self.next is not None, self.prev is not None
)
class DoubleLinkedList(Generic[T, U]):
"""
Double Linked List built specifically for LRU Cache
>>> dll: DoubleLinkedList = DoubleLinkedList()
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: None, val: None, has next: False, has prev: True
>>> first_node = DoubleLinkedListNode(1,10)
>>> first_node
Node: key: 1, val: 10, has next: False, has prev: False
>>> dll.add(first_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # node is mutated
>>> first_node
Node: key: 1, val: 10, has next: True, has prev: True
>>> second_node = DoubleLinkedListNode(2,20)
>>> second_node
Node: key: 2, val: 20, has next: False, has prev: False
>>> dll.add(second_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> removed_node = dll.remove(first_node)
>>> assert removed_node == first_node
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # Attempt to remove node not on list
>>> removed_node = dll.remove(first_node)
>>> removed_node is None
True
>>> # Attempt to remove head or rear
>>> dll.head
Node: key: None, val: None, has next: True, has prev: False
>>> dll.remove(dll.head) is None
True
>>> # Attempt to remove head or rear
>>> dll.rear
Node: key: None, val: None, has next: False, has prev: True
>>> dll.remove(dll.rear) is None
True
"""
def __init__(self) -> None:
self.head: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.rear: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def __repr__(self) -> str:
rep = ["DoubleLinkedList"]
node = self.head
while node.next is not None:
rep.append(str(node))
node = node.next
rep.append(str(self.rear))
return ",\n ".join(rep)
def add(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Adds the given node to the end of the list (before rear)
"""
previous = self.rear.prev
# All nodes other than self.head are guaranteed to have non-None previous
assert previous is not None
previous.next = node
node.prev = previous
self.rear.prev = node
node.next = self.rear
def remove(
self, node: DoubleLinkedListNode[T, U]
) -> DoubleLinkedListNode[T, U] | None:
"""
Removes and returns the given node from the list
Returns None if node.prev or node.next is None
"""
if node.prev is None or node.next is None:
return None
node.prev.next = node.next
node.next.prev = node.prev
node.prev = None
node.next = None
return node
class LRUCache(Generic[T, U]):
"""
LRU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LRUCache(2)
>>> cache.set(1, 1)
>>> cache.set(2, 2)
>>> cache.get(1)
1
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 2, has next: True, has prev: True,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
2: Node: key: 2, val: 2, has next: True, has prev: True}
>>> cache.set(3, 3)
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: 3, val: 3, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
3: Node: key: 3, val: 3, has next: True, has prev: True}
>>> cache.get(2) is None
True
>>> cache.set(4, 4)
>>> cache.get(1) is None
True
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current size=2)
>>> @LRUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 100):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=194, misses=99, capacity=100, current size=99)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map: dict[Callable[[T], U], LRUCache[T, U]] = {}
def __init__(self, capacity: int):
self.list: DoubleLinkedList[T, U] = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache: dict[T, DoubleLinkedListNode[T, U]] = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current size={self.num_keys})"
)
def __contains__(self, key: T) -> bool:
"""
>>> cache = LRUCache(1)
>>> 1 in cache
False
>>> cache.set(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: T) -> U | None:
"""
Returns the value for the input key and updates the Double Linked List.
Returns None if key is not present in cache
"""
# Note: pythonic interface would throw KeyError rather than return None
if key in self.cache:
self.hits += 1
value_node: DoubleLinkedListNode[T, U] = self.cache[key]
node = self.list.remove(self.cache[key])
assert node == value_node
# node is guaranteed not None because it is in self.cache
assert node is not None
self.list.add(node)
return node.val
self.miss += 1
return None
def set(self, key: T, value: U) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
# delete first node (oldest) when over capacity
first_node = self.list.head.next
# guaranteed to have a non-None first node when num_keys > 0
# explain to type checker via assertions
assert first_node is not None
assert first_node.key is not None
assert (
self.list.remove(first_node) is not None
) # node guaranteed to be in list assert node.key is not None
del self.cache[first_node.key]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
# bump node to the end of the list, update value
node = self.list.remove(self.cache[key])
assert node is not None # node guaranteed to be in list
node.val = value
self.list.add(node)
@classmethod
def decorator(
cls, size: int = 128
) -> Callable[[Callable[[T], U]], Callable[..., U]]:
"""
Decorator version of LRU Cache
Decorated function must be function of T -> U
"""
def cache_decorator_inner(func: Callable[[T], U]) -> Callable[..., U]:
def cache_decorator_wrapper(*args: T) -> U:
if func not in cls.decorator_function_to_instance_map:
cls.decorator_function_to_instance_map[func] = LRUCache(size)
result = cls.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args)
cls.decorator_function_to_instance_map[func].set(args[0], result)
return result
def cache_info() -> LRUCache[T, U]:
return cls.decorator_function_to_instance_map[func]
setattr(cache_decorator_wrapper, "cache_info", cache_info)
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
from typing import Callable, Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class DoubleLinkedListNode(Generic[T, U]):
"""
Double Linked List Node built specifically for LRU Cache
>>> DoubleLinkedListNode(1,1)
Node: key: 1, val: 1, has next: False, has prev: False
"""
def __init__(self, key: T | None, val: U | None):
self.key = key
self.val = val
self.next: DoubleLinkedListNode[T, U] | None = None
self.prev: DoubleLinkedListNode[T, U] | None = None
def __repr__(self) -> str:
return "Node: key: {}, val: {}, has next: {}, has prev: {}".format(
self.key, self.val, self.next is not None, self.prev is not None
)
class DoubleLinkedList(Generic[T, U]):
"""
Double Linked List built specifically for LRU Cache
>>> dll: DoubleLinkedList = DoubleLinkedList()
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: None, val: None, has next: False, has prev: True
>>> first_node = DoubleLinkedListNode(1,10)
>>> first_node
Node: key: 1, val: 10, has next: False, has prev: False
>>> dll.add(first_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # node is mutated
>>> first_node
Node: key: 1, val: 10, has next: True, has prev: True
>>> second_node = DoubleLinkedListNode(2,20)
>>> second_node
Node: key: 2, val: 20, has next: False, has prev: False
>>> dll.add(second_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> removed_node = dll.remove(first_node)
>>> assert removed_node == first_node
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # Attempt to remove node not on list
>>> removed_node = dll.remove(first_node)
>>> removed_node is None
True
>>> # Attempt to remove head or rear
>>> dll.head
Node: key: None, val: None, has next: True, has prev: False
>>> dll.remove(dll.head) is None
True
>>> # Attempt to remove head or rear
>>> dll.rear
Node: key: None, val: None, has next: False, has prev: True
>>> dll.remove(dll.rear) is None
True
"""
def __init__(self) -> None:
self.head: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.rear: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def __repr__(self) -> str:
rep = ["DoubleLinkedList"]
node = self.head
while node.next is not None:
rep.append(str(node))
node = node.next
rep.append(str(self.rear))
return ",\n ".join(rep)
def add(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Adds the given node to the end of the list (before rear)
"""
previous = self.rear.prev
# All nodes other than self.head are guaranteed to have non-None previous
assert previous is not None
previous.next = node
node.prev = previous
self.rear.prev = node
node.next = self.rear
def remove(
self, node: DoubleLinkedListNode[T, U]
) -> DoubleLinkedListNode[T, U] | None:
"""
Removes and returns the given node from the list
Returns None if node.prev or node.next is None
"""
if node.prev is None or node.next is None:
return None
node.prev.next = node.next
node.next.prev = node.prev
node.prev = None
node.next = None
return node
class LRUCache(Generic[T, U]):
"""
LRU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LRUCache(2)
>>> cache.set(1, 1)
>>> cache.set(2, 2)
>>> cache.get(1)
1
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 2, has next: True, has prev: True,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
2: Node: key: 2, val: 2, has next: True, has prev: True}
>>> cache.set(3, 3)
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: 3, val: 3, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
3: Node: key: 3, val: 3, has next: True, has prev: True}
>>> cache.get(2) is None
True
>>> cache.set(4, 4)
>>> cache.get(1) is None
True
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current size=2)
>>> @LRUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 100):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=194, misses=99, capacity=100, current size=99)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map: dict[Callable[[T], U], LRUCache[T, U]] = {}
def __init__(self, capacity: int):
self.list: DoubleLinkedList[T, U] = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache: dict[T, DoubleLinkedListNode[T, U]] = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current size={self.num_keys})"
)
def __contains__(self, key: T) -> bool:
"""
>>> cache = LRUCache(1)
>>> 1 in cache
False
>>> cache.set(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: T) -> U | None:
"""
Returns the value for the input key and updates the Double Linked List.
Returns None if key is not present in cache
"""
# Note: pythonic interface would throw KeyError rather than return None
if key in self.cache:
self.hits += 1
value_node: DoubleLinkedListNode[T, U] = self.cache[key]
node = self.list.remove(self.cache[key])
assert node == value_node
# node is guaranteed not None because it is in self.cache
assert node is not None
self.list.add(node)
return node.val
self.miss += 1
return None
def set(self, key: T, value: U) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
# delete first node (oldest) when over capacity
first_node = self.list.head.next
# guaranteed to have a non-None first node when num_keys > 0
# explain to type checker via assertions
assert first_node is not None
assert first_node.key is not None
assert (
self.list.remove(first_node) is not None
) # node guaranteed to be in list assert node.key is not None
del self.cache[first_node.key]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
# bump node to the end of the list, update value
node = self.list.remove(self.cache[key])
assert node is not None # node guaranteed to be in list
node.val = value
self.list.add(node)
@classmethod
def decorator(
cls, size: int = 128
) -> Callable[[Callable[[T], U]], Callable[..., U]]:
"""
Decorator version of LRU Cache
Decorated function must be function of T -> U
"""
def cache_decorator_inner(func: Callable[[T], U]) -> Callable[..., U]:
def cache_decorator_wrapper(*args: T) -> U:
if func not in cls.decorator_function_to_instance_map:
cls.decorator_function_to_instance_map[func] = LRUCache(size)
result = cls.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args)
cls.decorator_function_to_instance_map[func].set(args[0], result)
return result
def cache_info() -> LRUCache[T, U]:
return cls.decorator_function_to_instance_map[func]
setattr(cache_decorator_wrapper, "cache_info", cache_info)
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
"""
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
>>> solution(3.4)
6
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
"""
try:
n = int(n)
except (TypeError, ValueError):
raise TypeError("Parameter n must be int or castable to int.")
if n <= 0:
raise ValueError("Parameter n must be greater than or equal to one.")
i = 0
while 1:
i += n * (n - 1)
nfound = 0
for j in range(2, n):
if i % j != 0:
nfound = 1
break
if nfound == 0:
if i == 0:
i = 1
return i
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
"""
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
>>> solution(3.4)
6
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
"""
try:
n = int(n)
except (TypeError, ValueError):
raise TypeError("Parameter n must be int or castable to int.")
if n <= 0:
raise ValueError("Parameter n must be greater than or equal to one.")
i = 0
while 1:
i += n * (n - 1)
nfound = 0
for j in range(2, n):
if i % j != 0:
nfound = 1
break
if nfound == 0:
if i == 0:
i = 1
return i
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Factorial of a number using memoization
from functools import lru_cache
@lru_cache
def factorial(num: int) -> int:
"""
>>> factorial(7)
5040
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: Number should not be negative.
>>> [factorial(i) for i in range(10)]
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
"""
if num < 0:
raise ValueError("Number should not be negative.")
return 1 if num in (0, 1) else num * factorial(num - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Factorial of a number using memoization
from functools import lru_cache
@lru_cache
def factorial(num: int) -> int:
"""
>>> factorial(7)
5040
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: Number should not be negative.
>>> [factorial(i) for i in range(10)]
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
"""
if num < 0:
raise ValueError("Number should not be negative.")
return 1 if num in (0, 1) else num * factorial(num - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| [core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = https://github.com/TheAlgorithms/Python.git
fetch = +refs/heads/*:refs/remotes/origin/*
gh-resolved = base
[branch "master"]
remote = origin
merge = refs/heads/master
| [core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = https://github.com/TheAlgorithms/Python.git
fetch = +refs/heads/*:refs/remotes/origin/*
gh-resolved = base
[branch "master"]
remote = origin
merge = refs/heads/master
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Ford-Fulkerson Algorithm for Maximum Flow Problem
"""
Description:
(1) Start with initial flow as 0;
(2) Choose augmenting path from source to sink and add path to flow;
"""
def BFS(graph, s, t, parent):
# Return True if there is node that has not iterated.
visited = [False] * len(graph)
queue = []
queue.append(s)
visited[s] = True
while queue:
u = queue.pop(0)
for ind in range(len(graph[u])):
if visited[ind] is False and graph[u][ind] > 0:
queue.append(ind)
visited[ind] = True
parent[ind] = u
return True if visited[t] else False
def FordFulkerson(graph, source, sink):
# This array is filled by BFS and to store path
parent = [-1] * (len(graph))
max_flow = 0
while BFS(graph, source, sink, parent):
path_flow = float("Inf")
s = sink
while s != source:
# Find the minimum value in select path
path_flow = min(path_flow, graph[parent[s]][s])
s = parent[s]
max_flow += path_flow
v = sink
while v != source:
u = parent[v]
graph[u][v] -= path_flow
graph[v][u] += path_flow
v = parent[v]
return max_flow
graph = [
[0, 16, 13, 0, 0, 0],
[0, 0, 10, 12, 0, 0],
[0, 4, 0, 0, 14, 0],
[0, 0, 9, 0, 0, 20],
[0, 0, 0, 7, 0, 4],
[0, 0, 0, 0, 0, 0],
]
source, sink = 0, 5
print(FordFulkerson(graph, source, sink))
| # Ford-Fulkerson Algorithm for Maximum Flow Problem
"""
Description:
(1) Start with initial flow as 0;
(2) Choose augmenting path from source to sink and add path to flow;
"""
def BFS(graph, s, t, parent):
# Return True if there is node that has not iterated.
visited = [False] * len(graph)
queue = []
queue.append(s)
visited[s] = True
while queue:
u = queue.pop(0)
for ind in range(len(graph[u])):
if visited[ind] is False and graph[u][ind] > 0:
queue.append(ind)
visited[ind] = True
parent[ind] = u
return True if visited[t] else False
def FordFulkerson(graph, source, sink):
# This array is filled by BFS and to store path
parent = [-1] * (len(graph))
max_flow = 0
while BFS(graph, source, sink, parent):
path_flow = float("Inf")
s = sink
while s != source:
# Find the minimum value in select path
path_flow = min(path_flow, graph[parent[s]][s])
s = parent[s]
max_flow += path_flow
v = sink
while v != source:
u = parent[v]
graph[u][v] -= path_flow
graph[v][u] += path_flow
v = parent[v]
return max_flow
graph = [
[0, 16, 13, 0, 0, 0],
[0, 0, 10, 12, 0, 0],
[0, 4, 0, 0, 14, 0],
[0, 0, 9, 0, 0, 20],
[0, 0, 0, 7, 0, 4],
[0, 0, 0, 0, 0, 0],
]
source, sink = 0, 5
print(FordFulkerson(graph, source, sink))
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/python
"""
The Fisher–Yates shuffle is an algorithm for generating a random permutation of a
finite sequence.
For more details visit
wikipedia/Fischer-Yates-Shuffle.
"""
import random
from typing import Any
def fisher_yates_shuffle(data: list) -> list[Any]:
for _ in range(len(data)):
a = random.randint(0, len(data) - 1)
b = random.randint(0, len(data) - 1)
data[a], data[b] = data[b], data[a]
return data
if __name__ == "__main__":
integers = [0, 1, 2, 3, 4, 5, 6, 7]
strings = ["python", "says", "hello", "!"]
print("Fisher-Yates Shuffle:")
print("List", integers, strings)
print("FY Shuffle", fisher_yates_shuffle(integers), fisher_yates_shuffle(strings))
| #!/usr/bin/python
"""
The Fisher–Yates shuffle is an algorithm for generating a random permutation of a
finite sequence.
For more details visit
wikipedia/Fischer-Yates-Shuffle.
"""
import random
from typing import Any
def fisher_yates_shuffle(data: list) -> list[Any]:
for _ in range(len(data)):
a = random.randint(0, len(data) - 1)
b = random.randint(0, len(data) - 1)
data[a], data[b] = data[b], data[a]
return data
if __name__ == "__main__":
integers = [0, 1, 2, 3, 4, 5, 6, 7]
strings = ["python", "says", "hello", "!"]
print("Fisher-Yates Shuffle:")
print("List", integers, strings)
print("FY Shuffle", fisher_yates_shuffle(integers), fisher_yates_shuffle(strings))
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Implemented an algorithm using opencv to tone an image with sepia technique
"""
from cv2 import destroyAllWindows, imread, imshow, waitKey
def make_sepia(img, factor: int):
"""
Function create sepia tone.
Source: https://en.wikipedia.org/wiki/Sepia_(color)
"""
pixel_h, pixel_v = img.shape[0], img.shape[1]
def to_grayscale(blue, green, red):
"""
Helper function to create pixel's greyscale representation
Src: https://pl.wikipedia.org/wiki/YUV
"""
return 0.2126 * red + 0.587 * green + 0.114 * blue
def normalize(value):
"""Helper function to normalize R/G/B value -> return 255 if value > 255"""
return min(value, 255)
for i in range(pixel_h):
for j in range(pixel_v):
greyscale = int(to_grayscale(*img[i][j]))
img[i][j] = [
normalize(greyscale),
normalize(greyscale + factor),
normalize(greyscale + 2 * factor),
]
return img
if __name__ == "__main__":
# read original image
images = {
percentage: imread("image_data/lena.jpg", 1) for percentage in (10, 20, 30, 40)
}
for percentage, img in images.items():
make_sepia(img, percentage)
for percentage, img in images.items():
imshow(f"Original image with sepia (factor: {percentage})", img)
waitKey(0)
destroyAllWindows()
| """
Implemented an algorithm using opencv to tone an image with sepia technique
"""
from cv2 import destroyAllWindows, imread, imshow, waitKey
def make_sepia(img, factor: int):
"""
Function create sepia tone.
Source: https://en.wikipedia.org/wiki/Sepia_(color)
"""
pixel_h, pixel_v = img.shape[0], img.shape[1]
def to_grayscale(blue, green, red):
"""
Helper function to create pixel's greyscale representation
Src: https://pl.wikipedia.org/wiki/YUV
"""
return 0.2126 * red + 0.587 * green + 0.114 * blue
def normalize(value):
"""Helper function to normalize R/G/B value -> return 255 if value > 255"""
return min(value, 255)
for i in range(pixel_h):
for j in range(pixel_v):
greyscale = int(to_grayscale(*img[i][j]))
img[i][j] = [
normalize(greyscale),
normalize(greyscale + factor),
normalize(greyscale + 2 * factor),
]
return img
if __name__ == "__main__":
# read original image
images = {
percentage: imread("image_data/lena.jpg", 1) for percentage in (10, 20, 30, 40)
}
for percentage, img in images.items():
make_sepia(img, percentage)
for percentage, img in images.items():
imshow(f"Original image with sepia (factor: {percentage})", img)
waitKey(0)
destroyAllWindows()
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
README, Author - Jigyasa Gandhi(mailto:[email protected])
Requirements:
- scikit-fuzzy
- numpy
- matplotlib
Python:
- 3.5
"""
import numpy as np
try:
import skfuzzy as fuzz
except ImportError:
fuzz = None
if __name__ == "__main__":
# Create universe of discourse in Python using linspace ()
X = np.linspace(start=0, stop=75, num=75, endpoint=True, retstep=False)
# Create two fuzzy sets by defining any membership function
# (trapmf(), gbellmf(), gaussmf(), etc).
abc1 = [0, 25, 50]
abc2 = [25, 50, 75]
young = fuzz.membership.trimf(X, abc1)
middle_aged = fuzz.membership.trimf(X, abc2)
# Compute the different operations using inbuilt functions.
one = np.ones(75)
zero = np.zeros((75,))
# 1. Union = max(µA(x), µB(x))
union = fuzz.fuzzy_or(X, young, X, middle_aged)[1]
# 2. Intersection = min(µA(x), µB(x))
intersection = fuzz.fuzzy_and(X, young, X, middle_aged)[1]
# 3. Complement (A) = (1- min(µA(x))
complement_a = fuzz.fuzzy_not(young)
# 4. Difference (A/B) = min(µA(x),(1- µB(x)))
difference = fuzz.fuzzy_and(X, young, X, fuzz.fuzzy_not(middle_aged)[1])[1]
# 5. Algebraic Sum = [µA(x) + µB(x) – (µA(x) * µB(x))]
alg_sum = young + middle_aged - (young * middle_aged)
# 6. Algebraic Product = (µA(x) * µB(x))
alg_product = young * middle_aged
# 7. Bounded Sum = min[1,(µA(x), µB(x))]
bdd_sum = fuzz.fuzzy_and(X, one, X, young + middle_aged)[1]
# 8. Bounded difference = min[0,(µA(x), µB(x))]
bdd_difference = fuzz.fuzzy_or(X, zero, X, young - middle_aged)[1]
# max-min composition
# max-product composition
# Plot each set A, set B and each operation result using plot() and subplot().
from matplotlib import pyplot as plt
plt.figure()
plt.subplot(4, 3, 1)
plt.plot(X, young)
plt.title("Young")
plt.grid(True)
plt.subplot(4, 3, 2)
plt.plot(X, middle_aged)
plt.title("Middle aged")
plt.grid(True)
plt.subplot(4, 3, 3)
plt.plot(X, union)
plt.title("union")
plt.grid(True)
plt.subplot(4, 3, 4)
plt.plot(X, intersection)
plt.title("intersection")
plt.grid(True)
plt.subplot(4, 3, 5)
plt.plot(X, complement_a)
plt.title("complement_a")
plt.grid(True)
plt.subplot(4, 3, 6)
plt.plot(X, difference)
plt.title("difference a/b")
plt.grid(True)
plt.subplot(4, 3, 7)
plt.plot(X, alg_sum)
plt.title("alg_sum")
plt.grid(True)
plt.subplot(4, 3, 8)
plt.plot(X, alg_product)
plt.title("alg_product")
plt.grid(True)
plt.subplot(4, 3, 9)
plt.plot(X, bdd_sum)
plt.title("bdd_sum")
plt.grid(True)
plt.subplot(4, 3, 10)
plt.plot(X, bdd_difference)
plt.title("bdd_difference")
plt.grid(True)
plt.subplots_adjust(hspace=0.5)
plt.show()
| """
README, Author - Jigyasa Gandhi(mailto:[email protected])
Requirements:
- scikit-fuzzy
- numpy
- matplotlib
Python:
- 3.5
"""
import numpy as np
try:
import skfuzzy as fuzz
except ImportError:
fuzz = None
if __name__ == "__main__":
# Create universe of discourse in Python using linspace ()
X = np.linspace(start=0, stop=75, num=75, endpoint=True, retstep=False)
# Create two fuzzy sets by defining any membership function
# (trapmf(), gbellmf(), gaussmf(), etc).
abc1 = [0, 25, 50]
abc2 = [25, 50, 75]
young = fuzz.membership.trimf(X, abc1)
middle_aged = fuzz.membership.trimf(X, abc2)
# Compute the different operations using inbuilt functions.
one = np.ones(75)
zero = np.zeros((75,))
# 1. Union = max(µA(x), µB(x))
union = fuzz.fuzzy_or(X, young, X, middle_aged)[1]
# 2. Intersection = min(µA(x), µB(x))
intersection = fuzz.fuzzy_and(X, young, X, middle_aged)[1]
# 3. Complement (A) = (1- min(µA(x))
complement_a = fuzz.fuzzy_not(young)
# 4. Difference (A/B) = min(µA(x),(1- µB(x)))
difference = fuzz.fuzzy_and(X, young, X, fuzz.fuzzy_not(middle_aged)[1])[1]
# 5. Algebraic Sum = [µA(x) + µB(x) – (µA(x) * µB(x))]
alg_sum = young + middle_aged - (young * middle_aged)
# 6. Algebraic Product = (µA(x) * µB(x))
alg_product = young * middle_aged
# 7. Bounded Sum = min[1,(µA(x), µB(x))]
bdd_sum = fuzz.fuzzy_and(X, one, X, young + middle_aged)[1]
# 8. Bounded difference = min[0,(µA(x), µB(x))]
bdd_difference = fuzz.fuzzy_or(X, zero, X, young - middle_aged)[1]
# max-min composition
# max-product composition
# Plot each set A, set B and each operation result using plot() and subplot().
from matplotlib import pyplot as plt
plt.figure()
plt.subplot(4, 3, 1)
plt.plot(X, young)
plt.title("Young")
plt.grid(True)
plt.subplot(4, 3, 2)
plt.plot(X, middle_aged)
plt.title("Middle aged")
plt.grid(True)
plt.subplot(4, 3, 3)
plt.plot(X, union)
plt.title("union")
plt.grid(True)
plt.subplot(4, 3, 4)
plt.plot(X, intersection)
plt.title("intersection")
plt.grid(True)
plt.subplot(4, 3, 5)
plt.plot(X, complement_a)
plt.title("complement_a")
plt.grid(True)
plt.subplot(4, 3, 6)
plt.plot(X, difference)
plt.title("difference a/b")
plt.grid(True)
plt.subplot(4, 3, 7)
plt.plot(X, alg_sum)
plt.title("alg_sum")
plt.grid(True)
plt.subplot(4, 3, 8)
plt.plot(X, alg_product)
plt.title("alg_product")
plt.grid(True)
plt.subplot(4, 3, 9)
plt.plot(X, bdd_sum)
plt.title("bdd_sum")
plt.grid(True)
plt.subplot(4, 3, 10)
plt.plot(X, bdd_difference)
plt.title("bdd_difference")
plt.grid(True)
plt.subplots_adjust(hspace=0.5)
plt.show()
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem:
Comparing two numbers written in index form like 2'11 and 3'7 is not difficult, as any
calculator would confirm that 2^11 = 2048 < 3^7 = 2187.
However, confirming that 632382^518061 > 519432^525806 would be much more difficult, as
both numbers contain over three million digits.
Using base_exp.txt, a 22K text file containing one thousand lines with a base/exponent
pair on each line, determine which line number has the greatest numerical value.
NOTE: The first two lines in the file represent the numbers in the example given above.
"""
import os
from math import log10
def solution(data_file: str = "base_exp.txt") -> int:
"""
>>> solution()
709
"""
largest: float = 0
result = 0
for i, line in enumerate(open(os.path.join(os.path.dirname(__file__), data_file))):
a, x = list(map(int, line.split(",")))
if x * log10(a) > largest:
largest = x * log10(a)
result = i + 1
return result
if __name__ == "__main__":
print(solution())
| """
Problem:
Comparing two numbers written in index form like 2'11 and 3'7 is not difficult, as any
calculator would confirm that 2^11 = 2048 < 3^7 = 2187.
However, confirming that 632382^518061 > 519432^525806 would be much more difficult, as
both numbers contain over three million digits.
Using base_exp.txt, a 22K text file containing one thousand lines with a base/exponent
pair on each line, determine which line number has the greatest numerical value.
NOTE: The first two lines in the file represent the numbers in the example given above.
"""
import os
from math import log10
def solution(data_file: str = "base_exp.txt") -> int:
"""
>>> solution()
709
"""
largest: float = 0
result = 0
for i, line in enumerate(open(os.path.join(os.path.dirname(__file__), data_file))):
a, x = list(map(int, line.split(",")))
if x * log10(a) > largest:
largest = x * log10(a)
result = i + 1
return result
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 64: https://projecteuler.net/problem=64
All square roots are periodic when written as continued fractions.
For example, let us consider sqrt(23).
It can be seen that the sequence is repeating.
For conciseness, we use the notation sqrt(23)=[4;(1,3,1,8)],
to indicate that the block (1,3,1,8) repeats indefinitely.
Exactly four continued fractions, for N<=13, have an odd period.
How many continued fractions for N<=10000 have an odd period?
References:
- https://en.wikipedia.org/wiki/Continued_fraction
"""
from math import floor, sqrt
def continuous_fraction_period(n: int) -> int:
"""
Returns the continued fraction period of a number n.
>>> continuous_fraction_period(2)
1
>>> continuous_fraction_period(5)
1
>>> continuous_fraction_period(7)
4
>>> continuous_fraction_period(11)
2
>>> continuous_fraction_period(13)
5
"""
numerator = 0.0
denominator = 1.0
ROOT = int(sqrt(n))
integer_part = ROOT
period = 0
while integer_part != 2 * ROOT:
numerator = denominator * integer_part - numerator
denominator = (n - numerator**2) / denominator
integer_part = int((ROOT + numerator) / denominator)
period += 1
return period
def solution(n: int = 10000) -> int:
"""
Returns the count of numbers <= 10000 with odd periods.
This function calls continuous_fraction_period for numbers which are
not perfect squares.
This is checked in if sr - floor(sr) != 0 statement.
If an odd period is returned by continuous_fraction_period,
count_odd_periods is increased by 1.
>>> solution(2)
1
>>> solution(5)
2
>>> solution(7)
2
>>> solution(11)
3
>>> solution(13)
4
"""
count_odd_periods = 0
for i in range(2, n + 1):
sr = sqrt(i)
if sr - floor(sr) != 0:
if continuous_fraction_period(i) % 2 == 1:
count_odd_periods += 1
return count_odd_periods
if __name__ == "__main__":
print(f"{solution(int(input().strip()))}")
| """
Project Euler Problem 64: https://projecteuler.net/problem=64
All square roots are periodic when written as continued fractions.
For example, let us consider sqrt(23).
It can be seen that the sequence is repeating.
For conciseness, we use the notation sqrt(23)=[4;(1,3,1,8)],
to indicate that the block (1,3,1,8) repeats indefinitely.
Exactly four continued fractions, for N<=13, have an odd period.
How many continued fractions for N<=10000 have an odd period?
References:
- https://en.wikipedia.org/wiki/Continued_fraction
"""
from math import floor, sqrt
def continuous_fraction_period(n: int) -> int:
"""
Returns the continued fraction period of a number n.
>>> continuous_fraction_period(2)
1
>>> continuous_fraction_period(5)
1
>>> continuous_fraction_period(7)
4
>>> continuous_fraction_period(11)
2
>>> continuous_fraction_period(13)
5
"""
numerator = 0.0
denominator = 1.0
ROOT = int(sqrt(n))
integer_part = ROOT
period = 0
while integer_part != 2 * ROOT:
numerator = denominator * integer_part - numerator
denominator = (n - numerator**2) / denominator
integer_part = int((ROOT + numerator) / denominator)
period += 1
return period
def solution(n: int = 10000) -> int:
"""
Returns the count of numbers <= 10000 with odd periods.
This function calls continuous_fraction_period for numbers which are
not perfect squares.
This is checked in if sr - floor(sr) != 0 statement.
If an odd period is returned by continuous_fraction_period,
count_odd_periods is increased by 1.
>>> solution(2)
1
>>> solution(5)
2
>>> solution(7)
2
>>> solution(11)
3
>>> solution(13)
4
"""
count_odd_periods = 0
for i in range(2, n + 1):
sr = sqrt(i)
if sr - floor(sr) != 0:
if continuous_fraction_period(i) % 2 == 1:
count_odd_periods += 1
return count_odd_periods
if __name__ == "__main__":
print(f"{solution(int(input().strip()))}")
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
def n31(a: int) -> tuple[list[int], int]:
"""
Returns the Collatz sequence and its length of any positive integer.
>>> n31(4)
([4, 2, 1], 3)
"""
if not isinstance(a, int):
raise TypeError(f"Must be int, not {type(a).__name__}")
if a < 1:
raise ValueError(f"Given integer must be greater than 1, not {a}")
path = [a]
while a != 1:
if a % 2 == 0:
a = a // 2
else:
a = 3 * a + 1
path += [a]
return path, len(path)
def test_n31():
"""
>>> test_n31()
"""
assert n31(4) == ([4, 2, 1], 3)
assert n31(11) == ([11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1], 15)
assert n31(31) == (
[
31,
94,
47,
142,
71,
214,
107,
322,
161,
484,
242,
121,
364,
182,
91,
274,
137,
412,
206,
103,
310,
155,
466,
233,
700,
350,
175,
526,
263,
790,
395,
1186,
593,
1780,
890,
445,
1336,
668,
334,
167,
502,
251,
754,
377,
1132,
566,
283,
850,
425,
1276,
638,
319,
958,
479,
1438,
719,
2158,
1079,
3238,
1619,
4858,
2429,
7288,
3644,
1822,
911,
2734,
1367,
4102,
2051,
6154,
3077,
9232,
4616,
2308,
1154,
577,
1732,
866,
433,
1300,
650,
325,
976,
488,
244,
122,
61,
184,
92,
46,
23,
70,
35,
106,
53,
160,
80,
40,
20,
10,
5,
16,
8,
4,
2,
1,
],
107,
)
if __name__ == "__main__":
num = 4
path, length = n31(num)
print(f"The Collatz sequence of {num} took {length} steps. \nPath: {path}")
| from __future__ import annotations
def n31(a: int) -> tuple[list[int], int]:
"""
Returns the Collatz sequence and its length of any positive integer.
>>> n31(4)
([4, 2, 1], 3)
"""
if not isinstance(a, int):
raise TypeError(f"Must be int, not {type(a).__name__}")
if a < 1:
raise ValueError(f"Given integer must be greater than 1, not {a}")
path = [a]
while a != 1:
if a % 2 == 0:
a = a // 2
else:
a = 3 * a + 1
path += [a]
return path, len(path)
def test_n31():
"""
>>> test_n31()
"""
assert n31(4) == ([4, 2, 1], 3)
assert n31(11) == ([11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1], 15)
assert n31(31) == (
[
31,
94,
47,
142,
71,
214,
107,
322,
161,
484,
242,
121,
364,
182,
91,
274,
137,
412,
206,
103,
310,
155,
466,
233,
700,
350,
175,
526,
263,
790,
395,
1186,
593,
1780,
890,
445,
1336,
668,
334,
167,
502,
251,
754,
377,
1132,
566,
283,
850,
425,
1276,
638,
319,
958,
479,
1438,
719,
2158,
1079,
3238,
1619,
4858,
2429,
7288,
3644,
1822,
911,
2734,
1367,
4102,
2051,
6154,
3077,
9232,
4616,
2308,
1154,
577,
1732,
866,
433,
1300,
650,
325,
976,
488,
244,
122,
61,
184,
92,
46,
23,
70,
35,
106,
53,
160,
80,
40,
20,
10,
5,
16,
8,
4,
2,
1,
],
107,
)
if __name__ == "__main__":
num = 4
path, length = n31(num)
print(f"The Collatz sequence of {num} took {length} steps. \nPath: {path}")
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
"""
This program calculates the nth Fibonacci number in O(log(n)).
It's possible to calculate F(1_000_000) in less than a second.
"""
from __future__ import annotations
import sys
def fibonacci(n: int) -> int:
"""
return F(n)
>>> [fibonacci(i) for i in range(13)]
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
"""
if n < 0:
raise ValueError("Negative arguments are not supported")
return _fib(n)[0]
# returns (F(n), F(n-1))
def _fib(n: int) -> tuple[int, int]:
if n == 0: # (F(0), F(1))
return (0, 1)
# F(2n) = F(n)[2F(n+1) − F(n)]
# F(2n+1) = F(n+1)^2+F(n)^2
a, b = _fib(n // 2)
c = a * (b * 2 - a)
d = a * a + b * b
return (d, c + d) if n % 2 else (c, d)
if __name__ == "__main__":
n = int(sys.argv[1])
print(f"fibonacci({n}) is {fibonacci(n)}")
| #!/usr/bin/env python3
"""
This program calculates the nth Fibonacci number in O(log(n)).
It's possible to calculate F(1_000_000) in less than a second.
"""
from __future__ import annotations
import sys
def fibonacci(n: int) -> int:
"""
return F(n)
>>> [fibonacci(i) for i in range(13)]
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
"""
if n < 0:
raise ValueError("Negative arguments are not supported")
return _fib(n)[0]
# returns (F(n), F(n-1))
def _fib(n: int) -> tuple[int, int]:
if n == 0: # (F(0), F(1))
return (0, 1)
# F(2n) = F(n)[2F(n+1) − F(n)]
# F(2n+1) = F(n+1)^2+F(n)^2
a, b = _fib(n // 2)
c = a * (b * 2 - a)
d = a * a + b * b
return (d, c + d) if n % 2 else (c, d)
if __name__ == "__main__":
n = int(sys.argv[1])
print(f"fibonacci({n}) is {fibonacci(n)}")
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
You are given a tree(a simple connected graph with no cycles). The tree has N
nodes numbered from 1 to N and is rooted at node 1.
Find the maximum number of edges you can remove from the tree to get a forest
such that each connected component of the forest contains an even number of
nodes.
Constraints
2 <= 2 <= 100
Note: The tree input will be such that it can always be decomposed into
components containing an even number of nodes.
"""
# pylint: disable=invalid-name
from collections import defaultdict
def dfs(start: int) -> int:
"""DFS traversal"""
# pylint: disable=redefined-outer-name
ret = 1
visited[start] = True
for v in tree[start]:
if v not in visited:
ret += dfs(v)
if ret % 2 == 0:
cuts.append(start)
return ret
def even_tree():
"""
2 1
3 1
4 3
5 2
6 1
7 2
8 6
9 8
10 8
On removing edges (1,3) and (1,6), we can get the desired result 2.
"""
dfs(1)
if __name__ == "__main__":
n, m = 10, 9
tree = defaultdict(list)
visited: dict[int, bool] = {}
cuts: list[int] = []
count = 0
edges = [(2, 1), (3, 1), (4, 3), (5, 2), (6, 1), (7, 2), (8, 6), (9, 8), (10, 8)]
for u, v in edges:
tree[u].append(v)
tree[v].append(u)
even_tree()
print(len(cuts) - 1)
| """
You are given a tree(a simple connected graph with no cycles). The tree has N
nodes numbered from 1 to N and is rooted at node 1.
Find the maximum number of edges you can remove from the tree to get a forest
such that each connected component of the forest contains an even number of
nodes.
Constraints
2 <= 2 <= 100
Note: The tree input will be such that it can always be decomposed into
components containing an even number of nodes.
"""
# pylint: disable=invalid-name
from collections import defaultdict
def dfs(start: int) -> int:
"""DFS traversal"""
# pylint: disable=redefined-outer-name
ret = 1
visited[start] = True
for v in tree[start]:
if v not in visited:
ret += dfs(v)
if ret % 2 == 0:
cuts.append(start)
return ret
def even_tree():
"""
2 1
3 1
4 3
5 2
6 1
7 2
8 6
9 8
10 8
On removing edges (1,3) and (1,6), we can get the desired result 2.
"""
dfs(1)
if __name__ == "__main__":
n, m = 10, 9
tree = defaultdict(list)
visited: dict[int, bool] = {}
cuts: list[int] = []
count = 0
edges = [(2, 1), (3, 1), (4, 3), (5, 2), (6, 1), (7, 2), (8, 6), (9, 8), (10, 8)]
for u, v in edges:
tree[u].append(v)
tree[v].append(u)
even_tree()
print(len(cuts) - 1)
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import sys
import webbrowser
import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent
if __name__ == "__main__":
print("Googling.....")
url = "https://www.google.com/search?q=" + " ".join(sys.argv[1:])
res = requests.get(url, headers={"UserAgent": UserAgent().random})
# res.raise_for_status()
with open("project1a.html", "wb") as out_file: # only for knowing the class
for data in res.iter_content(10000):
out_file.write(data)
soup = BeautifulSoup(res.text, "html.parser")
links = list(soup.select(".eZt8xd"))[:5]
print(len(links))
for link in links:
if link.text == "Maps":
webbrowser.open(link.get("href"))
else:
webbrowser.open(f"http://google.com{link.get('href')}")
| import sys
import webbrowser
import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent
if __name__ == "__main__":
print("Googling.....")
url = "https://www.google.com/search?q=" + " ".join(sys.argv[1:])
res = requests.get(url, headers={"UserAgent": UserAgent().random})
# res.raise_for_status()
with open("project1a.html", "wb") as out_file: # only for knowing the class
for data in res.iter_content(10000):
out_file.write(data)
soup = BeautifulSoup(res.text, "html.parser")
links = list(soup.select(".eZt8xd"))[:5]
print(len(links))
for link in links:
if link.text == "Maps":
webbrowser.open(link.get("href"))
else:
webbrowser.open(f"http://google.com{link.get('href')}")
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/perl
use strict;
use warnings;
use IPC::Open2;
# An example hook script to integrate Watchman
# (https://facebook.github.io/watchman/) with git to speed up detecting
# new and modified files.
#
# The hook is passed a version (currently 1) and a time in nanoseconds
# formatted as a string and outputs to stdout all files that have been
# modified since the given time. Paths must be relative to the root of
# the working tree and separated by a single NUL.
#
# To enable this hook, rename this file to "query-watchman" and set
# 'git config core.fsmonitor .git/hooks/query-watchman'
#
my ($version, $time) = @ARGV;
# Check the hook interface version
if ($version == 1) {
# convert nanoseconds to seconds
# subtract one second to make sure watchman will return all changes
$time = int ($time / 1000000000) - 1;
} else {
die "Unsupported query-fsmonitor hook version '$version'.\n" .
"Falling back to scanning...\n";
}
my $git_work_tree;
if ($^O =~ 'msys' || $^O =~ 'cygwin') {
$git_work_tree = Win32::GetCwd();
$git_work_tree =~ tr/\\/\//;
} else {
require Cwd;
$git_work_tree = Cwd::cwd();
}
my $retry = 1;
launch_watchman();
sub launch_watchman {
my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'watchman -j --no-pretty')
or die "open2() failed: $!\n" .
"Falling back to scanning...\n";
# In the query expression below we're asking for names of files that
# changed since $time but were not transient (ie created after
# $time but no longer exist).
#
# To accomplish this, we're using the "since" generator to use the
# recency index to select candidate nodes and "fields" to limit the
# output to file names only.
my $query = <<" END";
["query", "$git_work_tree", {
"since": $time,
"fields": ["name"]
}]
END
print CHLD_IN $query;
close CHLD_IN;
my $response = do {local $/; <CHLD_OUT>};
die "Watchman: command returned no output.\n" .
"Falling back to scanning...\n" if $response eq "";
die "Watchman: command returned invalid output: $response\n" .
"Falling back to scanning...\n" unless $response =~ /^\{/;
my $json_pkg;
eval {
require JSON::XS;
$json_pkg = "JSON::XS";
1;
} or do {
require JSON::PP;
$json_pkg = "JSON::PP";
};
my $o = $json_pkg->new->utf8->decode($response);
if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
print STDERR "Adding '$git_work_tree' to watchman's watch list.\n";
$retry--;
qx/watchman watch "$git_work_tree"/;
die "Failed to make watchman watch '$git_work_tree'.\n" .
"Falling back to scanning...\n" if $? != 0;
# Watchman will always return all files on the first query so
# return the fast "everything is dirty" flag to git and do the
# Watchman query just to get it over with now so we won't pay
# the cost in git to look up each individual file.
print "/\0";
eval { launch_watchman() };
exit 0;
}
die "Watchman: $o->{error}.\n" .
"Falling back to scanning...\n" if $o->{error};
binmode STDOUT, ":utf8";
local $, = "\0";
print @{$o->{files}};
}
| #!/usr/bin/perl
use strict;
use warnings;
use IPC::Open2;
# An example hook script to integrate Watchman
# (https://facebook.github.io/watchman/) with git to speed up detecting
# new and modified files.
#
# The hook is passed a version (currently 1) and a time in nanoseconds
# formatted as a string and outputs to stdout all files that have been
# modified since the given time. Paths must be relative to the root of
# the working tree and separated by a single NUL.
#
# To enable this hook, rename this file to "query-watchman" and set
# 'git config core.fsmonitor .git/hooks/query-watchman'
#
my ($version, $time) = @ARGV;
# Check the hook interface version
if ($version == 1) {
# convert nanoseconds to seconds
# subtract one second to make sure watchman will return all changes
$time = int ($time / 1000000000) - 1;
} else {
die "Unsupported query-fsmonitor hook version '$version'.\n" .
"Falling back to scanning...\n";
}
my $git_work_tree;
if ($^O =~ 'msys' || $^O =~ 'cygwin') {
$git_work_tree = Win32::GetCwd();
$git_work_tree =~ tr/\\/\//;
} else {
require Cwd;
$git_work_tree = Cwd::cwd();
}
my $retry = 1;
launch_watchman();
sub launch_watchman {
my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'watchman -j --no-pretty')
or die "open2() failed: $!\n" .
"Falling back to scanning...\n";
# In the query expression below we're asking for names of files that
# changed since $time but were not transient (ie created after
# $time but no longer exist).
#
# To accomplish this, we're using the "since" generator to use the
# recency index to select candidate nodes and "fields" to limit the
# output to file names only.
my $query = <<" END";
["query", "$git_work_tree", {
"since": $time,
"fields": ["name"]
}]
END
print CHLD_IN $query;
close CHLD_IN;
my $response = do {local $/; <CHLD_OUT>};
die "Watchman: command returned no output.\n" .
"Falling back to scanning...\n" if $response eq "";
die "Watchman: command returned invalid output: $response\n" .
"Falling back to scanning...\n" unless $response =~ /^\{/;
my $json_pkg;
eval {
require JSON::XS;
$json_pkg = "JSON::XS";
1;
} or do {
require JSON::PP;
$json_pkg = "JSON::PP";
};
my $o = $json_pkg->new->utf8->decode($response);
if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
print STDERR "Adding '$git_work_tree' to watchman's watch list.\n";
$retry--;
qx/watchman watch "$git_work_tree"/;
die "Failed to make watchman watch '$git_work_tree'.\n" .
"Falling back to scanning...\n" if $? != 0;
# Watchman will always return all files on the first query so
# return the fast "everything is dirty" flag to git and do the
# Watchman query just to get it over with now so we won't pay
# the cost in git to look up each individual file.
print "/\0";
eval { launch_watchman() };
exit 0;
}
die "Watchman: $o->{error}.\n" .
"Falling back to scanning...\n" if $o->{error};
binmode STDOUT, ":utf8";
local $, = "\0";
print @{$o->{files}};
}
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author: Mohit Radadiya
"""
from string import ascii_uppercase
dict1 = {char: i for i, char in enumerate(ascii_uppercase)}
dict2 = {i: char for i, char in enumerate(ascii_uppercase)}
# This function generates the key in
# a cyclic manner until it's length isn't
# equal to the length of original text
def generate_key(message: str, key: str) -> str:
"""
>>> generate_key("THE GERMAN ATTACK","SECRET")
'SECRETSECRETSECRE'
"""
x = len(message)
i = 0
while True:
if x == i:
i = 0
if len(key) == len(message):
break
key += key[i]
i += 1
return key
# This function returns the encrypted text
# generated with the help of the key
def cipher_text(message: str, key_new: str) -> str:
"""
>>> cipher_text("THE GERMAN ATTACK","SECRETSECRETSECRE")
'BDC PAYUWL JPAIYI'
"""
cipher_text = ""
i = 0
for letter in message:
if letter == " ":
cipher_text += " "
else:
x = (dict1[letter] - dict1[key_new[i]]) % 26
i += 1
cipher_text += dict2[x]
return cipher_text
# This function decrypts the encrypted text
# and returns the original text
def original_text(cipher_text: str, key_new: str) -> str:
"""
>>> original_text("BDC PAYUWL JPAIYI","SECRETSECRETSECRE")
'THE GERMAN ATTACK'
"""
or_txt = ""
i = 0
for letter in cipher_text:
if letter == " ":
or_txt += " "
else:
x = (dict1[letter] + dict1[key_new[i]] + 26) % 26
i += 1
or_txt += dict2[x]
return or_txt
def main() -> None:
message = "THE GERMAN ATTACK"
key = "SECRET"
key_new = generate_key(message, key)
s = cipher_text(message, key_new)
print(f"Encrypted Text = {s}")
print(f"Original Text = {original_text(s, key_new)}")
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| """
Author: Mohit Radadiya
"""
from string import ascii_uppercase
dict1 = {char: i for i, char in enumerate(ascii_uppercase)}
dict2 = {i: char for i, char in enumerate(ascii_uppercase)}
# This function generates the key in
# a cyclic manner until it's length isn't
# equal to the length of original text
def generate_key(message: str, key: str) -> str:
"""
>>> generate_key("THE GERMAN ATTACK","SECRET")
'SECRETSECRETSECRE'
"""
x = len(message)
i = 0
while True:
if x == i:
i = 0
if len(key) == len(message):
break
key += key[i]
i += 1
return key
# This function returns the encrypted text
# generated with the help of the key
def cipher_text(message: str, key_new: str) -> str:
"""
>>> cipher_text("THE GERMAN ATTACK","SECRETSECRETSECRE")
'BDC PAYUWL JPAIYI'
"""
cipher_text = ""
i = 0
for letter in message:
if letter == " ":
cipher_text += " "
else:
x = (dict1[letter] - dict1[key_new[i]]) % 26
i += 1
cipher_text += dict2[x]
return cipher_text
# This function decrypts the encrypted text
# and returns the original text
def original_text(cipher_text: str, key_new: str) -> str:
"""
>>> original_text("BDC PAYUWL JPAIYI","SECRETSECRETSECRE")
'THE GERMAN ATTACK'
"""
or_txt = ""
i = 0
for letter in cipher_text:
if letter == " ":
or_txt += " "
else:
x = (dict1[letter] + dict1[key_new[i]] + 26) % 26
i += 1
or_txt += dict2[x]
return or_txt
def main() -> None:
message = "THE GERMAN ATTACK"
key = "SECRET"
key_new = generate_key(message, key)
s = cipher_text(message, key_new)
print(f"Encrypted Text = {s}")
print(f"Original Text = {original_text(s, key_new)}")
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Implementation of median filter algorithm
"""
from cv2 import COLOR_BGR2GRAY, cvtColor, imread, imshow, waitKey
from numpy import divide, int8, multiply, ravel, sort, zeros_like
def median_filter(gray_img, mask=3):
"""
:param gray_img: gray image
:param mask: mask size
:return: image with median filter
"""
# set image borders
bd = int(mask / 2)
# copy image size
median_img = zeros_like(gray_img)
for i in range(bd, gray_img.shape[0] - bd):
for j in range(bd, gray_img.shape[1] - bd):
# get mask according with mask
kernel = ravel(gray_img[i - bd : i + bd + 1, j - bd : j + bd + 1])
# calculate mask median
median = sort(kernel)[int8(divide((multiply(mask, mask)), 2) + 1)]
median_img[i, j] = median
return median_img
if __name__ == "__main__":
# read original image
img = imread("../image_data/lena.jpg")
# turn image in gray scale value
gray = cvtColor(img, COLOR_BGR2GRAY)
# get values with two different mask size
median3x3 = median_filter(gray, 3)
median5x5 = median_filter(gray, 5)
# show result images
imshow("median filter with 3x3 mask", median3x3)
imshow("median filter with 5x5 mask", median5x5)
waitKey(0)
| """
Implementation of median filter algorithm
"""
from cv2 import COLOR_BGR2GRAY, cvtColor, imread, imshow, waitKey
from numpy import divide, int8, multiply, ravel, sort, zeros_like
def median_filter(gray_img, mask=3):
"""
:param gray_img: gray image
:param mask: mask size
:return: image with median filter
"""
# set image borders
bd = int(mask / 2)
# copy image size
median_img = zeros_like(gray_img)
for i in range(bd, gray_img.shape[0] - bd):
for j in range(bd, gray_img.shape[1] - bd):
# get mask according with mask
kernel = ravel(gray_img[i - bd : i + bd + 1, j - bd : j + bd + 1])
# calculate mask median
median = sort(kernel)[int8(divide((multiply(mask, mask)), 2) + 1)]
median_img[i, j] = median
return median_img
if __name__ == "__main__":
# read original image
img = imread("../image_data/lena.jpg")
# turn image in gray scale value
gray = cvtColor(img, COLOR_BGR2GRAY)
# get values with two different mask size
median3x3 = median_filter(gray, 3)
median5x5 = median_filter(gray, 5)
# show result images
imshow("median filter with 3x3 mask", median3x3)
imshow("median filter with 5x5 mask", median5x5)
waitKey(0)
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Setup for pytest
[pytest]
markers =
mat_ops: mark a test as utilizing matrix operations.
addopts = --durations=10
| # Setup for pytest
[pytest]
markers =
mat_ops: mark a test as utilizing matrix operations.
addopts = --durations=10
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # flake8: noqa
"""
Binomial Heap
Reference: Advanced Data Structures, Peter Brass
"""
class Node:
"""
Node in a doubly-linked binomial tree, containing:
- value
- size of left subtree
- link to left, right and parent nodes
"""
def __init__(self, val):
self.val = val
# Number of nodes in left subtree
self.left_tree_size = 0
self.left = None
self.right = None
self.parent = None
def mergeTrees(self, other):
"""
In-place merge of two binomial trees of equal size.
Returns the root of the resulting tree
"""
assert self.left_tree_size == other.left_tree_size, "Unequal Sizes of Blocks"
if self.val < other.val:
other.left = self.right
other.parent = None
if self.right:
self.right.parent = other
self.right = other
self.left_tree_size = self.left_tree_size * 2 + 1
return self
else:
self.left = other.right
self.parent = None
if other.right:
other.right.parent = self
other.right = self
other.left_tree_size = other.left_tree_size * 2 + 1
return other
class BinomialHeap:
r"""
Min-oriented priority queue implemented with the Binomial Heap data
structure implemented with the BinomialHeap class. It supports:
- Insert element in a heap with n elements: Guaranteed logn, amoratized 1
- Merge (meld) heaps of size m and n: O(logn + logm)
- Delete Min: O(logn)
- Peek (return min without deleting it): O(1)
Example:
Create a random permutation of 30 integers to be inserted and 19 of them deleted
>>> import numpy as np
>>> permutation = np.random.permutation(list(range(30)))
Create a Heap and insert the 30 integers
__init__() test
>>> first_heap = BinomialHeap()
30 inserts - insert() test
>>> for number in permutation:
... first_heap.insert(number)
Size test
>>> print(first_heap.size)
30
Deleting - delete() test
>>> for i in range(25):
... print(first_heap.deleteMin(), end=" ")
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Create a new Heap
>>> second_heap = BinomialHeap()
>>> vals = [17, 20, 31, 34]
>>> for value in vals:
... second_heap.insert(value)
The heap should have the following structure:
17
/ \
# 31
/ \
20 34
/ \ / \
# # # #
preOrder() test
>>> print(second_heap.preOrder())
[(17, 0), ('#', 1), (31, 1), (20, 2), ('#', 3), ('#', 3), (34, 2), ('#', 3), ('#', 3)]
printing Heap - __str__() test
>>> print(second_heap)
17
-#
-31
--20
---#
---#
--34
---#
---#
mergeHeaps() test
>>> merged = second_heap.mergeHeaps(first_heap)
>>> merged.peek()
17
values in merged heap; (merge is inplace)
>>> while not first_heap.isEmpty():
... print(first_heap.deleteMin(), end=" ")
17 20 25 26 27 28 29 31 34
"""
def __init__(self, bottom_root=None, min_node=None, heap_size=0):
self.size = heap_size
self.bottom_root = bottom_root
self.min_node = min_node
def mergeHeaps(self, other):
"""
In-place merge of two binomial heaps.
Both of them become the resulting merged heap
"""
# Empty heaps corner cases
if other.size == 0:
return
if self.size == 0:
self.size = other.size
self.bottom_root = other.bottom_root
self.min_node = other.min_node
return
# Update size
self.size = self.size + other.size
# Update min.node
if self.min_node.val > other.min_node.val:
self.min_node = other.min_node
# Merge
# Order roots by left_subtree_size
combined_roots_list = []
i, j = self.bottom_root, other.bottom_root
while i or j:
if i and ((not j) or i.left_tree_size < j.left_tree_size):
combined_roots_list.append((i, True))
i = i.parent
else:
combined_roots_list.append((j, False))
j = j.parent
# Insert links between them
for i in range(len(combined_roots_list) - 1):
if combined_roots_list[i][1] != combined_roots_list[i + 1][1]:
combined_roots_list[i][0].parent = combined_roots_list[i + 1][0]
combined_roots_list[i + 1][0].left = combined_roots_list[i][0]
# Consecutively merge roots with same left_tree_size
i = combined_roots_list[0][0]
while i.parent:
if (
(i.left_tree_size == i.parent.left_tree_size) and (not i.parent.parent)
) or (
i.left_tree_size == i.parent.left_tree_size
and i.left_tree_size != i.parent.parent.left_tree_size
):
# Neighbouring Nodes
previous_node = i.left
next_node = i.parent.parent
# Merging trees
i = i.mergeTrees(i.parent)
# Updating links
i.left = previous_node
i.parent = next_node
if previous_node:
previous_node.parent = i
if next_node:
next_node.left = i
else:
i = i.parent
# Updating self.bottom_root
while i.left:
i = i.left
self.bottom_root = i
# Update other
other.size = self.size
other.bottom_root = self.bottom_root
other.min_node = self.min_node
# Return the merged heap
return self
def insert(self, val):
"""
insert a value in the heap
"""
if self.size == 0:
self.bottom_root = Node(val)
self.size = 1
self.min_node = self.bottom_root
else:
# Create new node
new_node = Node(val)
# Update size
self.size += 1
# update min_node
if val < self.min_node.val:
self.min_node = new_node
# Put new_node as a bottom_root in heap
self.bottom_root.left = new_node
new_node.parent = self.bottom_root
self.bottom_root = new_node
# Consecutively merge roots with same left_tree_size
while (
self.bottom_root.parent
and self.bottom_root.left_tree_size
== self.bottom_root.parent.left_tree_size
):
# Next node
next_node = self.bottom_root.parent.parent
# Merge
self.bottom_root = self.bottom_root.mergeTrees(self.bottom_root.parent)
# Update Links
self.bottom_root.parent = next_node
self.bottom_root.left = None
if next_node:
next_node.left = self.bottom_root
def peek(self):
"""
return min element without deleting it
"""
return self.min_node.val
def isEmpty(self):
return self.size == 0
def deleteMin(self):
"""
delete min element and return it
"""
# assert not self.isEmpty(), "Empty Heap"
# Save minimal value
min_value = self.min_node.val
# Last element in heap corner case
if self.size == 1:
# Update size
self.size = 0
# Update bottom root
self.bottom_root = None
# Update min_node
self.min_node = None
return min_value
# No right subtree corner case
# The structure of the tree implies that this should be the bottom root
# and there is at least one other root
if self.min_node.right is None:
# Update size
self.size -= 1
# Update bottom root
self.bottom_root = self.bottom_root.parent
self.bottom_root.left = None
# Update min_node
self.min_node = self.bottom_root
i = self.bottom_root.parent
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
return min_value
# General case
# Find the BinomialHeap of the right subtree of min_node
bottom_of_new = self.min_node.right
bottom_of_new.parent = None
min_of_new = bottom_of_new
size_of_new = 1
# Size, min_node and bottom_root
while bottom_of_new.left:
size_of_new = size_of_new * 2 + 1
bottom_of_new = bottom_of_new.left
if bottom_of_new.val < min_of_new.val:
min_of_new = bottom_of_new
# Corner case of single root on top left path
if (not self.min_node.left) and (not self.min_node.parent):
self.size = size_of_new
self.bottom_root = bottom_of_new
self.min_node = min_of_new
# print("Single root, multiple nodes case")
return min_value
# Remaining cases
# Construct heap of right subtree
newHeap = BinomialHeap(
bottom_root=bottom_of_new, min_node=min_of_new, heap_size=size_of_new
)
# Update size
self.size = self.size - 1 - size_of_new
# Neighbour nodes
previous_node = self.min_node.left
next_node = self.min_node.parent
# Initialize new bottom_root and min_node
self.min_node = previous_node or next_node
self.bottom_root = next_node
# Update links of previous_node and search below for new min_node and
# bottom_root
if previous_node:
previous_node.parent = next_node
# Update bottom_root and search for min_node below
self.bottom_root = previous_node
self.min_node = previous_node
while self.bottom_root.left:
self.bottom_root = self.bottom_root.left
if self.bottom_root.val < self.min_node.val:
self.min_node = self.bottom_root
if next_node:
next_node.left = previous_node
# Search for new min_node above min_node
i = next_node
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
# Merge heaps
self.mergeHeaps(newHeap)
return min_value
def preOrder(self):
"""
Returns the Pre-order representation of the heap including
values of nodes plus their level distance from the root;
Empty nodes appear as #
"""
# Find top root
top_root = self.bottom_root
while top_root.parent:
top_root = top_root.parent
# preorder
heap_preOrder = []
self.__traversal(top_root, heap_preOrder)
return heap_preOrder
def __traversal(self, curr_node, preorder, level=0):
"""
Pre-order traversal of nodes
"""
if curr_node:
preorder.append((curr_node.val, level))
self.__traversal(curr_node.left, preorder, level + 1)
self.__traversal(curr_node.right, preorder, level + 1)
else:
preorder.append(("#", level))
def __str__(self):
"""
Overwriting str for a pre-order print of nodes in heap;
Performance is poor, so use only for small examples
"""
if self.isEmpty():
return ""
preorder_heap = self.preOrder()
return "\n".join(("-" * level + str(value)) for value, level in preorder_heap)
# Unit Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| # flake8: noqa
"""
Binomial Heap
Reference: Advanced Data Structures, Peter Brass
"""
class Node:
"""
Node in a doubly-linked binomial tree, containing:
- value
- size of left subtree
- link to left, right and parent nodes
"""
def __init__(self, val):
self.val = val
# Number of nodes in left subtree
self.left_tree_size = 0
self.left = None
self.right = None
self.parent = None
def mergeTrees(self, other):
"""
In-place merge of two binomial trees of equal size.
Returns the root of the resulting tree
"""
assert self.left_tree_size == other.left_tree_size, "Unequal Sizes of Blocks"
if self.val < other.val:
other.left = self.right
other.parent = None
if self.right:
self.right.parent = other
self.right = other
self.left_tree_size = self.left_tree_size * 2 + 1
return self
else:
self.left = other.right
self.parent = None
if other.right:
other.right.parent = self
other.right = self
other.left_tree_size = other.left_tree_size * 2 + 1
return other
class BinomialHeap:
r"""
Min-oriented priority queue implemented with the Binomial Heap data
structure implemented with the BinomialHeap class. It supports:
- Insert element in a heap with n elements: Guaranteed logn, amoratized 1
- Merge (meld) heaps of size m and n: O(logn + logm)
- Delete Min: O(logn)
- Peek (return min without deleting it): O(1)
Example:
Create a random permutation of 30 integers to be inserted and 19 of them deleted
>>> import numpy as np
>>> permutation = np.random.permutation(list(range(30)))
Create a Heap and insert the 30 integers
__init__() test
>>> first_heap = BinomialHeap()
30 inserts - insert() test
>>> for number in permutation:
... first_heap.insert(number)
Size test
>>> print(first_heap.size)
30
Deleting - delete() test
>>> for i in range(25):
... print(first_heap.deleteMin(), end=" ")
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Create a new Heap
>>> second_heap = BinomialHeap()
>>> vals = [17, 20, 31, 34]
>>> for value in vals:
... second_heap.insert(value)
The heap should have the following structure:
17
/ \
# 31
/ \
20 34
/ \ / \
# # # #
preOrder() test
>>> print(second_heap.preOrder())
[(17, 0), ('#', 1), (31, 1), (20, 2), ('#', 3), ('#', 3), (34, 2), ('#', 3), ('#', 3)]
printing Heap - __str__() test
>>> print(second_heap)
17
-#
-31
--20
---#
---#
--34
---#
---#
mergeHeaps() test
>>> merged = second_heap.mergeHeaps(first_heap)
>>> merged.peek()
17
values in merged heap; (merge is inplace)
>>> while not first_heap.isEmpty():
... print(first_heap.deleteMin(), end=" ")
17 20 25 26 27 28 29 31 34
"""
def __init__(self, bottom_root=None, min_node=None, heap_size=0):
self.size = heap_size
self.bottom_root = bottom_root
self.min_node = min_node
def mergeHeaps(self, other):
"""
In-place merge of two binomial heaps.
Both of them become the resulting merged heap
"""
# Empty heaps corner cases
if other.size == 0:
return
if self.size == 0:
self.size = other.size
self.bottom_root = other.bottom_root
self.min_node = other.min_node
return
# Update size
self.size = self.size + other.size
# Update min.node
if self.min_node.val > other.min_node.val:
self.min_node = other.min_node
# Merge
# Order roots by left_subtree_size
combined_roots_list = []
i, j = self.bottom_root, other.bottom_root
while i or j:
if i and ((not j) or i.left_tree_size < j.left_tree_size):
combined_roots_list.append((i, True))
i = i.parent
else:
combined_roots_list.append((j, False))
j = j.parent
# Insert links between them
for i in range(len(combined_roots_list) - 1):
if combined_roots_list[i][1] != combined_roots_list[i + 1][1]:
combined_roots_list[i][0].parent = combined_roots_list[i + 1][0]
combined_roots_list[i + 1][0].left = combined_roots_list[i][0]
# Consecutively merge roots with same left_tree_size
i = combined_roots_list[0][0]
while i.parent:
if (
(i.left_tree_size == i.parent.left_tree_size) and (not i.parent.parent)
) or (
i.left_tree_size == i.parent.left_tree_size
and i.left_tree_size != i.parent.parent.left_tree_size
):
# Neighbouring Nodes
previous_node = i.left
next_node = i.parent.parent
# Merging trees
i = i.mergeTrees(i.parent)
# Updating links
i.left = previous_node
i.parent = next_node
if previous_node:
previous_node.parent = i
if next_node:
next_node.left = i
else:
i = i.parent
# Updating self.bottom_root
while i.left:
i = i.left
self.bottom_root = i
# Update other
other.size = self.size
other.bottom_root = self.bottom_root
other.min_node = self.min_node
# Return the merged heap
return self
def insert(self, val):
"""
insert a value in the heap
"""
if self.size == 0:
self.bottom_root = Node(val)
self.size = 1
self.min_node = self.bottom_root
else:
# Create new node
new_node = Node(val)
# Update size
self.size += 1
# update min_node
if val < self.min_node.val:
self.min_node = new_node
# Put new_node as a bottom_root in heap
self.bottom_root.left = new_node
new_node.parent = self.bottom_root
self.bottom_root = new_node
# Consecutively merge roots with same left_tree_size
while (
self.bottom_root.parent
and self.bottom_root.left_tree_size
== self.bottom_root.parent.left_tree_size
):
# Next node
next_node = self.bottom_root.parent.parent
# Merge
self.bottom_root = self.bottom_root.mergeTrees(self.bottom_root.parent)
# Update Links
self.bottom_root.parent = next_node
self.bottom_root.left = None
if next_node:
next_node.left = self.bottom_root
def peek(self):
"""
return min element without deleting it
"""
return self.min_node.val
def isEmpty(self):
return self.size == 0
def deleteMin(self):
"""
delete min element and return it
"""
# assert not self.isEmpty(), "Empty Heap"
# Save minimal value
min_value = self.min_node.val
# Last element in heap corner case
if self.size == 1:
# Update size
self.size = 0
# Update bottom root
self.bottom_root = None
# Update min_node
self.min_node = None
return min_value
# No right subtree corner case
# The structure of the tree implies that this should be the bottom root
# and there is at least one other root
if self.min_node.right is None:
# Update size
self.size -= 1
# Update bottom root
self.bottom_root = self.bottom_root.parent
self.bottom_root.left = None
# Update min_node
self.min_node = self.bottom_root
i = self.bottom_root.parent
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
return min_value
# General case
# Find the BinomialHeap of the right subtree of min_node
bottom_of_new = self.min_node.right
bottom_of_new.parent = None
min_of_new = bottom_of_new
size_of_new = 1
# Size, min_node and bottom_root
while bottom_of_new.left:
size_of_new = size_of_new * 2 + 1
bottom_of_new = bottom_of_new.left
if bottom_of_new.val < min_of_new.val:
min_of_new = bottom_of_new
# Corner case of single root on top left path
if (not self.min_node.left) and (not self.min_node.parent):
self.size = size_of_new
self.bottom_root = bottom_of_new
self.min_node = min_of_new
# print("Single root, multiple nodes case")
return min_value
# Remaining cases
# Construct heap of right subtree
newHeap = BinomialHeap(
bottom_root=bottom_of_new, min_node=min_of_new, heap_size=size_of_new
)
# Update size
self.size = self.size - 1 - size_of_new
# Neighbour nodes
previous_node = self.min_node.left
next_node = self.min_node.parent
# Initialize new bottom_root and min_node
self.min_node = previous_node or next_node
self.bottom_root = next_node
# Update links of previous_node and search below for new min_node and
# bottom_root
if previous_node:
previous_node.parent = next_node
# Update bottom_root and search for min_node below
self.bottom_root = previous_node
self.min_node = previous_node
while self.bottom_root.left:
self.bottom_root = self.bottom_root.left
if self.bottom_root.val < self.min_node.val:
self.min_node = self.bottom_root
if next_node:
next_node.left = previous_node
# Search for new min_node above min_node
i = next_node
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
# Merge heaps
self.mergeHeaps(newHeap)
return min_value
def preOrder(self):
"""
Returns the Pre-order representation of the heap including
values of nodes plus their level distance from the root;
Empty nodes appear as #
"""
# Find top root
top_root = self.bottom_root
while top_root.parent:
top_root = top_root.parent
# preorder
heap_preOrder = []
self.__traversal(top_root, heap_preOrder)
return heap_preOrder
def __traversal(self, curr_node, preorder, level=0):
"""
Pre-order traversal of nodes
"""
if curr_node:
preorder.append((curr_node.val, level))
self.__traversal(curr_node.left, preorder, level + 1)
self.__traversal(curr_node.right, preorder, level + 1)
else:
preorder.append(("#", level))
def __str__(self):
"""
Overwriting str for a pre-order print of nodes in heap;
Performance is poor, so use only for small examples
"""
if self.isEmpty():
return ""
preorder_heap = self.preOrder()
return "\n".join(("-" * level + str(value)) for value, level in preorder_heap)
# Unit Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,126 | Improve Code | ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Infiniticity | "2022-05-06T10:47:51Z" | "2022-05-13T12:51:45Z" | e95ecfaf27c545391bdb7a2d1d8948943a40f828 | dbee5f072f68c57bce3443e5ed07fe496ba9d76d | Improve Code. ### Describe your change:
This PR improves overall code
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
If we are presented with the first k terms of a sequence it is impossible to say with
certainty the value of the next term, as there are infinitely many polynomial functions
that can model the sequence.
As an example, let us consider the sequence of cube
numbers. This is defined by the generating function,
u(n) = n3: 1, 8, 27, 64, 125, 216, ...
Suppose we were only given the first two terms of this sequence. Working on the
principle that "simple is best" we should assume a linear relationship and predict the
next term to be 15 (common difference 7). Even if we were presented with the first three
terms, by the same principle of simplicity, a quadratic relationship should be
assumed.
We shall define OP(k, n) to be the nth term of the optimum polynomial
generating function for the first k terms of a sequence. It should be clear that
OP(k, n) will accurately generate the terms of the sequence for n ≤ k, and potentially
the first incorrect term (FIT) will be OP(k, k+1); in which case we shall call it a
bad OP (BOP).
As a basis, if we were only given the first term of sequence, it would be most
sensible to assume constancy; that is, for n ≥ 2, OP(1, n) = u(1).
Hence we obtain the
following OPs for the cubic sequence:
OP(1, n) = 1 1, 1, 1, 1, ...
OP(2, n) = 7n-6 1, 8, 15, ...
OP(3, n) = 6n^2-11n+6 1, 8, 27, 58, ...
OP(4, n) = n^3 1, 8, 27, 64, 125, ...
Clearly no BOPs exist for k ≥ 4.
By considering the sum of FITs generated by the BOPs (indicated in red above), we
obtain 1 + 15 + 58 = 74.
Consider the following tenth degree polynomial generating function:
1 - n + n^2 - n^3 + n^4 - n^5 + n^6 - n^7 + n^8 - n^9 + n^10
Find the sum of FITs for the BOPs.
"""
from __future__ import annotations
from typing import Callable, Union
Matrix = list[list[Union[float, int]]]
def solve(matrix: Matrix, vector: Matrix) -> Matrix:
"""
Solve the linear system of equations Ax = b (A = "matrix", b = "vector")
for x using Gaussian elimination and back substitution. We assume that A
is an invertible square matrix and that b is a column vector of the
same height.
>>> solve([[1, 0], [0, 1]], [[1],[2]])
[[1.0], [2.0]]
>>> solve([[2, 1, -1],[-3, -1, 2],[-2, 1, 2]],[[8], [-11],[-3]])
[[2.0], [3.0], [-1.0]]
"""
size: int = len(matrix)
augmented: Matrix = [[0 for _ in range(size + 1)] for _ in range(size)]
row: int
row2: int
col: int
col2: int
pivot_row: int
ratio: float
for row in range(size):
for col in range(size):
augmented[row][col] = matrix[row][col]
augmented[row][size] = vector[row][0]
row = 0
col = 0
while row < size and col < size:
# pivoting
pivot_row = max((abs(augmented[row2][col]), row2) for row2 in range(col, size))[
1
]
if augmented[pivot_row][col] == 0:
col += 1
continue
else:
augmented[row], augmented[pivot_row] = augmented[pivot_row], augmented[row]
for row2 in range(row + 1, size):
ratio = augmented[row2][col] / augmented[row][col]
augmented[row2][col] = 0
for col2 in range(col + 1, size + 1):
augmented[row2][col2] -= augmented[row][col2] * ratio
row += 1
col += 1
# back substitution
for col in range(1, size):
for row in range(col):
ratio = augmented[row][col] / augmented[col][col]
for col2 in range(col, size + 1):
augmented[row][col2] -= augmented[col][col2] * ratio
# round to get rid of numbers like 2.000000000000004
return [
[round(augmented[row][size] / augmented[row][row], 10)] for row in range(size)
]
def interpolate(y_list: list[int]) -> Callable[[int], int]:
"""
Given a list of data points (1,y0),(2,y1), ..., return a function that
interpolates the data points. We find the coefficients of the interpolating
polynomial by solving a system of linear equations corresponding to
x = 1, 2, 3...
>>> interpolate([1])(3)
1
>>> interpolate([1, 8])(3)
15
>>> interpolate([1, 8, 27])(4)
58
>>> interpolate([1, 8, 27, 64])(6)
216
"""
size: int = len(y_list)
matrix: Matrix = [[0 for _ in range(size)] for _ in range(size)]
vector: Matrix = [[0] for _ in range(size)]
coeffs: Matrix
x_val: int
y_val: int
col: int
for x_val, y_val in enumerate(y_list):
for col in range(size):
matrix[x_val][col] = (x_val + 1) ** (size - col - 1)
vector[x_val][0] = y_val
coeffs = solve(matrix, vector)
def interpolated_func(var: int) -> int:
"""
>>> interpolate([1])(3)
1
>>> interpolate([1, 8])(3)
15
>>> interpolate([1, 8, 27])(4)
58
>>> interpolate([1, 8, 27, 64])(6)
216
"""
return sum(
round(coeffs[x_val][0]) * (var ** (size - x_val - 1))
for x_val in range(size)
)
return interpolated_func
def question_function(variable: int) -> int:
"""
The generating function u as specified in the question.
>>> question_function(0)
1
>>> question_function(1)
1
>>> question_function(5)
8138021
>>> question_function(10)
9090909091
"""
return (
1
- variable
+ variable**2
- variable**3
+ variable**4
- variable**5
+ variable**6
- variable**7
+ variable**8
- variable**9
+ variable**10
)
def solution(func: Callable[[int], int] = question_function, order: int = 10) -> int:
"""
Find the sum of the FITs of the BOPS. For each interpolating polynomial of order
1, 2, ... , 10, find the first x such that the value of the polynomial at x does
not equal u(x).
>>> solution(lambda n: n ** 3, 3)
74
"""
data_points: list[int] = [func(x_val) for x_val in range(1, order + 1)]
polynomials: list[Callable[[int], int]] = [
interpolate(data_points[:max_coeff]) for max_coeff in range(1, order + 1)
]
ret: int = 0
poly: Callable[[int], int]
x_val: int
for poly in polynomials:
x_val = 1
while func(x_val) == poly(x_val):
x_val += 1
ret += poly(x_val)
return ret
if __name__ == "__main__":
print(f"{solution() = }")
| """
If we are presented with the first k terms of a sequence it is impossible to say with
certainty the value of the next term, as there are infinitely many polynomial functions
that can model the sequence.
As an example, let us consider the sequence of cube
numbers. This is defined by the generating function,
u(n) = n3: 1, 8, 27, 64, 125, 216, ...
Suppose we were only given the first two terms of this sequence. Working on the
principle that "simple is best" we should assume a linear relationship and predict the
next term to be 15 (common difference 7). Even if we were presented with the first three
terms, by the same principle of simplicity, a quadratic relationship should be
assumed.
We shall define OP(k, n) to be the nth term of the optimum polynomial
generating function for the first k terms of a sequence. It should be clear that
OP(k, n) will accurately generate the terms of the sequence for n ≤ k, and potentially
the first incorrect term (FIT) will be OP(k, k+1); in which case we shall call it a
bad OP (BOP).
As a basis, if we were only given the first term of sequence, it would be most
sensible to assume constancy; that is, for n ≥ 2, OP(1, n) = u(1).
Hence we obtain the
following OPs for the cubic sequence:
OP(1, n) = 1 1, 1, 1, 1, ...
OP(2, n) = 7n-6 1, 8, 15, ...
OP(3, n) = 6n^2-11n+6 1, 8, 27, 58, ...
OP(4, n) = n^3 1, 8, 27, 64, 125, ...
Clearly no BOPs exist for k ≥ 4.
By considering the sum of FITs generated by the BOPs (indicated in red above), we
obtain 1 + 15 + 58 = 74.
Consider the following tenth degree polynomial generating function:
1 - n + n^2 - n^3 + n^4 - n^5 + n^6 - n^7 + n^8 - n^9 + n^10
Find the sum of FITs for the BOPs.
"""
from __future__ import annotations
from typing import Callable, Union
Matrix = list[list[Union[float, int]]]
def solve(matrix: Matrix, vector: Matrix) -> Matrix:
"""
Solve the linear system of equations Ax = b (A = "matrix", b = "vector")
for x using Gaussian elimination and back substitution. We assume that A
is an invertible square matrix and that b is a column vector of the
same height.
>>> solve([[1, 0], [0, 1]], [[1],[2]])
[[1.0], [2.0]]
>>> solve([[2, 1, -1],[-3, -1, 2],[-2, 1, 2]],[[8], [-11],[-3]])
[[2.0], [3.0], [-1.0]]
"""
size: int = len(matrix)
augmented: Matrix = [[0 for _ in range(size + 1)] for _ in range(size)]
row: int
row2: int
col: int
col2: int
pivot_row: int
ratio: float
for row in range(size):
for col in range(size):
augmented[row][col] = matrix[row][col]
augmented[row][size] = vector[row][0]
row = 0
col = 0
while row < size and col < size:
# pivoting
pivot_row = max((abs(augmented[row2][col]), row2) for row2 in range(col, size))[
1
]
if augmented[pivot_row][col] == 0:
col += 1
continue
else:
augmented[row], augmented[pivot_row] = augmented[pivot_row], augmented[row]
for row2 in range(row + 1, size):
ratio = augmented[row2][col] / augmented[row][col]
augmented[row2][col] = 0
for col2 in range(col + 1, size + 1):
augmented[row2][col2] -= augmented[row][col2] * ratio
row += 1
col += 1
# back substitution
for col in range(1, size):
for row in range(col):
ratio = augmented[row][col] / augmented[col][col]
for col2 in range(col, size + 1):
augmented[row][col2] -= augmented[col][col2] * ratio
# round to get rid of numbers like 2.000000000000004
return [
[round(augmented[row][size] / augmented[row][row], 10)] for row in range(size)
]
def interpolate(y_list: list[int]) -> Callable[[int], int]:
"""
Given a list of data points (1,y0),(2,y1), ..., return a function that
interpolates the data points. We find the coefficients of the interpolating
polynomial by solving a system of linear equations corresponding to
x = 1, 2, 3...
>>> interpolate([1])(3)
1
>>> interpolate([1, 8])(3)
15
>>> interpolate([1, 8, 27])(4)
58
>>> interpolate([1, 8, 27, 64])(6)
216
"""
size: int = len(y_list)
matrix: Matrix = [[0 for _ in range(size)] for _ in range(size)]
vector: Matrix = [[0] for _ in range(size)]
coeffs: Matrix
x_val: int
y_val: int
col: int
for x_val, y_val in enumerate(y_list):
for col in range(size):
matrix[x_val][col] = (x_val + 1) ** (size - col - 1)
vector[x_val][0] = y_val
coeffs = solve(matrix, vector)
def interpolated_func(var: int) -> int:
"""
>>> interpolate([1])(3)
1
>>> interpolate([1, 8])(3)
15
>>> interpolate([1, 8, 27])(4)
58
>>> interpolate([1, 8, 27, 64])(6)
216
"""
return sum(
round(coeffs[x_val][0]) * (var ** (size - x_val - 1))
for x_val in range(size)
)
return interpolated_func
def question_function(variable: int) -> int:
"""
The generating function u as specified in the question.
>>> question_function(0)
1
>>> question_function(1)
1
>>> question_function(5)
8138021
>>> question_function(10)
9090909091
"""
return (
1
- variable
+ variable**2
- variable**3
+ variable**4
- variable**5
+ variable**6
- variable**7
+ variable**8
- variable**9
+ variable**10
)
def solution(func: Callable[[int], int] = question_function, order: int = 10) -> int:
"""
Find the sum of the FITs of the BOPS. For each interpolating polynomial of order
1, 2, ... , 10, find the first x such that the value of the polynomial at x does
not equal u(x).
>>> solution(lambda n: n ** 3, 3)
74
"""
data_points: list[int] = [func(x_val) for x_val in range(1, order + 1)]
polynomials: list[Callable[[int], int]] = [
interpolate(data_points[:max_coeff]) for max_coeff in range(1, order + 1)
]
ret: int = 0
poly: Callable[[int], int]
x_val: int
for poly in polynomials:
x_val = 1
while func(x_val) == poly(x_val):
x_val += 1
ret += poly(x_val)
return ret
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
import random
import string
class ShuffledShiftCipher:
"""
This algorithm uses the Caesar Cipher algorithm but removes the option to
use brute force to decrypt the message.
The passcode is a a random password from the selection buffer of
1. uppercase letters of the English alphabet
2. lowercase letters of the English alphabet
3. digits from 0 to 9
Using unique characters from the passcode, the normal list of characters,
that can be allowed in the plaintext, is pivoted and shuffled. Refer to docstring
of __make_key_list() to learn more about the shuffling.
Then, using the passcode, a number is calculated which is used to encrypt the
plaintext message with the normal shift cipher method, only in this case, the
reference, to look back at while decrypting, is shuffled.
Each cipher object can possess an optional argument as passcode, without which a
new passcode is generated for that object automatically.
cip1 = ShuffledShiftCipher('d4usr9TWxw9wMD')
cip2 = ShuffledShiftCipher()
"""
def __init__(self, passcode: str | None = None) -> None:
"""
Initializes a cipher object with a passcode as it's entity
Note: No new passcode is generated if user provides a passcode
while creating the object
"""
self.__passcode = passcode or self.__passcode_creator()
self.__key_list = self.__make_key_list()
self.__shift_key = self.__make_shift_key()
def __str__(self) -> str:
"""
:return: passcode of the cipher object
"""
return "Passcode is: " + "".join(self.__passcode)
def __neg_pos(self, iterlist: list[int]) -> list[int]:
"""
Mutates the list by changing the sign of each alternate element
:param iterlist: takes a list iterable
:return: the mutated list
"""
for i in range(1, len(iterlist), 2):
iterlist[i] *= -1
return iterlist
def __passcode_creator(self) -> list[str]:
"""
Creates a random password from the selection buffer of
1. uppercase letters of the English alphabet
2. lowercase letters of the English alphabet
3. digits from 0 to 9
:rtype: list
:return: a password of a random length between 10 to 20
"""
choices = string.ascii_letters + string.digits
password = [random.choice(choices) for _ in range(random.randint(10, 20))]
return password
def __make_key_list(self) -> list[str]:
"""
Shuffles the ordered character choices by pivoting at breakpoints
Breakpoints are the set of characters in the passcode
eg:
if, ABCDEFGHIJKLMNOPQRSTUVWXYZ are the possible characters
and CAMERA is the passcode
then, breakpoints = [A,C,E,M,R] # sorted set of characters from passcode
shuffled parts: [A,CB,ED,MLKJIHGF,RQPON,ZYXWVUTS]
shuffled __key_list : ACBEDMLKJIHGFRQPONZYXWVUTS
Shuffling only 26 letters of the english alphabet can generate 26!
combinations for the shuffled list. In the program we consider, a set of
97 characters (including letters, digits, punctuation and whitespaces),
thereby creating a possibility of 97! combinations (which is a 152 digit number
in itself), thus diminishing the possibility of a brute force approach.
Moreover, shift keys even introduce a multiple of 26 for a brute force approach
for each of the already 97! combinations.
"""
# key_list_options contain nearly all printable except few elements from
# string.whitespace
key_list_options = (
string.ascii_letters + string.digits + string.punctuation + " \t\n"
)
keys_l = []
# creates points known as breakpoints to break the key_list_options at those
# points and pivot each substring
breakpoints = sorted(set(self.__passcode))
temp_list: list[str] = []
# algorithm for creating a new shuffled list, keys_l, out of key_list_options
for i in key_list_options:
temp_list.extend(i)
# checking breakpoints at which to pivot temporary sublist and add it into
# keys_l
if i in breakpoints or i == key_list_options[-1]:
keys_l.extend(temp_list[::-1])
temp_list.clear()
# returning a shuffled keys_l to prevent brute force guessing of shift key
return keys_l
def __make_shift_key(self) -> int:
"""
sum() of the mutated list of ascii values of all characters where the
mutated list is the one returned by __neg_pos()
"""
num = sum(self.__neg_pos([ord(x) for x in self.__passcode]))
return num if num > 0 else len(self.__passcode)
def decrypt(self, encoded_message: str) -> str:
"""
Performs shifting of the encoded_message w.r.t. the shuffled __key_list
to create the decoded_message
>>> ssc = ShuffledShiftCipher('4PYIXyqeQZr44')
>>> ssc.decrypt("d>**-1z6&'5z'5z:z+-='$'>=zp:>5:#z<'.&>#")
'Hello, this is a modified Caesar cipher'
"""
decoded_message = ""
# decoding shift like Caesar cipher algorithm implementing negative shift or
# reverse shift or left shift
for i in encoded_message:
position = self.__key_list.index(i)
decoded_message += self.__key_list[
(position - self.__shift_key) % -len(self.__key_list)
]
return decoded_message
def encrypt(self, plaintext: str) -> str:
"""
Performs shifting of the plaintext w.r.t. the shuffled __key_list
to create the encoded_message
>>> ssc = ShuffledShiftCipher('4PYIXyqeQZr44')
>>> ssc.encrypt('Hello, this is a modified Caesar cipher')
"d>**-1z6&'5z'5z:z+-='$'>=zp:>5:#z<'.&>#"
"""
encoded_message = ""
# encoding shift like Caesar cipher algorithm implementing positive shift or
# forward shift or right shift
for i in plaintext:
position = self.__key_list.index(i)
encoded_message += self.__key_list[
(position + self.__shift_key) % len(self.__key_list)
]
return encoded_message
def test_end_to_end(msg: str = "Hello, this is a modified Caesar cipher") -> str:
"""
>>> test_end_to_end()
'Hello, this is a modified Caesar cipher'
"""
cip1 = ShuffledShiftCipher()
return cip1.decrypt(cip1.encrypt(msg))
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
import random
import string
class ShuffledShiftCipher:
"""
This algorithm uses the Caesar Cipher algorithm but removes the option to
use brute force to decrypt the message.
The passcode is a random password from the selection buffer of
1. uppercase letters of the English alphabet
2. lowercase letters of the English alphabet
3. digits from 0 to 9
Using unique characters from the passcode, the normal list of characters,
that can be allowed in the plaintext, is pivoted and shuffled. Refer to docstring
of __make_key_list() to learn more about the shuffling.
Then, using the passcode, a number is calculated which is used to encrypt the
plaintext message with the normal shift cipher method, only in this case, the
reference, to look back at while decrypting, is shuffled.
Each cipher object can possess an optional argument as passcode, without which a
new passcode is generated for that object automatically.
cip1 = ShuffledShiftCipher('d4usr9TWxw9wMD')
cip2 = ShuffledShiftCipher()
"""
def __init__(self, passcode: str | None = None) -> None:
"""
Initializes a cipher object with a passcode as it's entity
Note: No new passcode is generated if user provides a passcode
while creating the object
"""
self.__passcode = passcode or self.__passcode_creator()
self.__key_list = self.__make_key_list()
self.__shift_key = self.__make_shift_key()
def __str__(self) -> str:
"""
:return: passcode of the cipher object
"""
return "Passcode is: " + "".join(self.__passcode)
def __neg_pos(self, iterlist: list[int]) -> list[int]:
"""
Mutates the list by changing the sign of each alternate element
:param iterlist: takes a list iterable
:return: the mutated list
"""
for i in range(1, len(iterlist), 2):
iterlist[i] *= -1
return iterlist
def __passcode_creator(self) -> list[str]:
"""
Creates a random password from the selection buffer of
1. uppercase letters of the English alphabet
2. lowercase letters of the English alphabet
3. digits from 0 to 9
:rtype: list
:return: a password of a random length between 10 to 20
"""
choices = string.ascii_letters + string.digits
password = [random.choice(choices) for _ in range(random.randint(10, 20))]
return password
def __make_key_list(self) -> list[str]:
"""
Shuffles the ordered character choices by pivoting at breakpoints
Breakpoints are the set of characters in the passcode
eg:
if, ABCDEFGHIJKLMNOPQRSTUVWXYZ are the possible characters
and CAMERA is the passcode
then, breakpoints = [A,C,E,M,R] # sorted set of characters from passcode
shuffled parts: [A,CB,ED,MLKJIHGF,RQPON,ZYXWVUTS]
shuffled __key_list : ACBEDMLKJIHGFRQPONZYXWVUTS
Shuffling only 26 letters of the english alphabet can generate 26!
combinations for the shuffled list. In the program we consider, a set of
97 characters (including letters, digits, punctuation and whitespaces),
thereby creating a possibility of 97! combinations (which is a 152 digit number
in itself), thus diminishing the possibility of a brute force approach.
Moreover, shift keys even introduce a multiple of 26 for a brute force approach
for each of the already 97! combinations.
"""
# key_list_options contain nearly all printable except few elements from
# string.whitespace
key_list_options = (
string.ascii_letters + string.digits + string.punctuation + " \t\n"
)
keys_l = []
# creates points known as breakpoints to break the key_list_options at those
# points and pivot each substring
breakpoints = sorted(set(self.__passcode))
temp_list: list[str] = []
# algorithm for creating a new shuffled list, keys_l, out of key_list_options
for i in key_list_options:
temp_list.extend(i)
# checking breakpoints at which to pivot temporary sublist and add it into
# keys_l
if i in breakpoints or i == key_list_options[-1]:
keys_l.extend(temp_list[::-1])
temp_list.clear()
# returning a shuffled keys_l to prevent brute force guessing of shift key
return keys_l
def __make_shift_key(self) -> int:
"""
sum() of the mutated list of ascii values of all characters where the
mutated list is the one returned by __neg_pos()
"""
num = sum(self.__neg_pos([ord(x) for x in self.__passcode]))
return num if num > 0 else len(self.__passcode)
def decrypt(self, encoded_message: str) -> str:
"""
Performs shifting of the encoded_message w.r.t. the shuffled __key_list
to create the decoded_message
>>> ssc = ShuffledShiftCipher('4PYIXyqeQZr44')
>>> ssc.decrypt("d>**-1z6&'5z'5z:z+-='$'>=zp:>5:#z<'.&>#")
'Hello, this is a modified Caesar cipher'
"""
decoded_message = ""
# decoding shift like Caesar cipher algorithm implementing negative shift or
# reverse shift or left shift
for i in encoded_message:
position = self.__key_list.index(i)
decoded_message += self.__key_list[
(position - self.__shift_key) % -len(self.__key_list)
]
return decoded_message
def encrypt(self, plaintext: str) -> str:
"""
Performs shifting of the plaintext w.r.t. the shuffled __key_list
to create the encoded_message
>>> ssc = ShuffledShiftCipher('4PYIXyqeQZr44')
>>> ssc.encrypt('Hello, this is a modified Caesar cipher')
"d>**-1z6&'5z'5z:z+-='$'>=zp:>5:#z<'.&>#"
"""
encoded_message = ""
# encoding shift like Caesar cipher algorithm implementing positive shift or
# forward shift or right shift
for i in plaintext:
position = self.__key_list.index(i)
encoded_message += self.__key_list[
(position + self.__shift_key) % len(self.__key_list)
]
return encoded_message
def test_end_to_end(msg: str = "Hello, this is a modified Caesar cipher") -> str:
"""
>>> test_end_to_end()
'Hello, this is a modified Caesar cipher'
"""
cip1 = ShuffledShiftCipher()
return cip1.decrypt(cip1.encrypt(msg))
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author: Alexander Joslin
GitHub: github.com/echoaj
Explanation: https://medium.com/@haleesammar/implemented-in-js-dijkstras-2-stack-
algorithm-for-evaluating-mathematical-expressions-fc0837dae1ea
We can use Dijkstra's two stack algorithm to solve an equation
such as: (5 + ((4 * 2) * (2 + 3)))
THESE ARE THE ALGORITHM'S RULES:
RULE 1: Scan the expression from left to right. When an operand is encountered,
push it onto the the operand stack.
RULE 2: When an operator is encountered in the expression,
push it onto the operator stack.
RULE 3: When a left parenthesis is encountered in the expression, ignore it.
RULE 4: When a right parenthesis is encountered in the expression,
pop an operator off the operator stack. The two operands it must
operate on must be the last two operands pushed onto the operand stack.
We therefore pop the operand stack twice, perform the operation,
and push the result back onto the operand stack so it will be available
for use as an operand of the next operator popped off the operator stack.
RULE 5: When the entire infix expression has been scanned, the value left on
the operand stack represents the value of the expression.
NOTE: It only works with whole numbers.
"""
__author__ = "Alexander Joslin"
import operator as op
from .stack import Stack
def dijkstras_two_stack_algorithm(equation: str) -> int:
"""
DocTests
>>> dijkstras_two_stack_algorithm("(5 + 3)")
8
>>> dijkstras_two_stack_algorithm("((9 - (2 + 9)) + (8 - 1))")
5
>>> dijkstras_two_stack_algorithm("((((3 - 2) - (2 + 3)) + (2 - 4)) + 3)")
-3
:param equation: a string
:return: result: an integer
"""
operators = {"*": op.mul, "/": op.truediv, "+": op.add, "-": op.sub}
operand_stack: Stack[int] = Stack()
operator_stack: Stack[str] = Stack()
for i in equation:
if i.isdigit():
# RULE 1
operand_stack.push(int(i))
elif i in operators:
# RULE 2
operator_stack.push(i)
elif i == ")":
# RULE 4
opr = operator_stack.peek()
operator_stack.pop()
num1 = operand_stack.peek()
operand_stack.pop()
num2 = operand_stack.peek()
operand_stack.pop()
total = operators[opr](num2, num1)
operand_stack.push(total)
# RULE 5
return operand_stack.peek()
if __name__ == "__main__":
equation = "(5 + ((4 * 2) * (2 + 3)))"
# answer = 45
print(f"{equation} = {dijkstras_two_stack_algorithm(equation)}")
| """
Author: Alexander Joslin
GitHub: github.com/echoaj
Explanation: https://medium.com/@haleesammar/implemented-in-js-dijkstras-2-stack-
algorithm-for-evaluating-mathematical-expressions-fc0837dae1ea
We can use Dijkstra's two stack algorithm to solve an equation
such as: (5 + ((4 * 2) * (2 + 3)))
THESE ARE THE ALGORITHM'S RULES:
RULE 1: Scan the expression from left to right. When an operand is encountered,
push it onto the operand stack.
RULE 2: When an operator is encountered in the expression,
push it onto the operator stack.
RULE 3: When a left parenthesis is encountered in the expression, ignore it.
RULE 4: When a right parenthesis is encountered in the expression,
pop an operator off the operator stack. The two operands it must
operate on must be the last two operands pushed onto the operand stack.
We therefore pop the operand stack twice, perform the operation,
and push the result back onto the operand stack so it will be available
for use as an operand of the next operator popped off the operator stack.
RULE 5: When the entire infix expression has been scanned, the value left on
the operand stack represents the value of the expression.
NOTE: It only works with whole numbers.
"""
__author__ = "Alexander Joslin"
import operator as op
from .stack import Stack
def dijkstras_two_stack_algorithm(equation: str) -> int:
"""
DocTests
>>> dijkstras_two_stack_algorithm("(5 + 3)")
8
>>> dijkstras_two_stack_algorithm("((9 - (2 + 9)) + (8 - 1))")
5
>>> dijkstras_two_stack_algorithm("((((3 - 2) - (2 + 3)) + (2 - 4)) + 3)")
-3
:param equation: a string
:return: result: an integer
"""
operators = {"*": op.mul, "/": op.truediv, "+": op.add, "-": op.sub}
operand_stack: Stack[int] = Stack()
operator_stack: Stack[str] = Stack()
for i in equation:
if i.isdigit():
# RULE 1
operand_stack.push(int(i))
elif i in operators:
# RULE 2
operator_stack.push(i)
elif i == ")":
# RULE 4
opr = operator_stack.peek()
operator_stack.pop()
num1 = operand_stack.peek()
operand_stack.pop()
num2 = operand_stack.peek()
operand_stack.pop()
total = operators[opr](num2, num1)
operand_stack.push(total)
# RULE 5
return operand_stack.peek()
if __name__ == "__main__":
equation = "(5 + ((4 * 2) * (2 + 3)))"
# answer = 45
print(f"{equation} = {dijkstras_two_stack_algorithm(equation)}")
| 1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Given an array-like data structure A[1..n], how many pairs
(i, j) for all 1 <= i < j <= n such that A[i] > A[j]? These pairs are
called inversions. Counting the number of such inversions in an array-like
object is the important. Among other things, counting inversions can help
us determine how close a given array is to being sorted.
In this implementation, I provide two algorithms, a divide-and-conquer
algorithm which runs in nlogn and the brute-force n^2 algorithm.
"""
def count_inversions_bf(arr):
"""
Counts the number of inversions using a a naive brute-force algorithm
Parameters
----------
arr: arr: array-like, the list containing the items for which the number
of inversions is desired. The elements of `arr` must be comparable.
Returns
-------
num_inversions: The total number of inversions in `arr`
Examples
---------
>>> count_inversions_bf([1, 4, 2, 4, 1])
4
>>> count_inversions_bf([1, 1, 2, 4, 4])
0
>>> count_inversions_bf([])
0
"""
num_inversions = 0
n = len(arr)
for i in range(n - 1):
for j in range(i + 1, n):
if arr[i] > arr[j]:
num_inversions += 1
return num_inversions
def count_inversions_recursive(arr):
"""
Counts the number of inversions using a divide-and-conquer algorithm
Parameters
-----------
arr: array-like, the list containing the items for which the number
of inversions is desired. The elements of `arr` must be comparable.
Returns
-------
C: a sorted copy of `arr`.
num_inversions: int, the total number of inversions in 'arr'
Examples
--------
>>> count_inversions_recursive([1, 4, 2, 4, 1])
([1, 1, 2, 4, 4], 4)
>>> count_inversions_recursive([1, 1, 2, 4, 4])
([1, 1, 2, 4, 4], 0)
>>> count_inversions_recursive([])
([], 0)
"""
if len(arr) <= 1:
return arr, 0
mid = len(arr) // 2
P = arr[0:mid]
Q = arr[mid:]
A, inversion_p = count_inversions_recursive(P)
B, inversions_q = count_inversions_recursive(Q)
C, cross_inversions = _count_cross_inversions(A, B)
num_inversions = inversion_p + inversions_q + cross_inversions
return C, num_inversions
def _count_cross_inversions(P, Q):
"""
Counts the inversions across two sorted arrays.
And combine the two arrays into one sorted array
For all 1<= i<=len(P) and for all 1 <= j <= len(Q),
if P[i] > Q[j], then (i, j) is a cross inversion
Parameters
----------
P: array-like, sorted in non-decreasing order
Q: array-like, sorted in non-decreasing order
Returns
------
R: array-like, a sorted array of the elements of `P` and `Q`
num_inversion: int, the number of inversions across `P` and `Q`
Examples
--------
>>> _count_cross_inversions([1, 2, 3], [0, 2, 5])
([0, 1, 2, 2, 3, 5], 4)
>>> _count_cross_inversions([1, 2, 3], [3, 4, 5])
([1, 2, 3, 3, 4, 5], 0)
"""
R = []
i = j = num_inversion = 0
while i < len(P) and j < len(Q):
if P[i] > Q[j]:
# if P[1] > Q[j], then P[k] > Q[k] for all i < k <= len(P)
# These are all inversions. The claim emerges from the
# property that P is sorted.
num_inversion += len(P) - i
R.append(Q[j])
j += 1
else:
R.append(P[i])
i += 1
if i < len(P):
R.extend(P[i:])
else:
R.extend(Q[j:])
return R, num_inversion
def main():
arr_1 = [10, 2, 1, 5, 5, 2, 11]
# this arr has 8 inversions:
# (10, 2), (10, 1), (10, 5), (10, 5), (10, 2), (2, 1), (5, 2), (5, 2)
num_inversions_bf = count_inversions_bf(arr_1)
_, num_inversions_recursive = count_inversions_recursive(arr_1)
assert num_inversions_bf == num_inversions_recursive == 8
print("number of inversions = ", num_inversions_bf)
# testing an array with zero inversion (a sorted arr_1)
arr_1.sort()
num_inversions_bf = count_inversions_bf(arr_1)
_, num_inversions_recursive = count_inversions_recursive(arr_1)
assert num_inversions_bf == num_inversions_recursive == 0
print("number of inversions = ", num_inversions_bf)
# an empty list should also have zero inversions
arr_1 = []
num_inversions_bf = count_inversions_bf(arr_1)
_, num_inversions_recursive = count_inversions_recursive(arr_1)
assert num_inversions_bf == num_inversions_recursive == 0
print("number of inversions = ", num_inversions_bf)
if __name__ == "__main__":
main()
| """
Given an array-like data structure A[1..n], how many pairs
(i, j) for all 1 <= i < j <= n such that A[i] > A[j]? These pairs are
called inversions. Counting the number of such inversions in an array-like
object is the important. Among other things, counting inversions can help
us determine how close a given array is to being sorted.
In this implementation, I provide two algorithms, a divide-and-conquer
algorithm which runs in nlogn and the brute-force n^2 algorithm.
"""
def count_inversions_bf(arr):
"""
Counts the number of inversions using a naive brute-force algorithm
Parameters
----------
arr: arr: array-like, the list containing the items for which the number
of inversions is desired. The elements of `arr` must be comparable.
Returns
-------
num_inversions: The total number of inversions in `arr`
Examples
---------
>>> count_inversions_bf([1, 4, 2, 4, 1])
4
>>> count_inversions_bf([1, 1, 2, 4, 4])
0
>>> count_inversions_bf([])
0
"""
num_inversions = 0
n = len(arr)
for i in range(n - 1):
for j in range(i + 1, n):
if arr[i] > arr[j]:
num_inversions += 1
return num_inversions
def count_inversions_recursive(arr):
"""
Counts the number of inversions using a divide-and-conquer algorithm
Parameters
-----------
arr: array-like, the list containing the items for which the number
of inversions is desired. The elements of `arr` must be comparable.
Returns
-------
C: a sorted copy of `arr`.
num_inversions: int, the total number of inversions in 'arr'
Examples
--------
>>> count_inversions_recursive([1, 4, 2, 4, 1])
([1, 1, 2, 4, 4], 4)
>>> count_inversions_recursive([1, 1, 2, 4, 4])
([1, 1, 2, 4, 4], 0)
>>> count_inversions_recursive([])
([], 0)
"""
if len(arr) <= 1:
return arr, 0
mid = len(arr) // 2
P = arr[0:mid]
Q = arr[mid:]
A, inversion_p = count_inversions_recursive(P)
B, inversions_q = count_inversions_recursive(Q)
C, cross_inversions = _count_cross_inversions(A, B)
num_inversions = inversion_p + inversions_q + cross_inversions
return C, num_inversions
def _count_cross_inversions(P, Q):
"""
Counts the inversions across two sorted arrays.
And combine the two arrays into one sorted array
For all 1<= i<=len(P) and for all 1 <= j <= len(Q),
if P[i] > Q[j], then (i, j) is a cross inversion
Parameters
----------
P: array-like, sorted in non-decreasing order
Q: array-like, sorted in non-decreasing order
Returns
------
R: array-like, a sorted array of the elements of `P` and `Q`
num_inversion: int, the number of inversions across `P` and `Q`
Examples
--------
>>> _count_cross_inversions([1, 2, 3], [0, 2, 5])
([0, 1, 2, 2, 3, 5], 4)
>>> _count_cross_inversions([1, 2, 3], [3, 4, 5])
([1, 2, 3, 3, 4, 5], 0)
"""
R = []
i = j = num_inversion = 0
while i < len(P) and j < len(Q):
if P[i] > Q[j]:
# if P[1] > Q[j], then P[k] > Q[k] for all i < k <= len(P)
# These are all inversions. The claim emerges from the
# property that P is sorted.
num_inversion += len(P) - i
R.append(Q[j])
j += 1
else:
R.append(P[i])
i += 1
if i < len(P):
R.extend(P[i:])
else:
R.extend(Q[j:])
return R, num_inversion
def main():
arr_1 = [10, 2, 1, 5, 5, 2, 11]
# this arr has 8 inversions:
# (10, 2), (10, 1), (10, 5), (10, 5), (10, 2), (2, 1), (5, 2), (5, 2)
num_inversions_bf = count_inversions_bf(arr_1)
_, num_inversions_recursive = count_inversions_recursive(arr_1)
assert num_inversions_bf == num_inversions_recursive == 8
print("number of inversions = ", num_inversions_bf)
# testing an array with zero inversion (a sorted arr_1)
arr_1.sort()
num_inversions_bf = count_inversions_bf(arr_1)
_, num_inversions_recursive = count_inversions_recursive(arr_1)
assert num_inversions_bf == num_inversions_recursive == 0
print("number of inversions = ", num_inversions_bf)
# an empty list should also have zero inversions
arr_1 = []
num_inversions_bf = count_inversions_bf(arr_1)
_, num_inversions_recursive = count_inversions_recursive(arr_1)
assert num_inversions_bf == num_inversions_recursive == 0
print("number of inversions = ", num_inversions_bf)
if __name__ == "__main__":
main()
| 1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Find Volumes of Various Shapes.
Wikipedia reference: https://en.wikipedia.org/wiki/Volume
"""
from __future__ import annotations
from math import pi, pow
def vol_cube(side_length: int | float) -> float:
"""
Calculate the Volume of a Cube.
>>> vol_cube(1)
1.0
>>> vol_cube(3)
27.0
"""
return pow(side_length, 3)
def vol_spherical_cap(height: float, radius: float) -> float:
"""
Calculate the Volume of the spherical cap.
:return 1/3 pi * height ^ 2 * (3 * radius - height)
>>> vol_spherical_cap(1, 2)
5.235987755982988
"""
return 1 / 3 * pi * pow(height, 2) * (3 * radius - height)
def vol_spheres_intersect(
radius_1: float, radius_2: float, centers_distance: float
) -> float:
"""
Calculate the volume of the intersection of two spheres.
The intersection is composed by two spherical caps and therefore its volume is the
sum of the volumes of the spherical caps. First it calculates the heights (h1, h2)
of the the spherical caps, then the two volumes and it returns the sum.
The height formulas are
h1 = (radius_1 - radius_2 + centers_distance)
* (radius_1 + radius_2 - centers_distance)
/ (2 * centers_distance)
h2 = (radius_2 - radius_1 + centers_distance)
* (radius_2 + radius_1 - centers_distance)
/ (2 * centers_distance)
if centers_distance is 0 then it returns the volume of the smallers sphere
:return vol_spherical_cap(h1, radius_2) + vol_spherical_cap(h2, radius_1)
>>> vol_spheres_intersect(2, 2, 1)
21.205750411731103
"""
if centers_distance == 0:
return vol_sphere(min(radius_1, radius_2))
h1 = (
(radius_1 - radius_2 + centers_distance)
* (radius_1 + radius_2 - centers_distance)
/ (2 * centers_distance)
)
h2 = (
(radius_2 - radius_1 + centers_distance)
* (radius_2 + radius_1 - centers_distance)
/ (2 * centers_distance)
)
return vol_spherical_cap(h1, radius_2) + vol_spherical_cap(h2, radius_1)
def vol_cuboid(width: float, height: float, length: float) -> float:
"""
Calculate the Volume of a Cuboid.
:return multiple of width, length and height
>>> vol_cuboid(1, 1, 1)
1.0
>>> vol_cuboid(1, 2, 3)
6.0
"""
return float(width * height * length)
def vol_cone(area_of_base: float, height: float) -> float:
"""
Calculate the Volume of a Cone.
Wikipedia reference: https://en.wikipedia.org/wiki/Cone
:return (1/3) * area_of_base * height
>>> vol_cone(10, 3)
10.0
>>> vol_cone(1, 1)
0.3333333333333333
"""
return area_of_base * height / 3.0
def vol_right_circ_cone(radius: float, height: float) -> float:
"""
Calculate the Volume of a Right Circular Cone.
Wikipedia reference: https://en.wikipedia.org/wiki/Cone
:return (1/3) * pi * radius^2 * height
>>> vol_right_circ_cone(2, 3)
12.566370614359172
"""
return pi * pow(radius, 2) * height / 3.0
def vol_prism(area_of_base: float, height: float) -> float:
"""
Calculate the Volume of a Prism.
Wikipedia reference: https://en.wikipedia.org/wiki/Prism_(geometry)
:return V = Bh
>>> vol_prism(10, 2)
20.0
>>> vol_prism(11, 1)
11.0
"""
return float(area_of_base * height)
def vol_pyramid(area_of_base: float, height: float) -> float:
"""
Calculate the Volume of a Pyramid.
Wikipedia reference: https://en.wikipedia.org/wiki/Pyramid_(geometry)
:return (1/3) * Bh
>>> vol_pyramid(10, 3)
10.0
>>> vol_pyramid(1.5, 3)
1.5
"""
return area_of_base * height / 3.0
def vol_sphere(radius: float) -> float:
"""
Calculate the Volume of a Sphere.
Wikipedia reference: https://en.wikipedia.org/wiki/Sphere
:return (4/3) * pi * r^3
>>> vol_sphere(5)
523.5987755982989
>>> vol_sphere(1)
4.1887902047863905
"""
return 4 / 3 * pi * pow(radius, 3)
def vol_hemisphere(radius: float):
"""Calculate the volume of a hemisphere
Wikipedia reference: https://en.wikipedia.org/wiki/Hemisphere
Other references: https://www.cuemath.com/geometry/hemisphere
:return 2/3 * pi * radius^3
>>> vol_hemisphere(1)
2.0943951023931953
>>> vol_hemisphere(7)
718.3775201208659
"""
return 2 / 3 * pi * pow(radius, 3)
def vol_circular_cylinder(radius: float, height: float) -> float:
"""Calculate the Volume of a Circular Cylinder.
Wikipedia reference: https://en.wikipedia.org/wiki/Cylinder
:return pi * radius^2 * height
>>> vol_circular_cylinder(1, 1)
3.141592653589793
>>> vol_circular_cylinder(4, 3)
150.79644737231007
"""
return pi * pow(radius, 2) * height
def vol_conical_frustum(height: float, radius_1: float, radius_2: float):
"""Calculate the Volume of a Conical Frustum.
Wikipedia reference: https://en.wikipedia.org/wiki/Frustum
:return 1/3 * pi * height * (radius_1^2 + radius_top^2 + radius_1 * radius_2)
>>> vol_conical_frustum(45, 7, 28)
48490.482608158454
>>> vol_conical_frustum(1, 1, 2)
7.330382858376184
"""
return (
1
/ 3
* pi
* height
* (pow(radius_1, 2) + pow(radius_2, 2) + radius_1 * radius_2)
)
def main():
"""Print the Results of Various Volume Calculations."""
print("Volumes:")
print("Cube: " + str(vol_cube(2))) # = 8
print("Cuboid: " + str(vol_cuboid(2, 2, 2))) # = 8
print("Cone: " + str(vol_cone(2, 2))) # ~= 1.33
print("Right Circular Cone: " + str(vol_right_circ_cone(2, 2))) # ~= 8.38
print("Prism: " + str(vol_prism(2, 2))) # = 4
print("Pyramid: " + str(vol_pyramid(2, 2))) # ~= 1.33
print("Sphere: " + str(vol_sphere(2))) # ~= 33.5
print("Hemisphere: " + str(vol_hemisphere(2))) # ~= 16.75
print("Circular Cylinder: " + str(vol_circular_cylinder(2, 2))) # ~= 25.1
print("Conical Frustum: " + str(vol_conical_frustum(2, 2, 4))) # ~= 58.6
print("Spherical cap: " + str(vol_spherical_cap(1, 2))) # ~= 5.24
print("Spheres intersetion: " + str(vol_spheres_intersect(2, 2, 1))) # ~= 21.21
if __name__ == "__main__":
main()
| """
Find Volumes of Various Shapes.
Wikipedia reference: https://en.wikipedia.org/wiki/Volume
"""
from __future__ import annotations
from math import pi, pow
def vol_cube(side_length: int | float) -> float:
"""
Calculate the Volume of a Cube.
>>> vol_cube(1)
1.0
>>> vol_cube(3)
27.0
"""
return pow(side_length, 3)
def vol_spherical_cap(height: float, radius: float) -> float:
"""
Calculate the Volume of the spherical cap.
:return 1/3 pi * height ^ 2 * (3 * radius - height)
>>> vol_spherical_cap(1, 2)
5.235987755982988
"""
return 1 / 3 * pi * pow(height, 2) * (3 * radius - height)
def vol_spheres_intersect(
radius_1: float, radius_2: float, centers_distance: float
) -> float:
"""
Calculate the volume of the intersection of two spheres.
The intersection is composed by two spherical caps and therefore its volume is the
sum of the volumes of the spherical caps. First, it calculates the heights (h1, h2)
of the spherical caps, then the two volumes and it returns the sum.
The height formulas are
h1 = (radius_1 - radius_2 + centers_distance)
* (radius_1 + radius_2 - centers_distance)
/ (2 * centers_distance)
h2 = (radius_2 - radius_1 + centers_distance)
* (radius_2 + radius_1 - centers_distance)
/ (2 * centers_distance)
if centers_distance is 0 then it returns the volume of the smallers sphere
:return vol_spherical_cap(h1, radius_2) + vol_spherical_cap(h2, radius_1)
>>> vol_spheres_intersect(2, 2, 1)
21.205750411731103
"""
if centers_distance == 0:
return vol_sphere(min(radius_1, radius_2))
h1 = (
(radius_1 - radius_2 + centers_distance)
* (radius_1 + radius_2 - centers_distance)
/ (2 * centers_distance)
)
h2 = (
(radius_2 - radius_1 + centers_distance)
* (radius_2 + radius_1 - centers_distance)
/ (2 * centers_distance)
)
return vol_spherical_cap(h1, radius_2) + vol_spherical_cap(h2, radius_1)
def vol_cuboid(width: float, height: float, length: float) -> float:
"""
Calculate the Volume of a Cuboid.
:return multiple of width, length and height
>>> vol_cuboid(1, 1, 1)
1.0
>>> vol_cuboid(1, 2, 3)
6.0
"""
return float(width * height * length)
def vol_cone(area_of_base: float, height: float) -> float:
"""
Calculate the Volume of a Cone.
Wikipedia reference: https://en.wikipedia.org/wiki/Cone
:return (1/3) * area_of_base * height
>>> vol_cone(10, 3)
10.0
>>> vol_cone(1, 1)
0.3333333333333333
"""
return area_of_base * height / 3.0
def vol_right_circ_cone(radius: float, height: float) -> float:
"""
Calculate the Volume of a Right Circular Cone.
Wikipedia reference: https://en.wikipedia.org/wiki/Cone
:return (1/3) * pi * radius^2 * height
>>> vol_right_circ_cone(2, 3)
12.566370614359172
"""
return pi * pow(radius, 2) * height / 3.0
def vol_prism(area_of_base: float, height: float) -> float:
"""
Calculate the Volume of a Prism.
Wikipedia reference: https://en.wikipedia.org/wiki/Prism_(geometry)
:return V = Bh
>>> vol_prism(10, 2)
20.0
>>> vol_prism(11, 1)
11.0
"""
return float(area_of_base * height)
def vol_pyramid(area_of_base: float, height: float) -> float:
"""
Calculate the Volume of a Pyramid.
Wikipedia reference: https://en.wikipedia.org/wiki/Pyramid_(geometry)
:return (1/3) * Bh
>>> vol_pyramid(10, 3)
10.0
>>> vol_pyramid(1.5, 3)
1.5
"""
return area_of_base * height / 3.0
def vol_sphere(radius: float) -> float:
"""
Calculate the Volume of a Sphere.
Wikipedia reference: https://en.wikipedia.org/wiki/Sphere
:return (4/3) * pi * r^3
>>> vol_sphere(5)
523.5987755982989
>>> vol_sphere(1)
4.1887902047863905
"""
return 4 / 3 * pi * pow(radius, 3)
def vol_hemisphere(radius: float):
"""Calculate the volume of a hemisphere
Wikipedia reference: https://en.wikipedia.org/wiki/Hemisphere
Other references: https://www.cuemath.com/geometry/hemisphere
:return 2/3 * pi * radius^3
>>> vol_hemisphere(1)
2.0943951023931953
>>> vol_hemisphere(7)
718.3775201208659
"""
return 2 / 3 * pi * pow(radius, 3)
def vol_circular_cylinder(radius: float, height: float) -> float:
"""Calculate the Volume of a Circular Cylinder.
Wikipedia reference: https://en.wikipedia.org/wiki/Cylinder
:return pi * radius^2 * height
>>> vol_circular_cylinder(1, 1)
3.141592653589793
>>> vol_circular_cylinder(4, 3)
150.79644737231007
"""
return pi * pow(radius, 2) * height
def vol_conical_frustum(height: float, radius_1: float, radius_2: float):
"""Calculate the Volume of a Conical Frustum.
Wikipedia reference: https://en.wikipedia.org/wiki/Frustum
:return 1/3 * pi * height * (radius_1^2 + radius_top^2 + radius_1 * radius_2)
>>> vol_conical_frustum(45, 7, 28)
48490.482608158454
>>> vol_conical_frustum(1, 1, 2)
7.330382858376184
"""
return (
1
/ 3
* pi
* height
* (pow(radius_1, 2) + pow(radius_2, 2) + radius_1 * radius_2)
)
def main():
"""Print the Results of Various Volume Calculations."""
print("Volumes:")
print("Cube: " + str(vol_cube(2))) # = 8
print("Cuboid: " + str(vol_cuboid(2, 2, 2))) # = 8
print("Cone: " + str(vol_cone(2, 2))) # ~= 1.33
print("Right Circular Cone: " + str(vol_right_circ_cone(2, 2))) # ~= 8.38
print("Prism: " + str(vol_prism(2, 2))) # = 4
print("Pyramid: " + str(vol_pyramid(2, 2))) # ~= 1.33
print("Sphere: " + str(vol_sphere(2))) # ~= 33.5
print("Hemisphere: " + str(vol_hemisphere(2))) # ~= 16.75
print("Circular Cylinder: " + str(vol_circular_cylinder(2, 2))) # ~= 25.1
print("Conical Frustum: " + str(vol_conical_frustum(2, 2, 4))) # ~= 58.6
print("Spherical cap: " + str(vol_spherical_cap(1, 2))) # ~= 5.24
print("Spheres intersetion: " + str(vol_spheres_intersect(2, 2, 1))) # ~= 21.21
if __name__ == "__main__":
main()
| 1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def palindromic_string(input_string: str) -> str:
"""
>>> palindromic_string('abbbaba')
'abbba'
>>> palindromic_string('ababa')
'ababa'
Manacher’s algorithm which finds Longest palindromic Substring in linear time.
1. first this convert input_string("xyx") into new_string("x|y|x") where odd
positions are actual input characters.
2. for each character in new_string it find corresponding length and store the
length and l,r to store previously calculated info.(please look the explanation
for details)
3. return corresponding output_string by removing all "|"
"""
max_length = 0
# if input_string is "aba" than new_input_string become "a|b|a"
new_input_string = ""
output_string = ""
# append each character + "|" in new_string for range(0, length-1)
for i in input_string[: len(input_string) - 1]:
new_input_string += i + "|"
# append last character
new_input_string += input_string[-1]
# we will store the starting and ending of previous furthest ending palindromic
# substring
l, r = 0, 0
# length[i] shows the length of palindromic substring with center i
length = [1 for i in range(len(new_input_string))]
# for each character in new_string find corresponding palindromic string
start = 0
for j in range(len(new_input_string)):
k = 1 if j > r else min(length[l + r - j] // 2, r - j + 1)
while (
j - k >= 0
and j + k < len(new_input_string)
and new_input_string[k + j] == new_input_string[j - k]
):
k += 1
length[j] = 2 * k - 1
# does this string is ending after the previously explored end (that is r) ?
# if yes the update the new r to the last index of this
if j + k - 1 > r:
l = j - k + 1 # noqa: E741
r = j + k - 1
# update max_length and start position
if max_length < length[j]:
max_length = length[j]
start = j
# create that string
s = new_input_string[start - max_length // 2 : start + max_length // 2 + 1]
for i in s:
if i != "|":
output_string += i
return output_string
if __name__ == "__main__":
import doctest
doctest.testmod()
"""
...a0...a1...a2.....a3......a4...a5...a6....
consider the string for which we are calculating the longest palindromic substring is
shown above where ... are some characters in between and right now we are calculating
the length of palindromic substring with center at a5 with following conditions :
i) we have stored the length of palindromic substring which has center at a3 (starts at
l ends at r) and it is the furthest ending till now, and it has ending after a6
ii) a2 and a4 are equally distant from a3 so char(a2) == char(a4)
iii) a0 and a6 are equally distant from a3 so char(a0) == char(a6)
iv) a1 is corresponding equal character of a5 in palindrome with center a3 (remember
that in below derivation of a4==a6)
now for a5 we will calculate the length of palindromic substring with center as a5 but
can we use previously calculated information in some way?
Yes, look the above string we know that a5 is inside the palindrome with center a3 and
previously we have have calculated that
a0==a2 (palindrome of center a1)
a2==a4 (palindrome of center a3)
a0==a6 (palindrome of center a3)
so a4==a6
so we can say that palindrome at center a5 is at least as long as palindrome at center
a1 but this only holds if a0 and a6 are inside the limits of palindrome centered at a3
so finally ..
len_of_palindrome__at(a5) = min(len_of_palindrome_at(a1), r-a5)
where a3 lies from l to r and we have to keep updating that
and if the a5 lies outside of l,r boundary we calculate length of palindrome with
bruteforce and update l,r.
it gives the linear time complexity just like z-function
"""
| def palindromic_string(input_string: str) -> str:
"""
>>> palindromic_string('abbbaba')
'abbba'
>>> palindromic_string('ababa')
'ababa'
Manacher’s algorithm which finds Longest palindromic Substring in linear time.
1. first this convert input_string("xyx") into new_string("x|y|x") where odd
positions are actual input characters.
2. for each character in new_string it find corresponding length and store the
length and l,r to store previously calculated info.(please look the explanation
for details)
3. return corresponding output_string by removing all "|"
"""
max_length = 0
# if input_string is "aba" than new_input_string become "a|b|a"
new_input_string = ""
output_string = ""
# append each character + "|" in new_string for range(0, length-1)
for i in input_string[: len(input_string) - 1]:
new_input_string += i + "|"
# append last character
new_input_string += input_string[-1]
# we will store the starting and ending of previous furthest ending palindromic
# substring
l, r = 0, 0
# length[i] shows the length of palindromic substring with center i
length = [1 for i in range(len(new_input_string))]
# for each character in new_string find corresponding palindromic string
start = 0
for j in range(len(new_input_string)):
k = 1 if j > r else min(length[l + r - j] // 2, r - j + 1)
while (
j - k >= 0
and j + k < len(new_input_string)
and new_input_string[k + j] == new_input_string[j - k]
):
k += 1
length[j] = 2 * k - 1
# does this string is ending after the previously explored end (that is r) ?
# if yes the update the new r to the last index of this
if j + k - 1 > r:
l = j - k + 1 # noqa: E741
r = j + k - 1
# update max_length and start position
if max_length < length[j]:
max_length = length[j]
start = j
# create that string
s = new_input_string[start - max_length // 2 : start + max_length // 2 + 1]
for i in s:
if i != "|":
output_string += i
return output_string
if __name__ == "__main__":
import doctest
doctest.testmod()
"""
...a0...a1...a2.....a3......a4...a5...a6....
consider the string for which we are calculating the longest palindromic substring is
shown above where ... are some characters in between and right now we are calculating
the length of palindromic substring with center at a5 with following conditions :
i) we have stored the length of palindromic substring which has center at a3 (starts at
l ends at r) and it is the furthest ending till now, and it has ending after a6
ii) a2 and a4 are equally distant from a3 so char(a2) == char(a4)
iii) a0 and a6 are equally distant from a3 so char(a0) == char(a6)
iv) a1 is corresponding equal character of a5 in palindrome with center a3 (remember
that in below derivation of a4==a6)
now for a5 we will calculate the length of palindromic substring with center as a5 but
can we use previously calculated information in some way?
Yes, look the above string we know that a5 is inside the palindrome with center a3 and
previously we have calculated that
a0==a2 (palindrome of center a1)
a2==a4 (palindrome of center a3)
a0==a6 (palindrome of center a3)
so a4==a6
so we can say that palindrome at center a5 is at least as long as palindrome at center
a1 but this only holds if a0 and a6 are inside the limits of palindrome centered at a3
so finally ..
len_of_palindrome__at(a5) = min(len_of_palindrome_at(a1), r-a5)
where a3 lies from l to r and we have to keep updating that
and if the a5 lies outside of l,r boundary we calculate length of palindrome with
bruteforce and update l,r.
it gives the linear time complexity just like z-function
"""
| 1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
"""
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
>>> solution(3.4)
6
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
"""
try:
n = int(n)
except (TypeError, ValueError):
raise TypeError("Parameter n must be int or castable to int.")
if n <= 0:
raise ValueError("Parameter n must be greater than or equal to one.")
i = 0
while 1:
i += n * (n - 1)
nfound = 0
for j in range(2, n):
if i % j != 0:
nfound = 1
break
if nfound == 0:
if i == 0:
i = 1
return i
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
"""
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
>>> solution(3.4)
6
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
"""
try:
n = int(n)
except (TypeError, ValueError):
raise TypeError("Parameter n must be int or castable to int.")
if n <= 0:
raise ValueError("Parameter n must be greater than or equal to one.")
i = 0
while 1:
i += n * (n - 1)
nfound = 0
for j in range(2, n):
if i % j != 0:
nfound = 1
break
if nfound == 0:
if i == 0:
i = 1
return i
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem Statement:
By starting at the top of the triangle below and moving to adjacent numbers on
the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom in triangle.txt (right click and
'Save Link/Target As...'), a 15K text file containing a triangle with
one-hundred rows.
"""
import os
def solution():
"""
Finds the maximum total in a triangle as described by the problem statement
above.
>>> solution()
7273
"""
script_dir = os.path.dirname(os.path.realpath(__file__))
triangle = os.path.join(script_dir, "triangle.txt")
with open(triangle) as f:
triangle = f.readlines()
a = map(lambda x: x.rstrip("\r\n").split(" "), triangle)
a = list(map(lambda x: list(map(lambda y: int(y), x)), a))
for i in range(1, len(a)):
for j in range(len(a[i])):
if j != len(a[i - 1]):
number1 = a[i - 1][j]
else:
number1 = 0
if j > 0:
number2 = a[i - 1][j - 1]
else:
number2 = 0
a[i][j] += max(number1, number2)
return max(a[-1])
if __name__ == "__main__":
print(solution())
| """
Problem Statement:
By starting at the top of the triangle below and moving to adjacent numbers on
the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom in triangle.txt (right click and
'Save Link/Target As...'), a 15K text file containing a triangle with
one-hundred rows.
"""
import os
def solution():
"""
Finds the maximum total in a triangle as described by the problem statement
above.
>>> solution()
7273
"""
script_dir = os.path.dirname(os.path.realpath(__file__))
triangle = os.path.join(script_dir, "triangle.txt")
with open(triangle) as f:
triangle = f.readlines()
a = map(lambda x: x.rstrip("\r\n").split(" "), triangle)
a = list(map(lambda x: list(map(lambda y: int(y), x)), a))
for i in range(1, len(a)):
for j in range(len(a[i])):
if j != len(a[i - 1]):
number1 = a[i - 1][j]
else:
number1 = 0
if j > 0:
number2 = a[i - 1][j - 1]
else:
number2 = 0
a[i][j] += max(number1, number2)
return max(a[-1])
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from collections import deque
def _input(message):
return input(message).strip().split(" ")
def initialize_unweighted_directed_graph(
node_count: int, edge_count: int
) -> dict[int, list[int]]:
graph: dict[int, list[int]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> "))
graph[x].append(y)
return graph
def initialize_unweighted_undirected_graph(
node_count: int, edge_count: int
) -> dict[int, list[int]]:
graph: dict[int, list[int]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> "))
graph[x].append(y)
graph[y].append(x)
return graph
def initialize_weighted_undirected_graph(
node_count: int, edge_count: int
) -> dict[int, list[tuple[int, int]]]:
graph: dict[int, list[tuple[int, int]]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y, w = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> <weight> "))
graph[x].append((y, w))
graph[y].append((x, w))
return graph
if __name__ == "__main__":
n, m = (int(i) for i in _input("Number of nodes and edges: "))
graph_choice = int(
_input(
"Press 1 or 2 or 3 \n"
"1. Unweighted directed \n"
"2. Unweighted undirected \n"
"3. Weighted undirected \n"
)[0]
)
g = {
1: initialize_unweighted_directed_graph,
2: initialize_unweighted_undirected_graph,
3: initialize_weighted_undirected_graph,
}[graph_choice](n, m)
"""
--------------------------------------------------------------------------------
Depth First Search.
Args : G - Dictionary of edges
s - Starting Node
Vars : vis - Set of visited nodes
S - Traversal Stack
--------------------------------------------------------------------------------
"""
def dfs(G, s):
vis, S = {s}, [s]
print(s)
while S:
flag = 0
for i in G[S[-1]]:
if i not in vis:
S.append(i)
vis.add(i)
flag = 1
print(i)
break
if not flag:
S.pop()
"""
--------------------------------------------------------------------------------
Breadth First Search.
Args : G - Dictionary of edges
s - Starting Node
Vars : vis - Set of visited nodes
Q - Traversal Stack
--------------------------------------------------------------------------------
"""
def bfs(G, s):
vis, Q = {s}, deque([s])
print(s)
while Q:
u = Q.popleft()
for v in G[u]:
if v not in vis:
vis.add(v)
Q.append(v)
print(v)
"""
--------------------------------------------------------------------------------
Dijkstra's shortest path Algorithm
Args : G - Dictionary of edges
s - Starting Node
Vars : dist - Dictionary storing shortest distance from s to every other node
known - Set of knows nodes
path - Preceding node in path
--------------------------------------------------------------------------------
"""
def dijk(G, s):
dist, known, path = {s: 0}, set(), {s: 0}
while True:
if len(known) == len(G) - 1:
break
mini = 100000
for i in dist:
if i not in known and dist[i] < mini:
mini = dist[i]
u = i
known.add(u)
for v in G[u]:
if v[0] not in known:
if dist[u] + v[1] < dist.get(v[0], 100000):
dist[v[0]] = dist[u] + v[1]
path[v[0]] = u
for i in dist:
if i != s:
print(dist[i])
"""
--------------------------------------------------------------------------------
Topological Sort
--------------------------------------------------------------------------------
"""
def topo(G, ind=None, Q=None):
if Q is None:
Q = [1]
if ind is None:
ind = [0] * (len(G) + 1) # SInce oth Index is ignored
for u in G:
for v in G[u]:
ind[v] += 1
Q = deque()
for i in G:
if ind[i] == 0:
Q.append(i)
if len(Q) == 0:
return
v = Q.popleft()
print(v)
for w in G[v]:
ind[w] -= 1
if ind[w] == 0:
Q.append(w)
topo(G, ind, Q)
"""
--------------------------------------------------------------------------------
Reading an Adjacency matrix
--------------------------------------------------------------------------------
"""
def adjm():
n = input().strip()
a = []
for i in range(n):
a.append(map(int, input().strip().split()))
return a, n
"""
--------------------------------------------------------------------------------
Floyd Warshall's algorithm
Args : G - Dictionary of edges
s - Starting Node
Vars : dist - Dictionary storing shortest distance from s to every other node
known - Set of knows nodes
path - Preceding node in path
--------------------------------------------------------------------------------
"""
def floy(A_and_n):
(A, n) = A_and_n
dist = list(A)
path = [[0] * n for i in range(n)]
for k in range(n):
for i in range(n):
for j in range(n):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
path[i][k] = k
print(dist)
"""
--------------------------------------------------------------------------------
Prim's MST Algorithm
Args : G - Dictionary of edges
s - Starting Node
Vars : dist - Dictionary storing shortest distance from s to nearest node
known - Set of knows nodes
path - Preceding node in path
--------------------------------------------------------------------------------
"""
def prim(G, s):
dist, known, path = {s: 0}, set(), {s: 0}
while True:
if len(known) == len(G) - 1:
break
mini = 100000
for i in dist:
if i not in known and dist[i] < mini:
mini = dist[i]
u = i
known.add(u)
for v in G[u]:
if v[0] not in known:
if v[1] < dist.get(v[0], 100000):
dist[v[0]] = v[1]
path[v[0]] = u
return dist
"""
--------------------------------------------------------------------------------
Accepting Edge list
Vars : n - Number of nodes
m - Number of edges
Returns : l - Edge list
n - Number of Nodes
--------------------------------------------------------------------------------
"""
def edglist():
n, m = map(int, input().split(" "))
edges = []
for i in range(m):
edges.append(map(int, input().split(" ")))
return edges, n
"""
--------------------------------------------------------------------------------
Kruskal's MST Algorithm
Args : E - Edge list
n - Number of Nodes
Vars : s - Set of all nodes as unique disjoint sets (initially)
--------------------------------------------------------------------------------
"""
def krusk(E_and_n):
# Sort edges on the basis of distance
(E, n) = E_and_n
E.sort(reverse=True, key=lambda x: x[2])
s = [{i} for i in range(1, n + 1)]
while True:
if len(s) == 1:
break
print(s)
x = E.pop()
for i in range(len(s)):
if x[0] in s[i]:
break
for j in range(len(s)):
if x[1] in s[j]:
if i == j:
break
s[j].update(s[i])
s.pop(i)
break
# find the isolated node in the graph
def find_isolated_nodes(graph):
isolated = []
for node in graph:
if not graph[node]:
isolated.append(node)
return isolated
| from collections import deque
def _input(message):
return input(message).strip().split(" ")
def initialize_unweighted_directed_graph(
node_count: int, edge_count: int
) -> dict[int, list[int]]:
graph: dict[int, list[int]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> "))
graph[x].append(y)
return graph
def initialize_unweighted_undirected_graph(
node_count: int, edge_count: int
) -> dict[int, list[int]]:
graph: dict[int, list[int]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> "))
graph[x].append(y)
graph[y].append(x)
return graph
def initialize_weighted_undirected_graph(
node_count: int, edge_count: int
) -> dict[int, list[tuple[int, int]]]:
graph: dict[int, list[tuple[int, int]]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y, w = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> <weight> "))
graph[x].append((y, w))
graph[y].append((x, w))
return graph
if __name__ == "__main__":
n, m = (int(i) for i in _input("Number of nodes and edges: "))
graph_choice = int(
_input(
"Press 1 or 2 or 3 \n"
"1. Unweighted directed \n"
"2. Unweighted undirected \n"
"3. Weighted undirected \n"
)[0]
)
g = {
1: initialize_unweighted_directed_graph,
2: initialize_unweighted_undirected_graph,
3: initialize_weighted_undirected_graph,
}[graph_choice](n, m)
"""
--------------------------------------------------------------------------------
Depth First Search.
Args : G - Dictionary of edges
s - Starting Node
Vars : vis - Set of visited nodes
S - Traversal Stack
--------------------------------------------------------------------------------
"""
def dfs(G, s):
vis, S = {s}, [s]
print(s)
while S:
flag = 0
for i in G[S[-1]]:
if i not in vis:
S.append(i)
vis.add(i)
flag = 1
print(i)
break
if not flag:
S.pop()
"""
--------------------------------------------------------------------------------
Breadth First Search.
Args : G - Dictionary of edges
s - Starting Node
Vars : vis - Set of visited nodes
Q - Traversal Stack
--------------------------------------------------------------------------------
"""
def bfs(G, s):
vis, Q = {s}, deque([s])
print(s)
while Q:
u = Q.popleft()
for v in G[u]:
if v not in vis:
vis.add(v)
Q.append(v)
print(v)
"""
--------------------------------------------------------------------------------
Dijkstra's shortest path Algorithm
Args : G - Dictionary of edges
s - Starting Node
Vars : dist - Dictionary storing shortest distance from s to every other node
known - Set of knows nodes
path - Preceding node in path
--------------------------------------------------------------------------------
"""
def dijk(G, s):
dist, known, path = {s: 0}, set(), {s: 0}
while True:
if len(known) == len(G) - 1:
break
mini = 100000
for i in dist:
if i not in known and dist[i] < mini:
mini = dist[i]
u = i
known.add(u)
for v in G[u]:
if v[0] not in known:
if dist[u] + v[1] < dist.get(v[0], 100000):
dist[v[0]] = dist[u] + v[1]
path[v[0]] = u
for i in dist:
if i != s:
print(dist[i])
"""
--------------------------------------------------------------------------------
Topological Sort
--------------------------------------------------------------------------------
"""
def topo(G, ind=None, Q=None):
if Q is None:
Q = [1]
if ind is None:
ind = [0] * (len(G) + 1) # SInce oth Index is ignored
for u in G:
for v in G[u]:
ind[v] += 1
Q = deque()
for i in G:
if ind[i] == 0:
Q.append(i)
if len(Q) == 0:
return
v = Q.popleft()
print(v)
for w in G[v]:
ind[w] -= 1
if ind[w] == 0:
Q.append(w)
topo(G, ind, Q)
"""
--------------------------------------------------------------------------------
Reading an Adjacency matrix
--------------------------------------------------------------------------------
"""
def adjm():
n = input().strip()
a = []
for i in range(n):
a.append(map(int, input().strip().split()))
return a, n
"""
--------------------------------------------------------------------------------
Floyd Warshall's algorithm
Args : G - Dictionary of edges
s - Starting Node
Vars : dist - Dictionary storing shortest distance from s to every other node
known - Set of knows nodes
path - Preceding node in path
--------------------------------------------------------------------------------
"""
def floy(A_and_n):
(A, n) = A_and_n
dist = list(A)
path = [[0] * n for i in range(n)]
for k in range(n):
for i in range(n):
for j in range(n):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
path[i][k] = k
print(dist)
"""
--------------------------------------------------------------------------------
Prim's MST Algorithm
Args : G - Dictionary of edges
s - Starting Node
Vars : dist - Dictionary storing shortest distance from s to nearest node
known - Set of knows nodes
path - Preceding node in path
--------------------------------------------------------------------------------
"""
def prim(G, s):
dist, known, path = {s: 0}, set(), {s: 0}
while True:
if len(known) == len(G) - 1:
break
mini = 100000
for i in dist:
if i not in known and dist[i] < mini:
mini = dist[i]
u = i
known.add(u)
for v in G[u]:
if v[0] not in known:
if v[1] < dist.get(v[0], 100000):
dist[v[0]] = v[1]
path[v[0]] = u
return dist
"""
--------------------------------------------------------------------------------
Accepting Edge list
Vars : n - Number of nodes
m - Number of edges
Returns : l - Edge list
n - Number of Nodes
--------------------------------------------------------------------------------
"""
def edglist():
n, m = map(int, input().split(" "))
edges = []
for i in range(m):
edges.append(map(int, input().split(" ")))
return edges, n
"""
--------------------------------------------------------------------------------
Kruskal's MST Algorithm
Args : E - Edge list
n - Number of Nodes
Vars : s - Set of all nodes as unique disjoint sets (initially)
--------------------------------------------------------------------------------
"""
def krusk(E_and_n):
# Sort edges on the basis of distance
(E, n) = E_and_n
E.sort(reverse=True, key=lambda x: x[2])
s = [{i} for i in range(1, n + 1)]
while True:
if len(s) == 1:
break
print(s)
x = E.pop()
for i in range(len(s)):
if x[0] in s[i]:
break
for j in range(len(s)):
if x[1] in s[j]:
if i == j:
break
s[j].update(s[i])
s.pop(i)
break
# find the isolated node in the graph
def find_isolated_nodes(graph):
isolated = []
for node in graph:
if not graph[node]:
isolated.append(node)
return isolated
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Finding the shortest path in 0-1-graph in O(E + V) which is faster than dijkstra.
0-1-graph is the weighted graph with the weights equal to 0 or 1.
Link: https://codeforces.com/blog/entry/22276
"""
from __future__ import annotations
from collections import deque
from collections.abc import Iterator
from dataclasses import dataclass
@dataclass
class Edge:
"""Weighted directed graph edge."""
destination_vertex: int
weight: int
class AdjacencyList:
"""Graph adjacency list."""
def __init__(self, size: int):
self._graph: list[list[Edge]] = [[] for _ in range(size)]
self._size = size
def __getitem__(self, vertex: int) -> Iterator[Edge]:
"""Get all the vertices adjacent to the given one."""
return iter(self._graph[vertex])
@property
def size(self):
return self._size
def add_edge(self, from_vertex: int, to_vertex: int, weight: int):
"""
>>> g = AdjacencyList(2)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(1, 0, 1)
>>> list(g[0])
[Edge(destination_vertex=1, weight=0)]
>>> list(g[1])
[Edge(destination_vertex=0, weight=1)]
>>> g.add_edge(0, 1, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.add_edge(0, 2, 1)
Traceback (most recent call last):
...
ValueError: Vertex indexes must be in [0; size).
"""
if weight not in (0, 1):
raise ValueError("Edge weight must be either 0 or 1.")
if to_vertex < 0 or to_vertex >= self.size:
raise ValueError("Vertex indexes must be in [0; size).")
self._graph[from_vertex].append(Edge(to_vertex, weight))
def get_shortest_path(self, start_vertex: int, finish_vertex: int) -> int | None:
"""
Return the shortest distance from start_vertex to finish_vertex in 0-1-graph.
1 1 1
0--------->3 6--------7>------->8
| ^ ^ ^ |1
| | | |0 v
0| |0 1| 9-------->10
| | | ^ 1
v | | |0
1--------->2<-------4------->5
0 1 1
>>> g = AdjacencyList(11)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(0, 3, 1)
>>> g.add_edge(1, 2, 0)
>>> g.add_edge(2, 3, 0)
>>> g.add_edge(4, 2, 1)
>>> g.add_edge(4, 5, 1)
>>> g.add_edge(4, 6, 1)
>>> g.add_edge(5, 9, 0)
>>> g.add_edge(6, 7, 1)
>>> g.add_edge(7, 8, 1)
>>> g.add_edge(8, 10, 1)
>>> g.add_edge(9, 7, 0)
>>> g.add_edge(9, 10, 1)
>>> g.add_edge(1, 2, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.get_shortest_path(0, 3)
0
>>> g.get_shortest_path(0, 4)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
>>> g.get_shortest_path(4, 10)
2
>>> g.get_shortest_path(4, 8)
2
>>> g.get_shortest_path(0, 1)
0
>>> g.get_shortest_path(1, 0)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
"""
queue = deque([start_vertex])
distances: list[int | None] = [None] * self.size
distances[start_vertex] = 0
while queue:
current_vertex = queue.popleft()
current_distance = distances[current_vertex]
if current_distance is None:
continue
for edge in self[current_vertex]:
new_distance = current_distance + edge.weight
dest_vertex_distance = distances[edge.destination_vertex]
if (
isinstance(dest_vertex_distance, int)
and new_distance >= dest_vertex_distance
):
continue
distances[edge.destination_vertex] = new_distance
if edge.weight == 0:
queue.appendleft(edge.destination_vertex)
else:
queue.append(edge.destination_vertex)
if distances[finish_vertex] is None:
raise ValueError("No path from start_vertex to finish_vertex.")
return distances[finish_vertex]
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Finding the shortest path in 0-1-graph in O(E + V) which is faster than dijkstra.
0-1-graph is the weighted graph with the weights equal to 0 or 1.
Link: https://codeforces.com/blog/entry/22276
"""
from __future__ import annotations
from collections import deque
from collections.abc import Iterator
from dataclasses import dataclass
@dataclass
class Edge:
"""Weighted directed graph edge."""
destination_vertex: int
weight: int
class AdjacencyList:
"""Graph adjacency list."""
def __init__(self, size: int):
self._graph: list[list[Edge]] = [[] for _ in range(size)]
self._size = size
def __getitem__(self, vertex: int) -> Iterator[Edge]:
"""Get all the vertices adjacent to the given one."""
return iter(self._graph[vertex])
@property
def size(self):
return self._size
def add_edge(self, from_vertex: int, to_vertex: int, weight: int):
"""
>>> g = AdjacencyList(2)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(1, 0, 1)
>>> list(g[0])
[Edge(destination_vertex=1, weight=0)]
>>> list(g[1])
[Edge(destination_vertex=0, weight=1)]
>>> g.add_edge(0, 1, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.add_edge(0, 2, 1)
Traceback (most recent call last):
...
ValueError: Vertex indexes must be in [0; size).
"""
if weight not in (0, 1):
raise ValueError("Edge weight must be either 0 or 1.")
if to_vertex < 0 or to_vertex >= self.size:
raise ValueError("Vertex indexes must be in [0; size).")
self._graph[from_vertex].append(Edge(to_vertex, weight))
def get_shortest_path(self, start_vertex: int, finish_vertex: int) -> int | None:
"""
Return the shortest distance from start_vertex to finish_vertex in 0-1-graph.
1 1 1
0--------->3 6--------7>------->8
| ^ ^ ^ |1
| | | |0 v
0| |0 1| 9-------->10
| | | ^ 1
v | | |0
1--------->2<-------4------->5
0 1 1
>>> g = AdjacencyList(11)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(0, 3, 1)
>>> g.add_edge(1, 2, 0)
>>> g.add_edge(2, 3, 0)
>>> g.add_edge(4, 2, 1)
>>> g.add_edge(4, 5, 1)
>>> g.add_edge(4, 6, 1)
>>> g.add_edge(5, 9, 0)
>>> g.add_edge(6, 7, 1)
>>> g.add_edge(7, 8, 1)
>>> g.add_edge(8, 10, 1)
>>> g.add_edge(9, 7, 0)
>>> g.add_edge(9, 10, 1)
>>> g.add_edge(1, 2, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.get_shortest_path(0, 3)
0
>>> g.get_shortest_path(0, 4)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
>>> g.get_shortest_path(4, 10)
2
>>> g.get_shortest_path(4, 8)
2
>>> g.get_shortest_path(0, 1)
0
>>> g.get_shortest_path(1, 0)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
"""
queue = deque([start_vertex])
distances: list[int | None] = [None] * self.size
distances[start_vertex] = 0
while queue:
current_vertex = queue.popleft()
current_distance = distances[current_vertex]
if current_distance is None:
continue
for edge in self[current_vertex]:
new_distance = current_distance + edge.weight
dest_vertex_distance = distances[edge.destination_vertex]
if (
isinstance(dest_vertex_distance, int)
and new_distance >= dest_vertex_distance
):
continue
distances[edge.destination_vertex] = new_distance
if edge.weight == 0:
queue.appendleft(edge.destination_vertex)
else:
queue.append(edge.destination_vertex)
if distances[finish_vertex] is None:
raise ValueError("No path from start_vertex to finish_vertex.")
return distances[finish_vertex]
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
2D Transformations are regularly used in Linear Algebra.
I have added the codes for reflection, projection, scaling and rotation 2D matrices.
scaling(5) = [[5.0, 0.0], [0.0, 5.0]]
rotation(45) = [[0.5253219888177297, -0.8509035245341184],
[0.8509035245341184, 0.5253219888177297]]
projection(45) = [[0.27596319193541496, 0.446998331800279],
[0.446998331800279, 0.7240368080645851]]
reflection(45) = [[0.05064397763545947, 0.893996663600558],
[0.893996663600558, 0.7018070490682369]]
"""
from math import cos, sin
def scaling(scaling_factor: float) -> list[list[float]]:
"""
>>> scaling(5)
[[5.0, 0.0], [0.0, 5.0]]
"""
scaling_factor = float(scaling_factor)
return [[scaling_factor * int(x == y) for x in range(2)] for y in range(2)]
def rotation(angle: float) -> list[list[float]]:
"""
>>> rotation(45) # doctest: +NORMALIZE_WHITESPACE
[[0.5253219888177297, -0.8509035245341184],
[0.8509035245341184, 0.5253219888177297]]
"""
c, s = cos(angle), sin(angle)
return [[c, -s], [s, c]]
def projection(angle: float) -> list[list[float]]:
"""
>>> projection(45) # doctest: +NORMALIZE_WHITESPACE
[[0.27596319193541496, 0.446998331800279],
[0.446998331800279, 0.7240368080645851]]
"""
c, s = cos(angle), sin(angle)
cs = c * s
return [[c * c, cs], [cs, s * s]]
def reflection(angle: float) -> list[list[float]]:
"""
>>> reflection(45) # doctest: +NORMALIZE_WHITESPACE
[[0.05064397763545947, 0.893996663600558],
[0.893996663600558, 0.7018070490682369]]
"""
c, s = cos(angle), sin(angle)
cs = c * s
return [[2 * c - 1, 2 * cs], [2 * cs, 2 * s - 1]]
print(f" {scaling(5) = }")
print(f" {rotation(45) = }")
print(f"{projection(45) = }")
print(f"{reflection(45) = }")
| """
2D Transformations are regularly used in Linear Algebra.
I have added the codes for reflection, projection, scaling and rotation 2D matrices.
scaling(5) = [[5.0, 0.0], [0.0, 5.0]]
rotation(45) = [[0.5253219888177297, -0.8509035245341184],
[0.8509035245341184, 0.5253219888177297]]
projection(45) = [[0.27596319193541496, 0.446998331800279],
[0.446998331800279, 0.7240368080645851]]
reflection(45) = [[0.05064397763545947, 0.893996663600558],
[0.893996663600558, 0.7018070490682369]]
"""
from math import cos, sin
def scaling(scaling_factor: float) -> list[list[float]]:
"""
>>> scaling(5)
[[5.0, 0.0], [0.0, 5.0]]
"""
scaling_factor = float(scaling_factor)
return [[scaling_factor * int(x == y) for x in range(2)] for y in range(2)]
def rotation(angle: float) -> list[list[float]]:
"""
>>> rotation(45) # doctest: +NORMALIZE_WHITESPACE
[[0.5253219888177297, -0.8509035245341184],
[0.8509035245341184, 0.5253219888177297]]
"""
c, s = cos(angle), sin(angle)
return [[c, -s], [s, c]]
def projection(angle: float) -> list[list[float]]:
"""
>>> projection(45) # doctest: +NORMALIZE_WHITESPACE
[[0.27596319193541496, 0.446998331800279],
[0.446998331800279, 0.7240368080645851]]
"""
c, s = cos(angle), sin(angle)
cs = c * s
return [[c * c, cs], [cs, s * s]]
def reflection(angle: float) -> list[list[float]]:
"""
>>> reflection(45) # doctest: +NORMALIZE_WHITESPACE
[[0.05064397763545947, 0.893996663600558],
[0.893996663600558, 0.7018070490682369]]
"""
c, s = cos(angle), sin(angle)
cs = c * s
return [[2 * c - 1, 2 * cs], [2 * cs, 2 * s - 1]]
print(f" {scaling(5) = }")
print(f" {rotation(45) = }")
print(f"{projection(45) = }")
print(f"{reflection(45) = }")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
from .number_theory.prime_numbers import next_prime
class HashTable:
"""
Basic Hash Table example with open addressing and linear probing
"""
def __init__(self, size_table, charge_factor=None, lim_charge=None):
self.size_table = size_table
self.values = [None] * self.size_table
self.lim_charge = 0.75 if lim_charge is None else lim_charge
self.charge_factor = 1 if charge_factor is None else charge_factor
self.__aux_list = []
self._keys = {}
def keys(self):
return self._keys
def balanced_factor(self):
return sum(1 for slot in self.values if slot is not None) / (
self.size_table * self.charge_factor
)
def hash_function(self, key):
return key % self.size_table
def _step_by_step(self, step_ord):
print(f"step {step_ord}")
print([i for i in range(len(self.values))])
print(self.values)
def bulk_insert(self, values):
i = 1
self.__aux_list = values
for value in values:
self.insert_data(value)
self._step_by_step(i)
i += 1
def _set_value(self, key, data):
self.values[key] = data
self._keys[key] = data
def _collision_resolution(self, key, data=None):
new_key = self.hash_function(key + 1)
while self.values[new_key] is not None and self.values[new_key] != key:
if self.values.count(None) > 0:
new_key = self.hash_function(new_key + 1)
else:
new_key = None
break
return new_key
def rehashing(self):
survivor_values = [value for value in self.values if value is not None]
self.size_table = next_prime(self.size_table, factor=2)
self._keys.clear()
self.values = [None] * self.size_table # hell's pointers D: don't DRY ;/
for value in survivor_values:
self.insert_data(value)
def insert_data(self, data):
key = self.hash_function(data)
if self.values[key] is None:
self._set_value(key, data)
elif self.values[key] == data:
pass
else:
collision_resolution = self._collision_resolution(key, data)
if collision_resolution is not None:
self._set_value(collision_resolution, data)
else:
self.rehashing()
self.insert_data(data)
| #!/usr/bin/env python3
from .number_theory.prime_numbers import next_prime
class HashTable:
"""
Basic Hash Table example with open addressing and linear probing
"""
def __init__(self, size_table, charge_factor=None, lim_charge=None):
self.size_table = size_table
self.values = [None] * self.size_table
self.lim_charge = 0.75 if lim_charge is None else lim_charge
self.charge_factor = 1 if charge_factor is None else charge_factor
self.__aux_list = []
self._keys = {}
def keys(self):
return self._keys
def balanced_factor(self):
return sum(1 for slot in self.values if slot is not None) / (
self.size_table * self.charge_factor
)
def hash_function(self, key):
return key % self.size_table
def _step_by_step(self, step_ord):
print(f"step {step_ord}")
print([i for i in range(len(self.values))])
print(self.values)
def bulk_insert(self, values):
i = 1
self.__aux_list = values
for value in values:
self.insert_data(value)
self._step_by_step(i)
i += 1
def _set_value(self, key, data):
self.values[key] = data
self._keys[key] = data
def _collision_resolution(self, key, data=None):
new_key = self.hash_function(key + 1)
while self.values[new_key] is not None and self.values[new_key] != key:
if self.values.count(None) > 0:
new_key = self.hash_function(new_key + 1)
else:
new_key = None
break
return new_key
def rehashing(self):
survivor_values = [value for value in self.values if value is not None]
self.size_table = next_prime(self.size_table, factor=2)
self._keys.clear()
self.values = [None] * self.size_table # hell's pointers D: don't DRY ;/
for value in survivor_values:
self.insert_data(value)
def insert_data(self, data):
key = self.hash_function(data)
if self.values[key] is None:
self._set_value(key, data)
elif self.values[key] == data:
pass
else:
collision_resolution = self._collision_resolution(key, data)
if collision_resolution is not None:
self._set_value(collision_resolution, data)
else:
self.rehashing()
self.insert_data(data)
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from typing import Any
class Node:
def __init__(self, data: Any):
"""
Create and initialize Node class instance.
>>> Node(20)
Node(20)
>>> Node("Hello, world!")
Node(Hello, world!)
>>> Node(None)
Node(None)
>>> Node(True)
Node(True)
"""
self.data = data
self.next = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
"""
return f"Node({self.data})"
class LinkedList:
def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
"""
self.head = None
def __iter__(self) -> Any:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list.insert_tail("tail_1")
>>> linked_list.insert_tail("tail_2")
>>> for node in linked_list: # __iter__ used here.
... node
'tail'
'tail_1'
'tail_2'
"""
node = self.head
while node:
yield node.data
node = node.next
def __len__(self) -> int:
"""
Return length of linked list i.e. number of nodes
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.insert_tail("tail")
>>> len(linked_list)
1
>>> linked_list.insert_head("head")
>>> len(linked_list)
2
>>> _ = linked_list.delete_tail()
>>> len(linked_list)
1
>>> _ = linked_list.delete_head()
>>> len(linked_list)
0
"""
return len(tuple(iter(self)))
def __repr__(self) -> str:
"""
String representation/visualization of a Linked Lists
>>> linked_list = LinkedList()
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
'1->3'
"""
return "->".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
Indexing Support. Used to get a node at particular position
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> all(str(linked_list[i]) == str(i) for i in range(0, 10))
True
>>> linked_list[-10]
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)]
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
for i, node in enumerate(self):
if i == index:
return node
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
"""
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> linked_list[0] = 666
>>> linked_list[0]
666
>>> linked_list[5] = -666
>>> linked_list[5]
-666
>>> linked_list[-10] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
current = self.head
for i in range(index):
current = current.next
current.data = data
def insert_tail(self, data: Any) -> None:
"""
Insert data to the end of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
tail->tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
tail->tail_2->tail_3
"""
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
"""
Insert data to the beginning of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_head("head")
>>> linked_list
head
>>> linked_list.insert_head("head_2")
>>> linked_list
head_2->head
>>> linked_list.insert_head("head_3")
>>> linked_list
head_3->head_2->head
"""
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
"""
Insert data at given index.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
first->fourth->second->third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
first->fourth->second->fifth->third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = new_node
elif index == 0:
new_node.next = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
def print_list(self) -> None: # print every node data
"""
This method prints every node data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
"""
print(self)
def delete_head(self) -> Any:
"""
Delete the first node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_head()
'first'
>>> linked_list
second->third
>>> linked_list.delete_head()
'second'
>>> linked_list
third
>>> linked_list.delete_head()
'third'
>>> linked_list.delete_head()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(0)
def delete_tail(self) -> Any: # delete from tail
"""
Delete the tail end node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_tail()
'third'
>>> linked_list
first->second
>>> linked_list.delete_tail()
'second'
>>> linked_list
first
>>> linked_list.delete_tail()
'first'
>>> linked_list.delete_tail()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
"""
Delete node at given index and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
first->third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
>>> linked_list.delete_nth(-1) # this also raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
if not 0 <= index <= len(self) - 1: # test if index is valid
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
return delete_node.data
def is_empty(self) -> bool:
"""
Check if linked list is empty.
>>> linked_list = LinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_head("first")
>>> linked_list.is_empty()
False
"""
return self.head is None
def reverse(self) -> None:
"""
This reverses the linked list order.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.reverse()
>>> linked_list
third->second->first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
next_node = current.next
# Make the current node's next point backwards
current.next = prev
# Make the previous node be the current node
prev = current
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
def test_singly_linked_list() -> None:
"""
>>> test_singly_linked_list()
"""
linked_list = LinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
for i in range(0, 9):
linked_list[i] = -i
assert all(linked_list[i] == -i for i in range(0, 9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
"""
This section of the test used varying data types for input.
>>> test_singly_linked_list_2()
"""
input = [
-9,
100,
Node(77345112),
"dlrow olleH",
7,
5555,
0,
-192.55555,
"Hello, world!",
77.9,
Node(10),
None,
None,
12.20,
]
linked_list = LinkedList()
for i in input:
linked_list.insert_tail(i)
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
"-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
== "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
"7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
)
def main():
from doctest import testmod
testmod()
linked_list = LinkedList()
linked_list.insert_head(input("Inserting 1st at head ").strip())
linked_list.insert_head(input("Inserting 2nd at head ").strip())
print("\nPrint list:")
linked_list.print_list()
linked_list.insert_tail(input("\nInserting 1st at tail ").strip())
linked_list.insert_tail(input("Inserting 2nd at tail ").strip())
print("\nPrint list:")
linked_list.print_list()
print("\nDelete head")
linked_list.delete_head()
print("Delete tail")
linked_list.delete_tail()
print("\nPrint list:")
linked_list.print_list()
print("\nReverse linked list")
linked_list.reverse()
print("\nPrint list:")
linked_list.print_list()
print("\nString representation of linked list:")
print(linked_list)
print("\nReading/changing Node data using indexing:")
print(f"Element at Position 1: {linked_list[1]}")
linked_list[1] = input("Enter New Value: ").strip()
print("New list:")
print(linked_list)
print(f"length of linked_list is : {len(linked_list)}")
if __name__ == "__main__":
main()
| from typing import Any
class Node:
def __init__(self, data: Any):
"""
Create and initialize Node class instance.
>>> Node(20)
Node(20)
>>> Node("Hello, world!")
Node(Hello, world!)
>>> Node(None)
Node(None)
>>> Node(True)
Node(True)
"""
self.data = data
self.next = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
"""
return f"Node({self.data})"
class LinkedList:
def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
"""
self.head = None
def __iter__(self) -> Any:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list.insert_tail("tail_1")
>>> linked_list.insert_tail("tail_2")
>>> for node in linked_list: # __iter__ used here.
... node
'tail'
'tail_1'
'tail_2'
"""
node = self.head
while node:
yield node.data
node = node.next
def __len__(self) -> int:
"""
Return length of linked list i.e. number of nodes
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.insert_tail("tail")
>>> len(linked_list)
1
>>> linked_list.insert_head("head")
>>> len(linked_list)
2
>>> _ = linked_list.delete_tail()
>>> len(linked_list)
1
>>> _ = linked_list.delete_head()
>>> len(linked_list)
0
"""
return len(tuple(iter(self)))
def __repr__(self) -> str:
"""
String representation/visualization of a Linked Lists
>>> linked_list = LinkedList()
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
'1->3'
"""
return "->".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
Indexing Support. Used to get a node at particular position
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> all(str(linked_list[i]) == str(i) for i in range(0, 10))
True
>>> linked_list[-10]
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)]
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
for i, node in enumerate(self):
if i == index:
return node
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
"""
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> linked_list[0] = 666
>>> linked_list[0]
666
>>> linked_list[5] = -666
>>> linked_list[5]
-666
>>> linked_list[-10] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
current = self.head
for i in range(index):
current = current.next
current.data = data
def insert_tail(self, data: Any) -> None:
"""
Insert data to the end of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
tail->tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
tail->tail_2->tail_3
"""
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
"""
Insert data to the beginning of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_head("head")
>>> linked_list
head
>>> linked_list.insert_head("head_2")
>>> linked_list
head_2->head
>>> linked_list.insert_head("head_3")
>>> linked_list
head_3->head_2->head
"""
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
"""
Insert data at given index.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
first->fourth->second->third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
first->fourth->second->fifth->third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = new_node
elif index == 0:
new_node.next = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
def print_list(self) -> None: # print every node data
"""
This method prints every node data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
"""
print(self)
def delete_head(self) -> Any:
"""
Delete the first node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_head()
'first'
>>> linked_list
second->third
>>> linked_list.delete_head()
'second'
>>> linked_list
third
>>> linked_list.delete_head()
'third'
>>> linked_list.delete_head()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(0)
def delete_tail(self) -> Any: # delete from tail
"""
Delete the tail end node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_tail()
'third'
>>> linked_list
first->second
>>> linked_list.delete_tail()
'second'
>>> linked_list
first
>>> linked_list.delete_tail()
'first'
>>> linked_list.delete_tail()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
"""
Delete node at given index and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
first->third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
>>> linked_list.delete_nth(-1) # this also raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
if not 0 <= index <= len(self) - 1: # test if index is valid
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
return delete_node.data
def is_empty(self) -> bool:
"""
Check if linked list is empty.
>>> linked_list = LinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_head("first")
>>> linked_list.is_empty()
False
"""
return self.head is None
def reverse(self) -> None:
"""
This reverses the linked list order.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.reverse()
>>> linked_list
third->second->first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
next_node = current.next
# Make the current node's next point backwards
current.next = prev
# Make the previous node be the current node
prev = current
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
def test_singly_linked_list() -> None:
"""
>>> test_singly_linked_list()
"""
linked_list = LinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
assert False # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
for i in range(0, 9):
linked_list[i] = -i
assert all(linked_list[i] == -i for i in range(0, 9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
"""
This section of the test used varying data types for input.
>>> test_singly_linked_list_2()
"""
input = [
-9,
100,
Node(77345112),
"dlrow olleH",
7,
5555,
0,
-192.55555,
"Hello, world!",
77.9,
Node(10),
None,
None,
12.20,
]
linked_list = LinkedList()
for i in input:
linked_list.insert_tail(i)
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
"-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
== "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
"7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
)
def main():
from doctest import testmod
testmod()
linked_list = LinkedList()
linked_list.insert_head(input("Inserting 1st at head ").strip())
linked_list.insert_head(input("Inserting 2nd at head ").strip())
print("\nPrint list:")
linked_list.print_list()
linked_list.insert_tail(input("\nInserting 1st at tail ").strip())
linked_list.insert_tail(input("Inserting 2nd at tail ").strip())
print("\nPrint list:")
linked_list.print_list()
print("\nDelete head")
linked_list.delete_head()
print("Delete tail")
linked_list.delete_tail()
print("\nPrint list:")
linked_list.print_list()
print("\nReverse linked list")
linked_list.reverse()
print("\nPrint list:")
linked_list.print_list()
print("\nString representation of linked list:")
print(linked_list)
print("\nReading/changing Node data using indexing:")
print(f"Element at Position 1: {linked_list[1]}")
linked_list[1] = input("Enter New Value: ").strip()
print("New list:")
print(linked_list)
print(f"length of linked_list is : {len(linked_list)}")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The nested brackets problem is a problem that determines if a sequence of
brackets are properly nested. A sequence of brackets s is considered properly nested
if any of the following conditions are true:
- s is empty
- s has the form (U) or [U] or {U} where U is a properly nested string
- s has the form VW where V and W are properly nested strings
For example, the string "()()[()]" is properly nested but "[(()]" is not.
The function called is_balanced takes as input a string S which is a sequence of
brackets and returns true if S is nested and false otherwise.
"""
def is_balanced(S):
stack = []
open_brackets = set({"(", "[", "{"})
closed_brackets = set({")", "]", "}"})
open_to_closed = dict({"{": "}", "[": "]", "(": ")"})
for i in range(len(S)):
if S[i] in open_brackets:
stack.append(S[i])
elif S[i] in closed_brackets:
if len(stack) == 0 or (
len(stack) > 0 and open_to_closed[stack.pop()] != S[i]
):
return False
return len(stack) == 0
def main():
s = input("Enter sequence of brackets: ")
if is_balanced(s):
print(s, "is balanced")
else:
print(s, "is not balanced")
if __name__ == "__main__":
main()
| """
The nested brackets problem is a problem that determines if a sequence of
brackets are properly nested. A sequence of brackets s is considered properly nested
if any of the following conditions are true:
- s is empty
- s has the form (U) or [U] or {U} where U is a properly nested string
- s has the form VW where V and W are properly nested strings
For example, the string "()()[()]" is properly nested but "[(()]" is not.
The function called is_balanced takes as input a string S which is a sequence of
brackets and returns true if S is nested and false otherwise.
"""
def is_balanced(S):
stack = []
open_brackets = set({"(", "[", "{"})
closed_brackets = set({")", "]", "}"})
open_to_closed = dict({"{": "}", "[": "]", "(": ")"})
for i in range(len(S)):
if S[i] in open_brackets:
stack.append(S[i])
elif S[i] in closed_brackets:
if len(stack) == 0 or (
len(stack) > 0 and open_to_closed[stack.pop()] != S[i]
):
return False
return len(stack) == 0
def main():
s = input("Enter sequence of brackets: ")
if is_balanced(s):
print(s, "is balanced")
else:
print(s, "is not balanced")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/python
"""Author Anurag Kumar | [email protected] | git/anuragkumarak95
Simple example of Fractal generation using recursive function.
What is Sierpinski Triangle?
>>The Sierpinski triangle (also with the original orthography Sierpinski), also called
the Sierpinski gasket or the Sierpinski Sieve, is a fractal and attractive fixed set
with the overall shape of an equilateral triangle, subdivided recursively into smaller
equilateral triangles. Originally constructed as a curve, this is one of the basic
examples of self-similar sets, i.e., it is a mathematically generated pattern that can
be reproducible at any magnification or reduction. It is named after the Polish
mathematician Wacław Sierpinski, but appeared as a decorative pattern many centuries
prior to the work of Sierpinski.
Requirements(pip):
- turtle
Python:
- 2.6
Usage:
- $python sierpinski_triangle.py <int:depth_for_fractal>
Credits: This code was written by editing the code from
http://www.riannetrujillo.com/blog/python-fractal/
"""
import sys
import turtle
PROGNAME = "Sierpinski Triangle"
points = [[-175, -125], [0, 175], [175, -125]] # size of triangle
def getMid(p1, p2):
return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2) # find midpoint
def triangle(points, depth):
myPen.up()
myPen.goto(points[0][0], points[0][1])
myPen.down()
myPen.goto(points[1][0], points[1][1])
myPen.goto(points[2][0], points[2][1])
myPen.goto(points[0][0], points[0][1])
if depth > 0:
triangle(
[points[0], getMid(points[0], points[1]), getMid(points[0], points[2])],
depth - 1,
)
triangle(
[points[1], getMid(points[0], points[1]), getMid(points[1], points[2])],
depth - 1,
)
triangle(
[points[2], getMid(points[2], points[1]), getMid(points[0], points[2])],
depth - 1,
)
if __name__ == "__main__":
if len(sys.argv) != 2:
raise ValueError(
"right format for using this script: "
"$python fractals.py <int:depth_for_fractal>"
)
myPen = turtle.Turtle()
myPen.ht()
myPen.speed(5)
myPen.pencolor("red")
triangle(points, int(sys.argv[1]))
| #!/usr/bin/python
"""Author Anurag Kumar | [email protected] | git/anuragkumarak95
Simple example of Fractal generation using recursive function.
What is Sierpinski Triangle?
>>The Sierpinski triangle (also with the original orthography Sierpinski), also called
the Sierpinski gasket or the Sierpinski Sieve, is a fractal and attractive fixed set
with the overall shape of an equilateral triangle, subdivided recursively into smaller
equilateral triangles. Originally constructed as a curve, this is one of the basic
examples of self-similar sets, i.e., it is a mathematically generated pattern that can
be reproducible at any magnification or reduction. It is named after the Polish
mathematician Wacław Sierpinski, but appeared as a decorative pattern many centuries
prior to the work of Sierpinski.
Requirements(pip):
- turtle
Python:
- 2.6
Usage:
- $python sierpinski_triangle.py <int:depth_for_fractal>
Credits: This code was written by editing the code from
http://www.riannetrujillo.com/blog/python-fractal/
"""
import sys
import turtle
PROGNAME = "Sierpinski Triangle"
points = [[-175, -125], [0, 175], [175, -125]] # size of triangle
def getMid(p1, p2):
return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2) # find midpoint
def triangle(points, depth):
myPen.up()
myPen.goto(points[0][0], points[0][1])
myPen.down()
myPen.goto(points[1][0], points[1][1])
myPen.goto(points[2][0], points[2][1])
myPen.goto(points[0][0], points[0][1])
if depth > 0:
triangle(
[points[0], getMid(points[0], points[1]), getMid(points[0], points[2])],
depth - 1,
)
triangle(
[points[1], getMid(points[0], points[1]), getMid(points[1], points[2])],
depth - 1,
)
triangle(
[points[2], getMid(points[2], points[1]), getMid(points[0], points[2])],
depth - 1,
)
if __name__ == "__main__":
if len(sys.argv) != 2:
raise ValueError(
"right format for using this script: "
"$python fractals.py <int:depth_for_fractal>"
)
myPen = turtle.Turtle()
myPen.ht()
myPen.speed(5)
myPen.pencolor("red")
triangle(points, int(sys.argv[1]))
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Combination
"""
from math import factorial
def combinations(n: int, k: int) -> int:
"""
Returns the number of different combinations of k length which can
be made from n values, where n >= k.
Examples:
>>> combinations(10,5)
252
>>> combinations(6,3)
20
>>> combinations(20,5)
15504
>>> combinations(52, 5)
2598960
>>> combinations(0, 0)
1
>>> combinations(-4, -5)
...
Traceback (most recent call last):
ValueError: Please enter positive integers for n and k where n >= k
"""
# If either of the conditions are true, the function is being asked
# to calculate a factorial of a negative number, which is not possible
if n < k or k < 0:
raise ValueError("Please enter positive integers for n and k where n >= k")
return int(factorial(n) / ((factorial(k)) * (factorial(n - k))))
if __name__ == "__main__":
print(
"\nThe number of five-card hands possible from a standard",
f"fifty-two card deck is: {combinations(52, 5)}",
)
print(
"\nIf a class of 40 students must be arranged into groups of",
f"4 for group projects, there are {combinations(40, 4)} ways",
"to arrange them.\n",
)
print(
"If 10 teams are competing in a Formula One race, there",
f"are {combinations(10, 3)} ways that first, second and",
"third place can be awarded.\n",
)
| """
https://en.wikipedia.org/wiki/Combination
"""
from math import factorial
def combinations(n: int, k: int) -> int:
"""
Returns the number of different combinations of k length which can
be made from n values, where n >= k.
Examples:
>>> combinations(10,5)
252
>>> combinations(6,3)
20
>>> combinations(20,5)
15504
>>> combinations(52, 5)
2598960
>>> combinations(0, 0)
1
>>> combinations(-4, -5)
...
Traceback (most recent call last):
ValueError: Please enter positive integers for n and k where n >= k
"""
# If either of the conditions are true, the function is being asked
# to calculate a factorial of a negative number, which is not possible
if n < k or k < 0:
raise ValueError("Please enter positive integers for n and k where n >= k")
return int(factorial(n) / ((factorial(k)) * (factorial(n - k))))
if __name__ == "__main__":
print(
"\nThe number of five-card hands possible from a standard",
f"fifty-two card deck is: {combinations(52, 5)}",
)
print(
"\nIf a class of 40 students must be arranged into groups of",
f"4 for group projects, there are {combinations(40, 4)} ways",
"to arrange them.\n",
)
print(
"If 10 teams are competing in a Formula One race, there",
f"are {combinations(10, 3)} ways that first, second and",
"third place can be awarded.\n",
)
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| __author__ = "Tobias Carryer"
from time import time
class LinearCongruentialGenerator:
"""
A pseudorandom number generator.
"""
# The default value for **seed** is the result of a function call which is not
# normally recommended and causes flake8-bugbear to raise a B008 error. However,
# in this case, it is accptable because `LinearCongruentialGenerator.__init__()`
# will only be called once per instance and it ensures that each instance will
# generate a unique sequence of numbers.
def __init__(self, multiplier, increment, modulo, seed=int(time())): # noqa: B008
"""
These parameters are saved and used when nextNumber() is called.
modulo is the largest number that can be generated (exclusive). The most
efficient values are powers of 2. 2^32 is a common value.
"""
self.multiplier = multiplier
self.increment = increment
self.modulo = modulo
self.seed = seed
def next_number(self):
"""
The smallest number that can be generated is zero.
The largest number that can be generated is modulo-1. modulo is set in the
constructor.
"""
self.seed = (self.multiplier * self.seed + self.increment) % self.modulo
return self.seed
if __name__ == "__main__":
# Show the LCG in action.
lcg = LinearCongruentialGenerator(1664525, 1013904223, 2 << 31)
while True:
print(lcg.next_number())
| __author__ = "Tobias Carryer"
from time import time
class LinearCongruentialGenerator:
"""
A pseudorandom number generator.
"""
# The default value for **seed** is the result of a function call which is not
# normally recommended and causes flake8-bugbear to raise a B008 error. However,
# in this case, it is accptable because `LinearCongruentialGenerator.__init__()`
# will only be called once per instance and it ensures that each instance will
# generate a unique sequence of numbers.
def __init__(self, multiplier, increment, modulo, seed=int(time())): # noqa: B008
"""
These parameters are saved and used when nextNumber() is called.
modulo is the largest number that can be generated (exclusive). The most
efficient values are powers of 2. 2^32 is a common value.
"""
self.multiplier = multiplier
self.increment = increment
self.modulo = modulo
self.seed = seed
def next_number(self):
"""
The smallest number that can be generated is zero.
The largest number that can be generated is modulo-1. modulo is set in the
constructor.
"""
self.seed = (self.multiplier * self.seed + self.increment) % self.modulo
return self.seed
if __name__ == "__main__":
# Show the LCG in action.
lcg = LinearCongruentialGenerator(1664525, 1013904223, 2 << 31)
while True:
print(lcg.next_number())
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is pure Python implementation of fibonacci search.
Resources used:
https://en.wikipedia.org/wiki/Fibonacci_search_technique
For doctests run following command:
python3 -m doctest -v fibonacci_search.py
For manual testing run:
python3 fibonacci_search.py
"""
from functools import lru_cache
@lru_cache
def fibonacci(k: int) -> int:
"""Finds fibonacci number in index k.
Parameters
----------
k :
Index of fibonacci.
Returns
-------
int
Fibonacci number in position k.
>>> fibonacci(0)
0
>>> fibonacci(2)
1
>>> fibonacci(5)
5
>>> fibonacci(15)
610
>>> fibonacci('a')
Traceback (most recent call last):
TypeError: k must be an integer.
>>> fibonacci(-5)
Traceback (most recent call last):
ValueError: k integer must be greater or equal to zero.
"""
if not isinstance(k, int):
raise TypeError("k must be an integer.")
if k < 0:
raise ValueError("k integer must be greater or equal to zero.")
if k == 0:
return 0
elif k == 1:
return 1
else:
return fibonacci(k - 1) + fibonacci(k - 2)
def fibonacci_search(arr: list, val: int) -> int:
"""A pure Python implementation of a fibonacci search algorithm.
Parameters
----------
arr
List of sorted elements.
val
Element to search in list.
Returns
-------
int
The index of the element in the array.
-1 if the element is not found.
>>> fibonacci_search([4, 5, 6, 7], 4)
0
>>> fibonacci_search([4, 5, 6, 7], -10)
-1
>>> fibonacci_search([-18, 2], -18)
0
>>> fibonacci_search([5], 5)
0
>>> fibonacci_search(['a', 'c', 'd'], 'c')
1
>>> fibonacci_search(['a', 'c', 'd'], 'f')
-1
>>> fibonacci_search([], 1)
-1
>>> fibonacci_search([.1, .4 , 7], .4)
1
>>> fibonacci_search([], 9)
-1
>>> fibonacci_search(list(range(100)), 63)
63
>>> fibonacci_search(list(range(100)), 99)
99
>>> fibonacci_search(list(range(-100, 100, 3)), -97)
1
>>> fibonacci_search(list(range(-100, 100, 3)), 0)
-1
>>> fibonacci_search(list(range(-100, 100, 5)), 0)
20
>>> fibonacci_search(list(range(-100, 100, 5)), 95)
39
"""
len_list = len(arr)
# Find m such that F_m >= n where F_i is the i_th fibonacci number.
i = 0
while True:
if fibonacci(i) >= len_list:
fibb_k = i
break
i += 1
offset = 0
while fibb_k > 0:
index_k = min(
offset + fibonacci(fibb_k - 1), len_list - 1
) # Prevent out of range
item_k_1 = arr[index_k]
if item_k_1 == val:
return index_k
elif val < item_k_1:
fibb_k -= 1
elif val > item_k_1:
offset += fibonacci(fibb_k - 1)
fibb_k -= 2
else:
return -1
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
This is pure Python implementation of fibonacci search.
Resources used:
https://en.wikipedia.org/wiki/Fibonacci_search_technique
For doctests run following command:
python3 -m doctest -v fibonacci_search.py
For manual testing run:
python3 fibonacci_search.py
"""
from functools import lru_cache
@lru_cache
def fibonacci(k: int) -> int:
"""Finds fibonacci number in index k.
Parameters
----------
k :
Index of fibonacci.
Returns
-------
int
Fibonacci number in position k.
>>> fibonacci(0)
0
>>> fibonacci(2)
1
>>> fibonacci(5)
5
>>> fibonacci(15)
610
>>> fibonacci('a')
Traceback (most recent call last):
TypeError: k must be an integer.
>>> fibonacci(-5)
Traceback (most recent call last):
ValueError: k integer must be greater or equal to zero.
"""
if not isinstance(k, int):
raise TypeError("k must be an integer.")
if k < 0:
raise ValueError("k integer must be greater or equal to zero.")
if k == 0:
return 0
elif k == 1:
return 1
else:
return fibonacci(k - 1) + fibonacci(k - 2)
def fibonacci_search(arr: list, val: int) -> int:
"""A pure Python implementation of a fibonacci search algorithm.
Parameters
----------
arr
List of sorted elements.
val
Element to search in list.
Returns
-------
int
The index of the element in the array.
-1 if the element is not found.
>>> fibonacci_search([4, 5, 6, 7], 4)
0
>>> fibonacci_search([4, 5, 6, 7], -10)
-1
>>> fibonacci_search([-18, 2], -18)
0
>>> fibonacci_search([5], 5)
0
>>> fibonacci_search(['a', 'c', 'd'], 'c')
1
>>> fibonacci_search(['a', 'c', 'd'], 'f')
-1
>>> fibonacci_search([], 1)
-1
>>> fibonacci_search([.1, .4 , 7], .4)
1
>>> fibonacci_search([], 9)
-1
>>> fibonacci_search(list(range(100)), 63)
63
>>> fibonacci_search(list(range(100)), 99)
99
>>> fibonacci_search(list(range(-100, 100, 3)), -97)
1
>>> fibonacci_search(list(range(-100, 100, 3)), 0)
-1
>>> fibonacci_search(list(range(-100, 100, 5)), 0)
20
>>> fibonacci_search(list(range(-100, 100, 5)), 95)
39
"""
len_list = len(arr)
# Find m such that F_m >= n where F_i is the i_th fibonacci number.
i = 0
while True:
if fibonacci(i) >= len_list:
fibb_k = i
break
i += 1
offset = 0
while fibb_k > 0:
index_k = min(
offset + fibonacci(fibb_k - 1), len_list - 1
) # Prevent out of range
item_k_1 = arr[index_k]
if item_k_1 == val:
return index_k
elif val < item_k_1:
fibb_k -= 1
elif val > item_k_1:
offset += fibonacci(fibb_k - 1)
fibb_k -= 2
else:
return -1
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Disjoint set.
Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure
"""
class Node:
def __init__(self, data: int) -> None:
self.data = data
self.rank: int
self.parent: Node
def make_set(x: Node) -> None:
"""
Make x as a set.
"""
# rank is the distance from x to its' parent
# root's rank is 0
x.rank = 0
x.parent = x
def union_set(x: Node, y: Node) -> None:
"""
Union of two sets.
set with bigger rank should be parent, so that the
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
if x == y:
return
elif x.rank > y.rank:
y.parent = x
else:
x.parent = y
if x.rank == y.rank:
y.rank += 1
def find_set(x: Node) -> Node:
"""
Return the parent of x
"""
if x != x.parent:
x.parent = find_set(x.parent)
return x.parent
def find_python_set(node: Node) -> set:
"""
Return a Python Standard Library set that contains i.
"""
sets = ({0, 1, 2}, {3, 4, 5})
for s in sets:
if node.data in s:
return s
raise ValueError(f"{node.data} is not in {sets}")
def test_disjoint_set() -> None:
"""
>>> test_disjoint_set()
"""
vertex = [Node(i) for i in range(6)]
for v in vertex:
make_set(v)
union_set(vertex[0], vertex[1])
union_set(vertex[1], vertex[2])
union_set(vertex[3], vertex[4])
union_set(vertex[3], vertex[5])
for node0 in vertex:
for node1 in vertex:
if find_python_set(node0).isdisjoint(find_python_set(node1)):
assert find_set(node0) != find_set(node1)
else:
assert find_set(node0) == find_set(node1)
if __name__ == "__main__":
test_disjoint_set()
| """
Disjoint set.
Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure
"""
class Node:
def __init__(self, data: int) -> None:
self.data = data
self.rank: int
self.parent: Node
def make_set(x: Node) -> None:
"""
Make x as a set.
"""
# rank is the distance from x to its' parent
# root's rank is 0
x.rank = 0
x.parent = x
def union_set(x: Node, y: Node) -> None:
"""
Union of two sets.
set with bigger rank should be parent, so that the
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
if x == y:
return
elif x.rank > y.rank:
y.parent = x
else:
x.parent = y
if x.rank == y.rank:
y.rank += 1
def find_set(x: Node) -> Node:
"""
Return the parent of x
"""
if x != x.parent:
x.parent = find_set(x.parent)
return x.parent
def find_python_set(node: Node) -> set:
"""
Return a Python Standard Library set that contains i.
"""
sets = ({0, 1, 2}, {3, 4, 5})
for s in sets:
if node.data in s:
return s
raise ValueError(f"{node.data} is not in {sets}")
def test_disjoint_set() -> None:
"""
>>> test_disjoint_set()
"""
vertex = [Node(i) for i in range(6)]
for v in vertex:
make_set(v)
union_set(vertex[0], vertex[1])
union_set(vertex[1], vertex[2])
union_set(vertex[3], vertex[4])
union_set(vertex[3], vertex[5])
for node0 in vertex:
for node1 in vertex:
if find_python_set(node0).isdisjoint(find_python_set(node1)):
assert find_set(node0) != find_set(node1)
else:
assert find_set(node0) == find_set(node1)
if __name__ == "__main__":
test_disjoint_set()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Implementation of Circular Queue using linked lists
# https://en.wikipedia.org/wiki/Circular_buffer
from __future__ import annotations
from typing import Any
class CircularQueueLinkedList:
"""
Circular FIFO list with the given capacity (default queue length : 6)
>>> cq = CircularQueueLinkedList(2)
>>> cq.enqueue('a')
>>> cq.enqueue('b')
>>> cq.enqueue('c')
Traceback (most recent call last):
...
Exception: Full Queue
"""
def __init__(self, initial_capacity: int = 6) -> None:
self.front: Node | None = None
self.rear: Node | None = None
self.create_linked_list(initial_capacity)
def create_linked_list(self, initial_capacity: int) -> None:
current_node = Node()
self.front = current_node
self.rear = current_node
previous_node = current_node
for _ in range(1, initial_capacity):
current_node = Node()
previous_node.next = current_node
current_node.prev = previous_node
previous_node = current_node
previous_node.next = self.front
self.front.prev = previous_node
def is_empty(self) -> bool:
"""
Checks where the queue is empty or not
>>> cq = CircularQueueLinkedList()
>>> cq.is_empty()
True
>>> cq.enqueue('a')
>>> cq.is_empty()
False
>>> cq.dequeue()
'a'
>>> cq.is_empty()
True
"""
return (
self.front == self.rear
and self.front is not None
and self.front.data is None
)
def first(self) -> Any | None:
"""
Returns the first element of the queue
>>> cq = CircularQueueLinkedList()
>>> cq.first()
Traceback (most recent call last):
...
Exception: Empty Queue
>>> cq.enqueue('a')
>>> cq.first()
'a'
>>> cq.dequeue()
'a'
>>> cq.first()
Traceback (most recent call last):
...
Exception: Empty Queue
>>> cq.enqueue('b')
>>> cq.enqueue('c')
>>> cq.first()
'b'
"""
self.check_can_perform_operation()
return self.front.data if self.front else None
def enqueue(self, data: Any) -> None:
"""
Saves data at the end of the queue
>>> cq = CircularQueueLinkedList()
>>> cq.enqueue('a')
>>> cq.enqueue('b')
>>> cq.dequeue()
'a'
>>> cq.dequeue()
'b'
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: Empty Queue
"""
if self.rear is None:
return
self.check_is_full()
if not self.is_empty():
self.rear = self.rear.next
if self.rear:
self.rear.data = data
def dequeue(self) -> Any:
"""
Removes and retrieves the first element of the queue
>>> cq = CircularQueueLinkedList()
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: Empty Queue
>>> cq.enqueue('a')
>>> cq.dequeue()
'a'
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: Empty Queue
"""
self.check_can_perform_operation()
if self.rear is None or self.front is None:
return
if self.front == self.rear:
data = self.front.data
self.front.data = None
return data
old_front = self.front
self.front = old_front.next
data = old_front.data
old_front.data = None
return data
def check_can_perform_operation(self) -> None:
if self.is_empty():
raise Exception("Empty Queue")
def check_is_full(self) -> None:
if self.rear and self.rear.next == self.front:
raise Exception("Full Queue")
class Node:
def __init__(self) -> None:
self.data: Any | None = None
self.next: Node | None = None
self.prev: Node | None = None
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Implementation of Circular Queue using linked lists
# https://en.wikipedia.org/wiki/Circular_buffer
from __future__ import annotations
from typing import Any
class CircularQueueLinkedList:
"""
Circular FIFO list with the given capacity (default queue length : 6)
>>> cq = CircularQueueLinkedList(2)
>>> cq.enqueue('a')
>>> cq.enqueue('b')
>>> cq.enqueue('c')
Traceback (most recent call last):
...
Exception: Full Queue
"""
def __init__(self, initial_capacity: int = 6) -> None:
self.front: Node | None = None
self.rear: Node | None = None
self.create_linked_list(initial_capacity)
def create_linked_list(self, initial_capacity: int) -> None:
current_node = Node()
self.front = current_node
self.rear = current_node
previous_node = current_node
for _ in range(1, initial_capacity):
current_node = Node()
previous_node.next = current_node
current_node.prev = previous_node
previous_node = current_node
previous_node.next = self.front
self.front.prev = previous_node
def is_empty(self) -> bool:
"""
Checks where the queue is empty or not
>>> cq = CircularQueueLinkedList()
>>> cq.is_empty()
True
>>> cq.enqueue('a')
>>> cq.is_empty()
False
>>> cq.dequeue()
'a'
>>> cq.is_empty()
True
"""
return (
self.front == self.rear
and self.front is not None
and self.front.data is None
)
def first(self) -> Any | None:
"""
Returns the first element of the queue
>>> cq = CircularQueueLinkedList()
>>> cq.first()
Traceback (most recent call last):
...
Exception: Empty Queue
>>> cq.enqueue('a')
>>> cq.first()
'a'
>>> cq.dequeue()
'a'
>>> cq.first()
Traceback (most recent call last):
...
Exception: Empty Queue
>>> cq.enqueue('b')
>>> cq.enqueue('c')
>>> cq.first()
'b'
"""
self.check_can_perform_operation()
return self.front.data if self.front else None
def enqueue(self, data: Any) -> None:
"""
Saves data at the end of the queue
>>> cq = CircularQueueLinkedList()
>>> cq.enqueue('a')
>>> cq.enqueue('b')
>>> cq.dequeue()
'a'
>>> cq.dequeue()
'b'
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: Empty Queue
"""
if self.rear is None:
return
self.check_is_full()
if not self.is_empty():
self.rear = self.rear.next
if self.rear:
self.rear.data = data
def dequeue(self) -> Any:
"""
Removes and retrieves the first element of the queue
>>> cq = CircularQueueLinkedList()
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: Empty Queue
>>> cq.enqueue('a')
>>> cq.dequeue()
'a'
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: Empty Queue
"""
self.check_can_perform_operation()
if self.rear is None or self.front is None:
return
if self.front == self.rear:
data = self.front.data
self.front.data = None
return data
old_front = self.front
self.front = old_front.next
data = old_front.data
old_front.data = None
return data
def check_can_perform_operation(self) -> None:
if self.is_empty():
raise Exception("Empty Queue")
def check_is_full(self) -> None:
if self.rear and self.rear.next == self.front:
raise Exception("Full Queue")
class Node:
def __init__(self) -> None:
self.data: Any | None = None
self.next: Node | None = None
self.prev: Node | None = None
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Conway's Game Of Life, Author Anurag Kumar(mailto:[email protected])
Requirements:
- numpy
- random
- time
- matplotlib
Python:
- 3.5
Usage:
- $python3 game_o_life <canvas_size:int>
Game-Of-Life Rules:
1.
Any live cell with fewer than two live neighbours
dies, as if caused by under-population.
2.
Any live cell with two or three live neighbours lives
on to the next generation.
3.
Any live cell with more than three live neighbours
dies, as if by over-population.
4.
Any dead cell with exactly three live neighbours be-
comes a live cell, as if by reproduction.
"""
import random
import sys
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
usage_doc = "Usage of script: script_nama <size_of_canvas:int>"
choice = [0] * 100 + [1] * 10
random.shuffle(choice)
def create_canvas(size: int) -> list[list[bool]]:
canvas = [[False for i in range(size)] for j in range(size)]
return canvas
def seed(canvas: list[list[bool]]) -> None:
for i, row in enumerate(canvas):
for j, _ in enumerate(row):
canvas[i][j] = bool(random.getrandbits(1))
def run(canvas: list[list[bool]]) -> list[list[bool]]:
"""This function runs the rules of game through all points, and changes their
status accordingly.(in the same canvas)
@Args:
--
canvas : canvas of population to run the rules on.
@returns:
--
None
"""
current_canvas = np.array(canvas)
next_gen_canvas = np.array(create_canvas(current_canvas.shape[0]))
for r, row in enumerate(current_canvas):
for c, pt in enumerate(row):
# print(r-1,r+2,c-1,c+2)
next_gen_canvas[r][c] = __judge_point(
pt, current_canvas[r - 1 : r + 2, c - 1 : c + 2]
)
current_canvas = next_gen_canvas
del next_gen_canvas # cleaning memory as we move on.
return_canvas: list[list[bool]] = current_canvas.tolist()
return return_canvas
def __judge_point(pt: bool, neighbours: list[list[bool]]) -> bool:
dead = 0
alive = 0
# finding dead or alive neighbours count.
for i in neighbours:
for status in i:
if status:
alive += 1
else:
dead += 1
# handling duplicate entry for focus pt.
if pt:
alive -= 1
else:
dead -= 1
# running the rules of game here.
state = pt
if pt:
if alive < 2:
state = False
elif alive == 2 or alive == 3:
state = True
elif alive > 3:
state = False
else:
if alive == 3:
state = True
return state
if __name__ == "__main__":
if len(sys.argv) != 2:
raise Exception(usage_doc)
canvas_size = int(sys.argv[1])
# main working structure of this module.
c = create_canvas(canvas_size)
seed(c)
fig, ax = plt.subplots()
fig.show()
cmap = ListedColormap(["w", "k"])
try:
while True:
c = run(c)
ax.matshow(c, cmap=cmap)
fig.canvas.draw()
ax.cla()
except KeyboardInterrupt:
# do nothing.
pass
| """Conway's Game Of Life, Author Anurag Kumar(mailto:[email protected])
Requirements:
- numpy
- random
- time
- matplotlib
Python:
- 3.5
Usage:
- $python3 game_o_life <canvas_size:int>
Game-Of-Life Rules:
1.
Any live cell with fewer than two live neighbours
dies, as if caused by under-population.
2.
Any live cell with two or three live neighbours lives
on to the next generation.
3.
Any live cell with more than three live neighbours
dies, as if by over-population.
4.
Any dead cell with exactly three live neighbours be-
comes a live cell, as if by reproduction.
"""
import random
import sys
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
usage_doc = "Usage of script: script_nama <size_of_canvas:int>"
choice = [0] * 100 + [1] * 10
random.shuffle(choice)
def create_canvas(size: int) -> list[list[bool]]:
canvas = [[False for i in range(size)] for j in range(size)]
return canvas
def seed(canvas: list[list[bool]]) -> None:
for i, row in enumerate(canvas):
for j, _ in enumerate(row):
canvas[i][j] = bool(random.getrandbits(1))
def run(canvas: list[list[bool]]) -> list[list[bool]]:
"""This function runs the rules of game through all points, and changes their
status accordingly.(in the same canvas)
@Args:
--
canvas : canvas of population to run the rules on.
@returns:
--
None
"""
current_canvas = np.array(canvas)
next_gen_canvas = np.array(create_canvas(current_canvas.shape[0]))
for r, row in enumerate(current_canvas):
for c, pt in enumerate(row):
# print(r-1,r+2,c-1,c+2)
next_gen_canvas[r][c] = __judge_point(
pt, current_canvas[r - 1 : r + 2, c - 1 : c + 2]
)
current_canvas = next_gen_canvas
del next_gen_canvas # cleaning memory as we move on.
return_canvas: list[list[bool]] = current_canvas.tolist()
return return_canvas
def __judge_point(pt: bool, neighbours: list[list[bool]]) -> bool:
dead = 0
alive = 0
# finding dead or alive neighbours count.
for i in neighbours:
for status in i:
if status:
alive += 1
else:
dead += 1
# handling duplicate entry for focus pt.
if pt:
alive -= 1
else:
dead -= 1
# running the rules of game here.
state = pt
if pt:
if alive < 2:
state = False
elif alive == 2 or alive == 3:
state = True
elif alive > 3:
state = False
else:
if alive == 3:
state = True
return state
if __name__ == "__main__":
if len(sys.argv) != 2:
raise Exception(usage_doc)
canvas_size = int(sys.argv[1])
# main working structure of this module.
c = create_canvas(canvas_size)
seed(c)
fig, ax = plt.subplots()
fig.show()
cmap = ListedColormap(["w", "k"])
try:
while True:
c = run(c)
ax.matshow(c, cmap=cmap)
fig.canvas.draw()
ax.cla()
except KeyboardInterrupt:
# do nothing.
pass
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def factorial(n: int) -> int:
"""
Calculate the factorial of a positive integer
https://en.wikipedia.org/wiki/Factorial
>>> import math
>>> all(factorial(i) == math.factorial(i) for i in range(20))
True
>>> factorial(0.1)
Traceback (most recent call last):
...
ValueError: factorial() only accepts integral values
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: factorial() not defined for negative values
"""
if not isinstance(n, int):
raise ValueError("factorial() only accepts integral values")
if n < 0:
raise ValueError("factorial() not defined for negative values")
return 1 if n == 0 or n == 1 else n * factorial(n - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| def factorial(n: int) -> int:
"""
Calculate the factorial of a positive integer
https://en.wikipedia.org/wiki/Factorial
>>> import math
>>> all(factorial(i) == math.factorial(i) for i in range(20))
True
>>> factorial(0.1)
Traceback (most recent call last):
...
ValueError: factorial() only accepts integral values
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: factorial() not defined for negative values
"""
if not isinstance(n, int):
raise ValueError("factorial() only accepts integral values")
if n < 0:
raise ValueError("factorial() not defined for negative values")
return 1 if n == 0 or n == 1 else n * factorial(n - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Absolute Value."""
def abs_val(num):
"""
Find the absolute value of a number.
>>> abs_val(-5.1)
5.1
>>> abs_val(-5) == abs_val(5)
True
>>> abs_val(0)
0
"""
return -num if num < 0 else num
def test_abs_val():
"""
>>> test_abs_val()
"""
assert 0 == abs_val(0)
assert 34 == abs_val(34)
assert 100000000000 == abs_val(-100000000000)
if __name__ == "__main__":
print(abs_val(-34)) # --> 34
| """Absolute Value."""
def abs_val(num):
"""
Find the absolute value of a number.
>>> abs_val(-5.1)
5.1
>>> abs_val(-5) == abs_val(5)
True
>>> abs_val(0)
0
"""
return -num if num < 0 else num
def test_abs_val():
"""
>>> test_abs_val()
"""
assert 0 == abs_val(0)
assert 34 == abs_val(34)
assert 100000000000 == abs_val(-100000000000)
if __name__ == "__main__":
print(abs_val(-34)) # --> 34
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from collections import Counter
from random import random
class MarkovChainGraphUndirectedUnweighted:
"""
Undirected Unweighted Graph for running Markov Chain Algorithm
"""
def __init__(self):
self.connections = {}
def add_node(self, node: str) -> None:
self.connections[node] = {}
def add_transition_probability(
self, node1: str, node2: str, probability: float
) -> None:
if node1 not in self.connections:
self.add_node(node1)
if node2 not in self.connections:
self.add_node(node2)
self.connections[node1][node2] = probability
def get_nodes(self) -> list[str]:
return list(self.connections)
def transition(self, node: str) -> str:
current_probability = 0
random_value = random()
for dest in self.connections[node]:
current_probability += self.connections[node][dest]
if current_probability > random_value:
return dest
return ""
def get_transitions(
start: str, transitions: list[tuple[str, str, float]], steps: int
) -> dict[str, int]:
"""
Running Markov Chain algorithm and calculating the number of times each node is
visited
>>> transitions = [
... ('a', 'a', 0.9),
... ('a', 'b', 0.075),
... ('a', 'c', 0.025),
... ('b', 'a', 0.15),
... ('b', 'b', 0.8),
... ('b', 'c', 0.05),
... ('c', 'a', 0.25),
... ('c', 'b', 0.25),
... ('c', 'c', 0.5)
... ]
>>> result = get_transitions('a', transitions, 5000)
>>> result['a'] > result['b'] > result['c']
True
"""
graph = MarkovChainGraphUndirectedUnweighted()
for node1, node2, probability in transitions:
graph.add_transition_probability(node1, node2, probability)
visited = Counter(graph.get_nodes())
node = start
for _ in range(steps):
node = graph.transition(node)
visited[node] += 1
return visited
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
from collections import Counter
from random import random
class MarkovChainGraphUndirectedUnweighted:
"""
Undirected Unweighted Graph for running Markov Chain Algorithm
"""
def __init__(self):
self.connections = {}
def add_node(self, node: str) -> None:
self.connections[node] = {}
def add_transition_probability(
self, node1: str, node2: str, probability: float
) -> None:
if node1 not in self.connections:
self.add_node(node1)
if node2 not in self.connections:
self.add_node(node2)
self.connections[node1][node2] = probability
def get_nodes(self) -> list[str]:
return list(self.connections)
def transition(self, node: str) -> str:
current_probability = 0
random_value = random()
for dest in self.connections[node]:
current_probability += self.connections[node][dest]
if current_probability > random_value:
return dest
return ""
def get_transitions(
start: str, transitions: list[tuple[str, str, float]], steps: int
) -> dict[str, int]:
"""
Running Markov Chain algorithm and calculating the number of times each node is
visited
>>> transitions = [
... ('a', 'a', 0.9),
... ('a', 'b', 0.075),
... ('a', 'c', 0.025),
... ('b', 'a', 0.15),
... ('b', 'b', 0.8),
... ('b', 'c', 0.05),
... ('c', 'a', 0.25),
... ('c', 'b', 0.25),
... ('c', 'c', 0.5)
... ]
>>> result = get_transitions('a', transitions, 5000)
>>> result['a'] > result['b'] > result['c']
True
"""
graph = MarkovChainGraphUndirectedUnweighted()
for node1, node2, probability in transitions:
graph.add_transition_probability(node1, node2, probability)
visited = Counter(graph.get_nodes())
node = start
for _ in range(steps):
node = graph.transition(node)
visited[node] += 1
return visited
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Source: https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
This is a non-parallelized implementation of odd-even transposition sort.
Normally the swaps in each set happen simultaneously, without that the algorithm
is no better than bubble sort.
"""
def odd_even_transposition(arr: list) -> list:
"""
>>> odd_even_transposition([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> odd_even_transposition([13, 11, 18, 0, -1])
[-1, 0, 11, 13, 18]
>>> odd_even_transposition([-.1, 1.1, .1, -2.9])
[-2.9, -0.1, 0.1, 1.1]
"""
arr_size = len(arr)
for _ in range(arr_size):
for i in range(_ % 2, arr_size - 1, 2):
if arr[i + 1] < arr[i]:
arr[i], arr[i + 1] = arr[i + 1], arr[i]
return arr
if __name__ == "__main__":
arr = list(range(10, 0, -1))
print(f"Original: {arr}. Sorted: {odd_even_transposition(arr)}")
| """
Source: https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
This is a non-parallelized implementation of odd-even transposition sort.
Normally the swaps in each set happen simultaneously, without that the algorithm
is no better than bubble sort.
"""
def odd_even_transposition(arr: list) -> list:
"""
>>> odd_even_transposition([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> odd_even_transposition([13, 11, 18, 0, -1])
[-1, 0, 11, 13, 18]
>>> odd_even_transposition([-.1, 1.1, .1, -2.9])
[-2.9, -0.1, 0.1, 1.1]
"""
arr_size = len(arr)
for _ in range(arr_size):
for i in range(_ % 2, arr_size - 1, 2):
if arr[i + 1] < arr[i]:
arr[i], arr[i + 1] = arr[i + 1], arr[i]
return arr
if __name__ == "__main__":
arr = list(range(10, 0, -1))
print(f"Original: {arr}. Sorted: {odd_even_transposition(arr)}")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from typing import Sequence
def compare_string(string1: str, string2: str) -> str:
"""
>>> compare_string('0010','0110')
'0_10'
>>> compare_string('0110','1101')
'X'
"""
list1 = list(string1)
list2 = list(string2)
count = 0
for i in range(len(list1)):
if list1[i] != list2[i]:
count += 1
list1[i] = "_"
if count > 1:
return "X"
else:
return "".join(list1)
def check(binary: list[str]) -> list[str]:
"""
>>> check(['0.00.01.5'])
['0.00.01.5']
"""
pi = []
while True:
check1 = ["$"] * len(binary)
temp = []
for i in range(len(binary)):
for j in range(i + 1, len(binary)):
k = compare_string(binary[i], binary[j])
if k != "X":
check1[i] = "*"
check1[j] = "*"
temp.append(k)
for i in range(len(binary)):
if check1[i] == "$":
pi.append(binary[i])
if len(temp) == 0:
return pi
binary = list(set(temp))
def decimal_to_binary(no_of_variable: int, minterms: Sequence[float]) -> list[str]:
"""
>>> decimal_to_binary(3,[1.5])
['0.00.01.5']
"""
temp = []
for minterm in minterms:
string = ""
for i in range(no_of_variable):
string = str(minterm % 2) + string
minterm //= 2
temp.append(string)
return temp
def is_for_table(string1: str, string2: str, count: int) -> bool:
"""
>>> is_for_table('__1','011',2)
True
>>> is_for_table('01_','001',1)
False
"""
list1 = list(string1)
list2 = list(string2)
count_n = 0
for i in range(len(list1)):
if list1[i] != list2[i]:
count_n += 1
return count_n == count
def selection(chart: list[list[int]], prime_implicants: list[str]) -> list[str]:
"""
>>> selection([[1]],['0.00.01.5'])
['0.00.01.5']
>>> selection([[1]],['0.00.01.5'])
['0.00.01.5']
"""
temp = []
select = [0] * len(chart)
for i in range(len(chart[0])):
count = 0
rem = -1
for j in range(len(chart)):
if chart[j][i] == 1:
count += 1
rem = j
if count == 1:
select[rem] = 1
for i in range(len(select)):
if select[i] == 1:
for j in range(len(chart[0])):
if chart[i][j] == 1:
for k in range(len(chart)):
chart[k][j] = 0
temp.append(prime_implicants[i])
while True:
max_n = 0
rem = -1
count_n = 0
for i in range(len(chart)):
count_n = chart[i].count(1)
if count_n > max_n:
max_n = count_n
rem = i
if max_n == 0:
return temp
temp.append(prime_implicants[rem])
for i in range(len(chart[0])):
if chart[rem][i] == 1:
for j in range(len(chart)):
chart[j][i] = 0
def prime_implicant_chart(
prime_implicants: list[str], binary: list[str]
) -> list[list[int]]:
"""
>>> prime_implicant_chart(['0.00.01.5'],['0.00.01.5'])
[[1]]
"""
chart = [[0 for x in range(len(binary))] for x in range(len(prime_implicants))]
for i in range(len(prime_implicants)):
count = prime_implicants[i].count("_")
for j in range(len(binary)):
if is_for_table(prime_implicants[i], binary[j], count):
chart[i][j] = 1
return chart
def main() -> None:
no_of_variable = int(input("Enter the no. of variables\n"))
minterms = [
float(x)
for x in input(
"Enter the decimal representation of Minterms 'Spaces Separated'\n"
).split()
]
binary = decimal_to_binary(no_of_variable, minterms)
prime_implicants = check(binary)
print("Prime Implicants are:")
print(prime_implicants)
chart = prime_implicant_chart(prime_implicants, binary)
essential_prime_implicants = selection(chart, prime_implicants)
print("Essential Prime Implicants are:")
print(essential_prime_implicants)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| from __future__ import annotations
from typing import Sequence
def compare_string(string1: str, string2: str) -> str:
"""
>>> compare_string('0010','0110')
'0_10'
>>> compare_string('0110','1101')
'X'
"""
list1 = list(string1)
list2 = list(string2)
count = 0
for i in range(len(list1)):
if list1[i] != list2[i]:
count += 1
list1[i] = "_"
if count > 1:
return "X"
else:
return "".join(list1)
def check(binary: list[str]) -> list[str]:
"""
>>> check(['0.00.01.5'])
['0.00.01.5']
"""
pi = []
while True:
check1 = ["$"] * len(binary)
temp = []
for i in range(len(binary)):
for j in range(i + 1, len(binary)):
k = compare_string(binary[i], binary[j])
if k != "X":
check1[i] = "*"
check1[j] = "*"
temp.append(k)
for i in range(len(binary)):
if check1[i] == "$":
pi.append(binary[i])
if len(temp) == 0:
return pi
binary = list(set(temp))
def decimal_to_binary(no_of_variable: int, minterms: Sequence[float]) -> list[str]:
"""
>>> decimal_to_binary(3,[1.5])
['0.00.01.5']
"""
temp = []
for minterm in minterms:
string = ""
for i in range(no_of_variable):
string = str(minterm % 2) + string
minterm //= 2
temp.append(string)
return temp
def is_for_table(string1: str, string2: str, count: int) -> bool:
"""
>>> is_for_table('__1','011',2)
True
>>> is_for_table('01_','001',1)
False
"""
list1 = list(string1)
list2 = list(string2)
count_n = 0
for i in range(len(list1)):
if list1[i] != list2[i]:
count_n += 1
return count_n == count
def selection(chart: list[list[int]], prime_implicants: list[str]) -> list[str]:
"""
>>> selection([[1]],['0.00.01.5'])
['0.00.01.5']
>>> selection([[1]],['0.00.01.5'])
['0.00.01.5']
"""
temp = []
select = [0] * len(chart)
for i in range(len(chart[0])):
count = 0
rem = -1
for j in range(len(chart)):
if chart[j][i] == 1:
count += 1
rem = j
if count == 1:
select[rem] = 1
for i in range(len(select)):
if select[i] == 1:
for j in range(len(chart[0])):
if chart[i][j] == 1:
for k in range(len(chart)):
chart[k][j] = 0
temp.append(prime_implicants[i])
while True:
max_n = 0
rem = -1
count_n = 0
for i in range(len(chart)):
count_n = chart[i].count(1)
if count_n > max_n:
max_n = count_n
rem = i
if max_n == 0:
return temp
temp.append(prime_implicants[rem])
for i in range(len(chart[0])):
if chart[rem][i] == 1:
for j in range(len(chart)):
chart[j][i] = 0
def prime_implicant_chart(
prime_implicants: list[str], binary: list[str]
) -> list[list[int]]:
"""
>>> prime_implicant_chart(['0.00.01.5'],['0.00.01.5'])
[[1]]
"""
chart = [[0 for x in range(len(binary))] for x in range(len(prime_implicants))]
for i in range(len(prime_implicants)):
count = prime_implicants[i].count("_")
for j in range(len(binary)):
if is_for_table(prime_implicants[i], binary[j], count):
chart[i][j] = 1
return chart
def main() -> None:
no_of_variable = int(input("Enter the no. of variables\n"))
minterms = [
float(x)
for x in input(
"Enter the decimal representation of Minterms 'Spaces Separated'\n"
).split()
]
binary = decimal_to_binary(no_of_variable, minterms)
prime_implicants = check(binary)
print("Prime Implicants are:")
print(prime_implicants)
chart = prime_implicant_chart(prime_implicants, binary)
essential_prime_implicants = selection(chart, prime_implicants)
print("Essential Prime Implicants are:")
print(essential_prime_implicants)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """ A Stack using a linked list like structure """
from __future__ import annotations
from collections.abc import Iterator
from typing import Generic, TypeVar
T = TypeVar("T")
class Node(Generic[T]):
def __init__(self, data: T):
self.data = data
self.next: Node[T] | None = None
def __str__(self) -> str:
return f"{self.data}"
class LinkedStack(Generic[T]):
"""
Linked List Stack implementing push (to top),
pop (from top) and is_empty
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(5)
>>> stack.push(9)
>>> stack.push('python')
>>> stack.is_empty()
False
>>> stack.pop()
'python'
>>> stack.push('algorithms')
>>> stack.pop()
'algorithms'
>>> stack.pop()
9
>>> stack.pop()
5
>>> stack.is_empty()
True
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
"""
def __init__(self) -> None:
self.top: Node[T] | None = None
def __iter__(self) -> Iterator[T]:
node = self.top
while node:
yield node.data
node = node.next
def __str__(self) -> str:
"""
>>> stack = LinkedStack()
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> str(stack)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self) -> int:
"""
>>> stack = LinkedStack()
>>> len(stack) == 0
True
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> len(stack) == 3
True
"""
return len(tuple(iter(self)))
def is_empty(self) -> bool:
"""
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(1)
>>> stack.is_empty()
False
"""
return self.top is None
def push(self, item: T) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Python")
>>> stack.push("Java")
>>> stack.push("C")
>>> str(stack)
'C->Java->Python'
"""
node = Node(item)
if not self.is_empty():
node.next = self.top
self.top = node
def pop(self) -> T:
"""
>>> stack = LinkedStack()
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> stack.pop() == 'a'
True
>>> stack.pop() == 'b'
True
>>> stack.pop() == 'c'
True
"""
if self.is_empty():
raise IndexError("pop from empty stack")
assert isinstance(self.top, Node)
pop_node = self.top
self.top = self.top.next
return pop_node.data
def peek(self) -> T:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> stack.peek()
'Python'
"""
if self.is_empty():
raise IndexError("peek from empty stack")
assert self.top is not None
return self.top.data
def clear(self) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> str(stack)
'Python->C->Java'
>>> stack.clear()
>>> len(stack) == 0
True
"""
self.top = None
if __name__ == "__main__":
from doctest import testmod
testmod()
| """ A Stack using a linked list like structure """
from __future__ import annotations
from collections.abc import Iterator
from typing import Generic, TypeVar
T = TypeVar("T")
class Node(Generic[T]):
def __init__(self, data: T):
self.data = data
self.next: Node[T] | None = None
def __str__(self) -> str:
return f"{self.data}"
class LinkedStack(Generic[T]):
"""
Linked List Stack implementing push (to top),
pop (from top) and is_empty
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(5)
>>> stack.push(9)
>>> stack.push('python')
>>> stack.is_empty()
False
>>> stack.pop()
'python'
>>> stack.push('algorithms')
>>> stack.pop()
'algorithms'
>>> stack.pop()
9
>>> stack.pop()
5
>>> stack.is_empty()
True
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
"""
def __init__(self) -> None:
self.top: Node[T] | None = None
def __iter__(self) -> Iterator[T]:
node = self.top
while node:
yield node.data
node = node.next
def __str__(self) -> str:
"""
>>> stack = LinkedStack()
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> str(stack)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self) -> int:
"""
>>> stack = LinkedStack()
>>> len(stack) == 0
True
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> len(stack) == 3
True
"""
return len(tuple(iter(self)))
def is_empty(self) -> bool:
"""
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(1)
>>> stack.is_empty()
False
"""
return self.top is None
def push(self, item: T) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Python")
>>> stack.push("Java")
>>> stack.push("C")
>>> str(stack)
'C->Java->Python'
"""
node = Node(item)
if not self.is_empty():
node.next = self.top
self.top = node
def pop(self) -> T:
"""
>>> stack = LinkedStack()
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> stack.pop() == 'a'
True
>>> stack.pop() == 'b'
True
>>> stack.pop() == 'c'
True
"""
if self.is_empty():
raise IndexError("pop from empty stack")
assert isinstance(self.top, Node)
pop_node = self.top
self.top = self.top.next
return pop_node.data
def peek(self) -> T:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> stack.peek()
'Python'
"""
if self.is_empty():
raise IndexError("peek from empty stack")
assert self.top is not None
return self.top.data
def clear(self) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> str(stack)
'Python->C->Java'
>>> stack.clear()
>>> len(stack) == 0
True
"""
self.top = None
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
What is the greatest product of four adjacent numbers (horizontally,
vertically, or diagonally) in this 20x20 array?
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
"""
import os
def largest_product(grid):
nColumns = len(grid[0])
nRows = len(grid)
largest = 0
lrDiagProduct = 0
rlDiagProduct = 0
# Check vertically, horizontally, diagonally at the same time (only works
# for nxn grid)
for i in range(nColumns):
for j in range(nRows - 3):
vertProduct = grid[j][i] * grid[j + 1][i] * grid[j + 2][i] * grid[j + 3][i]
horzProduct = grid[i][j] * grid[i][j + 1] * grid[i][j + 2] * grid[i][j + 3]
# Left-to-right diagonal (\) product
if i < nColumns - 3:
lrDiagProduct = (
grid[i][j]
* grid[i + 1][j + 1]
* grid[i + 2][j + 2]
* grid[i + 3][j + 3]
)
# Right-to-left diagonal(/) product
if i > 2:
rlDiagProduct = (
grid[i][j]
* grid[i - 1][j + 1]
* grid[i - 2][j + 2]
* grid[i - 3][j + 3]
)
maxProduct = max(vertProduct, horzProduct, lrDiagProduct, rlDiagProduct)
if maxProduct > largest:
largest = maxProduct
return largest
def solution():
"""Returns the greatest product of four adjacent numbers (horizontally,
vertically, or diagonally).
>>> solution()
70600674
"""
grid = []
with open(os.path.dirname(__file__) + "/grid.txt") as file:
for line in file:
grid.append(line.strip("\n").split(" "))
grid = [[int(i) for i in grid[j]] for j in range(len(grid))]
return largest_product(grid)
if __name__ == "__main__":
print(solution())
| """
What is the greatest product of four adjacent numbers (horizontally,
vertically, or diagonally) in this 20x20 array?
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
"""
import os
def largest_product(grid):
nColumns = len(grid[0])
nRows = len(grid)
largest = 0
lrDiagProduct = 0
rlDiagProduct = 0
# Check vertically, horizontally, diagonally at the same time (only works
# for nxn grid)
for i in range(nColumns):
for j in range(nRows - 3):
vertProduct = grid[j][i] * grid[j + 1][i] * grid[j + 2][i] * grid[j + 3][i]
horzProduct = grid[i][j] * grid[i][j + 1] * grid[i][j + 2] * grid[i][j + 3]
# Left-to-right diagonal (\) product
if i < nColumns - 3:
lrDiagProduct = (
grid[i][j]
* grid[i + 1][j + 1]
* grid[i + 2][j + 2]
* grid[i + 3][j + 3]
)
# Right-to-left diagonal(/) product
if i > 2:
rlDiagProduct = (
grid[i][j]
* grid[i - 1][j + 1]
* grid[i - 2][j + 2]
* grid[i - 3][j + 3]
)
maxProduct = max(vertProduct, horzProduct, lrDiagProduct, rlDiagProduct)
if maxProduct > largest:
largest = maxProduct
return largest
def solution():
"""Returns the greatest product of four adjacent numbers (horizontally,
vertically, or diagonally).
>>> solution()
70600674
"""
grid = []
with open(os.path.dirname(__file__) + "/grid.txt") as file:
for line in file:
grid.append(line.strip("\n").split(" "))
grid = [[int(i) for i in grid[j]] for j in range(len(grid))]
return largest_product(grid)
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 70: https://projecteuler.net/problem=70
Euler's Totient function, φ(n) [sometimes called the phi function], is used to
determine the number of positive numbers less than or equal to n which are
relatively prime to n. For example, as 1, 2, 4, 5, 7, and 8, are all less than
nine and relatively prime to nine, φ(9)=6.
The number 1 is considered to be relatively prime to every positive number, so
φ(1)=1.
Interestingly, φ(87109)=79180, and it can be seen that 87109 is a permutation
of 79180.
Find the value of n, 1 < n < 10^7, for which φ(n) is a permutation of n and
the ratio n/φ(n) produces a minimum.
-----
This is essentially brute force. Calculate all totients up to 10^7 and
find the minimum ratio of n/φ(n) that way. To minimize the ratio, we want
to minimize n and maximize φ(n) as much as possible, so we can store the
minimum fraction's numerator and denominator and calculate new fractions
with each totient to compare against. To avoid dividing by zero, I opt to
use cross multiplication.
References:
Finding totients
https://en.wikipedia.org/wiki/Euler's_totient_function#Euler's_product_formula
"""
from __future__ import annotations
def get_totients(max_one: int) -> list[int]:
"""
Calculates a list of totients from 0 to max_one exclusive, using the
definition of Euler's product formula.
>>> get_totients(5)
[0, 1, 1, 2, 2]
>>> get_totients(10)
[0, 1, 1, 2, 2, 4, 2, 6, 4, 6]
"""
totients = [0] * max_one
for i in range(0, max_one):
totients[i] = i
for i in range(2, max_one):
if totients[i] == i:
for j in range(i, max_one, i):
totients[j] -= totients[j] // i
return totients
def has_same_digits(num1: int, num2: int) -> bool:
"""
Return True if num1 and num2 have the same frequency of every digit, False
otherwise.
>>> has_same_digits(123456789, 987654321)
True
>>> has_same_digits(123, 23)
False
>>> has_same_digits(1234566, 123456)
False
"""
return sorted(str(num1)) == sorted(str(num2))
def solution(max: int = 10000000) -> int:
"""
Finds the value of n from 1 to max such that n/φ(n) produces a minimum.
>>> solution(100)
21
>>> solution(10000)
4435
"""
min_numerator = 1 # i
min_denominator = 0 # φ(i)
totients = get_totients(max + 1)
for i in range(2, max + 1):
t = totients[i]
if i * min_denominator < min_numerator * t and has_same_digits(i, t):
min_numerator = i
min_denominator = t
return min_numerator
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 70: https://projecteuler.net/problem=70
Euler's Totient function, φ(n) [sometimes called the phi function], is used to
determine the number of positive numbers less than or equal to n which are
relatively prime to n. For example, as 1, 2, 4, 5, 7, and 8, are all less than
nine and relatively prime to nine, φ(9)=6.
The number 1 is considered to be relatively prime to every positive number, so
φ(1)=1.
Interestingly, φ(87109)=79180, and it can be seen that 87109 is a permutation
of 79180.
Find the value of n, 1 < n < 10^7, for which φ(n) is a permutation of n and
the ratio n/φ(n) produces a minimum.
-----
This is essentially brute force. Calculate all totients up to 10^7 and
find the minimum ratio of n/φ(n) that way. To minimize the ratio, we want
to minimize n and maximize φ(n) as much as possible, so we can store the
minimum fraction's numerator and denominator and calculate new fractions
with each totient to compare against. To avoid dividing by zero, I opt to
use cross multiplication.
References:
Finding totients
https://en.wikipedia.org/wiki/Euler's_totient_function#Euler's_product_formula
"""
from __future__ import annotations
def get_totients(max_one: int) -> list[int]:
"""
Calculates a list of totients from 0 to max_one exclusive, using the
definition of Euler's product formula.
>>> get_totients(5)
[0, 1, 1, 2, 2]
>>> get_totients(10)
[0, 1, 1, 2, 2, 4, 2, 6, 4, 6]
"""
totients = [0] * max_one
for i in range(0, max_one):
totients[i] = i
for i in range(2, max_one):
if totients[i] == i:
for j in range(i, max_one, i):
totients[j] -= totients[j] // i
return totients
def has_same_digits(num1: int, num2: int) -> bool:
"""
Return True if num1 and num2 have the same frequency of every digit, False
otherwise.
>>> has_same_digits(123456789, 987654321)
True
>>> has_same_digits(123, 23)
False
>>> has_same_digits(1234566, 123456)
False
"""
return sorted(str(num1)) == sorted(str(num2))
def solution(max: int = 10000000) -> int:
"""
Finds the value of n from 1 to max such that n/φ(n) produces a minimum.
>>> solution(100)
21
>>> solution(10000)
4435
"""
min_numerator = 1 # i
min_denominator = 0 # φ(i)
totients = get_totients(max + 1)
for i in range(2, max + 1):
t = totients[i]
if i * min_denominator < min_numerator * t and has_same_digits(i, t):
min_numerator = i
min_denominator = t
return min_numerator
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """A merge sort which accepts an array as input and recursively
splits an array in half and sorts and combines them.
"""
"""https://en.wikipedia.org/wiki/Merge_sort """
def merge(arr: list[int]) -> list[int]:
"""Return a sorted array.
>>> merge([10,9,8,7,6,5,4,3,2,1])
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> merge([1,2,3,4,5,6,7,8,9,10])
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> merge([10,22,1,2,3,9,15,23])
[1, 2, 3, 9, 10, 15, 22, 23]
>>> merge([100])
[100]
>>> merge([])
[]
"""
if len(arr) > 1:
middle_length = len(arr) // 2 # Finds the middle of the array
left_array = arr[
:middle_length
] # Creates an array of the elements in the first half.
right_array = arr[
middle_length:
] # Creates an array of the elements in the second half.
left_size = len(left_array)
right_size = len(right_array)
merge(left_array) # Starts sorting the left.
merge(right_array) # Starts sorting the right
left_index = 0 # Left Counter
right_index = 0 # Right Counter
index = 0 # Position Counter
while (
left_index < left_size and right_index < right_size
): # Runs until the lowers size of the left and right are sorted.
if left_array[left_index] < right_array[right_index]:
arr[index] = left_array[left_index]
left_index += 1
else:
arr[index] = right_array[right_index]
right_index += 1
index += 1
while (
left_index < left_size
): # Adds the left over elements in the left half of the array
arr[index] = left_array[left_index]
left_index += 1
index += 1
while (
right_index < right_size
): # Adds the left over elements in the right half of the array
arr[index] = right_array[right_index]
right_index += 1
index += 1
return arr
if __name__ == "__main__":
import doctest
doctest.testmod()
| """A merge sort which accepts an array as input and recursively
splits an array in half and sorts and combines them.
"""
"""https://en.wikipedia.org/wiki/Merge_sort """
def merge(arr: list[int]) -> list[int]:
"""Return a sorted array.
>>> merge([10,9,8,7,6,5,4,3,2,1])
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> merge([1,2,3,4,5,6,7,8,9,10])
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> merge([10,22,1,2,3,9,15,23])
[1, 2, 3, 9, 10, 15, 22, 23]
>>> merge([100])
[100]
>>> merge([])
[]
"""
if len(arr) > 1:
middle_length = len(arr) // 2 # Finds the middle of the array
left_array = arr[
:middle_length
] # Creates an array of the elements in the first half.
right_array = arr[
middle_length:
] # Creates an array of the elements in the second half.
left_size = len(left_array)
right_size = len(right_array)
merge(left_array) # Starts sorting the left.
merge(right_array) # Starts sorting the right
left_index = 0 # Left Counter
right_index = 0 # Right Counter
index = 0 # Position Counter
while (
left_index < left_size and right_index < right_size
): # Runs until the lowers size of the left and right are sorted.
if left_array[left_index] < right_array[right_index]:
arr[index] = left_array[left_index]
left_index += 1
else:
arr[index] = right_array[right_index]
right_index += 1
index += 1
while (
left_index < left_size
): # Adds the left over elements in the left half of the array
arr[index] = left_array[left_index]
left_index += 1
index += 1
while (
right_index < right_size
): # Adds the left over elements in the right half of the array
arr[index] = right_array[right_index]
right_index += 1
index += 1
return arr
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def isSumSubset(arr, arrLen, requiredSum):
"""
>>> isSumSubset([2, 4, 6, 8], 4, 5)
False
>>> isSumSubset([2, 4, 6, 8], 4, 14)
True
"""
# a subset value says 1 if that subset sum can be formed else 0
# initially no subsets can be formed hence False/0
subset = [[False for i in range(requiredSum + 1)] for i in range(arrLen + 1)]
# for each arr value, a sum of zero(0) can be formed by not taking any element
# hence True/1
for i in range(arrLen + 1):
subset[i][0] = True
# sum is not zero and set is empty then false
for i in range(1, requiredSum + 1):
subset[0][i] = False
for i in range(1, arrLen + 1):
for j in range(1, requiredSum + 1):
if arr[i - 1] > j:
subset[i][j] = subset[i - 1][j]
if arr[i - 1] <= j:
subset[i][j] = subset[i - 1][j] or subset[i - 1][j - arr[i - 1]]
# uncomment to print the subset
# for i in range(arrLen+1):
# print(subset[i])
print(subset[arrLen][requiredSum])
if __name__ == "__main__":
import doctest
doctest.testmod()
| def isSumSubset(arr, arrLen, requiredSum):
"""
>>> isSumSubset([2, 4, 6, 8], 4, 5)
False
>>> isSumSubset([2, 4, 6, 8], 4, 14)
True
"""
# a subset value says 1 if that subset sum can be formed else 0
# initially no subsets can be formed hence False/0
subset = [[False for i in range(requiredSum + 1)] for i in range(arrLen + 1)]
# for each arr value, a sum of zero(0) can be formed by not taking any element
# hence True/1
for i in range(arrLen + 1):
subset[i][0] = True
# sum is not zero and set is empty then false
for i in range(1, requiredSum + 1):
subset[0][i] = False
for i in range(1, arrLen + 1):
for j in range(1, requiredSum + 1):
if arr[i - 1] > j:
subset[i][j] = subset[i - 1][j]
if arr[i - 1] <= j:
subset[i][j] = subset[i - 1][j] or subset[i - 1][j - arr[i - 1]]
# uncomment to print the subset
# for i in range(arrLen+1):
# print(subset[i])
print(subset[arrLen][requiredSum])
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 7: https://projecteuler.net/problem=7
10001st prime
By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we
can see that the 6th prime is 13.
What is the 10001st prime number?
References:
- https://en.wikipedia.org/wiki/Prime_number
"""
def is_prime(number: int) -> bool:
"""
Determines whether the given number is prime or not
>>> is_prime(2)
True
>>> is_prime(15)
False
>>> is_prime(29)
True
"""
for i in range(2, int(number**0.5) + 1):
if number % i == 0:
return False
return True
def solution(nth: int = 10001) -> int:
"""
Returns the n-th prime number.
>>> solution(6)
13
>>> solution(1)
2
>>> solution(3)
5
>>> solution(20)
71
>>> solution(50)
229
>>> solution(100)
541
>>> solution(3.4)
5
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter nth must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter nth must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter nth must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter nth must be int or castable to int.
"""
try:
nth = int(nth)
except (TypeError, ValueError):
raise TypeError("Parameter nth must be int or castable to int.") from None
if nth <= 0:
raise ValueError("Parameter nth must be greater than or equal to one.")
primes: list[int] = []
num = 2
while len(primes) < nth:
if is_prime(num):
primes.append(num)
num += 1
else:
num += 1
return primes[len(primes) - 1]
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 7: https://projecteuler.net/problem=7
10001st prime
By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we
can see that the 6th prime is 13.
What is the 10001st prime number?
References:
- https://en.wikipedia.org/wiki/Prime_number
"""
def is_prime(number: int) -> bool:
"""
Determines whether the given number is prime or not
>>> is_prime(2)
True
>>> is_prime(15)
False
>>> is_prime(29)
True
"""
for i in range(2, int(number**0.5) + 1):
if number % i == 0:
return False
return True
def solution(nth: int = 10001) -> int:
"""
Returns the n-th prime number.
>>> solution(6)
13
>>> solution(1)
2
>>> solution(3)
5
>>> solution(20)
71
>>> solution(50)
229
>>> solution(100)
541
>>> solution(3.4)
5
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter nth must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter nth must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter nth must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter nth must be int or castable to int.
"""
try:
nth = int(nth)
except (TypeError, ValueError):
raise TypeError("Parameter nth must be int or castable to int.") from None
if nth <= 0:
raise ValueError("Parameter nth must be greater than or equal to one.")
primes: list[int] = []
num = 2
while len(primes) < nth:
if is_prime(num):
primes.append(num)
num += 1
else:
num += 1
return primes[len(primes) - 1]
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Implementation of First Come First Served scheduling algorithm
# In this Algorithm we just care about the order that the processes arrived
# without carring about their duration time
# https://en.wikipedia.org/wiki/Scheduling_(computing)#First_come,_first_served
from __future__ import annotations
def calculate_waiting_times(duration_times: list[int]) -> list[int]:
"""
This function calculates the waiting time of some processes that have a
specified duration time.
Return: The waiting time for each process.
>>> calculate_waiting_times([5, 10, 15])
[0, 5, 15]
>>> calculate_waiting_times([1, 2, 3, 4, 5])
[0, 1, 3, 6, 10]
>>> calculate_waiting_times([10, 3])
[0, 10]
"""
waiting_times = [0] * len(duration_times)
for i in range(1, len(duration_times)):
waiting_times[i] = duration_times[i - 1] + waiting_times[i - 1]
return waiting_times
def calculate_turnaround_times(
duration_times: list[int], waiting_times: list[int]
) -> list[int]:
"""
This function calculates the turnaround time of some processes.
Return: The time difference between the completion time and the
arrival time.
Practically waiting_time + duration_time
>>> calculate_turnaround_times([5, 10, 15], [0, 5, 15])
[5, 15, 30]
>>> calculate_turnaround_times([1, 2, 3, 4, 5], [0, 1, 3, 6, 10])
[1, 3, 6, 10, 15]
>>> calculate_turnaround_times([10, 3], [0, 10])
[10, 13]
"""
return [
duration_time + waiting_times[i]
for i, duration_time in enumerate(duration_times)
]
def calculate_average_turnaround_time(turnaround_times: list[int]) -> float:
"""
This function calculates the average of the turnaround times
Return: The average of the turnaround times.
>>> calculate_average_turnaround_time([0, 5, 16])
7.0
>>> calculate_average_turnaround_time([1, 5, 8, 12])
6.5
>>> calculate_average_turnaround_time([10, 24])
17.0
"""
return sum(turnaround_times) / len(turnaround_times)
def calculate_average_waiting_time(waiting_times: list[int]) -> float:
"""
This function calculates the average of the waiting times
Return: The average of the waiting times.
>>> calculate_average_waiting_time([0, 5, 16])
7.0
>>> calculate_average_waiting_time([1, 5, 8, 12])
6.5
>>> calculate_average_waiting_time([10, 24])
17.0
"""
return sum(waiting_times) / len(waiting_times)
if __name__ == "__main__":
# process id's
processes = [1, 2, 3]
# ensure that we actually have processes
if len(processes) == 0:
print("Zero amount of processes")
exit()
# duration time of all processes
duration_times = [19, 8, 9]
# ensure we can match each id to a duration time
if len(duration_times) != len(processes):
print("Unable to match all id's with their duration time")
exit()
# get the waiting times and the turnaround times
waiting_times = calculate_waiting_times(duration_times)
turnaround_times = calculate_turnaround_times(duration_times, waiting_times)
# get the average times
average_waiting_time = calculate_average_waiting_time(waiting_times)
average_turnaround_time = calculate_average_turnaround_time(turnaround_times)
# print all the results
print("Process ID\tDuration Time\tWaiting Time\tTurnaround Time")
for i, process in enumerate(processes):
print(
f"{process}\t\t{duration_times[i]}\t\t{waiting_times[i]}\t\t"
f"{turnaround_times[i]}"
)
print(f"Average waiting time = {average_waiting_time}")
print(f"Average turn around time = {average_turnaround_time}")
| # Implementation of First Come First Served scheduling algorithm
# In this Algorithm we just care about the order that the processes arrived
# without carring about their duration time
# https://en.wikipedia.org/wiki/Scheduling_(computing)#First_come,_first_served
from __future__ import annotations
def calculate_waiting_times(duration_times: list[int]) -> list[int]:
"""
This function calculates the waiting time of some processes that have a
specified duration time.
Return: The waiting time for each process.
>>> calculate_waiting_times([5, 10, 15])
[0, 5, 15]
>>> calculate_waiting_times([1, 2, 3, 4, 5])
[0, 1, 3, 6, 10]
>>> calculate_waiting_times([10, 3])
[0, 10]
"""
waiting_times = [0] * len(duration_times)
for i in range(1, len(duration_times)):
waiting_times[i] = duration_times[i - 1] + waiting_times[i - 1]
return waiting_times
def calculate_turnaround_times(
duration_times: list[int], waiting_times: list[int]
) -> list[int]:
"""
This function calculates the turnaround time of some processes.
Return: The time difference between the completion time and the
arrival time.
Practically waiting_time + duration_time
>>> calculate_turnaround_times([5, 10, 15], [0, 5, 15])
[5, 15, 30]
>>> calculate_turnaround_times([1, 2, 3, 4, 5], [0, 1, 3, 6, 10])
[1, 3, 6, 10, 15]
>>> calculate_turnaround_times([10, 3], [0, 10])
[10, 13]
"""
return [
duration_time + waiting_times[i]
for i, duration_time in enumerate(duration_times)
]
def calculate_average_turnaround_time(turnaround_times: list[int]) -> float:
"""
This function calculates the average of the turnaround times
Return: The average of the turnaround times.
>>> calculate_average_turnaround_time([0, 5, 16])
7.0
>>> calculate_average_turnaround_time([1, 5, 8, 12])
6.5
>>> calculate_average_turnaround_time([10, 24])
17.0
"""
return sum(turnaround_times) / len(turnaround_times)
def calculate_average_waiting_time(waiting_times: list[int]) -> float:
"""
This function calculates the average of the waiting times
Return: The average of the waiting times.
>>> calculate_average_waiting_time([0, 5, 16])
7.0
>>> calculate_average_waiting_time([1, 5, 8, 12])
6.5
>>> calculate_average_waiting_time([10, 24])
17.0
"""
return sum(waiting_times) / len(waiting_times)
if __name__ == "__main__":
# process id's
processes = [1, 2, 3]
# ensure that we actually have processes
if len(processes) == 0:
print("Zero amount of processes")
exit()
# duration time of all processes
duration_times = [19, 8, 9]
# ensure we can match each id to a duration time
if len(duration_times) != len(processes):
print("Unable to match all id's with their duration time")
exit()
# get the waiting times and the turnaround times
waiting_times = calculate_waiting_times(duration_times)
turnaround_times = calculate_turnaround_times(duration_times, waiting_times)
# get the average times
average_waiting_time = calculate_average_waiting_time(waiting_times)
average_turnaround_time = calculate_average_turnaround_time(turnaround_times)
# print all the results
print("Process ID\tDuration Time\tWaiting Time\tTurnaround Time")
for i, process in enumerate(processes):
print(
f"{process}\t\t{duration_times[i]}\t\t{waiting_times[i]}\t\t"
f"{turnaround_times[i]}"
)
print(f"Average waiting time = {average_waiting_time}")
print(f"Average turn around time = {average_turnaround_time}")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Create a Long Short Term Memory (LSTM) network model
An LSTM is a type of Recurrent Neural Network (RNN) as discussed at:
* http://colah.github.io/posts/2015-08-Understanding-LSTMs
* https://en.wikipedia.org/wiki/Long_short-term_memory
"""
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential
if __name__ == "__main__":
"""
First part of building a model is to get the data and prepare
it for our model. You can use any dataset for stock prediction
make sure you set the price column on line number 21. Here we
use a dataset which have the price on 3rd column.
"""
df = pd.read_csv("sample_data.csv", header=None)
len_data = df.shape[:1][0]
# If you're using some other dataset input the target column
actual_data = df.iloc[:, 1:2]
actual_data = actual_data.values.reshape(len_data, 1)
actual_data = MinMaxScaler().fit_transform(actual_data)
look_back = 10
forward_days = 5
periods = 20
division = len_data - periods * look_back
train_data = actual_data[:division]
test_data = actual_data[division - look_back :]
train_x, train_y = [], []
test_x, test_y = [], []
for i in range(0, len(train_data) - forward_days - look_back + 1):
train_x.append(train_data[i : i + look_back])
train_y.append(train_data[i + look_back : i + look_back + forward_days])
for i in range(0, len(test_data) - forward_days - look_back + 1):
test_x.append(test_data[i : i + look_back])
test_y.append(test_data[i + look_back : i + look_back + forward_days])
x_train = np.array(train_x)
x_test = np.array(test_x)
y_train = np.array([list(i.ravel()) for i in train_y])
y_test = np.array([list(i.ravel()) for i in test_y])
model = Sequential()
model.add(LSTM(128, input_shape=(look_back, 1), return_sequences=True))
model.add(LSTM(64, input_shape=(128, 1)))
model.add(Dense(forward_days))
model.compile(loss="mean_squared_error", optimizer="adam")
history = model.fit(
x_train, y_train, epochs=150, verbose=1, shuffle=True, batch_size=4
)
pred = model.predict(x_test)
| """
Create a Long Short Term Memory (LSTM) network model
An LSTM is a type of Recurrent Neural Network (RNN) as discussed at:
* http://colah.github.io/posts/2015-08-Understanding-LSTMs
* https://en.wikipedia.org/wiki/Long_short-term_memory
"""
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential
if __name__ == "__main__":
"""
First part of building a model is to get the data and prepare
it for our model. You can use any dataset for stock prediction
make sure you set the price column on line number 21. Here we
use a dataset which have the price on 3rd column.
"""
df = pd.read_csv("sample_data.csv", header=None)
len_data = df.shape[:1][0]
# If you're using some other dataset input the target column
actual_data = df.iloc[:, 1:2]
actual_data = actual_data.values.reshape(len_data, 1)
actual_data = MinMaxScaler().fit_transform(actual_data)
look_back = 10
forward_days = 5
periods = 20
division = len_data - periods * look_back
train_data = actual_data[:division]
test_data = actual_data[division - look_back :]
train_x, train_y = [], []
test_x, test_y = [], []
for i in range(0, len(train_data) - forward_days - look_back + 1):
train_x.append(train_data[i : i + look_back])
train_y.append(train_data[i + look_back : i + look_back + forward_days])
for i in range(0, len(test_data) - forward_days - look_back + 1):
test_x.append(test_data[i : i + look_back])
test_y.append(test_data[i + look_back : i + look_back + forward_days])
x_train = np.array(train_x)
x_test = np.array(test_x)
y_train = np.array([list(i.ravel()) for i in train_y])
y_test = np.array([list(i.ravel()) for i in test_y])
model = Sequential()
model.add(LSTM(128, input_shape=(look_back, 1), return_sequences=True))
model.add(LSTM(64, input_shape=(128, 1)))
model.add(Dense(forward_days))
model.compile(loss="mean_squared_error", optimizer="adam")
history = model.fit(
x_train, y_train, epochs=150, verbose=1, shuffle=True, batch_size=4
)
pred = model.predict(x_test)
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Frequency Finder
# frequency taken from http://en.wikipedia.org/wiki/Letter_frequency
englishLetterFreq = {
"E": 12.70,
"T": 9.06,
"A": 8.17,
"O": 7.51,
"I": 6.97,
"N": 6.75,
"S": 6.33,
"H": 6.09,
"R": 5.99,
"D": 4.25,
"L": 4.03,
"C": 2.78,
"U": 2.76,
"M": 2.41,
"W": 2.36,
"F": 2.23,
"G": 2.02,
"Y": 1.97,
"P": 1.93,
"B": 1.29,
"V": 0.98,
"K": 0.77,
"J": 0.15,
"X": 0.15,
"Q": 0.10,
"Z": 0.07,
}
ETAOIN = "ETAOINSHRDLCUMWFGYPBVKJXQZ"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def getLetterCount(message):
letterCount = {
"A": 0,
"B": 0,
"C": 0,
"D": 0,
"E": 0,
"F": 0,
"G": 0,
"H": 0,
"I": 0,
"J": 0,
"K": 0,
"L": 0,
"M": 0,
"N": 0,
"O": 0,
"P": 0,
"Q": 0,
"R": 0,
"S": 0,
"T": 0,
"U": 0,
"V": 0,
"W": 0,
"X": 0,
"Y": 0,
"Z": 0,
}
for letter in message.upper():
if letter in LETTERS:
letterCount[letter] += 1
return letterCount
def getItemAtIndexZero(x):
return x[0]
def getFrequencyOrder(message):
letterToFreq = getLetterCount(message)
freqToLetter = {}
for letter in LETTERS:
if letterToFreq[letter] not in freqToLetter:
freqToLetter[letterToFreq[letter]] = [letter]
else:
freqToLetter[letterToFreq[letter]].append(letter)
for freq in freqToLetter:
freqToLetter[freq].sort(key=ETAOIN.find, reverse=True)
freqToLetter[freq] = "".join(freqToLetter[freq])
freqPairs = list(freqToLetter.items())
freqPairs.sort(key=getItemAtIndexZero, reverse=True)
freqOrder = []
for freqPair in freqPairs:
freqOrder.append(freqPair[1])
return "".join(freqOrder)
def englishFreqMatchScore(message):
"""
>>> englishFreqMatchScore('Hello World')
1
"""
freqOrder = getFrequencyOrder(message)
matchScore = 0
for commonLetter in ETAOIN[:6]:
if commonLetter in freqOrder[:6]:
matchScore += 1
for uncommonLetter in ETAOIN[-6:]:
if uncommonLetter in freqOrder[-6:]:
matchScore += 1
return matchScore
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Frequency Finder
# frequency taken from http://en.wikipedia.org/wiki/Letter_frequency
englishLetterFreq = {
"E": 12.70,
"T": 9.06,
"A": 8.17,
"O": 7.51,
"I": 6.97,
"N": 6.75,
"S": 6.33,
"H": 6.09,
"R": 5.99,
"D": 4.25,
"L": 4.03,
"C": 2.78,
"U": 2.76,
"M": 2.41,
"W": 2.36,
"F": 2.23,
"G": 2.02,
"Y": 1.97,
"P": 1.93,
"B": 1.29,
"V": 0.98,
"K": 0.77,
"J": 0.15,
"X": 0.15,
"Q": 0.10,
"Z": 0.07,
}
ETAOIN = "ETAOINSHRDLCUMWFGYPBVKJXQZ"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def getLetterCount(message):
letterCount = {
"A": 0,
"B": 0,
"C": 0,
"D": 0,
"E": 0,
"F": 0,
"G": 0,
"H": 0,
"I": 0,
"J": 0,
"K": 0,
"L": 0,
"M": 0,
"N": 0,
"O": 0,
"P": 0,
"Q": 0,
"R": 0,
"S": 0,
"T": 0,
"U": 0,
"V": 0,
"W": 0,
"X": 0,
"Y": 0,
"Z": 0,
}
for letter in message.upper():
if letter in LETTERS:
letterCount[letter] += 1
return letterCount
def getItemAtIndexZero(x):
return x[0]
def getFrequencyOrder(message):
letterToFreq = getLetterCount(message)
freqToLetter = {}
for letter in LETTERS:
if letterToFreq[letter] not in freqToLetter:
freqToLetter[letterToFreq[letter]] = [letter]
else:
freqToLetter[letterToFreq[letter]].append(letter)
for freq in freqToLetter:
freqToLetter[freq].sort(key=ETAOIN.find, reverse=True)
freqToLetter[freq] = "".join(freqToLetter[freq])
freqPairs = list(freqToLetter.items())
freqPairs.sort(key=getItemAtIndexZero, reverse=True)
freqOrder = []
for freqPair in freqPairs:
freqOrder.append(freqPair[1])
return "".join(freqOrder)
def englishFreqMatchScore(message):
"""
>>> englishFreqMatchScore('Hello World')
1
"""
freqOrder = getFrequencyOrder(message)
matchScore = 0
for commonLetter in ETAOIN[:6]:
if commonLetter in freqOrder[:6]:
matchScore += 1
for uncommonLetter in ETAOIN[-6:]:
if uncommonLetter in freqOrder[-6:]:
matchScore += 1
return matchScore
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Heap's (iterative) algorithm returns the list of all permutations possible from a list.
It minimizes movement by generating each permutation from the previous one
by swapping only two elements.
More information:
https://en.wikipedia.org/wiki/Heap%27s_algorithm.
"""
def heaps(arr: list) -> list:
"""
Pure python implementation of the iterative Heap's algorithm,
returning all permutations of a list.
>>> heaps([])
[()]
>>> heaps([0])
[(0,)]
>>> heaps([-1, 1])
[(-1, 1), (1, -1)]
>>> heaps([1, 2, 3])
[(1, 2, 3), (2, 1, 3), (3, 1, 2), (1, 3, 2), (2, 3, 1), (3, 2, 1)]
>>> from itertools import permutations
>>> sorted(heaps([1,2,3])) == sorted(permutations([1,2,3]))
True
>>> all(sorted(heaps(x)) == sorted(permutations(x))
... for x in ([], [0], [-1, 1], [1, 2, 3]))
True
"""
if len(arr) <= 1:
return [tuple(arr)]
res = []
def generate(n: int, arr: list):
c = [0] * n
res.append(tuple(arr))
i = 0
while i < n:
if c[i] < i:
if i % 2 == 0:
arr[0], arr[i] = arr[i], arr[0]
else:
arr[c[i]], arr[i] = arr[i], arr[c[i]]
res.append(tuple(arr))
c[i] += 1
i = 0
else:
c[i] = 0
i += 1
generate(len(arr), arr)
return res
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
arr = [int(item) for item in user_input.split(",")]
print(heaps(arr))
| """
Heap's (iterative) algorithm returns the list of all permutations possible from a list.
It minimizes movement by generating each permutation from the previous one
by swapping only two elements.
More information:
https://en.wikipedia.org/wiki/Heap%27s_algorithm.
"""
def heaps(arr: list) -> list:
"""
Pure python implementation of the iterative Heap's algorithm,
returning all permutations of a list.
>>> heaps([])
[()]
>>> heaps([0])
[(0,)]
>>> heaps([-1, 1])
[(-1, 1), (1, -1)]
>>> heaps([1, 2, 3])
[(1, 2, 3), (2, 1, 3), (3, 1, 2), (1, 3, 2), (2, 3, 1), (3, 2, 1)]
>>> from itertools import permutations
>>> sorted(heaps([1,2,3])) == sorted(permutations([1,2,3]))
True
>>> all(sorted(heaps(x)) == sorted(permutations(x))
... for x in ([], [0], [-1, 1], [1, 2, 3]))
True
"""
if len(arr) <= 1:
return [tuple(arr)]
res = []
def generate(n: int, arr: list):
c = [0] * n
res.append(tuple(arr))
i = 0
while i < n:
if c[i] < i:
if i % 2 == 0:
arr[0], arr[i] = arr[i], arr[0]
else:
arr[c[i]], arr[i] = arr[i], arr[c[i]]
res.append(tuple(arr))
c[i] += 1
i = 0
else:
c[i] = 0
i += 1
generate(len(arr), arr)
return res
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
arr = [int(item) for item in user_input.split(",")]
print(heaps(arr))
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
We shall say that an n-digit number is pandigital if it makes use of all the
digits 1 to n exactly once; for example, the 5-digit number, 15234, is 1 through
5 pandigital.
The product 7254 is unusual, as the identity, 39 × 186 = 7254, containing
multiplicand, multiplier, and product is 1 through 9 pandigital.
Find the sum of all products whose multiplicand/multiplier/product identity can
be written as a 1 through 9 pandigital.
HINT: Some products can be obtained in more than one way so be sure to only
include it once in your sum.
"""
import itertools
def isCombinationValid(combination):
"""
Checks if a combination (a tuple of 9 digits)
is a valid product equation.
>>> isCombinationValid(('3', '9', '1', '8', '6', '7', '2', '5', '4'))
True
>>> isCombinationValid(('1', '2', '3', '4', '5', '6', '7', '8', '9'))
False
"""
return (
int("".join(combination[0:2])) * int("".join(combination[2:5]))
== int("".join(combination[5:9]))
) or (
int("".join(combination[0])) * int("".join(combination[1:5]))
== int("".join(combination[5:9]))
)
def solution():
"""
Finds the sum of all products whose multiplicand/multiplier/product identity
can be written as a 1 through 9 pandigital
>>> solution()
45228
"""
return sum(
{
int("".join(pandigital[5:9]))
for pandigital in itertools.permutations("123456789")
if isCombinationValid(pandigital)
}
)
if __name__ == "__main__":
print(solution())
| """
We shall say that an n-digit number is pandigital if it makes use of all the
digits 1 to n exactly once; for example, the 5-digit number, 15234, is 1 through
5 pandigital.
The product 7254 is unusual, as the identity, 39 × 186 = 7254, containing
multiplicand, multiplier, and product is 1 through 9 pandigital.
Find the sum of all products whose multiplicand/multiplier/product identity can
be written as a 1 through 9 pandigital.
HINT: Some products can be obtained in more than one way so be sure to only
include it once in your sum.
"""
import itertools
def isCombinationValid(combination):
"""
Checks if a combination (a tuple of 9 digits)
is a valid product equation.
>>> isCombinationValid(('3', '9', '1', '8', '6', '7', '2', '5', '4'))
True
>>> isCombinationValid(('1', '2', '3', '4', '5', '6', '7', '8', '9'))
False
"""
return (
int("".join(combination[0:2])) * int("".join(combination[2:5]))
== int("".join(combination[5:9]))
) or (
int("".join(combination[0])) * int("".join(combination[1:5]))
== int("".join(combination[5:9]))
)
def solution():
"""
Finds the sum of all products whose multiplicand/multiplier/product identity
can be written as a 1 through 9 pandigital
>>> solution()
45228
"""
return sum(
{
int("".join(pandigital[5:9]))
for pandigital in itertools.permutations("123456789")
if isCombinationValid(pandigital)
}
)
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 14: https://projecteuler.net/problem=14
Collatz conjecture: start with any positive integer n. Next term obtained from
the previous term as follows:
If the previous term is even, the next term is one half the previous term.
If the previous term is odd, the next term is 3 times the previous term plus 1.
The conjecture states the sequence will always reach 1 regardless of starting
n.
Problem Statement:
The following iterative sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains
10 terms. Although it has not been proved yet (Collatz Problem), it is thought
that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
"""
from __future__ import annotations
COLLATZ_SEQUENCE_LENGTHS = {1: 1}
def collatz_sequence_length(n: int) -> int:
"""Returns the Collatz sequence length for n."""
if n in COLLATZ_SEQUENCE_LENGTHS:
return COLLATZ_SEQUENCE_LENGTHS[n]
if n % 2 == 0:
next_n = n // 2
else:
next_n = 3 * n + 1
sequence_length = collatz_sequence_length(next_n) + 1
COLLATZ_SEQUENCE_LENGTHS[n] = sequence_length
return sequence_length
def solution(n: int = 1000000) -> int:
"""Returns the number under n that generates the longest Collatz sequence.
>>> solution(1000000)
837799
>>> solution(200)
171
>>> solution(5000)
3711
>>> solution(15000)
13255
"""
result = max((collatz_sequence_length(i), i) for i in range(1, n))
return result[1]
if __name__ == "__main__":
print(solution(int(input().strip())))
| """
Problem 14: https://projecteuler.net/problem=14
Collatz conjecture: start with any positive integer n. Next term obtained from
the previous term as follows:
If the previous term is even, the next term is one half the previous term.
If the previous term is odd, the next term is 3 times the previous term plus 1.
The conjecture states the sequence will always reach 1 regardless of starting
n.
Problem Statement:
The following iterative sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains
10 terms. Although it has not been proved yet (Collatz Problem), it is thought
that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
"""
from __future__ import annotations
COLLATZ_SEQUENCE_LENGTHS = {1: 1}
def collatz_sequence_length(n: int) -> int:
"""Returns the Collatz sequence length for n."""
if n in COLLATZ_SEQUENCE_LENGTHS:
return COLLATZ_SEQUENCE_LENGTHS[n]
if n % 2 == 0:
next_n = n // 2
else:
next_n = 3 * n + 1
sequence_length = collatz_sequence_length(next_n) + 1
COLLATZ_SEQUENCE_LENGTHS[n] = sequence_length
return sequence_length
def solution(n: int = 1000000) -> int:
"""Returns the number under n that generates the longest Collatz sequence.
>>> solution(1000000)
837799
>>> solution(200)
171
>>> solution(5000)
3711
>>> solution(15000)
13255
"""
result = max((collatz_sequence_length(i), i) for i in range(1, n))
return result[1]
if __name__ == "__main__":
print(solution(int(input().strip())))
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| ### Interest
* Compound Interest: "Compound interest is calculated by multiplying the initial principal amount by one plus the annual interest rate raised to the number of compound periods minus one." [Compound Interest](https://www.investopedia.com/)
* Simple Interest: "Simple interest paid or received over a certain period is a fixed percentage of the principal amount that was borrowed or lent. " [Simple Interest](https://www.investopedia.com/)
| ### Interest
* Compound Interest: "Compound interest is calculated by multiplying the initial principal amount by one plus the annual interest rate raised to the number of compound periods minus one." [Compound Interest](https://www.investopedia.com/)
* Simple Interest: "Simple interest paid or received over a certain period is a fixed percentage of the principal amount that was borrowed or lent. " [Simple Interest](https://www.investopedia.com/)
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is pure Python implementation of linear search algorithm
For doctests run following command:
python3 -m doctest -v linear_search.py
For manual testing run:
python3 linear_search.py
"""
def linear_search(sequence: list, target: int) -> int:
"""A pure Python implementation of a linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param target: item value to search
:return: index of found item or None if item is not found
Examples:
>>> linear_search([0, 5, 7, 10, 15], 0)
0
>>> linear_search([0, 5, 7, 10, 15], 15)
4
>>> linear_search([0, 5, 7, 10, 15], 5)
1
>>> linear_search([0, 5, 7, 10, 15], 6)
-1
"""
for index, item in enumerate(sequence):
if item == target:
return index
return -1
def rec_linear_search(sequence: list, low: int, high: int, target: int) -> int:
"""
A pure Python implementation of a recursive linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param low: Lower bound of the array
:param high: Higher bound of the array
:param target: The element to be found
:return: Index of the key or -1 if key not found
Examples:
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 0)
0
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 700)
4
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 30)
1
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, -6)
-1
"""
if not (0 <= high < len(sequence) and 0 <= low < len(sequence)):
raise Exception("Invalid upper or lower bound!")
if high < low:
return -1
if sequence[low] == target:
return low
if sequence[high] == target:
return high
return rec_linear_search(sequence, low + 1, high - 1, target)
if __name__ == "__main__":
user_input = input("Enter numbers separated by comma:\n").strip()
sequence = [int(item.strip()) for item in user_input.split(",")]
target = int(input("Enter a single number to be found in the list:\n").strip())
result = linear_search(sequence, target)
if result != -1:
print(f"linear_search({sequence}, {target}) = {result}")
else:
print(f"{target} was not found in {sequence}")
| """
This is pure Python implementation of linear search algorithm
For doctests run following command:
python3 -m doctest -v linear_search.py
For manual testing run:
python3 linear_search.py
"""
def linear_search(sequence: list, target: int) -> int:
"""A pure Python implementation of a linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param target: item value to search
:return: index of found item or None if item is not found
Examples:
>>> linear_search([0, 5, 7, 10, 15], 0)
0
>>> linear_search([0, 5, 7, 10, 15], 15)
4
>>> linear_search([0, 5, 7, 10, 15], 5)
1
>>> linear_search([0, 5, 7, 10, 15], 6)
-1
"""
for index, item in enumerate(sequence):
if item == target:
return index
return -1
def rec_linear_search(sequence: list, low: int, high: int, target: int) -> int:
"""
A pure Python implementation of a recursive linear search algorithm
:param sequence: a collection with comparable items (as sorted items not required
in Linear Search)
:param low: Lower bound of the array
:param high: Higher bound of the array
:param target: The element to be found
:return: Index of the key or -1 if key not found
Examples:
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 0)
0
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 700)
4
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, 30)
1
>>> rec_linear_search([0, 30, 500, 100, 700], 0, 4, -6)
-1
"""
if not (0 <= high < len(sequence) and 0 <= low < len(sequence)):
raise Exception("Invalid upper or lower bound!")
if high < low:
return -1
if sequence[low] == target:
return low
if sequence[high] == target:
return high
return rec_linear_search(sequence, low + 1, high - 1, target)
if __name__ == "__main__":
user_input = input("Enter numbers separated by comma:\n").strip()
sequence = [int(item.strip()) for item in user_input.split(",")]
target = int(input("Enter a single number to be found in the list:\n").strip())
result = linear_search(sequence, target)
if result != -1:
print(f"linear_search({sequence}, {target}) = {result}")
else:
print(f"{target} was not found in {sequence}")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 3: https://projecteuler.net/problem=3
Largest prime factor
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143?
References:
- https://en.wikipedia.org/wiki/Prime_number#Unique_factorization
"""
def solution(n: int = 600851475143) -> int:
"""
Returns the largest prime factor of a given number n.
>>> solution(13195)
29
>>> solution(10)
5
>>> solution(17)
17
>>> solution(3.4)
3
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
"""
try:
n = int(n)
except (TypeError, ValueError):
raise TypeError("Parameter n must be int or castable to int.")
if n <= 0:
raise ValueError("Parameter n must be greater than or equal to one.")
i = 2
ans = 0
if n == 2:
return 2
while n > 2:
while n % i != 0:
i += 1
ans = i
while n % i == 0:
n = n // i
i += 1
return int(ans)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 3: https://projecteuler.net/problem=3
Largest prime factor
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143?
References:
- https://en.wikipedia.org/wiki/Prime_number#Unique_factorization
"""
def solution(n: int = 600851475143) -> int:
"""
Returns the largest prime factor of a given number n.
>>> solution(13195)
29
>>> solution(10)
5
>>> solution(17)
17
>>> solution(3.4)
3
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
"""
try:
n = int(n)
except (TypeError, ValueError):
raise TypeError("Parameter n must be int or castable to int.")
if n <= 0:
raise ValueError("Parameter n must be greater than or equal to one.")
i = 2
ans = 0
if n == 2:
return 2
while n > 2:
while n % i != 0:
i += 1
ans = i
while n % i == 0:
n = n // i
i += 1
return int(ans)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Topological Sort."""
# a
# / \
# b c
# / \
# d e
edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
vertices = ["a", "b", "c", "d", "e"]
def topological_sort(start, visited, sort):
"""Perform topological sort on a directed acyclic graph."""
current = start
# add current to visited
visited.append(current)
neighbors = edges[current]
for neighbor in neighbors:
# if neighbor not in visited, visit
if neighbor not in visited:
sort = topological_sort(neighbor, visited, sort)
# if all neighbors visited add current to sort
sort.append(current)
# if all vertices haven't been visited select a new one to visit
if len(visited) != len(vertices):
for vertice in vertices:
if vertice not in visited:
sort = topological_sort(vertice, visited, sort)
# return sort
return sort
if __name__ == "__main__":
sort = topological_sort("a", [], [])
print(sort)
| """Topological Sort."""
# a
# / \
# b c
# / \
# d e
edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
vertices = ["a", "b", "c", "d", "e"]
def topological_sort(start, visited, sort):
"""Perform topological sort on a directed acyclic graph."""
current = start
# add current to visited
visited.append(current)
neighbors = edges[current]
for neighbor in neighbors:
# if neighbor not in visited, visit
if neighbor not in visited:
sort = topological_sort(neighbor, visited, sort)
# if all neighbors visited add current to sort
sort.append(current)
# if all vertices haven't been visited select a new one to visit
if len(visited) != len(vertices):
for vertice in vertices:
if vertice not in visited:
sort = topological_sort(vertice, visited, sort)
# return sort
return sort
if __name__ == "__main__":
sort = topological_sort("a", [], [])
print(sort)
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Wavelet tree is a data-structure designed to efficiently answer various range queries
for arrays. Wavelets trees are different from other binary trees in the sense that
the nodes are split based on the actual values of the elements and not on indices,
such as the with segment trees or fenwick trees. You can read more about them here:
1. https://users.dcc.uchile.cl/~jperez/papers/ioiconf16.pdf
2. https://www.youtube.com/watch?v=4aSv9PcecDw&t=811s
3. https://www.youtube.com/watch?v=CybAgVF-MMc&t=1178s
"""
from __future__ import annotations
test_array = [2, 1, 4, 5, 6, 0, 8, 9, 1, 2, 0, 6, 4, 2, 0, 6, 5, 3, 2, 7]
class Node:
def __init__(self, length: int) -> None:
self.minn: int = -1
self.maxx: int = -1
self.map_left: list[int] = [-1] * length
self.left: Node | None = None
self.right: Node | None = None
def __repr__(self) -> str:
"""
>>> node = Node(length=27)
>>> repr(node)
'min_value: -1, max_value: -1'
>>> repr(node) == str(node)
True
"""
return f"min_value: {self.minn}, max_value: {self.maxx}"
def build_tree(arr: list[int]) -> Node | None:
"""
Builds the tree for arr and returns the root
of the constructed tree
>>> build_tree(test_array)
min_value: 0, max_value: 9
"""
root = Node(len(arr))
root.minn, root.maxx = min(arr), max(arr)
# Leaf node case where the node contains only one unique value
if root.minn == root.maxx:
return root
"""
Take the mean of min and max element of arr as the pivot and
partition arr into left_arr and right_arr with all elements <= pivot in the
left_arr and the rest in right_arr, maintaining the order of the elements,
then recursively build trees for left_arr and right_arr
"""
pivot = (root.minn + root.maxx) // 2
left_arr: list[int] = []
right_arr: list[int] = []
for index, num in enumerate(arr):
if num <= pivot:
left_arr.append(num)
else:
right_arr.append(num)
root.map_left[index] = len(left_arr)
root.left = build_tree(left_arr)
root.right = build_tree(right_arr)
return root
def rank_till_index(node: Node | None, num: int, index: int) -> int:
"""
Returns the number of occurrences of num in interval [0, index] in the list
>>> root = build_tree(test_array)
>>> rank_till_index(root, 6, 6)
1
>>> rank_till_index(root, 2, 0)
1
>>> rank_till_index(root, 1, 10)
2
>>> rank_till_index(root, 17, 7)
0
>>> rank_till_index(root, 0, 9)
1
"""
if index < 0 or node is None:
return 0
# Leaf node cases
if node.minn == node.maxx:
return index + 1 if node.minn == num else 0
pivot = (node.minn + node.maxx) // 2
if num <= pivot:
# go the left subtree and map index to the left subtree
return rank_till_index(node.left, num, node.map_left[index] - 1)
else:
# go to the right subtree and map index to the right subtree
return rank_till_index(node.right, num, index - node.map_left[index])
def rank(node: Node | None, num: int, start: int, end: int) -> int:
"""
Returns the number of occurrences of num in interval [start, end] in the list
>>> root = build_tree(test_array)
>>> rank(root, 6, 3, 13)
2
>>> rank(root, 2, 0, 19)
4
>>> rank(root, 9, 2 ,2)
0
>>> rank(root, 0, 5, 10)
2
"""
if start > end:
return 0
rank_till_end = rank_till_index(node, num, end)
rank_before_start = rank_till_index(node, num, start - 1)
return rank_till_end - rank_before_start
def quantile(node: Node | None, index: int, start: int, end: int) -> int:
"""
Returns the index'th smallest element in interval [start, end] in the list
index is 0-indexed
>>> root = build_tree(test_array)
>>> quantile(root, 2, 2, 5)
5
>>> quantile(root, 5, 2, 13)
4
>>> quantile(root, 0, 6, 6)
8
>>> quantile(root, 4, 2, 5)
-1
"""
if index > (end - start) or start > end or node is None:
return -1
# Leaf node case
if node.minn == node.maxx:
return node.minn
# Number of elements in the left subtree in interval [start, end]
num_elements_in_left_tree = node.map_left[end] - (
node.map_left[start - 1] if start else 0
)
if num_elements_in_left_tree > index:
return quantile(
node.left,
index,
(node.map_left[start - 1] if start else 0),
node.map_left[end] - 1,
)
else:
return quantile(
node.right,
index - num_elements_in_left_tree,
start - (node.map_left[start - 1] if start else 0),
end - node.map_left[end],
)
def range_counting(
node: Node | None, start: int, end: int, start_num: int, end_num: int
) -> int:
"""
Returns the number of elements in range [start_num, end_num]
in interval [start, end] in the list
>>> root = build_tree(test_array)
>>> range_counting(root, 1, 10, 3, 7)
3
>>> range_counting(root, 2, 2, 1, 4)
1
>>> range_counting(root, 0, 19, 0, 100)
20
>>> range_counting(root, 1, 0, 1, 100)
0
>>> range_counting(root, 0, 17, 100, 1)
0
"""
if (
start > end
or node is None
or start_num > end_num
or node.minn > end_num
or node.maxx < start_num
):
return 0
if start_num <= node.minn and node.maxx <= end_num:
return end - start + 1
left = range_counting(
node.left,
(node.map_left[start - 1] if start else 0),
node.map_left[end] - 1,
start_num,
end_num,
)
right = range_counting(
node.right,
start - (node.map_left[start - 1] if start else 0),
end - node.map_left[end],
start_num,
end_num,
)
return left + right
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Wavelet tree is a data-structure designed to efficiently answer various range queries
for arrays. Wavelets trees are different from other binary trees in the sense that
the nodes are split based on the actual values of the elements and not on indices,
such as the with segment trees or fenwick trees. You can read more about them here:
1. https://users.dcc.uchile.cl/~jperez/papers/ioiconf16.pdf
2. https://www.youtube.com/watch?v=4aSv9PcecDw&t=811s
3. https://www.youtube.com/watch?v=CybAgVF-MMc&t=1178s
"""
from __future__ import annotations
test_array = [2, 1, 4, 5, 6, 0, 8, 9, 1, 2, 0, 6, 4, 2, 0, 6, 5, 3, 2, 7]
class Node:
def __init__(self, length: int) -> None:
self.minn: int = -1
self.maxx: int = -1
self.map_left: list[int] = [-1] * length
self.left: Node | None = None
self.right: Node | None = None
def __repr__(self) -> str:
"""
>>> node = Node(length=27)
>>> repr(node)
'min_value: -1, max_value: -1'
>>> repr(node) == str(node)
True
"""
return f"min_value: {self.minn}, max_value: {self.maxx}"
def build_tree(arr: list[int]) -> Node | None:
"""
Builds the tree for arr and returns the root
of the constructed tree
>>> build_tree(test_array)
min_value: 0, max_value: 9
"""
root = Node(len(arr))
root.minn, root.maxx = min(arr), max(arr)
# Leaf node case where the node contains only one unique value
if root.minn == root.maxx:
return root
"""
Take the mean of min and max element of arr as the pivot and
partition arr into left_arr and right_arr with all elements <= pivot in the
left_arr and the rest in right_arr, maintaining the order of the elements,
then recursively build trees for left_arr and right_arr
"""
pivot = (root.minn + root.maxx) // 2
left_arr: list[int] = []
right_arr: list[int] = []
for index, num in enumerate(arr):
if num <= pivot:
left_arr.append(num)
else:
right_arr.append(num)
root.map_left[index] = len(left_arr)
root.left = build_tree(left_arr)
root.right = build_tree(right_arr)
return root
def rank_till_index(node: Node | None, num: int, index: int) -> int:
"""
Returns the number of occurrences of num in interval [0, index] in the list
>>> root = build_tree(test_array)
>>> rank_till_index(root, 6, 6)
1
>>> rank_till_index(root, 2, 0)
1
>>> rank_till_index(root, 1, 10)
2
>>> rank_till_index(root, 17, 7)
0
>>> rank_till_index(root, 0, 9)
1
"""
if index < 0 or node is None:
return 0
# Leaf node cases
if node.minn == node.maxx:
return index + 1 if node.minn == num else 0
pivot = (node.minn + node.maxx) // 2
if num <= pivot:
# go the left subtree and map index to the left subtree
return rank_till_index(node.left, num, node.map_left[index] - 1)
else:
# go to the right subtree and map index to the right subtree
return rank_till_index(node.right, num, index - node.map_left[index])
def rank(node: Node | None, num: int, start: int, end: int) -> int:
"""
Returns the number of occurrences of num in interval [start, end] in the list
>>> root = build_tree(test_array)
>>> rank(root, 6, 3, 13)
2
>>> rank(root, 2, 0, 19)
4
>>> rank(root, 9, 2 ,2)
0
>>> rank(root, 0, 5, 10)
2
"""
if start > end:
return 0
rank_till_end = rank_till_index(node, num, end)
rank_before_start = rank_till_index(node, num, start - 1)
return rank_till_end - rank_before_start
def quantile(node: Node | None, index: int, start: int, end: int) -> int:
"""
Returns the index'th smallest element in interval [start, end] in the list
index is 0-indexed
>>> root = build_tree(test_array)
>>> quantile(root, 2, 2, 5)
5
>>> quantile(root, 5, 2, 13)
4
>>> quantile(root, 0, 6, 6)
8
>>> quantile(root, 4, 2, 5)
-1
"""
if index > (end - start) or start > end or node is None:
return -1
# Leaf node case
if node.minn == node.maxx:
return node.minn
# Number of elements in the left subtree in interval [start, end]
num_elements_in_left_tree = node.map_left[end] - (
node.map_left[start - 1] if start else 0
)
if num_elements_in_left_tree > index:
return quantile(
node.left,
index,
(node.map_left[start - 1] if start else 0),
node.map_left[end] - 1,
)
else:
return quantile(
node.right,
index - num_elements_in_left_tree,
start - (node.map_left[start - 1] if start else 0),
end - node.map_left[end],
)
def range_counting(
node: Node | None, start: int, end: int, start_num: int, end_num: int
) -> int:
"""
Returns the number of elements in range [start_num, end_num]
in interval [start, end] in the list
>>> root = build_tree(test_array)
>>> range_counting(root, 1, 10, 3, 7)
3
>>> range_counting(root, 2, 2, 1, 4)
1
>>> range_counting(root, 0, 19, 0, 100)
20
>>> range_counting(root, 1, 0, 1, 100)
0
>>> range_counting(root, 0, 17, 100, 1)
0
"""
if (
start > end
or node is None
or start_num > end_num
or node.minn > end_num
or node.maxx < start_num
):
return 0
if start_num <= node.minn and node.maxx <= end_num:
return end - start + 1
left = range_counting(
node.left,
(node.map_left[start - 1] if start else 0),
node.map_left[end] - 1,
start_num,
end_num,
)
right = range_counting(
node.right,
start - (node.map_left[start - 1] if start else 0),
end - node.map_left[end],
start_num,
end_num,
)
return left + right
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Source: https://github.com/jason9075/opencv-mosaic-data-aug"""
import glob
import os
import random
from string import ascii_lowercase, digits
import cv2
import numpy as np
# Parrameters
OUTPUT_SIZE = (720, 1280) # Height, Width
SCALE_RANGE = (0.4, 0.6) # if height or width lower than this scale, drop it.
FILTER_TINY_SCALE = 1 / 100
LABEL_DIR = ""
IMG_DIR = ""
OUTPUT_DIR = ""
NUMBER_IMAGES = 250
def main() -> None:
"""
Get images list and annotations list from input dir.
Update new images and annotations.
Save images and annotations in output dir.
>>> pass # A doctest is not possible for this function.
"""
img_paths, annos = get_dataset(LABEL_DIR, IMG_DIR)
for index in range(NUMBER_IMAGES):
idxs = random.sample(range(len(annos)), 4)
new_image, new_annos, path = update_image_and_anno(
img_paths,
annos,
idxs,
OUTPUT_SIZE,
SCALE_RANGE,
filter_scale=FILTER_TINY_SCALE,
)
# Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
letter_code = random_chars(32)
file_name = path.split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_MOSAIC_{letter_code}"
cv2.imwrite(f"{file_root}.jpg", new_image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Succeeded {index+1}/{NUMBER_IMAGES} with {file_name}")
annos_list = []
for anno in new_annos:
width = anno[3] - anno[1]
height = anno[4] - anno[2]
x_center = anno[1] + width / 2
y_center = anno[2] + height / 2
obj = f"{anno[0]} {x_center} {y_center} {width} {height}"
annos_list.append(obj)
with open(f"{file_root}.txt", "w") as outfile:
outfile.write("\n".join(line for line in annos_list))
def get_dataset(label_dir: str, img_dir: str) -> tuple[list, list]:
"""
- label_dir <type: str>: Path to label include annotation of images
- img_dir <type: str>: Path to folder contain images
Return <type: list>: List of images path and labels
>>> pass # A doctest is not possible for this function.
"""
img_paths = []
labels = []
for label_file in glob.glob(os.path.join(label_dir, "*.txt")):
label_name = label_file.split(os.sep)[-1].rsplit(".", 1)[0]
with open(label_file) as in_file:
obj_lists = in_file.readlines()
img_path = os.path.join(img_dir, f"{label_name}.jpg")
boxes = []
for obj_list in obj_lists:
obj = obj_list.rstrip("\n").split(" ")
xmin = float(obj[1]) - float(obj[3]) / 2
ymin = float(obj[2]) - float(obj[4]) / 2
xmax = float(obj[1]) + float(obj[3]) / 2
ymax = float(obj[2]) + float(obj[4]) / 2
boxes.append([int(obj[0]), xmin, ymin, xmax, ymax])
if not boxes:
continue
img_paths.append(img_path)
labels.append(boxes)
return img_paths, labels
def update_image_and_anno(
all_img_list: list,
all_annos: list,
idxs: list[int],
output_size: tuple[int, int],
scale_range: tuple[float, float],
filter_scale: float = 0.0,
) -> tuple[list, list, str]:
"""
- all_img_list <type: list>: list of all images
- all_annos <type: list>: list of all annotations of specific image
- idxs <type: list>: index of image in list
- output_size <type: tuple>: size of output image (Height, Width)
- scale_range <type: tuple>: range of scale image
- filter_scale <type: float>: the condition of downscale image and bounding box
Return:
- output_img <type: narray>: image after resize
- new_anno <type: list>: list of new annotation after scale
- path[0] <type: string>: get the name of image file
>>> pass # A doctest is not possible for this function.
"""
output_img = np.zeros([output_size[0], output_size[1], 3], dtype=np.uint8)
scale_x = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
scale_y = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
divid_point_x = int(scale_x * output_size[1])
divid_point_y = int(scale_y * output_size[0])
new_anno = []
path_list = []
for i, index in enumerate(idxs):
path = all_img_list[index]
path_list.append(path)
img_annos = all_annos[index]
img = cv2.imread(path)
if i == 0: # top-left
img = cv2.resize(img, (divid_point_x, divid_point_y))
output_img[:divid_point_y, :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = bbox[2] * scale_y
xmax = bbox[3] * scale_x
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 1: # top-right
img = cv2.resize(img, (output_size[1] - divid_point_x, divid_point_y))
output_img[:divid_point_y, divid_point_x : output_size[1], :] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = bbox[2] * scale_y
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 2: # bottom-left
img = cv2.resize(img, (divid_point_x, output_size[0] - divid_point_y))
output_img[divid_point_y : output_size[0], :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = bbox[3] * scale_x
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
else: # bottom-right
img = cv2.resize(
img, (output_size[1] - divid_point_x, output_size[0] - divid_point_y)
)
output_img[
divid_point_y : output_size[0], divid_point_x : output_size[1], :
] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
# Remove bounding box small than scale of filter
if 0 < filter_scale:
new_anno = [
anno
for anno in new_anno
if filter_scale < (anno[3] - anno[1]) and filter_scale < (anno[4] - anno[2])
]
return output_img, new_anno, path_list[0]
def random_chars(number_char: int) -> str:
"""
Automatic generate random 32 characters.
Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
>>> len(random_chars(32))
32
"""
assert number_char > 1, "The number of character should greater than 1"
letter_code = ascii_lowercase + digits
return "".join(random.choice(letter_code) for _ in range(number_char))
if __name__ == "__main__":
main()
print("DONE ✅")
| """Source: https://github.com/jason9075/opencv-mosaic-data-aug"""
import glob
import os
import random
from string import ascii_lowercase, digits
import cv2
import numpy as np
# Parrameters
OUTPUT_SIZE = (720, 1280) # Height, Width
SCALE_RANGE = (0.4, 0.6) # if height or width lower than this scale, drop it.
FILTER_TINY_SCALE = 1 / 100
LABEL_DIR = ""
IMG_DIR = ""
OUTPUT_DIR = ""
NUMBER_IMAGES = 250
def main() -> None:
"""
Get images list and annotations list from input dir.
Update new images and annotations.
Save images and annotations in output dir.
>>> pass # A doctest is not possible for this function.
"""
img_paths, annos = get_dataset(LABEL_DIR, IMG_DIR)
for index in range(NUMBER_IMAGES):
idxs = random.sample(range(len(annos)), 4)
new_image, new_annos, path = update_image_and_anno(
img_paths,
annos,
idxs,
OUTPUT_SIZE,
SCALE_RANGE,
filter_scale=FILTER_TINY_SCALE,
)
# Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
letter_code = random_chars(32)
file_name = path.split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_MOSAIC_{letter_code}"
cv2.imwrite(f"{file_root}.jpg", new_image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Succeeded {index+1}/{NUMBER_IMAGES} with {file_name}")
annos_list = []
for anno in new_annos:
width = anno[3] - anno[1]
height = anno[4] - anno[2]
x_center = anno[1] + width / 2
y_center = anno[2] + height / 2
obj = f"{anno[0]} {x_center} {y_center} {width} {height}"
annos_list.append(obj)
with open(f"{file_root}.txt", "w") as outfile:
outfile.write("\n".join(line for line in annos_list))
def get_dataset(label_dir: str, img_dir: str) -> tuple[list, list]:
"""
- label_dir <type: str>: Path to label include annotation of images
- img_dir <type: str>: Path to folder contain images
Return <type: list>: List of images path and labels
>>> pass # A doctest is not possible for this function.
"""
img_paths = []
labels = []
for label_file in glob.glob(os.path.join(label_dir, "*.txt")):
label_name = label_file.split(os.sep)[-1].rsplit(".", 1)[0]
with open(label_file) as in_file:
obj_lists = in_file.readlines()
img_path = os.path.join(img_dir, f"{label_name}.jpg")
boxes = []
for obj_list in obj_lists:
obj = obj_list.rstrip("\n").split(" ")
xmin = float(obj[1]) - float(obj[3]) / 2
ymin = float(obj[2]) - float(obj[4]) / 2
xmax = float(obj[1]) + float(obj[3]) / 2
ymax = float(obj[2]) + float(obj[4]) / 2
boxes.append([int(obj[0]), xmin, ymin, xmax, ymax])
if not boxes:
continue
img_paths.append(img_path)
labels.append(boxes)
return img_paths, labels
def update_image_and_anno(
all_img_list: list,
all_annos: list,
idxs: list[int],
output_size: tuple[int, int],
scale_range: tuple[float, float],
filter_scale: float = 0.0,
) -> tuple[list, list, str]:
"""
- all_img_list <type: list>: list of all images
- all_annos <type: list>: list of all annotations of specific image
- idxs <type: list>: index of image in list
- output_size <type: tuple>: size of output image (Height, Width)
- scale_range <type: tuple>: range of scale image
- filter_scale <type: float>: the condition of downscale image and bounding box
Return:
- output_img <type: narray>: image after resize
- new_anno <type: list>: list of new annotation after scale
- path[0] <type: string>: get the name of image file
>>> pass # A doctest is not possible for this function.
"""
output_img = np.zeros([output_size[0], output_size[1], 3], dtype=np.uint8)
scale_x = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
scale_y = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
divid_point_x = int(scale_x * output_size[1])
divid_point_y = int(scale_y * output_size[0])
new_anno = []
path_list = []
for i, index in enumerate(idxs):
path = all_img_list[index]
path_list.append(path)
img_annos = all_annos[index]
img = cv2.imread(path)
if i == 0: # top-left
img = cv2.resize(img, (divid_point_x, divid_point_y))
output_img[:divid_point_y, :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = bbox[2] * scale_y
xmax = bbox[3] * scale_x
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 1: # top-right
img = cv2.resize(img, (output_size[1] - divid_point_x, divid_point_y))
output_img[:divid_point_y, divid_point_x : output_size[1], :] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = bbox[2] * scale_y
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 2: # bottom-left
img = cv2.resize(img, (divid_point_x, output_size[0] - divid_point_y))
output_img[divid_point_y : output_size[0], :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = bbox[3] * scale_x
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
else: # bottom-right
img = cv2.resize(
img, (output_size[1] - divid_point_x, output_size[0] - divid_point_y)
)
output_img[
divid_point_y : output_size[0], divid_point_x : output_size[1], :
] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
# Remove bounding box small than scale of filter
if 0 < filter_scale:
new_anno = [
anno
for anno in new_anno
if filter_scale < (anno[3] - anno[1]) and filter_scale < (anno[4] - anno[2])
]
return output_img, new_anno, path_list[0]
def random_chars(number_char: int) -> str:
"""
Automatic generate random 32 characters.
Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
>>> len(random_chars(32))
32
"""
assert number_char > 1, "The number of character should greater than 1"
letter_code = ascii_lowercase + digits
return "".join(random.choice(letter_code) for _ in range(number_char))
if __name__ == "__main__":
main()
print("DONE ✅")
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #
| #
| -1 |
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,113 | Fix some typos. | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Yulv-git | "2022-04-28T10:35:00Z" | "2022-05-01T10:44:23Z" | a7e4b2326a74067404339b1147c1ff40568ee4c0 | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | Fix some typos.. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Hey, we are going to find an exciting number called Catalan number which is use to find
the number of possible binary search trees from tree of a given number of nodes.
We will use the formula: t(n) = SUMMATION(i = 1 to n)t(i-1)t(n-i)
Further details at Wikipedia: https://en.wikipedia.org/wiki/Catalan_number
"""
"""
Our Contribution:
Basically we Create the 2 function:
1. catalan_number(node_count: int) -> int
Returns the number of possible binary search trees for n nodes.
2. binary_tree_count(node_count: int) -> int
Returns the number of possible binary trees for n nodes.
"""
def binomial_coefficient(n: int, k: int) -> int:
"""
Since Here we Find the Binomial Coefficient:
https://en.wikipedia.org/wiki/Binomial_coefficient
C(n,k) = n! / k!(n-k)!
:param n: 2 times of Number of nodes
:param k: Number of nodes
:return: Integer Value
>>> binomial_coefficient(4, 2)
6
"""
result = 1 # To kept the Calculated Value
# Since C(n, k) = C(n, n-k)
if k > (n - k):
k = n - k
# Calculate C(n,k)
for i in range(k):
result *= n - i
result //= i + 1
return result
def catalan_number(node_count: int) -> int:
"""
We can find Catalan number many ways but here we use Binomial Coefficient because it
does the job in O(n)
return the Catalan number of n using 2nCn/(n+1).
:param n: number of nodes
:return: Catalan number of n nodes
>>> catalan_number(5)
42
>>> catalan_number(6)
132
"""
return binomial_coefficient(2 * node_count, node_count) // (node_count + 1)
def factorial(n: int) -> int:
"""
Return the factorial of a number.
:param n: Number to find the Factorial of.
:return: Factorial of n.
>>> import math
>>> all(factorial(i) == math.factorial(i) for i in range(10))
True
>>> factorial(-5) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: factorial() not defined for negative values
"""
if n < 0:
raise ValueError("factorial() not defined for negative values")
result = 1
for i in range(1, n + 1):
result *= i
return result
def binary_tree_count(node_count: int) -> int:
"""
Return the number of possible of binary trees.
:param n: number of nodes
:return: Number of possible binary trees
>>> binary_tree_count(5)
5040
>>> binary_tree_count(6)
95040
"""
return catalan_number(node_count) * factorial(node_count)
if __name__ == "__main__":
node_count = int(input("Enter the number of nodes: ").strip() or 0)
if node_count <= 0:
raise ValueError("We need some nodes to work with.")
print(
f"Given {node_count} nodes, there are {binary_tree_count(node_count)} "
f"binary trees and {catalan_number(node_count)} binary search trees."
)
| """
Hey, we are going to find an exciting number called Catalan number which is use to find
the number of possible binary search trees from tree of a given number of nodes.
We will use the formula: t(n) = SUMMATION(i = 1 to n)t(i-1)t(n-i)
Further details at Wikipedia: https://en.wikipedia.org/wiki/Catalan_number
"""
"""
Our Contribution:
Basically we Create the 2 function:
1. catalan_number(node_count: int) -> int
Returns the number of possible binary search trees for n nodes.
2. binary_tree_count(node_count: int) -> int
Returns the number of possible binary trees for n nodes.
"""
def binomial_coefficient(n: int, k: int) -> int:
"""
Since Here we Find the Binomial Coefficient:
https://en.wikipedia.org/wiki/Binomial_coefficient
C(n,k) = n! / k!(n-k)!
:param n: 2 times of Number of nodes
:param k: Number of nodes
:return: Integer Value
>>> binomial_coefficient(4, 2)
6
"""
result = 1 # To kept the Calculated Value
# Since C(n, k) = C(n, n-k)
if k > (n - k):
k = n - k
# Calculate C(n,k)
for i in range(k):
result *= n - i
result //= i + 1
return result
def catalan_number(node_count: int) -> int:
"""
We can find Catalan number many ways but here we use Binomial Coefficient because it
does the job in O(n)
return the Catalan number of n using 2nCn/(n+1).
:param n: number of nodes
:return: Catalan number of n nodes
>>> catalan_number(5)
42
>>> catalan_number(6)
132
"""
return binomial_coefficient(2 * node_count, node_count) // (node_count + 1)
def factorial(n: int) -> int:
"""
Return the factorial of a number.
:param n: Number to find the Factorial of.
:return: Factorial of n.
>>> import math
>>> all(factorial(i) == math.factorial(i) for i in range(10))
True
>>> factorial(-5) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: factorial() not defined for negative values
"""
if n < 0:
raise ValueError("factorial() not defined for negative values")
result = 1
for i in range(1, n + 1):
result *= i
return result
def binary_tree_count(node_count: int) -> int:
"""
Return the number of possible of binary trees.
:param n: number of nodes
:return: Number of possible binary trees
>>> binary_tree_count(5)
5040
>>> binary_tree_count(6)
95040
"""
return catalan_number(node_count) * factorial(node_count)
if __name__ == "__main__":
node_count = int(input("Enter the number of nodes: ").strip() or 0)
if node_count <= 0:
raise ValueError("We need some nodes to work with.")
print(
f"Given {node_count} nodes, there are {binary_tree_count(node_count)} "
f"binary trees and {catalan_number(node_count)} binary search trees."
)
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
def retroactive_resolution(coefficients: np.matrix, vector: np.ndarray) -> np.ndarray:
"""
This function performs a retroactive linear system resolution
for triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
sum = 0
for col in range(row + 1, columns):
sum += coefficients[row, col] * x[col]
x[row, 0] = (vector[row] - sum) / coefficients[row, row]
return x
def gaussian_elimination(coefficients: np.matrix, vector: np.ndarray) -> np.ndarray:
"""
This function performs Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
# coefficients must to be a square matrix so we need to check first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
from numpy import float64
from numpy.typing import NDArray
def retroactive_resolution(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs a retroactive linear system resolution
for triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x: NDArray[float64] = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
sum = 0
for col in range(row + 1, columns):
sum += coefficients[row, col] * x[col]
x[row, 0] = (vector[row] - sum) / coefficients[row, row]
return x
def gaussian_elimination(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
# coefficients must to be a square matrix so we need to check first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat: NDArray[float64] = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Checks if a system of forces is in static equilibrium.
"""
from __future__ import annotations
from numpy import array, cos, cross, ndarray, radians, sin
def polar_force(
magnitude: float, angle: float, radian_mode: bool = False
) -> list[float]:
"""
Resolves force along rectangular components.
(force, angle) => (force_x, force_y)
>>> import math
>>> force = polar_force(10, 45)
>>> math.isclose(force[0], 7.071067811865477)
True
>>> math.isclose(force[1], 7.0710678118654755)
True
>>> polar_force(10, 3.14, radian_mode=True)
[-9.999987317275396, 0.01592652916486828]
"""
if radian_mode:
return [magnitude * cos(angle), magnitude * sin(angle)]
return [magnitude * cos(radians(angle)), magnitude * sin(radians(angle))]
def in_static_equilibrium(
forces: ndarray, location: ndarray, eps: float = 10**-1
) -> bool:
"""
Check if a system is in equilibrium.
It takes two numpy.array objects.
forces ==> [
[force1_x, force1_y],
[force2_x, force2_y],
....]
location ==> [
[x1, y1],
[x2, y2],
....]
>>> force = array([[1, 1], [-1, 2]])
>>> location = array([[1, 0], [10, 0]])
>>> in_static_equilibrium(force, location)
False
"""
# summation of moments is zero
moments: ndarray = cross(location, forces)
sum_moments: float = sum(moments)
return abs(sum_moments) < eps
if __name__ == "__main__":
# Test to check if it works
forces = array(
[
polar_force(718.4, 180 - 30),
polar_force(879.54, 45),
polar_force(100, -90),
]
)
location = array([[0, 0], [0, 0], [0, 0]])
assert in_static_equilibrium(forces, location)
# Problem 1 in image_data/2D_problems.jpg
forces = array(
[
polar_force(30 * 9.81, 15),
polar_force(215, 180 - 45),
polar_force(264, 90 - 30),
]
)
location = array([[0, 0], [0, 0], [0, 0]])
assert in_static_equilibrium(forces, location)
# Problem in image_data/2D_problems_1.jpg
forces = array([[0, -2000], [0, -1200], [0, 15600], [0, -12400]])
location = array([[0, 0], [6, 0], [10, 0], [12, 0]])
assert in_static_equilibrium(forces, location)
import doctest
doctest.testmod()
| """
Checks if a system of forces is in static equilibrium.
"""
from __future__ import annotations
from numpy import array, cos, cross, float64, radians, sin
from numpy.typing import NDArray
def polar_force(
magnitude: float, angle: float, radian_mode: bool = False
) -> list[float]:
"""
Resolves force along rectangular components.
(force, angle) => (force_x, force_y)
>>> import math
>>> force = polar_force(10, 45)
>>> math.isclose(force[0], 7.071067811865477)
True
>>> math.isclose(force[1], 7.0710678118654755)
True
>>> polar_force(10, 3.14, radian_mode=True)
[-9.999987317275396, 0.01592652916486828]
"""
if radian_mode:
return [magnitude * cos(angle), magnitude * sin(angle)]
return [magnitude * cos(radians(angle)), magnitude * sin(radians(angle))]
def in_static_equilibrium(
forces: NDArray[float64], location: NDArray[float64], eps: float = 10**-1
) -> bool:
"""
Check if a system is in equilibrium.
It takes two numpy.array objects.
forces ==> [
[force1_x, force1_y],
[force2_x, force2_y],
....]
location ==> [
[x1, y1],
[x2, y2],
....]
>>> force = array([[1, 1], [-1, 2]])
>>> location = array([[1, 0], [10, 0]])
>>> in_static_equilibrium(force, location)
False
"""
# summation of moments is zero
moments: NDArray[float64] = cross(location, forces)
sum_moments: float = sum(moments)
return abs(sum_moments) < eps
if __name__ == "__main__":
# Test to check if it works
forces = array(
[
polar_force(718.4, 180 - 30),
polar_force(879.54, 45),
polar_force(100, -90),
]
)
location: NDArray[float64] = array([[0, 0], [0, 0], [0, 0]])
assert in_static_equilibrium(forces, location)
# Problem 1 in image_data/2D_problems.jpg
forces = array(
[
polar_force(30 * 9.81, 15),
polar_force(215, 180 - 45),
polar_force(264, 90 - 30),
]
)
location = array([[0, 0], [0, 0], [0, 0]])
assert in_static_equilibrium(forces, location)
# Problem in image_data/2D_problems_1.jpg
forces = array([[0, -2000], [0, -1200], [0, 15600], [0, -12400]])
location = array([[0, 0], [6, 0], [10, 0], [12, 0]])
assert in_static_equilibrium(forces, location)
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Jacobi Iteration Method - https://en.wikipedia.org/wiki/Jacobi_method
"""
from __future__ import annotations
import numpy as np
# Method to find solution of system of linear equations
def jacobi_iteration_method(
coefficient_matrix: np.ndarray,
constant_matrix: np.ndarray,
init_val: list,
iterations: int,
) -> list[float]:
"""
Jacobi Iteration Method:
An iterative algorithm to determine the solutions of strictly diagonally dominant
system of linear equations
4x1 + x2 + x3 = 2
x1 + 5x2 + 2x3 = -6
x1 + 2x2 + 4x3 = -4
x_init = [0.5, -0.5 , -0.5]
Examples:
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
[0.909375, -1.14375, -0.7484375]
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Coefficient matrix dimensions must be nxn but received 2x3
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Coefficient and constant matrices dimensions must be nxn and nx1 but
received 3x3 and 2x1
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Number of initial values must be equal to number of rows in coefficient
matrix but received 2 and 3
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 0
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Iterations must be at least 1
"""
rows1, cols1 = coefficient_matrix.shape
rows2, cols2 = constant_matrix.shape
if rows1 != cols1:
raise ValueError(
f"Coefficient matrix dimensions must be nxn but received {rows1}x{cols1}"
)
if cols2 != 1:
raise ValueError(f"Constant matrix must be nx1 but received {rows2}x{cols2}")
if rows1 != rows2:
raise ValueError(
f"""Coefficient and constant matrices dimensions must be nxn and nx1 but
received {rows1}x{cols1} and {rows2}x{cols2}"""
)
if len(init_val) != rows1:
raise ValueError(
f"""Number of initial values must be equal to number of rows in coefficient
matrix but received {len(init_val)} and {rows1}"""
)
if iterations <= 0:
raise ValueError("Iterations must be at least 1")
table = np.concatenate((coefficient_matrix, constant_matrix), axis=1)
rows, cols = table.shape
strictly_diagonally_dominant(table)
# Iterates the whole matrix for given number of times
for i in range(iterations):
new_val = []
for row in range(rows):
temp = 0
for col in range(cols):
if col == row:
denom = table[row][col]
elif col == cols - 1:
val = table[row][col]
else:
temp += (-1) * table[row][col] * init_val[col]
temp = (temp + val) / denom
new_val.append(temp)
init_val = new_val
return [float(i) for i in new_val]
# Checks if the given matrix is strictly diagonally dominant
def strictly_diagonally_dominant(table: np.ndarray) -> bool:
"""
>>> table = np.array([[4, 1, 1, 2], [1, 5, 2, -6], [1, 2, 4, -4]])
>>> strictly_diagonally_dominant(table)
True
>>> table = np.array([[4, 1, 1, 2], [1, 5, 2, -6], [1, 2, 3, -4]])
>>> strictly_diagonally_dominant(table)
Traceback (most recent call last):
...
ValueError: Coefficient matrix is not strictly diagonally dominant
"""
rows, cols = table.shape
is_diagonally_dominant = True
for i in range(0, rows):
sum = 0
for j in range(0, cols - 1):
if i == j:
continue
else:
sum += table[i][j]
if table[i][i] <= sum:
raise ValueError("Coefficient matrix is not strictly diagonally dominant")
return is_diagonally_dominant
# Test Cases
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Jacobi Iteration Method - https://en.wikipedia.org/wiki/Jacobi_method
"""
from __future__ import annotations
import numpy as np
from numpy import float64
from numpy.typing import NDArray
# Method to find solution of system of linear equations
def jacobi_iteration_method(
coefficient_matrix: NDArray[float64],
constant_matrix: NDArray[float64],
init_val: list[int],
iterations: int,
) -> list[float]:
"""
Jacobi Iteration Method:
An iterative algorithm to determine the solutions of strictly diagonally dominant
system of linear equations
4x1 + x2 + x3 = 2
x1 + 5x2 + 2x3 = -6
x1 + 2x2 + 4x3 = -4
x_init = [0.5, -0.5 , -0.5]
Examples:
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
[0.909375, -1.14375, -0.7484375]
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Coefficient matrix dimensions must be nxn but received 2x3
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Coefficient and constant matrices dimensions must be nxn and nx1 but
received 3x3 and 2x1
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Number of initial values must be equal to number of rows in coefficient
matrix but received 2 and 3
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 0
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Iterations must be at least 1
"""
rows1, cols1 = coefficient_matrix.shape
rows2, cols2 = constant_matrix.shape
if rows1 != cols1:
raise ValueError(
f"Coefficient matrix dimensions must be nxn but received {rows1}x{cols1}"
)
if cols2 != 1:
raise ValueError(f"Constant matrix must be nx1 but received {rows2}x{cols2}")
if rows1 != rows2:
raise ValueError(
f"""Coefficient and constant matrices dimensions must be nxn and nx1 but
received {rows1}x{cols1} and {rows2}x{cols2}"""
)
if len(init_val) != rows1:
raise ValueError(
f"""Number of initial values must be equal to number of rows in coefficient
matrix but received {len(init_val)} and {rows1}"""
)
if iterations <= 0:
raise ValueError("Iterations must be at least 1")
table: NDArray[float64] = np.concatenate(
(coefficient_matrix, constant_matrix), axis=1
)
rows, cols = table.shape
strictly_diagonally_dominant(table)
# Iterates the whole matrix for given number of times
for i in range(iterations):
new_val = []
for row in range(rows):
temp = 0
for col in range(cols):
if col == row:
denom = table[row][col]
elif col == cols - 1:
val = table[row][col]
else:
temp += (-1) * table[row][col] * init_val[col]
temp = (temp + val) / denom
new_val.append(temp)
init_val = new_val
return [float(i) for i in new_val]
# Checks if the given matrix is strictly diagonally dominant
def strictly_diagonally_dominant(table: NDArray[float64]) -> bool:
"""
>>> table = np.array([[4, 1, 1, 2], [1, 5, 2, -6], [1, 2, 4, -4]])
>>> strictly_diagonally_dominant(table)
True
>>> table = np.array([[4, 1, 1, 2], [1, 5, 2, -6], [1, 2, 3, -4]])
>>> strictly_diagonally_dominant(table)
Traceback (most recent call last):
...
ValueError: Coefficient matrix is not strictly diagonally dominant
"""
rows, cols = table.shape
is_diagonally_dominant = True
for i in range(0, rows):
sum = 0
for j in range(0, cols - 1):
if i == j:
continue
else:
sum += table[i][j]
if table[i][i] <= sum:
raise ValueError("Coefficient matrix is not strictly diagonally dominant")
return is_diagonally_dominant
# Test Cases
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Lower-Upper (LU) Decomposition.
Reference:
- https://en.wikipedia.org/wiki/LU_decomposition
"""
from __future__ import annotations
import numpy as np
def lower_upper_decomposition(table: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
"""Lower-Upper (LU) Decomposition
Example:
>>> matrix = np.array([[2, -2, 1], [0, 1, 2], [5, 3, 1]])
>>> outcome = lower_upper_decomposition(matrix)
>>> outcome[0]
array([[1. , 0. , 0. ],
[0. , 1. , 0. ],
[2.5, 8. , 1. ]])
>>> outcome[1]
array([[ 2. , -2. , 1. ],
[ 0. , 1. , 2. ],
[ 0. , 0. , -17.5]])
>>> matrix = np.array([[2, -2, 1], [0, 1, 2]])
>>> lower_upper_decomposition(matrix)
Traceback (most recent call last):
...
ValueError: 'table' has to be of square shaped array but got a 2x3 array:
[[ 2 -2 1]
[ 0 1 2]]
"""
# Table that contains our data
# Table has to be a square array so we need to check first
rows, columns = np.shape(table)
if rows != columns:
raise ValueError(
f"'table' has to be of square shaped array but got a {rows}x{columns} "
+ f"array:\n{table}"
)
lower = np.zeros((rows, columns))
upper = np.zeros((rows, columns))
for i in range(columns):
for j in range(i):
total = 0
for k in range(j):
total += lower[i][k] * upper[k][j]
lower[i][j] = (table[i][j] - total) / upper[j][j]
lower[i][i] = 1
for j in range(i, columns):
total = 0
for k in range(i):
total += lower[i][k] * upper[k][j]
upper[i][j] = table[i][j] - total
return lower, upper
if __name__ == "__main__":
import doctest
doctest.testmod()
| """Lower-Upper (LU) Decomposition.
Reference:
- https://en.wikipedia.org/wiki/LU_decomposition
"""
from __future__ import annotations
import numpy as np
import numpy.typing as NDArray
from numpy import float64
def lower_upper_decomposition(
table: NDArray[float64],
) -> tuple[NDArray[float64], NDArray[float64]]:
"""Lower-Upper (LU) Decomposition
Example:
>>> matrix = np.array([[2, -2, 1], [0, 1, 2], [5, 3, 1]])
>>> outcome = lower_upper_decomposition(matrix)
>>> outcome[0]
array([[1. , 0. , 0. ],
[0. , 1. , 0. ],
[2.5, 8. , 1. ]])
>>> outcome[1]
array([[ 2. , -2. , 1. ],
[ 0. , 1. , 2. ],
[ 0. , 0. , -17.5]])
>>> matrix = np.array([[2, -2, 1], [0, 1, 2]])
>>> lower_upper_decomposition(matrix)
Traceback (most recent call last):
...
ValueError: 'table' has to be of square shaped array but got a 2x3 array:
[[ 2 -2 1]
[ 0 1 2]]
"""
# Table that contains our data
# Table has to be a square array so we need to check first
rows, columns = np.shape(table)
if rows != columns:
raise ValueError(
f"'table' has to be of square shaped array but got a {rows}x{columns} "
+ f"array:\n{table}"
)
lower = np.zeros((rows, columns))
upper = np.zeros((rows, columns))
for i in range(columns):
for j in range(i):
total = 0
for k in range(j):
total += lower[i][k] * upper[k][j]
lower[i][j] = (table[i][j] - total) / upper[j][j]
lower[i][i] = 1
for j in range(i, columns):
total = 0
for k in range(i):
total += lower[i][k] * upper[k][j]
upper[i][j] = table[i][j] - total
return lower, upper
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Test cases:
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make in Indian Currency: 987
Following is minimal change for 987 :
500 100 100 100 100 50 20 10 5 2
Do you want to enter your denominations ? (Y/N) :Y
Enter number of denomination:10
1
5
10
20
50
100
200
500
1000
2000
Enter the change you want to make: 18745
Following is minimal change for 18745 :
2000 2000 2000 2000 2000 2000 2000 2000 2000 500 200 20 20 5
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make: 0
The total value cannot be zero or negative.
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make: -98
The total value cannot be zero or negative.
Do you want to enter your denominations ? (Y/N) :Y
Enter number of denomination:5
1
5
100
500
1000
Enter the change you want to make: 456
Following is minimal change for 456 :
100 100 100 100 5 5 5 5 5 5 5 5 5 5 5 1
"""
def find_minimum_change(denominations: list[int], value: str) -> list[int]:
"""
Find the minimum change from the given denominations and value
>>> find_minimum_change([1, 5, 10, 20, 50, 100, 200, 500, 1000,2000], 18745)
[2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 500, 200, 20, 20, 5]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 987)
[500, 100, 100, 100, 100, 50, 20, 10, 5, 2]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 0)
[]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], -98)
[]
>>> find_minimum_change([1, 5, 100, 500, 1000], 456)
[100, 100, 100, 100, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1]
"""
total_value = int(value)
# Initialize Result
answer = []
# Traverse through all denomination
for denomination in reversed(denominations):
# Find denominations
while int(total_value) >= int(denomination):
total_value -= int(denomination)
answer.append(denomination) # Append the "answers" array
return answer
# Driver Code
if __name__ == "__main__":
denominations = list()
value = "0"
if (
input("Do you want to enter your denominations ? (yY/n): ").strip().lower()
== "y"
):
n = int(input("Enter the number of denominations you want to add: ").strip())
for i in range(0, n):
denominations.append(int(input(f"Denomination {i}: ").strip()))
value = input("Enter the change you want to make in Indian Currency: ").strip()
else:
# All denominations of Indian Currency if user does not enter
denominations = [1, 2, 5, 10, 20, 50, 100, 500, 2000]
value = input("Enter the change you want to make: ").strip()
if int(value) == 0 or int(value) < 0:
print("The total value cannot be zero or negative.")
else:
print(f"Following is minimal change for {value}: ")
answer = find_minimum_change(denominations, value)
# Print result
for i in range(len(answer)):
print(answer[i], end=" ")
| """
Test cases:
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make in Indian Currency: 987
Following is minimal change for 987 :
500 100 100 100 100 50 20 10 5 2
Do you want to enter your denominations ? (Y/N) :Y
Enter number of denomination:10
1
5
10
20
50
100
200
500
1000
2000
Enter the change you want to make: 18745
Following is minimal change for 18745 :
2000 2000 2000 2000 2000 2000 2000 2000 2000 500 200 20 20 5
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make: 0
The total value cannot be zero or negative.
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make: -98
The total value cannot be zero or negative.
Do you want to enter your denominations ? (Y/N) :Y
Enter number of denomination:5
1
5
100
500
1000
Enter the change you want to make: 456
Following is minimal change for 456 :
100 100 100 100 5 5 5 5 5 5 5 5 5 5 5 1
"""
def find_minimum_change(denominations: list[int], value: str) -> list[int]:
"""
Find the minimum change from the given denominations and value
>>> find_minimum_change([1, 5, 10, 20, 50, 100, 200, 500, 1000,2000], 18745)
[2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 500, 200, 20, 20, 5]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 987)
[500, 100, 100, 100, 100, 50, 20, 10, 5, 2]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 0)
[]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], -98)
[]
>>> find_minimum_change([1, 5, 100, 500, 1000], 456)
[100, 100, 100, 100, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1]
"""
total_value = int(value)
# Initialize Result
answer = []
# Traverse through all denomination
for denomination in reversed(denominations):
# Find denominations
while int(total_value) >= int(denomination):
total_value -= int(denomination)
answer.append(denomination) # Append the "answers" array
return answer
# Driver Code
if __name__ == "__main__":
denominations = list()
value = "0"
if (
input("Do you want to enter your denominations ? (yY/n): ").strip().lower()
== "y"
):
n = int(input("Enter the number of denominations you want to add: ").strip())
for i in range(0, n):
denominations.append(int(input(f"Denomination {i}: ").strip()))
value = input("Enter the change you want to make in Indian Currency: ").strip()
else:
# All denominations of Indian Currency if user does not enter
denominations = [1, 2, 5, 10, 20, 50, 100, 500, 2000]
value = input("Enter the change you want to make: ").strip()
if int(value) == 0 or int(value) < 0:
print("The total value cannot be zero or negative.")
else:
print(f"Following is minimal change for {value}: ")
answer = find_minimum_change(denominations, value)
# Print result
for i in range(len(answer)):
print(answer[i], end=" ")
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Prime Check."""
import math
import unittest
def prime_check(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> prime_check(0)
False
>>> prime_check(1)
False
>>> prime_check(2)
True
>>> prime_check(3)
True
>>> prime_check(27)
False
>>> prime_check(87)
False
>>> prime_check(563)
True
>>> prime_check(2999)
True
>>> prime_check(67483)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or not number % 2:
# Negatives, 0, 1 and all even numbers are not primes
return False
odd_numbers = range(3, int(math.sqrt(number) + 1), 2)
return not any(not number % i for i in odd_numbers)
class Test(unittest.TestCase):
def test_primes(self):
self.assertTrue(prime_check(2))
self.assertTrue(prime_check(3))
self.assertTrue(prime_check(5))
self.assertTrue(prime_check(7))
self.assertTrue(prime_check(11))
self.assertTrue(prime_check(13))
self.assertTrue(prime_check(17))
self.assertTrue(prime_check(19))
self.assertTrue(prime_check(23))
self.assertTrue(prime_check(29))
def test_not_primes(self):
self.assertFalse(
prime_check(-19),
"Negative numbers are excluded by definition of prime numbers.",
)
self.assertFalse(
prime_check(0),
"Zero doesn't have any positive factors, primes must have exactly two.",
)
self.assertFalse(
prime_check(1),
"One only has 1 positive factor, primes must have exactly two.",
)
self.assertFalse(prime_check(2 * 2))
self.assertFalse(prime_check(2 * 3))
self.assertFalse(prime_check(3 * 3))
self.assertFalse(prime_check(3 * 5))
self.assertFalse(prime_check(3 * 5 * 7))
if __name__ == "__main__":
unittest.main()
| """Prime Check."""
import math
import unittest
def prime_check(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> prime_check(0)
False
>>> prime_check(1)
False
>>> prime_check(2)
True
>>> prime_check(3)
True
>>> prime_check(27)
False
>>> prime_check(87)
False
>>> prime_check(563)
True
>>> prime_check(2999)
True
>>> prime_check(67483)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or not number % 2:
# Negatives, 0, 1 and all even numbers are not primes
return False
odd_numbers = range(3, int(math.sqrt(number) + 1), 2)
return not any(not number % i for i in odd_numbers)
class Test(unittest.TestCase):
def test_primes(self):
self.assertTrue(prime_check(2))
self.assertTrue(prime_check(3))
self.assertTrue(prime_check(5))
self.assertTrue(prime_check(7))
self.assertTrue(prime_check(11))
self.assertTrue(prime_check(13))
self.assertTrue(prime_check(17))
self.assertTrue(prime_check(19))
self.assertTrue(prime_check(23))
self.assertTrue(prime_check(29))
def test_not_primes(self):
self.assertFalse(
prime_check(-19),
"Negative numbers are excluded by definition of prime numbers.",
)
self.assertFalse(
prime_check(0),
"Zero doesn't have any positive factors, primes must have exactly two.",
)
self.assertFalse(
prime_check(1),
"One only has 1 positive factor, primes must have exactly two.",
)
self.assertFalse(prime_check(2 * 2))
self.assertFalse(prime_check(2 * 3))
self.assertFalse(prime_check(3 * 3))
self.assertFalse(prime_check(3 * 5))
self.assertFalse(prime_check(3 * 5 * 7))
if __name__ == "__main__":
unittest.main()
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
class Node:
"""
A Node has data variable and pointers to Nodes to its left and right.
"""
def __init__(self, data: int) -> None:
self.data = data
self.left: Node | None = None
self.right: Node | None = None
def display(tree: Node | None) -> None: # In Order traversal of the tree
"""
>>> root = Node(1)
>>> root.left = Node(0)
>>> root.right = Node(2)
>>> display(root)
0
1
2
>>> display(root.right)
2
"""
if tree:
display(tree.left)
print(tree.data)
display(tree.right)
def depth_of_tree(tree: Node | None) -> int:
"""
Recursive function that returns the depth of a binary tree.
>>> root = Node(0)
>>> depth_of_tree(root)
1
>>> root.left = Node(0)
>>> depth_of_tree(root)
2
>>> root.right = Node(0)
>>> depth_of_tree(root)
2
>>> root.left.right = Node(0)
>>> depth_of_tree(root)
3
>>> depth_of_tree(root.left)
2
"""
return 1 + max(depth_of_tree(tree.left), depth_of_tree(tree.right)) if tree else 0
def is_full_binary_tree(tree: Node) -> bool:
"""
Returns True if this is a full binary tree
>>> root = Node(0)
>>> is_full_binary_tree(root)
True
>>> root.left = Node(0)
>>> is_full_binary_tree(root)
False
>>> root.right = Node(0)
>>> is_full_binary_tree(root)
True
>>> root.left.left = Node(0)
>>> is_full_binary_tree(root)
False
>>> root.right.right = Node(0)
>>> is_full_binary_tree(root)
False
"""
if not tree:
return True
if tree.left and tree.right:
return is_full_binary_tree(tree.left) and is_full_binary_tree(tree.right)
else:
return not tree.left and not tree.right
def main() -> None: # Main function for testing.
tree = Node(1)
tree.left = Node(2)
tree.right = Node(3)
tree.left.left = Node(4)
tree.left.right = Node(5)
tree.left.right.left = Node(6)
tree.right.left = Node(7)
tree.right.left.left = Node(8)
tree.right.left.left.right = Node(9)
print(is_full_binary_tree(tree))
print(depth_of_tree(tree))
print("Tree is: ")
display(tree)
if __name__ == "__main__":
main()
| from __future__ import annotations
class Node:
"""
A Node has data variable and pointers to Nodes to its left and right.
"""
def __init__(self, data: int) -> None:
self.data = data
self.left: Node | None = None
self.right: Node | None = None
def display(tree: Node | None) -> None: # In Order traversal of the tree
"""
>>> root = Node(1)
>>> root.left = Node(0)
>>> root.right = Node(2)
>>> display(root)
0
1
2
>>> display(root.right)
2
"""
if tree:
display(tree.left)
print(tree.data)
display(tree.right)
def depth_of_tree(tree: Node | None) -> int:
"""
Recursive function that returns the depth of a binary tree.
>>> root = Node(0)
>>> depth_of_tree(root)
1
>>> root.left = Node(0)
>>> depth_of_tree(root)
2
>>> root.right = Node(0)
>>> depth_of_tree(root)
2
>>> root.left.right = Node(0)
>>> depth_of_tree(root)
3
>>> depth_of_tree(root.left)
2
"""
return 1 + max(depth_of_tree(tree.left), depth_of_tree(tree.right)) if tree else 0
def is_full_binary_tree(tree: Node) -> bool:
"""
Returns True if this is a full binary tree
>>> root = Node(0)
>>> is_full_binary_tree(root)
True
>>> root.left = Node(0)
>>> is_full_binary_tree(root)
False
>>> root.right = Node(0)
>>> is_full_binary_tree(root)
True
>>> root.left.left = Node(0)
>>> is_full_binary_tree(root)
False
>>> root.right.right = Node(0)
>>> is_full_binary_tree(root)
False
"""
if not tree:
return True
if tree.left and tree.right:
return is_full_binary_tree(tree.left) and is_full_binary_tree(tree.right)
else:
return not tree.left and not tree.right
def main() -> None: # Main function for testing.
tree = Node(1)
tree.left = Node(2)
tree.right = Node(3)
tree.left.left = Node(4)
tree.left.right = Node(5)
tree.left.right.left = Node(6)
tree.right.left = Node(7)
tree.right.left.left = Node(8)
tree.right.left.left.right = Node(9)
print(is_full_binary_tree(tree))
print(depth_of_tree(tree))
print("Tree is: ")
display(tree)
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Partition a set into two subsets such that the difference of subset sums is minimum
"""
def findMin(arr):
n = len(arr)
s = sum(arr)
dp = [[False for x in range(s + 1)] for y in range(n + 1)]
for i in range(1, n + 1):
dp[i][0] = True
for i in range(1, s + 1):
dp[0][i] = False
for i in range(1, n + 1):
for j in range(1, s + 1):
dp[i][j] = dp[i][j - 1]
if arr[i - 1] <= j:
dp[i][j] = dp[i][j] or dp[i - 1][j - arr[i - 1]]
for j in range(int(s / 2), -1, -1):
if dp[n][j] is True:
diff = s - 2 * j
break
return diff
| """
Partition a set into two subsets such that the difference of subset sums is minimum
"""
def findMin(arr):
n = len(arr)
s = sum(arr)
dp = [[False for x in range(s + 1)] for y in range(n + 1)]
for i in range(1, n + 1):
dp[i][0] = True
for i in range(1, s + 1):
dp[0][i] = False
for i in range(1, n + 1):
for j in range(1, s + 1):
dp[i][j] = dp[i][j - 1]
if arr[i - 1] <= j:
dp[i][j] = dp[i][j] or dp[i - 1][j - arr[i - 1]]
for j in range(int(s / 2), -1, -1):
if dp[n][j] is True:
diff = s - 2 * j
break
return diff
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Highly divisible triangular numbers
Problem 12
The sequence of triangle numbers is generated by adding the natural numbers. So
the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten
terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred
divisors?
"""
def count_divisors(n):
nDivisors = 1
i = 2
while i * i <= n:
multiplicity = 0
while n % i == 0:
n //= i
multiplicity += 1
nDivisors *= multiplicity + 1
i += 1
if n > 1:
nDivisors *= 2
return nDivisors
def solution():
"""Returns the value of the first triangle number to have over five hundred
divisors.
>>> solution()
76576500
"""
tNum = 1
i = 1
while True:
i += 1
tNum += i
if count_divisors(tNum) > 500:
break
return tNum
if __name__ == "__main__":
print(solution())
| """
Highly divisible triangular numbers
Problem 12
The sequence of triangle numbers is generated by adding the natural numbers. So
the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten
terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred
divisors?
"""
def count_divisors(n):
nDivisors = 1
i = 2
while i * i <= n:
multiplicity = 0
while n % i == 0:
n //= i
multiplicity += 1
nDivisors *= multiplicity + 1
i += 1
if n > 1:
nDivisors *= 2
return nDivisors
def solution():
"""Returns the value of the first triangle number to have over five hundred
divisors.
>>> solution()
76576500
"""
tNum = 1
i = 1
while True:
i += 1
tNum += i
if count_divisors(tNum) > 500:
break
return tNum
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
import random
class Dice:
NUM_SIDES = 6
def __init__(self):
"""Initialize a six sided dice"""
self.sides = list(range(1, Dice.NUM_SIDES + 1))
def roll(self):
return random.choice(self.sides)
def _str_(self):
return "Fair Dice"
def throw_dice(num_throws: int, num_dice: int = 2) -> list[float]:
"""
Return probability list of all possible sums when throwing dice.
>>> random.seed(0)
>>> throw_dice(10, 1)
[10.0, 0.0, 30.0, 50.0, 10.0, 0.0]
>>> throw_dice(100, 1)
[19.0, 17.0, 17.0, 11.0, 23.0, 13.0]
>>> throw_dice(1000, 1)
[18.8, 15.5, 16.3, 17.6, 14.2, 17.6]
>>> throw_dice(10000, 1)
[16.35, 16.89, 16.93, 16.6, 16.52, 16.71]
>>> throw_dice(10000, 2)
[2.74, 5.6, 7.99, 11.26, 13.92, 16.7, 14.44, 10.63, 8.05, 5.92, 2.75]
"""
dices = [Dice() for i in range(num_dice)]
count_of_sum = [0] * (len(dices) * Dice.NUM_SIDES + 1)
for i in range(num_throws):
count_of_sum[sum(dice.roll() for dice in dices)] += 1
probability = [round((count * 100) / num_throws, 2) for count in count_of_sum]
return probability[num_dice:] # remove probability of sums that never appear
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
import random
class Dice:
NUM_SIDES = 6
def __init__(self):
"""Initialize a six sided dice"""
self.sides = list(range(1, Dice.NUM_SIDES + 1))
def roll(self):
return random.choice(self.sides)
def _str_(self):
return "Fair Dice"
def throw_dice(num_throws: int, num_dice: int = 2) -> list[float]:
"""
Return probability list of all possible sums when throwing dice.
>>> random.seed(0)
>>> throw_dice(10, 1)
[10.0, 0.0, 30.0, 50.0, 10.0, 0.0]
>>> throw_dice(100, 1)
[19.0, 17.0, 17.0, 11.0, 23.0, 13.0]
>>> throw_dice(1000, 1)
[18.8, 15.5, 16.3, 17.6, 14.2, 17.6]
>>> throw_dice(10000, 1)
[16.35, 16.89, 16.93, 16.6, 16.52, 16.71]
>>> throw_dice(10000, 2)
[2.74, 5.6, 7.99, 11.26, 13.92, 16.7, 14.44, 10.63, 8.05, 5.92, 2.75]
"""
dices = [Dice() for i in range(num_dice)]
count_of_sum = [0] * (len(dices) * Dice.NUM_SIDES + 1)
for i in range(num_throws):
count_of_sum[sum(dice.roll() for dice in dices)] += 1
probability = [round((count * 100) / num_throws, 2) for count in count_of_sum]
return probability[num_dice:] # remove probability of sums that never appear
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def max_subarray_sum(nums: list) -> int:
"""
>>> max_subarray_sum([6 , 9, -1, 3, -7, -5, 10])
17
"""
if not nums:
return 0
n = len(nums)
res, s, s_pre = nums[0], nums[0], nums[0]
for i in range(1, n):
s = max(nums[i], s_pre + nums[i])
s_pre = s
res = max(res, s)
return res
if __name__ == "__main__":
nums = [6, 9, -1, 3, -7, -5, 10]
print(max_subarray_sum(nums))
| def max_subarray_sum(nums: list) -> int:
"""
>>> max_subarray_sum([6 , 9, -1, 3, -7, -5, 10])
17
"""
if not nums:
return 0
n = len(nums)
res, s, s_pre = nums[0], nums[0], nums[0]
for i in range(1, n):
s = max(nums[i], s_pre + nums[i])
s_pre = s
res = max(res, s)
return res
if __name__ == "__main__":
nums = [6, 9, -1, 3, -7, -5, 10]
print(max_subarray_sum(nums))
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Gamma function is a very useful tool in math and physics.
It helps calculating complex integral in a convenient way.
for more info: https://en.wikipedia.org/wiki/Gamma_function
Python's Standard Library math.gamma() function overflows around gamma(171.624).
"""
from math import pi, sqrt
def gamma(num: float) -> float:
"""
Calculates the value of Gamma function of num
where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
Implemented using recursion
Examples:
>>> from math import isclose, gamma as math_gamma
>>> gamma(0.5)
1.7724538509055159
>>> gamma(2)
1.0
>>> gamma(3.5)
3.3233509704478426
>>> gamma(171.5)
9.483367566824795e+307
>>> all(isclose(gamma(num), math_gamma(num)) for num in (0.5, 2, 3.5, 171.5))
True
>>> gamma(0)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(-1.1)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(-4)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(172)
Traceback (most recent call last):
...
OverflowError: math range error
>>> gamma(1.1)
Traceback (most recent call last):
...
NotImplementedError: num must be an integer or a half-integer
"""
if num <= 0:
raise ValueError("math domain error")
if num > 171.5:
raise OverflowError("math range error")
elif num - int(num) not in (0, 0.5):
raise NotImplementedError("num must be an integer or a half-integer")
elif num == 0.5:
return sqrt(pi)
else:
return 1.0 if num == 1 else (num - 1) * gamma(num - 1)
def test_gamma() -> None:
"""
>>> test_gamma()
"""
assert gamma(0.5) == sqrt(pi)
assert gamma(1) == 1.0
assert gamma(2) == 1.0
if __name__ == "__main__":
from doctest import testmod
testmod()
num = 1.0
while num:
num = float(input("Gamma of: "))
print(f"gamma({num}) = {gamma(num)}")
print("\nEnter 0 to exit...")
| """
Gamma function is a very useful tool in math and physics.
It helps calculating complex integral in a convenient way.
for more info: https://en.wikipedia.org/wiki/Gamma_function
Python's Standard Library math.gamma() function overflows around gamma(171.624).
"""
from math import pi, sqrt
def gamma(num: float) -> float:
"""
Calculates the value of Gamma function of num
where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
Implemented using recursion
Examples:
>>> from math import isclose, gamma as math_gamma
>>> gamma(0.5)
1.7724538509055159
>>> gamma(2)
1.0
>>> gamma(3.5)
3.3233509704478426
>>> gamma(171.5)
9.483367566824795e+307
>>> all(isclose(gamma(num), math_gamma(num)) for num in (0.5, 2, 3.5, 171.5))
True
>>> gamma(0)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(-1.1)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(-4)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(172)
Traceback (most recent call last):
...
OverflowError: math range error
>>> gamma(1.1)
Traceback (most recent call last):
...
NotImplementedError: num must be an integer or a half-integer
"""
if num <= 0:
raise ValueError("math domain error")
if num > 171.5:
raise OverflowError("math range error")
elif num - int(num) not in (0, 0.5):
raise NotImplementedError("num must be an integer or a half-integer")
elif num == 0.5:
return sqrt(pi)
else:
return 1.0 if num == 1 else (num - 1) * gamma(num - 1)
def test_gamma() -> None:
"""
>>> test_gamma()
"""
assert gamma(0.5) == sqrt(pi)
assert gamma(1) == 1.0
assert gamma(2) == 1.0
if __name__ == "__main__":
from doctest import testmod
testmod()
num = 1.0
while num:
num = float(input("Gamma of: "))
print(f"gamma({num}) = {gamma(num)}")
print("\nEnter 0 to exit...")
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Round Robin is a scheduling algorithm.
In Round Robin each process is assigned a fixed time slot in a cyclic way.
https://en.wikipedia.org/wiki/Round-robin_scheduling
"""
from __future__ import annotations
from statistics import mean
def calculate_waiting_times(burst_times: list[int]) -> list[int]:
"""
Calculate the waiting times of a list of processes that have a specified duration.
Return: The waiting time for each process.
>>> calculate_waiting_times([10, 5, 8])
[13, 10, 13]
>>> calculate_waiting_times([4, 6, 3, 1])
[5, 8, 9, 6]
>>> calculate_waiting_times([12, 2, 10])
[12, 2, 12]
"""
quantum = 2
rem_burst_times = list(burst_times)
waiting_times = [0] * len(burst_times)
t = 0
while True:
done = True
for i, burst_time in enumerate(burst_times):
if rem_burst_times[i] > 0:
done = False
if rem_burst_times[i] > quantum:
t += quantum
rem_burst_times[i] -= quantum
else:
t += rem_burst_times[i]
waiting_times[i] = t - burst_time
rem_burst_times[i] = 0
if done is True:
return waiting_times
def calculate_turn_around_times(
burst_times: list[int], waiting_times: list[int]
) -> list[int]:
"""
>>> calculate_turn_around_times([1, 2, 3, 4], [0, 1, 3])
[1, 3, 6]
>>> calculate_turn_around_times([10, 3, 7], [10, 6, 11])
[20, 9, 18]
"""
return [burst + waiting for burst, waiting in zip(burst_times, waiting_times)]
if __name__ == "__main__":
burst_times = [3, 5, 7]
waiting_times = calculate_waiting_times(burst_times)
turn_around_times = calculate_turn_around_times(burst_times, waiting_times)
print("Process ID \tBurst Time \tWaiting Time \tTurnaround Time")
for i, burst_time in enumerate(burst_times):
print(
f" {i + 1}\t\t {burst_time}\t\t {waiting_times[i]}\t\t "
f"{turn_around_times[i]}"
)
print(f"\nAverage waiting time = {mean(waiting_times):.5f}")
print(f"Average turn around time = {mean(turn_around_times):.5f}")
| """
Round Robin is a scheduling algorithm.
In Round Robin each process is assigned a fixed time slot in a cyclic way.
https://en.wikipedia.org/wiki/Round-robin_scheduling
"""
from __future__ import annotations
from statistics import mean
def calculate_waiting_times(burst_times: list[int]) -> list[int]:
"""
Calculate the waiting times of a list of processes that have a specified duration.
Return: The waiting time for each process.
>>> calculate_waiting_times([10, 5, 8])
[13, 10, 13]
>>> calculate_waiting_times([4, 6, 3, 1])
[5, 8, 9, 6]
>>> calculate_waiting_times([12, 2, 10])
[12, 2, 12]
"""
quantum = 2
rem_burst_times = list(burst_times)
waiting_times = [0] * len(burst_times)
t = 0
while True:
done = True
for i, burst_time in enumerate(burst_times):
if rem_burst_times[i] > 0:
done = False
if rem_burst_times[i] > quantum:
t += quantum
rem_burst_times[i] -= quantum
else:
t += rem_burst_times[i]
waiting_times[i] = t - burst_time
rem_burst_times[i] = 0
if done is True:
return waiting_times
def calculate_turn_around_times(
burst_times: list[int], waiting_times: list[int]
) -> list[int]:
"""
>>> calculate_turn_around_times([1, 2, 3, 4], [0, 1, 3])
[1, 3, 6]
>>> calculate_turn_around_times([10, 3, 7], [10, 6, 11])
[20, 9, 18]
"""
return [burst + waiting for burst, waiting in zip(burst_times, waiting_times)]
if __name__ == "__main__":
burst_times = [3, 5, 7]
waiting_times = calculate_waiting_times(burst_times)
turn_around_times = calculate_turn_around_times(burst_times, waiting_times)
print("Process ID \tBurst Time \tWaiting Time \tTurnaround Time")
for i, burst_time in enumerate(burst_times):
print(
f" {i + 1}\t\t {burst_time}\t\t {waiting_times[i]}\t\t "
f"{turn_around_times[i]}"
)
print(f"\nAverage waiting time = {mean(waiting_times):.5f}")
print(f"Average turn around time = {mean(turn_around_times):.5f}")
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Greatest Common Divisor.
Wikipedia reference: https://en.wikipedia.org/wiki/Greatest_common_divisor
gcd(a, b) = gcd(a, -b) = gcd(-a, b) = gcd(-a, -b) by definition of divisibility
"""
def greatest_common_divisor(a: int, b: int) -> int:
"""
Calculate Greatest Common Divisor (GCD).
>>> greatest_common_divisor(24, 40)
8
>>> greatest_common_divisor(1, 1)
1
>>> greatest_common_divisor(1, 800)
1
>>> greatest_common_divisor(11, 37)
1
>>> greatest_common_divisor(3, 5)
1
>>> greatest_common_divisor(16, 4)
4
>>> greatest_common_divisor(-3, 9)
3
>>> greatest_common_divisor(9, -3)
3
>>> greatest_common_divisor(3, -9)
3
>>> greatest_common_divisor(-3, -9)
3
"""
return abs(b) if a == 0 else greatest_common_divisor(b % a, a)
def gcd_by_iterative(x: int, y: int) -> int:
"""
Below method is more memory efficient because it does not create additional
stack frames for recursive functions calls (as done in the above method).
>>> gcd_by_iterative(24, 40)
8
>>> greatest_common_divisor(24, 40) == gcd_by_iterative(24, 40)
True
>>> gcd_by_iterative(-3, -9)
3
>>> gcd_by_iterative(3, -9)
3
>>> gcd_by_iterative(1, -800)
1
>>> gcd_by_iterative(11, 37)
1
"""
while y: # --> when y=0 then loop will terminate and return x as final GCD.
x, y = y, x % y
return abs(x)
def main():
"""
Call Greatest Common Divisor function.
"""
try:
nums = input("Enter two integers separated by comma (,): ").split(",")
num_1 = int(nums[0])
num_2 = int(nums[1])
print(
f"greatest_common_divisor({num_1}, {num_2}) = "
f"{greatest_common_divisor(num_1, num_2)}"
)
print(f"By iterative gcd({num_1}, {num_2}) = {gcd_by_iterative(num_1, num_2)}")
except (IndexError, UnboundLocalError, ValueError):
print("Wrong input")
if __name__ == "__main__":
main()
| """
Greatest Common Divisor.
Wikipedia reference: https://en.wikipedia.org/wiki/Greatest_common_divisor
gcd(a, b) = gcd(a, -b) = gcd(-a, b) = gcd(-a, -b) by definition of divisibility
"""
def greatest_common_divisor(a: int, b: int) -> int:
"""
Calculate Greatest Common Divisor (GCD).
>>> greatest_common_divisor(24, 40)
8
>>> greatest_common_divisor(1, 1)
1
>>> greatest_common_divisor(1, 800)
1
>>> greatest_common_divisor(11, 37)
1
>>> greatest_common_divisor(3, 5)
1
>>> greatest_common_divisor(16, 4)
4
>>> greatest_common_divisor(-3, 9)
3
>>> greatest_common_divisor(9, -3)
3
>>> greatest_common_divisor(3, -9)
3
>>> greatest_common_divisor(-3, -9)
3
"""
return abs(b) if a == 0 else greatest_common_divisor(b % a, a)
def gcd_by_iterative(x: int, y: int) -> int:
"""
Below method is more memory efficient because it does not create additional
stack frames for recursive functions calls (as done in the above method).
>>> gcd_by_iterative(24, 40)
8
>>> greatest_common_divisor(24, 40) == gcd_by_iterative(24, 40)
True
>>> gcd_by_iterative(-3, -9)
3
>>> gcd_by_iterative(3, -9)
3
>>> gcd_by_iterative(1, -800)
1
>>> gcd_by_iterative(11, 37)
1
"""
while y: # --> when y=0 then loop will terminate and return x as final GCD.
x, y = y, x % y
return abs(x)
def main():
"""
Call Greatest Common Divisor function.
"""
try:
nums = input("Enter two integers separated by comma (,): ").split(",")
num_1 = int(nums[0])
num_2 = int(nums[1])
print(
f"greatest_common_divisor({num_1}, {num_2}) = "
f"{greatest_common_divisor(num_1, num_2)}"
)
print(f"By iterative gcd({num_1}, {num_2}) = {gcd_by_iterative(num_1, num_2)}")
except (IndexError, UnboundLocalError, ValueError):
print("Wrong input")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 6,040 | fixed mypy annotations for arithmetic_analysis | ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Leoriem-code | "2022-03-09T16:34:31Z" | "2022-05-12T03:35:56Z" | e23c18fb5cb34d51b69e2840c304ade597163085 | 533eea5afa916fbe1d0db6db8da76c68b2928ca0 | fixed mypy annotations for arithmetic_analysis. ### Describe your change:
Updated mypy annotations
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from datetime import datetime
import requests
from bs4 import BeautifulSoup
if __name__ == "__main__":
url = input("Enter image url: ").strip()
print(f"Downloading image from {url} ...")
soup = BeautifulSoup(requests.get(url).content, "html.parser")
# The image URL is in the content field of the first meta tag with property og:image
image_url = soup.find("meta", {"property": "og:image"})["content"]
image_data = requests.get(image_url).content
file_name = f"{datetime.now():%Y-%m-%d_%H:%M:%S}.jpg"
with open(file_name, "wb") as fp:
fp.write(image_data)
print(f"Done. Image saved to disk as {file_name}.")
| from datetime import datetime
import requests
from bs4 import BeautifulSoup
if __name__ == "__main__":
url = input("Enter image url: ").strip()
print(f"Downloading image from {url} ...")
soup = BeautifulSoup(requests.get(url).content, "html.parser")
# The image URL is in the content field of the first meta tag with property og:image
image_url = soup.find("meta", {"property": "og:image"})["content"]
image_data = requests.get(image_url).content
file_name = f"{datetime.now():%Y-%m-%d_%H:%M:%S}.jpg"
with open(file_name, "wb") as fp:
fp.write(image_data)
print(f"Done. Image saved to disk as {file_name}.")
| -1 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.