repo_name (string, 1 unique value) | pr_number (int64, 4.12k to 11.2k) | pr_title (string, 9 to 107 chars) | pr_description (string, 107 to 5.48k chars) | author (string, 4 to 18 chars) | date_created (unknown) | date_merged (unknown) | previous_commit (string, 40 chars) | pr_commit (string, 40 chars) | query (string, 118 to 5.52k chars) | before_content (string, 0 to 7.93M chars) | after_content (string, 0 to 7.93M chars) | label (int64, -1 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Linear Discriminant Analysis
Assumptions About Data :
1. The input variables have a gaussian distribution.
2. The variance calculated for each input variable by class grouping is the
same.
3. The mix of classes in your training set is representative of the problem.
Learning The Model :
The LDA model requires the estimation of statistics from the training data :
1. Mean of each input value for each class.
2. Probability of an instance belonging to each class.
3. Covariance of the input data for each class.
Calculate the class means :
mean(x) = 1/n ( for i = 1 to i = n --> sum(xi))
Calculate the class probabilities :
P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))
Calculate the variance :
We can calculate the variance for the dataset in two steps:
1. Calculate the squared difference for each input variable from the
group mean.
2. Calculate the mean of the squared difference.
------------------------------------------------
Squared_Difference = (x - mean(k)) ** 2
Variance = (1 / (count(x) - count(classes))) *
(for i = 1 to i = n --> sum(Squared_Difference(xi)))
Making Predictions :
discriminant(x) = x * (mean / variance) -
((mean ** 2) / (2 * variance)) + Ln(probability)
---------------------------------------------------------------------------
After calculating the discriminant value for each class, the class with the
largest discriminant value is taken as the prediction.
Author: @EverLookNeverSee
"""
from math import log
from os import name, system
from random import gauss, seed
from typing import Callable, TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
Generate gaussian distribution instances based on the given mean and standard deviation
:param mean: mean value of class
:param std_dev: standard deviation entered by the user, or its default value
:param instance_count: instance number of class
:return: a list containing generated values based on the given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags to detect classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
:param items: items related to a specific class (data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
Calculate the probability that a given instance belongs to a particular class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y: a list containing initial Y values generated by the 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str = None,
) -> num:
"""
Ask for a user value and validate that it fulfills a condition.
:input_type: expected type of the user input value
:input_msg: message to show the user on the screen
:err_msg: message to show on the screen in case of error
:condition: function that represents the condition that the user input is valid.
:default: default value in case the user does not type anything
:return: user's input
"""
while True:
try:
user_input = input_type(input(input_msg).strip() or default)
if condition(user_input):
return user_input
else:
print(f"{user_input}: {err_msg}")
continue
except ValueError:
print(
f"{user_input}: Incorrect input type, expected {input_type.__name__!r}"
)
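# Example call (hypothetical values): ask for a positive int, defaulting to "3":
#   n = valid_input(input_type=int, input_msg="How many? ", err_msg="Must be > 0!",
#                   condition=lambda x: x > 0, default="3")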
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
# Trying to get the number of instances in each class and their means to generate
# dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in each class, one per line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out the mean value of each class, one per line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detect corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over the elements of the 'actual_means' list and prints
# them out, one per line
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over the elements of the 'probabilities' list and prints
# them out, one per line
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the pooled variance over all classes
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear")
if __name__ == "__main__":
main()
| """
Linear Discriminant Analysis
Assumptions About Data :
1. The input variables have a gaussian distribution.
2. The variance calculated for each input variable by class grouping is the
same.
3. The mix of classes in your training set is representative of the problem.
Learning The Model :
The LDA model requires the estimation of statistics from the training data :
1. Mean of each input value for each class.
2. Probability of an instance belonging to each class.
3. Covariance of the input data for each class.
Calculate the class means :
mean(x) = 1/n ( for i = 1 to i = n --> sum(xi))
Calculate the class probabilities :
P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))
Calculate the variance :
We can calculate the variance for the dataset in two steps:
1. Calculate the squared difference for each input variable from the
group mean.
2. Calculate the mean of the squared difference.
------------------------------------------------
Squared_Difference = (x - mean(k)) ** 2
Variance = (1 / (count(x) - count(classes))) *
(for i = 1 to i = n --> sum(Squared_Difference(xi)))
Making Predictions :
discriminant(x) = x * (mean / variance) -
((mean ** 2) / (2 * variance)) + Ln(probability)
---------------------------------------------------------------------------
After calculating the discriminant value for each class, the class with the
largest discriminant value is taken as the prediction.
Author: @EverLookNeverSee
"""
from math import log
from os import name, system
from random import gauss, seed
from typing import Callable, TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
Generate gaussian distribution instances based on the given mean and standard deviation
:param mean: mean value of class
:param std_dev: standard deviation entered by the user, or its default value
:param instance_count: instance number of class
:return: a list containing generated values based on the given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags to detect classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
:param items: items related to a specific class (data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
Calculate the probability that a given instance belongs to a particular class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y: a list containing initial Y values generated by the 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str = None,
) -> num:
"""
Ask for a user value and validate that it fulfills a condition.
:input_type: expected type of the user input value
:input_msg: message to show the user on the screen
:err_msg: message to show on the screen in case of error
:condition: function that represents the condition that the user input is valid.
:default: default value in case the user does not type anything
:return: user's input
"""
while True:
try:
user_input = input_type(input(input_msg).strip() or default)
if condition(user_input):
return user_input
else:
print(f"{user_input}: {err_msg}")
continue
except ValueError:
print(
f"{user_input}: Incorrect input type, expected {input_type.__name__!r}"
)
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
# Trying to get the number of instances in each class and their means to generate
# dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in each class, one per line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out the mean value of each class, one per line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detect corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over the elements of the 'actual_means' list and prints
# them out, one per line
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over the elements of the 'probabilities' list and prints
# them out, one per line
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the pooled variance over all classes
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Uses Pythagoras theorem to calculate the distance between two points in space."""
import math
class Point:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def __repr__(self) -> str:
return f"Point({self.x}, {self.y}, {self.z})"
def distance(a: Point, b: Point) -> float:
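# 3D Euclidean distance: sqrt((b.x - a.x)**2 + (b.y - a.y)**2 + (b.z - a.z)**2).
# The abs() below is redundant, since a sum of squares is never negative.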
return math.sqrt(abs((b.x - a.x) ** 2 + (b.y - a.y) ** 2 + (b.z - a.z) ** 2))
def test_distance() -> None:
"""
>>> point1 = Point(2, -1, 7)
>>> point2 = Point(1, -3, 5)
>>> print(f"Distance from {point1} to {point2} is {distance(point1, point2)}")
Distance from Point(2, -1, 7) to Point(1, -3, 5) is 3.0
"""
pass
if __name__ == "__main__":
import doctest
doctest.testmod()
| """Uses Pythagoras theorem to calculate the distance between two points in space."""
import math
class Point:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def __repr__(self) -> str:
return f"Point({self.x}, {self.y}, {self.z})"
def distance(a: Point, b: Point) -> float:
return math.sqrt(abs((b.x - a.x) ** 2 + (b.y - a.y) ** 2 + (b.z - a.z) ** 2))
def test_distance() -> None:
"""
>>> point1 = Point(2, -1, 7)
>>> point2 = Point(1, -3, 5)
>>> print(f"Distance from {point1} to {point2} is {distance(point1, point2)}")
Distance from Point(2, -1, 7) to Point(1, -3, 5) is 3.0
"""
pass
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # floyd_warshall.py
"""
The problem is to find the shortest distance between all pairs of vertices in a
weighted directed graph that can have negative edge weights (but no negative cycles).
"""
def _print_dist(dist, v):
print("\nThe shortest path matrix using Floyd Warshall algorithm\n")
for i in range(v):
for j in range(v):
if dist[i][j] != float("inf"):
print(int(dist[i][j]), end="\t")
else:
print("INF", end="\t")
print()
def floyd_warshall(graph, v):
"""
:param graph: 2D array calculated from weight[edge[i, j]]
:type graph: List[List[float]]
:param v: number of vertices
:type v: int
:return: shortest distance between all vertex pairs
distance[u][v] will contain the shortest distance from vertex u to v.
1. For every edge (i, j), distance[i][j] is initialised to weight(edge(i, j)).
2. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] +
distance[k][j]) for each possible pair i, j of vertices.
3. The above is repeated for each vertex k in the graph.
4. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is
updated to the next vertex[i][k].
"""
dist = [[float("inf") for _ in range(v)] for _ in range(v)]
for i in range(v):
for j in range(v):
dist[i][j] = graph[i][j]
# check vertex k against all other vertices (i, j)
for k in range(v):
# looping through rows of graph array
for i in range(v):
# looping through columns of graph array
for j in range(v):
if (
dist[i][k] != float("inf")
and dist[k][j] != float("inf")
and dist[i][k] + dist[k][j] < dist[i][j]
):
dist[i][j] = dist[i][k] + dist[k][j]
_print_dist(dist, v)
return dist, v
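# Example usage with a hypothetical 3-vertex graph (edge 0 -> 1 of weight 2.0 and
# edge 1 -> 2 of weight 1.0); after the call, dist[0][2] is 3.0:
#   INF = float("inf")
#   example_graph = [[0.0, 2.0, INF], [INF, 0.0, 1.0], [INF, INF, 0.0]]
#   example_dist, _ = floyd_warshall(example_graph, 3)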
if __name__ == "__main__":
v = int(input("Enter number of vertices: "))
e = int(input("Enter number of edges: "))
graph = [[float("inf") for i in range(v)] for j in range(v)]
for i in range(v):
graph[i][i] = 0.0
# src and dst are vertex indices and must be within range(v), since graph is v x v
# failure to follow this will result in an error
for i in range(e):
print("\nEdge ", i + 1)
src = int(input("Enter source:"))
dst = int(input("Enter destination:"))
weight = float(input("Enter weight:"))
graph[src][dst] = weight
floyd_warshall(graph, v)
# Example Input
# Enter number of vertices: 3
# Enter number of edges: 2
# # generated graph from vertex and edge inputs
# [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]]
# [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]]
# specify source, destination and weight for edge #1
# Edge 1
# Enter source:1
# Enter destination:2
# Enter weight:2
# specify source, destination and weight for edge #2
# Edge 2
# Enter source:2
# Enter destination:1
# Enter weight:1
# # Expected output from the vertex, edge and src, dst, weight inputs
# 0 INF INF
# INF 0 2
# INF 1 0
| # floyd_warshall.py
"""
The problem is to find the shortest distance between all pairs of vertices in a
weighted directed graph that can have negative edge weights (but no negative cycles).
"""
def _print_dist(dist, v):
print("\nThe shortest path matrix using Floyd Warshall algorithm\n")
for i in range(v):
for j in range(v):
if dist[i][j] != float("inf"):
print(int(dist[i][j]), end="\t")
else:
print("INF", end="\t")
print()
def floyd_warshall(graph, v):
"""
:param graph: 2D array calculated from weight[edge[i, j]]
:type graph: List[List[float]]
:param v: number of vertices
:type v: int
:return: shortest distance between all vertex pairs
distance[u][v] will contain the shortest distance from vertex u to v.
1. For every edge (i, j), distance[i][j] is initialised to weight(edge(i, j)).
2. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] +
distance[k][j]) for each possible pair i, j of vertices.
3. The above is repeated for each vertex k in the graph.
4. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is
updated to the next vertex[i][k].
"""
dist = [[float("inf") for _ in range(v)] for _ in range(v)]
for i in range(v):
for j in range(v):
dist[i][j] = graph[i][j]
# check vertex k against all other vertices (i, j)
for k in range(v):
# looping through rows of graph array
for i in range(v):
# looping through columns of graph array
for j in range(v):
if (
dist[i][k] != float("inf")
and dist[k][j] != float("inf")
and dist[i][k] + dist[k][j] < dist[i][j]
):
dist[i][j] = dist[i][k] + dist[k][j]
_print_dist(dist, v)
return dist, v
if __name__ == "__main__":
v = int(input("Enter number of vertices: "))
e = int(input("Enter number of edges: "))
graph = [[float("inf") for i in range(v)] for j in range(v)]
for i in range(v):
graph[i][i] = 0.0
# src and dst are vertex indices and must be within range(v), since graph is v x v
# failure to follow this will result in an error
for i in range(e):
print("\nEdge ", i + 1)
src = int(input("Enter source:"))
dst = int(input("Enter destination:"))
weight = float(input("Enter weight:"))
graph[src][dst] = weight
floyd_warshall(graph, v)
# Example Input
# Enter number of vertices: 3
# Enter number of edges: 2
# # generated graph from vertex and edge inputs
# [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]]
# [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]]
# specify source, destination and weight for edge #1
# Edge 1
# Enter source:1
# Enter destination:2
# Enter weight:2
# specify source, destination and weight for edge #2
# Edge 2
# Enter source:2
# Enter destination:1
# Enter weight:1
# # Expected output from the vertex, edge and src, dst, weight inputs
# 0 INF INF
# INF 0 2
# INF 1 0
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Implementation of sequential minimal optimization (SMO) for support vector machines
(SVM).
Sequential minimal optimization (SMO) is an algorithm for solving the quadratic
programming (QP) problem that arises during the training of support vector
machines.
It was invented by John Platt in 1998.
Input:
0: type: numpy.ndarray.
1: first column of ndarray must be tags of samples, must be 1 or -1.
2: rows of ndarray represent samples.
Usage:
Command:
python3 sequential_minimum_optimization.py
Code:
from sequential_minimum_optimization import SmoSVM, Kernel
kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5)
init_alphas = np.zeros(train.shape[0])
SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel, cost=0.4,
b=0.0, tolerance=0.001)
SVM.fit()
predict = SVM.predict(test_samples)
Reference:
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf
http://web.cs.iastate.edu/~honavar/smo-svm.pdf
"""
import os
import sys
import urllib.request
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.datasets import make_blobs, make_circles
from sklearn.preprocessing import StandardScaler
CANCER_DATASET_URL = (
"http://archive.ics.uci.edu/ml/machine-learning-databases/"
"breast-cancer-wisconsin/wdbc.data"
)
class SmoSVM:
def __init__(
self,
train,
kernel_func,
alpha_list=None,
cost=0.4,
b=0.0,
tolerance=0.001,
auto_norm=True,
):
self._init = True
self._auto_norm = auto_norm
self._c = np.float64(cost)
self._b = np.float64(b)
self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001)
self.tags = train[:, 0]
self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:]
self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0])
self.Kernel = kernel_func
self._eps = 0.001
self._all_samples = list(range(self.length))
self._K_matrix = self._calculate_k_matrix()
self._error = np.zeros(self.length)
self._unbound = []
self.choose_alpha = self._choose_alphas()
# Calculate alphas using SMO algorithm
def fit(self):
K = self._k
state = None
while True:
# 1: Find alpha1, alpha2
try:
i1, i2 = self.choose_alpha.send(state)
state = None
except StopIteration:
print("Optimization done!\nEvery sample satisfy the KKT condition!")
break
# 2: calculate new alpha2 and new alpha1
y1, y2 = self.tags[i1], self.tags[i2]
a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy()
e1, e2 = self._e(i1), self._e(i2)
args = (i1, i2, a1, a2, e1, e2, y1, y2)
a1_new, a2_new = self._get_new_alpha(*args)
if not a1_new and not a2_new:
state = False
continue
self.alphas[i1], self.alphas[i2] = a1_new, a2_new
# 3: update threshold(b)
b1_new = np.float64(
-e1
- y1 * K(i1, i1) * (a1_new - a1)
- y2 * K(i2, i1) * (a2_new - a2)
+ self._b
)
b2_new = np.float64(
-e2
- y2 * K(i2, i2) * (a2_new - a2)
- y1 * K(i1, i2) * (a1_new - a1)
+ self._b
)
if 0.0 < a1_new < self._c:
b = b1_new
if 0.0 < a2_new < self._c:
b = b2_new
if not (np.float64(0) < a2_new < self._c) and not (
np.float64(0) < a1_new < self._c
):
b = (b1_new + b2_new) / 2.0
b_old = self._b
self._b = b
            # 4: update error values; here we only recalculate the errors of
            # non-bound samples
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
for s in self.unbound:
if s == i1 or s == i2:
continue
self._error[s] += (
y1 * (a1_new - a1) * K(i1, s)
+ y2 * (a2_new - a2) * K(i2, s)
+ (self._b - b_old)
)
            # if i1 or i2 is non-bound, update their error values to zero
if self._is_unbound(i1):
self._error[i1] = 0
if self._is_unbound(i2):
self._error[i2] = 0
    # Predict test samples
def predict(self, test_samples, classify=True):
if test_samples.shape[1] > self.samples.shape[1]:
raise ValueError(
"Test samples' feature length does not equal to that of train samples"
)
if self._auto_norm:
test_samples = self._norm(test_samples)
results = []
for test_sample in test_samples:
result = self._predict(test_sample)
if classify:
results.append(1 if result > 0 else -1)
else:
results.append(result)
return np.array(results)
    # Check if alpha violates the KKT condition
def _check_obey_kkt(self, index):
alphas = self.alphas
tol = self._tol
r = self._e(index) * self.tags[index]
c = self._c
return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0)
# Get value calculated from kernel function
def _k(self, i1, i2):
# for test samples,use Kernel function
if isinstance(i2, np.ndarray):
return self.Kernel(self.samples[i1], i2)
# for train samples,Kernel values have been saved in matrix
else:
return self._K_matrix[i1, i2]
# Get sample's error
def _e(self, index):
"""
Two cases:
        1: Sample[index] is non-bound, fetch its error from the list: _error
        2: Sample[index] is bound, use predicted value minus true value: g(xi) - yi
"""
# get from error data
if self._is_unbound(index):
return self._error[index]
# get by g(xi) - yi
else:
gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b
yi = self.tags[index]
return gx - yi
# Calculate Kernel matrix of all possible i1,i2 ,saving time
def _calculate_k_matrix(self):
k_matrix = np.zeros([self.length, self.length])
for i in self._all_samples:
for j in self._all_samples:
k_matrix[i, j] = np.float64(
self.Kernel(self.samples[i, :], self.samples[j, :])
)
return k_matrix
# Predict test sample's tag
def _predict(self, sample):
k = self._k
predicted_value = (
np.sum(
[
self.alphas[i1] * self.tags[i1] * k(i1, sample)
for i1 in self._all_samples
]
)
+ self._b
)
return predicted_value
# Choose alpha1 and alpha2
def _choose_alphas(self):
locis = yield from self._choose_a1()
if not locis:
return
return locis
def _choose_a1(self):
"""
        Choose the first alpha; steps:
        1: First, loop over all samples.
        2: Second, loop over all non-bound samples until no non-bound sample
        violates the KKT condition.
        3: Repeat these two steps endlessly, until no sample violates the KKT
        condition after the first loop.
"""
while True:
all_not_obey = True
# all sample
print("scanning all sample!")
for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]:
all_not_obey = False
yield from self._choose_a2(i1)
# non-bound sample
print("scanning non-bound sample!")
while True:
not_obey = True
for i1 in [
i
for i in self._all_samples
if self._check_obey_kkt(i) and self._is_unbound(i)
]:
not_obey = False
yield from self._choose_a2(i1)
if not_obey:
print("all non-bound samples fit the KKT condition!")
break
if all_not_obey:
print("all samples fit the KKT condition! Optimization done!")
break
return False
def _choose_a2(self, i1):
"""
        Choose the second alpha using a heuristic algorithm; steps:
        1: Choose alpha2 so that it gets the maximum step size (|E1 - E2|).
        2: Start at a random point and loop over all non-bound samples until alpha1
        and alpha2 are optimized.
        3: Start at a random point and loop over all samples until alpha1 and
        alpha2 are optimized.
"""
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
if len(self.unbound) > 0:
tmp_error = self._error.copy().tolist()
tmp_error_dict = {
index: value
for index, value in enumerate(tmp_error)
if self._is_unbound(index)
}
if self._e(i1) >= 0:
i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index])
else:
i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index])
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self.unbound, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self._all_samples, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
# Get the new alpha2 and new alpha1
def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2):
K = self._k
if i1 == i2:
return None, None
# calculate L and H which bound the new alpha2
s = y1 * y2
if s == -1:
L, H = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1)
else:
L, H = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1)
if L == H:
return None, None
# calculate eta
k11 = K(i1, i1)
k22 = K(i2, i2)
k12 = K(i1, i2)
eta = k11 + k22 - 2.0 * k12
# select the new alpha2 which could get the minimal objectives
if eta > 0.0:
a2_new_unc = a2 + (y2 * (e1 - e2)) / eta
# a2_new has a boundary
if a2_new_unc >= H:
a2_new = H
elif a2_new_unc <= L:
a2_new = L
else:
a2_new = a2_new_unc
else:
b = self._b
l1 = a1 + s * (a2 - L)
h1 = a1 + s * (a2 - H)
# way 1
f1 = y1 * (e1 + b) - a1 * K(i1, i1) - s * a2 * K(i1, i2)
f2 = y2 * (e2 + b) - a2 * K(i2, i2) - s * a1 * K(i1, i2)
ol = (
l1 * f1
+ L * f2
+ 1 / 2 * l1 ** 2 * K(i1, i1)
+ 1 / 2 * L ** 2 * K(i2, i2)
+ s * L * l1 * K(i1, i2)
)
oh = (
h1 * f1
+ H * f2
+ 1 / 2 * h1 ** 2 * K(i1, i1)
+ 1 / 2 * H ** 2 * K(i2, i2)
+ s * H * h1 * K(i1, i2)
)
"""
# way 2
Use objective function check which alpha2 new could get the minimal
objectives
"""
if ol < (oh - self._eps):
a2_new = L
elif ol > oh + self._eps:
a2_new = H
else:
a2_new = a2
# a1_new has a boundary too
a1_new = a1 + s * (a2 - a2_new)
if a1_new < 0:
a2_new += s * a1_new
a1_new = 0
if a1_new > self._c:
a2_new += s * (a1_new - self._c)
a1_new = self._c
return a1_new, a2_new
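    # Editor's note (a hedged summary, not from the original author): the method
    # above implements the standard SMO analytic update.  With
    #     eta = K(x1, x1) + K(x2, x2) - 2 * K(x1, x2)
    # the unconstrained optimum is
    #     a2_new_unc = a2 + y2 * (E1 - E2) / eta
    # which is then clipped to the box [L, H]; when eta <= 0, the objective is
    # evaluated at both ends of the segment and the smaller value is kept.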
# Normalise data using min_max way
def _norm(self, data):
if self._init:
self._min = np.min(data, axis=0)
self._max = np.max(data, axis=0)
self._init = False
return (data - self._min) / (self._max - self._min)
else:
return (data - self._min) / (self._max - self._min)
def _is_unbound(self, index):
if 0.0 < self.alphas[index] < self._c:
return True
else:
return False
def _is_support(self, index):
if self.alphas[index] > 0:
return True
else:
return False
@property
def unbound(self):
return self._unbound
@property
def support(self):
return [i for i in range(self.length) if self._is_support(i)]
@property
def length(self):
return self.samples.shape[0]
class Kernel:
def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0):
self.degree = np.float64(degree)
self.coef0 = np.float64(coef0)
self.gamma = np.float64(gamma)
self._kernel_name = kernel
self._kernel = self._get_kernel(kernel_name=kernel)
self._check()
def _polynomial(self, v1, v2):
return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree
def _linear(self, v1, v2):
return np.inner(v1, v2) + self.coef0
def _rbf(self, v1, v2):
return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2))
def _check(self):
if self._kernel == self._rbf:
if self.gamma < 0:
raise ValueError("gamma value must greater than 0")
def _get_kernel(self, kernel_name):
maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf}
return maps[kernel_name]
def __call__(self, v1, v2):
return self._kernel(v1, v2)
def __repr__(self):
return self._kernel_name
def count_time(func):
def call_func(*args, **kwargs):
import time
start_time = time.time()
func(*args, **kwargs)
end_time = time.time()
print(f"smo algorithm cost {end_time - start_time} seconds")
return call_func
@count_time
def test_cancel_data():
print("Hello!\nStart test svm by smo algorithm!")
# 0: download dataset and load into pandas' dataframe
if not os.path.exists(r"cancel_data.csv"):
request = urllib.request.Request(
CANCER_DATASET_URL,
headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"},
)
response = urllib.request.urlopen(request)
content = response.read().decode("utf-8")
with open(r"cancel_data.csv", "w") as f:
f.write(content)
data = pd.read_csv(r"cancel_data.csv", header=None)
# 1: pre-processing data
del data[data.columns.tolist()[0]]
data = data.dropna(axis=0)
data = data.replace({"M": np.float64(1), "B": np.float64(-1)})
samples = np.array(data)[:, :]
# 2: dividing data into train_data data and test_data data
train_data, test_data = samples[:328, :], samples[328:, :]
test_tags, test_samples = test_data[:, 0], test_data[:, 1:]
# 3: choose kernel function,and set initial alphas to zero(optional)
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
al = np.zeros(train_data.shape[0])
# 4: calculating best alphas using SMO algorithm and predict test_data samples
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
alpha_list=al,
cost=0.4,
b=0.0,
tolerance=0.001,
)
mysvm.fit()
predict = mysvm.predict(test_samples)
# 5: check accuracy
score = 0
test_num = test_tags.shape[0]
for i in range(test_tags.shape[0]):
if test_tags[i] == predict[i]:
score += 1
print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}")
print(f"Rough Accuracy: {score / test_tags.shape[0]}")
def test_demonstration():
# change stdout
print("\nStart plot,please wait!!!")
sys.stdout = open(os.devnull, "w")
ax1 = plt.subplot2grid((2, 2), (0, 0))
ax2 = plt.subplot2grid((2, 2), (0, 1))
ax3 = plt.subplot2grid((2, 2), (1, 0))
ax4 = plt.subplot2grid((2, 2), (1, 1))
ax1.set_title("linear svm,cost:0.1")
test_linear_kernel(ax1, cost=0.1)
ax2.set_title("linear svm,cost:500")
test_linear_kernel(ax2, cost=500)
ax3.set_title("rbf kernel svm,cost:0.1")
test_rbf_kernel(ax3, cost=0.1)
ax4.set_title("rbf kernel svm,cost:500")
test_rbf_kernel(ax4, cost=500)
sys.stdout = sys.__stdout__
print("Plot done!!!")
def test_linear_kernel(ax, cost):
train_x, train_y = make_blobs(
n_samples=500, centers=2, n_features=2, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def test_rbf_kernel(ax, cost):
train_x, train_y = make_circles(
n_samples=500, noise=0.1, factor=0.1, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def plot_partition_boundary(
model, train_data, ax, resolution=100, colors=("b", "k", "r")
):
"""
    We cannot get the optimum w of our kernel svm model, which is different from a
    linear svm. For this reason, we generate randomly distributed points with high
    density, and the predicted values of these points are calculated by using our
    trained model. Then we can use these predicted values to draw a contour map,
    and this contour map represents the svm's partition boundary.
"""
train_data_x = train_data[:, 1]
train_data_y = train_data[:, 2]
train_data_tags = train_data[:, 0]
xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution)
yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution)
test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape(
resolution * resolution, 2
)
test_tags = model.predict(test_samples, classify=False)
grid = test_tags.reshape((len(xrange), len(yrange)))
# Plot contour map which represents the partition boundary
ax.contour(
xrange,
yrange,
np.mat(grid).T,
levels=(-1, 0, 1),
linestyles=("--", "-", "--"),
linewidths=(1, 1, 1),
colors=colors,
)
# Plot all train samples
ax.scatter(
train_data_x,
train_data_y,
c=train_data_tags,
cmap=plt.cm.Dark2,
lw=0,
alpha=0.5,
)
# Plot support vectors
support = model.support
ax.scatter(
train_data_x[support],
train_data_y[support],
c=train_data_tags[support],
cmap=plt.cm.Dark2,
)
if __name__ == "__main__":
test_cancel_data()
test_demonstration()
plt.show()
| """
Implementation of sequential minimal optimization (SMO) for support vector machines
(SVM).
Sequential minimal optimization (SMO) is an algorithm for solving the quadratic
programming (QP) problem that arises during the training of support vector
machines.
It was invented by John Platt in 1998.
Input:
0: type: numpy.ndarray.
1: first column of ndarray must be tags of samples, must be 1 or -1.
2: rows of ndarray represent samples.
Usage:
Command:
python3 sequential_minimum_optimization.py
Code:
from sequential_minimum_optimization import SmoSVM, Kernel
kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5)
init_alphas = np.zeros(train.shape[0])
SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel, cost=0.4,
b=0.0, tolerance=0.001)
SVM.fit()
predict = SVM.predict(test_samples)
Reference:
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf
http://web.cs.iastate.edu/~honavar/smo-svm.pdf
"""
import os
import sys
import urllib.request
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.datasets import make_blobs, make_circles
from sklearn.preprocessing import StandardScaler
CANCER_DATASET_URL = (
"http://archive.ics.uci.edu/ml/machine-learning-databases/"
"breast-cancer-wisconsin/wdbc.data"
)
class SmoSVM:
def __init__(
self,
train,
kernel_func,
alpha_list=None,
cost=0.4,
b=0.0,
tolerance=0.001,
auto_norm=True,
):
self._init = True
self._auto_norm = auto_norm
self._c = np.float64(cost)
self._b = np.float64(b)
self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001)
self.tags = train[:, 0]
self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:]
self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0])
self.Kernel = kernel_func
self._eps = 0.001
self._all_samples = list(range(self.length))
self._K_matrix = self._calculate_k_matrix()
self._error = np.zeros(self.length)
self._unbound = []
self.choose_alpha = self._choose_alphas()
# Calculate alphas using SMO algorithm
def fit(self):
K = self._k
state = None
while True:
# 1: Find alpha1, alpha2
try:
i1, i2 = self.choose_alpha.send(state)
state = None
except StopIteration:
print("Optimization done!\nEvery sample satisfy the KKT condition!")
break
# 2: calculate new alpha2 and new alpha1
y1, y2 = self.tags[i1], self.tags[i2]
a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy()
e1, e2 = self._e(i1), self._e(i2)
args = (i1, i2, a1, a2, e1, e2, y1, y2)
a1_new, a2_new = self._get_new_alpha(*args)
if not a1_new and not a2_new:
state = False
continue
self.alphas[i1], self.alphas[i2] = a1_new, a2_new
# 3: update threshold(b)
b1_new = np.float64(
-e1
- y1 * K(i1, i1) * (a1_new - a1)
- y2 * K(i2, i1) * (a2_new - a2)
+ self._b
)
b2_new = np.float64(
-e2
- y2 * K(i2, i2) * (a2_new - a2)
- y1 * K(i1, i2) * (a1_new - a1)
+ self._b
)
if 0.0 < a1_new < self._c:
b = b1_new
if 0.0 < a2_new < self._c:
b = b2_new
if not (np.float64(0) < a2_new < self._c) and not (
np.float64(0) < a1_new < self._c
):
b = (b1_new + b2_new) / 2.0
b_old = self._b
self._b = b
            # 4: update error values; here we only recalculate the errors of
            # non-bound samples
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
for s in self.unbound:
if s == i1 or s == i2:
continue
self._error[s] += (
y1 * (a1_new - a1) * K(i1, s)
+ y2 * (a2_new - a2) * K(i2, s)
+ (self._b - b_old)
)
            # if i1 or i2 is non-bound, update their error values to zero
if self._is_unbound(i1):
self._error[i1] = 0
if self._is_unbound(i2):
self._error[i2] = 0
    # Predict test samples
def predict(self, test_samples, classify=True):
if test_samples.shape[1] > self.samples.shape[1]:
raise ValueError(
"Test samples' feature length does not equal to that of train samples"
)
if self._auto_norm:
test_samples = self._norm(test_samples)
results = []
for test_sample in test_samples:
result = self._predict(test_sample)
if classify:
results.append(1 if result > 0 else -1)
else:
results.append(result)
return np.array(results)
    # Check if alpha violates the KKT condition
def _check_obey_kkt(self, index):
alphas = self.alphas
tol = self._tol
r = self._e(index) * self.tags[index]
c = self._c
return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0)
# Get value calculated from kernel function
def _k(self, i1, i2):
# for test samples,use Kernel function
if isinstance(i2, np.ndarray):
return self.Kernel(self.samples[i1], i2)
# for train samples,Kernel values have been saved in matrix
else:
return self._K_matrix[i1, i2]
# Get sample's error
def _e(self, index):
"""
Two cases:
        1: Sample[index] is non-bound, fetch its error from the list: _error
        2: Sample[index] is bound, use predicted value minus true value: g(xi) - yi
"""
# get from error data
if self._is_unbound(index):
return self._error[index]
# get by g(xi) - yi
else:
gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b
yi = self.tags[index]
return gx - yi
# Calculate Kernel matrix of all possible i1,i2 ,saving time
def _calculate_k_matrix(self):
k_matrix = np.zeros([self.length, self.length])
for i in self._all_samples:
for j in self._all_samples:
k_matrix[i, j] = np.float64(
self.Kernel(self.samples[i, :], self.samples[j, :])
)
return k_matrix
# Predict test sample's tag
def _predict(self, sample):
k = self._k
predicted_value = (
np.sum(
[
self.alphas[i1] * self.tags[i1] * k(i1, sample)
for i1 in self._all_samples
]
)
+ self._b
)
return predicted_value
# Choose alpha1 and alpha2
def _choose_alphas(self):
locis = yield from self._choose_a1()
if not locis:
return
return locis
def _choose_a1(self):
"""
        Choose the first alpha; steps:
        1: First, loop over all samples.
        2: Second, loop over all non-bound samples until no non-bound sample
        violates the KKT condition.
        3: Repeat these two steps endlessly, until no sample violates the KKT
        condition after the first loop.
"""
while True:
all_not_obey = True
# all sample
print("scanning all sample!")
for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]:
all_not_obey = False
yield from self._choose_a2(i1)
# non-bound sample
print("scanning non-bound sample!")
while True:
not_obey = True
for i1 in [
i
for i in self._all_samples
if self._check_obey_kkt(i) and self._is_unbound(i)
]:
not_obey = False
yield from self._choose_a2(i1)
if not_obey:
print("all non-bound samples fit the KKT condition!")
break
if all_not_obey:
print("all samples fit the KKT condition! Optimization done!")
break
return False
def _choose_a2(self, i1):
"""
        Choose the second alpha using a heuristic algorithm; steps:
        1: Choose alpha2 so that it gets the maximum step size (|E1 - E2|).
        2: Start at a random point and loop over all non-bound samples until alpha1
        and alpha2 are optimized.
        3: Start at a random point and loop over all samples until alpha1 and
        alpha2 are optimized.
"""
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
if len(self.unbound) > 0:
tmp_error = self._error.copy().tolist()
tmp_error_dict = {
index: value
for index, value in enumerate(tmp_error)
if self._is_unbound(index)
}
if self._e(i1) >= 0:
i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index])
else:
i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index])
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self.unbound, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self._all_samples, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
# Get the new alpha2 and new alpha1
def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2):
K = self._k
if i1 == i2:
return None, None
# calculate L and H which bound the new alpha2
s = y1 * y2
if s == -1:
L, H = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1)
else:
L, H = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1)
if L == H:
return None, None
# calculate eta
k11 = K(i1, i1)
k22 = K(i2, i2)
k12 = K(i1, i2)
eta = k11 + k22 - 2.0 * k12
# select the new alpha2 which could get the minimal objectives
if eta > 0.0:
a2_new_unc = a2 + (y2 * (e1 - e2)) / eta
# a2_new has a boundary
if a2_new_unc >= H:
a2_new = H
elif a2_new_unc <= L:
a2_new = L
else:
a2_new = a2_new_unc
else:
b = self._b
l1 = a1 + s * (a2 - L)
h1 = a1 + s * (a2 - H)
# way 1
f1 = y1 * (e1 + b) - a1 * K(i1, i1) - s * a2 * K(i1, i2)
f2 = y2 * (e2 + b) - a2 * K(i2, i2) - s * a1 * K(i1, i2)
ol = (
l1 * f1
+ L * f2
+ 1 / 2 * l1 ** 2 * K(i1, i1)
+ 1 / 2 * L ** 2 * K(i2, i2)
+ s * L * l1 * K(i1, i2)
)
oh = (
h1 * f1
+ H * f2
+ 1 / 2 * h1 ** 2 * K(i1, i1)
+ 1 / 2 * H ** 2 * K(i2, i2)
+ s * H * h1 * K(i1, i2)
)
"""
# way 2
Use objective function check which alpha2 new could get the minimal
objectives
"""
if ol < (oh - self._eps):
a2_new = L
elif ol > oh + self._eps:
a2_new = H
else:
a2_new = a2
# a1_new has a boundary too
a1_new = a1 + s * (a2 - a2_new)
if a1_new < 0:
a2_new += s * a1_new
a1_new = 0
if a1_new > self._c:
a2_new += s * (a1_new - self._c)
a1_new = self._c
return a1_new, a2_new
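    # Editor's note (a hedged summary, not from the original author): the method
    # above implements the standard SMO analytic update.  With
    #     eta = K(x1, x1) + K(x2, x2) - 2 * K(x1, x2)
    # the unconstrained optimum is
    #     a2_new_unc = a2 + y2 * (E1 - E2) / eta
    # which is then clipped to the box [L, H]; when eta <= 0, the objective is
    # evaluated at both ends of the segment and the smaller value is kept.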
# Normalise data using min_max way
def _norm(self, data):
if self._init:
self._min = np.min(data, axis=0)
self._max = np.max(data, axis=0)
self._init = False
return (data - self._min) / (self._max - self._min)
else:
return (data - self._min) / (self._max - self._min)
def _is_unbound(self, index):
if 0.0 < self.alphas[index] < self._c:
return True
else:
return False
def _is_support(self, index):
if self.alphas[index] > 0:
return True
else:
return False
@property
def unbound(self):
return self._unbound
@property
def support(self):
return [i for i in range(self.length) if self._is_support(i)]
@property
def length(self):
return self.samples.shape[0]
class Kernel:
def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0):
self.degree = np.float64(degree)
self.coef0 = np.float64(coef0)
self.gamma = np.float64(gamma)
self._kernel_name = kernel
self._kernel = self._get_kernel(kernel_name=kernel)
self._check()
def _polynomial(self, v1, v2):
return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree
def _linear(self, v1, v2):
return np.inner(v1, v2) + self.coef0
def _rbf(self, v1, v2):
return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2))
def _check(self):
if self._kernel == self._rbf:
if self.gamma < 0:
raise ValueError("gamma value must greater than 0")
def _get_kernel(self, kernel_name):
maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf}
return maps[kernel_name]
def __call__(self, v1, v2):
return self._kernel(v1, v2)
def __repr__(self):
return self._kernel_name
def count_time(func):
def call_func(*args, **kwargs):
import time
start_time = time.time()
func(*args, **kwargs)
end_time = time.time()
print(f"smo algorithm cost {end_time - start_time} seconds")
return call_func
@count_time
def test_cancel_data():
print("Hello!\nStart test svm by smo algorithm!")
# 0: download dataset and load into pandas' dataframe
if not os.path.exists(r"cancel_data.csv"):
request = urllib.request.Request(
CANCER_DATASET_URL,
headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"},
)
response = urllib.request.urlopen(request)
content = response.read().decode("utf-8")
with open(r"cancel_data.csv", "w") as f:
f.write(content)
data = pd.read_csv(r"cancel_data.csv", header=None)
# 1: pre-processing data
del data[data.columns.tolist()[0]]
data = data.dropna(axis=0)
data = data.replace({"M": np.float64(1), "B": np.float64(-1)})
samples = np.array(data)[:, :]
# 2: dividing data into train_data data and test_data data
train_data, test_data = samples[:328, :], samples[328:, :]
test_tags, test_samples = test_data[:, 0], test_data[:, 1:]
# 3: choose kernel function,and set initial alphas to zero(optional)
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
al = np.zeros(train_data.shape[0])
# 4: calculating best alphas using SMO algorithm and predict test_data samples
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
alpha_list=al,
cost=0.4,
b=0.0,
tolerance=0.001,
)
mysvm.fit()
predict = mysvm.predict(test_samples)
# 5: check accuracy
score = 0
test_num = test_tags.shape[0]
for i in range(test_tags.shape[0]):
if test_tags[i] == predict[i]:
score += 1
print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}")
print(f"Rough Accuracy: {score / test_tags.shape[0]}")
def test_demonstration():
# change stdout
print("\nStart plot,please wait!!!")
sys.stdout = open(os.devnull, "w")
ax1 = plt.subplot2grid((2, 2), (0, 0))
ax2 = plt.subplot2grid((2, 2), (0, 1))
ax3 = plt.subplot2grid((2, 2), (1, 0))
ax4 = plt.subplot2grid((2, 2), (1, 1))
ax1.set_title("linear svm,cost:0.1")
test_linear_kernel(ax1, cost=0.1)
ax2.set_title("linear svm,cost:500")
test_linear_kernel(ax2, cost=500)
ax3.set_title("rbf kernel svm,cost:0.1")
test_rbf_kernel(ax3, cost=0.1)
ax4.set_title("rbf kernel svm,cost:500")
test_rbf_kernel(ax4, cost=500)
sys.stdout = sys.__stdout__
print("Plot done!!!")
def test_linear_kernel(ax, cost):
train_x, train_y = make_blobs(
n_samples=500, centers=2, n_features=2, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def test_rbf_kernel(ax, cost):
train_x, train_y = make_circles(
n_samples=500, noise=0.1, factor=0.1, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def plot_partition_boundary(
model, train_data, ax, resolution=100, colors=("b", "k", "r")
):
"""
    We cannot get the optimum w of our kernel svm model, which is different from a
    linear svm. For this reason, we generate randomly distributed points with high
    density, and the predicted values of these points are calculated by using our
    trained model. Then we can use these predicted values to draw a contour map,
    and this contour map represents the svm's partition boundary.
"""
train_data_x = train_data[:, 1]
train_data_y = train_data[:, 2]
train_data_tags = train_data[:, 0]
xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution)
yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution)
test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape(
resolution * resolution, 2
)
test_tags = model.predict(test_samples, classify=False)
grid = test_tags.reshape((len(xrange), len(yrange)))
# Plot contour map which represents the partition boundary
ax.contour(
xrange,
yrange,
np.mat(grid).T,
levels=(-1, 0, 1),
linestyles=("--", "-", "--"),
linewidths=(1, 1, 1),
colors=colors,
)
# Plot all train samples
ax.scatter(
train_data_x,
train_data_y,
c=train_data_tags,
cmap=plt.cm.Dark2,
lw=0,
alpha=0.5,
)
# Plot support vectors
support = model.support
ax.scatter(
train_data_x[support],
train_data_y[support],
c=train_data_tags[support],
cmap=plt.cm.Dark2,
)
if __name__ == "__main__":
test_cancel_data()
test_demonstration()
plt.show()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of the comb sort algorithm.
Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz
Dobosiewicz in 1980. It was rediscovered by Stephen Lacey and Richard Box in 1991.
Comb sort improves on the bubble sort algorithm.
In bubble sort, the distance (or gap) between two compared elements is always one.
Comb sort's improvement is that the gap can be much larger than one, which prevents
small values near the end of the list from slowing the sort down.
More info on: https://en.wikipedia.org/wiki/Comb_sort
For doctests run following command:
python -m doctest -v comb_sort.py
or
python3 -m doctest -v comb_sort.py
For manual testing run:
python comb_sort.py
"""
def comb_sort(data: list) -> list:
"""Pure implementation of comb sort algorithm in Python
:param data: mutable collection with comparable items
:return: the same collection in ascending order
Examples:
>>> comb_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> comb_sort([])
[]
>>> comb_sort([99, 45, -7, 8, 2, 0, -15, 3])
[-15, -7, 0, 2, 3, 8, 45, 99]
"""
shrink_factor = 1.3
gap = len(data)
completed = False
while not completed:
# Update the gap value for a next comb
gap = int(gap / shrink_factor)
if gap <= 1:
completed = True
index = 0
while index + gap < len(data):
if data[index] > data[index + gap]:
# Swap values
data[index], data[index + gap] = data[index + gap], data[index]
completed = False
index += 1
return data
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(comb_sort(unsorted))
| """
This is a pure Python implementation of the comb sort algorithm.
Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz
Dobosiewicz in 1980. It was rediscovered by Stephen Lacey and Richard Box in 1991.
Comb sort improves on the bubble sort algorithm.
In bubble sort, the distance (or gap) between two compared elements is always one.
Comb sort's improvement is that the gap can be much larger than one, which prevents
small values near the end of the list from slowing the sort down.
More info on: https://en.wikipedia.org/wiki/Comb_sort
For doctests run following command:
python -m doctest -v comb_sort.py
or
python3 -m doctest -v comb_sort.py
For manual testing run:
python comb_sort.py
"""
def comb_sort(data: list) -> list:
"""Pure implementation of comb sort algorithm in Python
:param data: mutable collection with comparable items
:return: the same collection in ascending order
Examples:
>>> comb_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> comb_sort([])
[]
>>> comb_sort([99, 45, -7, 8, 2, 0, -15, 3])
[-15, -7, 0, 2, 3, 8, 45, 99]
"""
shrink_factor = 1.3
gap = len(data)
completed = False
while not completed:
# Update the gap value for a next comb
gap = int(gap / shrink_factor)
if gap <= 1:
completed = True
index = 0
while index + gap < len(data):
if data[index] > data[index + gap]:
# Swap values
data[index], data[index + gap] = data[index + gap], data[index]
completed = False
index += 1
return data
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(comb_sort(unsorted))
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of the bogosort algorithm,
also known as permutation sort, stupid sort, slowsort, shotgun sort, or monkey sort.
Bogosort generates random permutations until it guesses the correct one.
More info on: https://en.wikipedia.org/wiki/Bogosort
For doctests run following command:
python -m doctest -v bogo_sort.py
or
python3 -m doctest -v bogo_sort.py
For manual testing run:
python bogo_sort.py
"""
import random
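# Editor's note (a rough estimate, not from the original author): for a list of n
# distinct elements, each shuffle is sorted with probability 1/n!, so on average
# about n! shuffles are needed, e.g. roughly 120 shuffles for 5 distinct elements.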
def bogo_sort(collection):
"""Pure implementation of the bogosort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
    :return: the same collection sorted in ascending order
Examples:
>>> bogo_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> bogo_sort([])
[]
>>> bogo_sort([-2, -5, -45])
[-45, -5, -2]
"""
def is_sorted(collection):
if len(collection) < 2:
return True
for i in range(len(collection) - 1):
if collection[i] > collection[i + 1]:
return False
return True
while not is_sorted(collection):
random.shuffle(collection)
return collection
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(bogo_sort(unsorted))
| """
This is a pure Python implementation of the bogosort algorithm,
also known as permutation sort, stupid sort, slowsort, shotgun sort, or monkey sort.
Bogosort generates random permutations until it guesses the correct one.
More info on: https://en.wikipedia.org/wiki/Bogosort
For doctests run following command:
python -m doctest -v bogo_sort.py
or
python3 -m doctest -v bogo_sort.py
For manual testing run:
python bogo_sort.py
"""
import random
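# Editor's note (a rough estimate, not from the original author): for a list of n
# distinct elements, each shuffle is sorted with probability 1/n!, so on average
# about n! shuffles are needed, e.g. roughly 120 shuffles for 5 distinct elements.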
def bogo_sort(collection):
"""Pure implementation of the bogosort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
    :return: the same collection sorted in ascending order
Examples:
>>> bogo_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> bogo_sort([])
[]
>>> bogo_sort([-2, -5, -45])
[-45, -5, -2]
"""
def is_sorted(collection):
if len(collection) < 2:
return True
for i in range(len(collection) - 1):
if collection[i] > collection[i + 1]:
return False
return True
while not is_sorted(collection):
random.shuffle(collection)
return collection
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(bogo_sort(unsorted))
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import random
import sys
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def main() -> None:
message = input("Enter message: ")
key = "LFWOAYUISVKMNXPBDCRJTQEGHZ"
resp = input("Encrypt/Decrypt [e/d]: ")
checkValidKey(key)
if resp.lower().startswith("e"):
mode = "encrypt"
translated = encryptMessage(key, message)
elif resp.lower().startswith("d"):
mode = "decrypt"
translated = decryptMessage(key, message)
print(f"\n{mode.title()}ion: \n{translated}")
def checkValidKey(key: str) -> None:
keyList = list(key)
lettersList = list(LETTERS)
keyList.sort()
lettersList.sort()
if keyList != lettersList:
sys.exit("Error in the key or symbol set.")
def encryptMessage(key: str, message: str) -> str:
"""
>>> encryptMessage('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Harshil Darji')
'Ilcrism Olcvs'
"""
return translateMessage(key, message, "encrypt")
def decryptMessage(key: str, message: str) -> str:
"""
>>> decryptMessage('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Ilcrism Olcvs')
'Harshil Darji'
"""
return translateMessage(key, message, "decrypt")
def translateMessage(key: str, message: str, mode: str) -> str:
translated = ""
charsA = LETTERS
charsB = key
if mode == "decrypt":
charsA, charsB = charsB, charsA
for symbol in message:
if symbol.upper() in charsA:
symIndex = charsA.find(symbol.upper())
if symbol.isupper():
translated += charsB[symIndex].upper()
else:
translated += charsB[symIndex].lower()
else:
translated += symbol
return translated
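# Editor's worked example (not part of the original file): with the key used in
# the doctests ('LFWOAYUISVKMNXPBDCRJTQEGHZ'), 'H' sits at index 7 of LETTERS and
# key[7] is 'I', which is why 'Harshil Darji' encrypts to 'Ilcrism Olcvs'.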
def getRandomKey() -> str:
key = list(LETTERS)
random.shuffle(key)
return "".join(key)
if __name__ == "__main__":
main()
| import random
import sys
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def main() -> None:
message = input("Enter message: ")
key = "LFWOAYUISVKMNXPBDCRJTQEGHZ"
resp = input("Encrypt/Decrypt [e/d]: ")
checkValidKey(key)
if resp.lower().startswith("e"):
mode = "encrypt"
translated = encryptMessage(key, message)
elif resp.lower().startswith("d"):
mode = "decrypt"
translated = decryptMessage(key, message)
print(f"\n{mode.title()}ion: \n{translated}")
def checkValidKey(key: str) -> None:
keyList = list(key)
lettersList = list(LETTERS)
keyList.sort()
lettersList.sort()
if keyList != lettersList:
sys.exit("Error in the key or symbol set.")
def encryptMessage(key: str, message: str) -> str:
"""
>>> encryptMessage('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Harshil Darji')
'Ilcrism Olcvs'
"""
return translateMessage(key, message, "encrypt")
def decryptMessage(key: str, message: str) -> str:
"""
>>> decryptMessage('LFWOAYUISVKMNXPBDCRJTQEGHZ', 'Ilcrism Olcvs')
'Harshil Darji'
"""
return translateMessage(key, message, "decrypt")
def translateMessage(key: str, message: str, mode: str) -> str:
translated = ""
charsA = LETTERS
charsB = key
if mode == "decrypt":
charsA, charsB = charsB, charsA
for symbol in message:
if symbol.upper() in charsA:
symIndex = charsA.find(symbol.upper())
if symbol.isupper():
translated += charsB[symIndex].upper()
else:
translated += charsB[symIndex].lower()
else:
translated += symbol
return translated
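# Editor's worked example (not part of the original file): with the key used in
# the doctests ('LFWOAYUISVKMNXPBDCRJTQEGHZ'), 'H' sits at index 7 of LETTERS and
# key[7] is 'I', which is why 'Harshil Darji' encrypts to 'Ilcrism Olcvs'.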
def getRandomKey() -> str:
key = list(LETTERS)
random.shuffle(key)
return "".join(key)
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Similarity Search : https://en.wikipedia.org/wiki/Similarity_search
Similarity search is a technique for finding the vector nearest to a query vector
within a set of vectors; it is used in natural language processing.
This implementation measures distance with the Euclidean distance and
returns, for each query vector, a list containing two items:
1. the nearest vector
2. the distance between the query vector and that nearest vector (float)
"""
import math
from typing import List, Union
import numpy as np
def euclidean(input_a: np.ndarray, input_b: np.ndarray) -> float:
"""
    Calculates the Euclidean distance between two vectors.
:param input_a: ndarray of first vector.
:param input_b: ndarray of second vector.
:return: Euclidean distance of input_a and input_b. By using math.sqrt(),
result will be float.
>>> euclidean(np.array([0]), np.array([1]))
1.0
>>> euclidean(np.array([0, 1]), np.array([1, 1]))
1.0
>>> euclidean(np.array([0, 0, 0]), np.array([0, 0, 1]))
1.0
"""
return math.sqrt(sum(pow(a - b, 2) for a, b in zip(input_a, input_b)))
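# Editor's worked example (not part of the original file): for the vectors [1, 2]
# and [4, 6], the squared differences are 9 and 16, so the Euclidean distance is
# sqrt(25) = 5.0.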
def similarity_search(
dataset: np.ndarray, value_array: np.ndarray
) -> List[List[Union[List[float], float]]]:
"""
:param dataset: Set containing the vectors. Should be ndarray.
:param value_array: vector/vectors we want to know the nearest vector from dataset.
:return: Result will be a list containing
1. the nearest vector
2. distance from the vector
>>> dataset = np.array([[0], [1], [2]])
>>> value_array = np.array([[0]])
>>> similarity_search(dataset, value_array)
[[[0], 0.0]]
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]]
These are the errors that might occur:
1. If dimensions are different.
For example, dataset has 2d array and value_array has 1d array:
>>> dataset = np.array([[1]])
>>> value_array = np.array([1])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's dimensions... dataset : 2, value_array : 1
2. If data's shapes are different.
For example, dataset has shape of (3, 2) and value_array has (2, 3).
We are expecting same shapes of two arrays, so it is wrong.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's shape... dataset : 2, value_array : 3
3. If data types are different.
When trying to compare, we are expecting same types so they should be same.
If not, it'll come up with errors.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]], dtype=np.float32)
>>> value_array = np.array([[0, 0], [0, 1]], dtype=np.int32)
>>> similarity_search(dataset, value_array) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: Input data have different datatype...
dataset : float32, value_array : int32
"""
if dataset.ndim != value_array.ndim:
raise ValueError(
f"Wrong input data's dimensions... dataset : {dataset.ndim}, "
f"value_array : {value_array.ndim}"
)
try:
if dataset.shape[1] != value_array.shape[1]:
raise ValueError(
f"Wrong input data's shape... dataset : {dataset.shape[1]}, "
f"value_array : {value_array.shape[1]}"
)
except IndexError:
if dataset.ndim != value_array.ndim:
raise TypeError("Wrong shape")
if dataset.dtype != value_array.dtype:
raise TypeError(
f"Input data have different datatype... dataset : {dataset.dtype}, "
f"value_array : {value_array.dtype}"
)
answer = []
for value in value_array:
dist = euclidean(value, dataset[0])
vector = dataset[0].tolist()
for dataset_value in dataset[1:]:
temp_dist = euclidean(value, dataset_value)
if dist > temp_dist:
dist = temp_dist
vector = dataset_value.tolist()
answer.append([vector, dist])
return answer
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Similarity Search : https://en.wikipedia.org/wiki/Similarity_search
Similarity search is a technique for finding the vector nearest to a query vector
within a set of vectors; it is used in natural language processing.
This implementation measures distance with the Euclidean distance and
returns, for each query vector, a list containing two items:
1. the nearest vector
2. the distance between the query vector and that nearest vector (float)
"""
import math
from typing import List, Union
import numpy as np
def euclidean(input_a: np.ndarray, input_b: np.ndarray) -> float:
"""
    Calculates the Euclidean distance between two vectors.
:param input_a: ndarray of first vector.
:param input_b: ndarray of second vector.
:return: Euclidean distance of input_a and input_b. By using math.sqrt(),
result will be float.
>>> euclidean(np.array([0]), np.array([1]))
1.0
>>> euclidean(np.array([0, 1]), np.array([1, 1]))
1.0
>>> euclidean(np.array([0, 0, 0]), np.array([0, 0, 1]))
1.0
"""
return math.sqrt(sum(pow(a - b, 2) for a, b in zip(input_a, input_b)))
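# Editor's worked example (not part of the original file): for the vectors [1, 2]
# and [4, 6], the squared differences are 9 and 16, so the Euclidean distance is
# sqrt(25) = 5.0.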
def similarity_search(
dataset: np.ndarray, value_array: np.ndarray
) -> List[List[Union[List[float], float]]]:
"""
:param dataset: Set containing the vectors. Should be ndarray.
:param value_array: vector/vectors we want to know the nearest vector from dataset.
:return: Result will be a list containing
1. the nearest vector
2. distance from the vector
>>> dataset = np.array([[0], [1], [2]])
>>> value_array = np.array([[0]])
>>> similarity_search(dataset, value_array)
[[[0], 0.0]]
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]]
These are the errors that might occur:
1. If dimensions are different.
    For example, dataset is a 2-D array while value_array is a 1-D array:
>>> dataset = np.array([[1]])
>>> value_array = np.array([1])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's dimensions... dataset : 2, value_array : 1
2. If data's shapes are different.
For example, dataset has shape of (3, 2) and value_array has (2, 3).
    Both arrays are expected to contain vectors of the same length, so this input is invalid.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's shape... dataset : 2, value_array : 3
3. If data types are different.
    When comparing, both arrays are expected to have the same dtype.
    If they differ, a TypeError is raised.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]], dtype=np.float32)
>>> value_array = np.array([[0, 0], [0, 1]], dtype=np.int32)
>>> similarity_search(dataset, value_array) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: Input data have different datatype...
dataset : float32, value_array : int32
"""
if dataset.ndim != value_array.ndim:
raise ValueError(
f"Wrong input data's dimensions... dataset : {dataset.ndim}, "
f"value_array : {value_array.ndim}"
)
try:
if dataset.shape[1] != value_array.shape[1]:
raise ValueError(
f"Wrong input data's shape... dataset : {dataset.shape[1]}, "
f"value_array : {value_array.shape[1]}"
)
except IndexError:
if dataset.ndim != value_array.ndim:
raise TypeError("Wrong shape")
if dataset.dtype != value_array.dtype:
raise TypeError(
f"Input data have different datatype... dataset : {dataset.dtype}, "
f"value_array : {value_array.dtype}"
)
answer = []
for value in value_array:
dist = euclidean(value, dataset[0])
vector = dataset[0].tolist()
for dataset_value in dataset[1:]:
temp_dist = euclidean(value, dataset_value)
if dist > temp_dist:
dist = temp_dist
vector = dataset_value.tolist()
answer.append([vector, dist])
return answer
if __name__ == "__main__":
import doctest
doctest.testmod()
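# A minimal standalone sketch of the same nearest-neighbour idea using numpy's
# vectorised norm; the sample arrays below are illustrative, not part of the module.
import numpy as np

sample_dataset = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
sample_queries = np.array([[0.0, 1.0], [2.0, 1.0]])
for query in sample_queries:
    distances = np.linalg.norm(sample_dataset - query, axis=1)
    nearest = int(np.argmin(distances))
    print(sample_dataset[nearest].tolist(), float(distances[nearest]))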
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 46: https://projecteuler.net/problem=46
It was proposed by Christian Goldbach that every odd composite number can be
written as the sum of a prime and twice a square.
9 = 7 + 2 × 1^2
15 = 7 + 2 × 2^2
21 = 3 + 2 × 3^2
25 = 7 + 2 × 3^2
27 = 19 + 2 × 2^2
33 = 31 + 2 × 1^2
It turns out that the conjecture was false.
What is the smallest odd composite that cannot be written as the sum of a
prime and twice a square?
"""
from __future__ import annotations
seive = [True] * 100001
i = 2
while i * i <= 100000:
if seive[i]:
for j in range(i * i, 100001, i):
seive[j] = False
i += 1
def is_prime(n: int) -> bool:
"""
Returns True if n is prime,
False otherwise, for 2 <= n <= 100000
>>> is_prime(87)
False
>>> is_prime(23)
True
>>> is_prime(25363)
False
"""
return seive[n]
odd_composites = [num for num in range(3, len(seive), 2) if not is_prime(num)]
def compute_nums(n: int) -> list[int]:
"""
Returns a list of first n odd composite numbers which do
not follow the conjecture.
>>> compute_nums(1)
[5777]
>>> compute_nums(2)
[5777, 5993]
>>> compute_nums(0)
Traceback (most recent call last):
...
ValueError: n must be >= 0
>>> compute_nums("a")
Traceback (most recent call last):
...
ValueError: n must be an integer
>>> compute_nums(1.1)
Traceback (most recent call last):
...
ValueError: n must be an integer
"""
if not isinstance(n, int):
raise ValueError("n must be an integer")
if n <= 0:
raise ValueError("n must be >= 0")
list_nums = []
for num in range(len(odd_composites)):
i = 0
while 2 * i * i <= odd_composites[num]:
rem = odd_composites[num] - 2 * i * i
if is_prime(rem):
break
i += 1
else:
list_nums.append(odd_composites[num])
if len(list_nums) == n:
return list_nums
def solution() -> int:
"""Return the solution to the problem"""
return compute_nums(1)[0]
if __name__ == "__main__":
print(f"{solution() = }")
| """
Problem 46: https://projecteuler.net/problem=46
It was proposed by Christian Goldbach that every odd composite number can be
written as the sum of a prime and twice a square.
9 = 7 + 2 × 1^2
15 = 7 + 2 × 2^2
21 = 3 + 2 × 3^2
25 = 7 + 2 × 3^2
27 = 19 + 2 × 2^2
33 = 31 + 2 × 1^2
It turns out that the conjecture was false.
What is the smallest odd composite that cannot be written as the sum of a
prime and twice a square?
"""
from __future__ import annotations
seive = [True] * 100001
i = 2
while i * i <= 100000:
if seive[i]:
for j in range(i * i, 100001, i):
seive[j] = False
i += 1
def is_prime(n: int) -> bool:
"""
Returns True if n is prime,
False otherwise, for 2 <= n <= 100000
>>> is_prime(87)
False
>>> is_prime(23)
True
>>> is_prime(25363)
False
"""
return seive[n]
odd_composites = [num for num in range(3, len(seive), 2) if not is_prime(num)]
def compute_nums(n: int) -> list[int]:
"""
Returns a list of first n odd composite numbers which do
not follow the conjecture.
>>> compute_nums(1)
[5777]
>>> compute_nums(2)
[5777, 5993]
>>> compute_nums(0)
Traceback (most recent call last):
...
ValueError: n must be >= 0
>>> compute_nums("a")
Traceback (most recent call last):
...
ValueError: n must be an integer
>>> compute_nums(1.1)
Traceback (most recent call last):
...
ValueError: n must be an integer
"""
if not isinstance(n, int):
raise ValueError("n must be an integer")
if n <= 0:
raise ValueError("n must be >= 0")
list_nums = []
for num in range(len(odd_composites)):
i = 0
while 2 * i * i <= odd_composites[num]:
rem = odd_composites[num] - 2 * i * i
if is_prime(rem):
break
i += 1
else:
list_nums.append(odd_composites[num])
if len(list_nums) == n:
return list_nums
def solution() -> int:
"""Return the solution to the problem"""
return compute_nums(1)[0]
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://en.wikipedia.org/wiki/Lowest_common_ancestor
# https://en.wikipedia.org/wiki/Breadth-first_search
from __future__ import annotations
import queue
def swap(a: int, b: int) -> tuple[int, int]:
"""
Return a tuple (b, a) when given two integers a and b
>>> swap(2,3)
(3, 2)
>>> swap(3,4)
(4, 3)
>>> swap(67, 12)
(12, 67)
"""
a ^= b
b ^= a
a ^= b
return a, b
def create_sparse(max_node: int, parent: list[list[int]]) -> list[list[int]]:
"""
    creating a sparse table which saves each node's 2^i-th parent
"""
j = 1
while (1 << j) < max_node:
for i in range(1, max_node + 1):
parent[j][i] = parent[j - 1][parent[j - 1][i]]
j += 1
return parent
# returns lca of node u,v
def lowest_common_ancestor(
u: int, v: int, level: list[int], parent: list[list[int]]
) -> int:
    # ensure u is the deeper of the two nodes (swap if necessary)
if level[u] < level[v]:
u, v = swap(u, v)
# making depth of u same as depth of v
for i in range(18, -1, -1):
if level[u] - (1 << i) >= level[v]:
u = parent[i][u]
    # at the same depth, if u == v then the lca has been found
if u == v:
return u
    # moving both nodes upwards till the lca is found
for i in range(18, -1, -1):
if parent[i][u] != 0 and parent[i][u] != parent[i][v]:
u, v = parent[i][u], parent[i][v]
    # returning the lowest common ancestor of u,v
return parent[0][u]
# runs a breadth first search from root node of the tree
def breadth_first_search(
level: list[int],
parent: list[list[int]],
max_node: int,
    graph: dict[int, list[int]],
    root: int = 1,
) -> tuple[list[int], list[list[int]]]:
"""
    sets every node's direct parent
parent of root node is set to 0
calculates depth of each node from root node
"""
level[root] = 0
q = queue.Queue(maxsize=max_node)
q.put(root)
while q.qsize() != 0:
u = q.get()
for v in graph[u]:
if level[v] == -1:
level[v] = level[u] + 1
q.put(v)
parent[0][v] = u
return level, parent
def main() -> None:
max_node = 13
# initializing with 0
parent = [[0 for _ in range(max_node + 10)] for _ in range(20)]
# initializing with -1 which means every node is unvisited
level = [-1 for _ in range(max_node + 10)]
graph = {
1: [2, 3, 4],
2: [5],
3: [6, 7],
4: [8],
5: [9, 10],
6: [11],
7: [],
8: [12, 13],
9: [],
10: [],
11: [],
12: [],
13: [],
}
level, parent = breadth_first_search(level, parent, max_node, graph, 1)
parent = create_sparse(max_node, parent)
print("LCA of node 1 and 3 is: ", lowest_common_ancestor(1, 3, level, parent))
print("LCA of node 5 and 6 is: ", lowest_common_ancestor(5, 6, level, parent))
print("LCA of node 7 and 11 is: ", lowest_common_ancestor(7, 11, level, parent))
print("LCA of node 6 and 7 is: ", lowest_common_ancestor(6, 7, level, parent))
print("LCA of node 4 and 12 is: ", lowest_common_ancestor(4, 12, level, parent))
print("LCA of node 8 and 8 is: ", lowest_common_ancestor(8, 8, level, parent))
if __name__ == "__main__":
main()
| # https://en.wikipedia.org/wiki/Lowest_common_ancestor
# https://en.wikipedia.org/wiki/Breadth-first_search
from __future__ import annotations
import queue
def swap(a: int, b: int) -> tuple[int, int]:
"""
Return a tuple (b, a) when given two integers a and b
>>> swap(2,3)
(3, 2)
>>> swap(3,4)
(4, 3)
>>> swap(67, 12)
(12, 67)
"""
a ^= b
b ^= a
a ^= b
return a, b
def create_sparse(max_node: int, parent: list[list[int]]) -> list[list[int]]:
"""
    creating a sparse table which saves each node's 2^i-th parent
"""
j = 1
while (1 << j) < max_node:
for i in range(1, max_node + 1):
parent[j][i] = parent[j - 1][parent[j - 1][i]]
j += 1
return parent
# returns lca of node u,v
def lowest_common_ancestor(
u: int, v: int, level: list[int], parent: list[list[int]]
) -> int:
    # ensure u is the deeper of the two nodes (swap if necessary)
if level[u] < level[v]:
u, v = swap(u, v)
# making depth of u same as depth of v
for i in range(18, -1, -1):
if level[u] - (1 << i) >= level[v]:
u = parent[i][u]
    # at the same depth, if u == v then the lca has been found
if u == v:
return u
    # moving both nodes upwards till the lca is found
for i in range(18, -1, -1):
if parent[i][u] != 0 and parent[i][u] != parent[i][v]:
u, v = parent[i][u], parent[i][v]
    # returning the lowest common ancestor of u,v
return parent[0][u]
# runs a breadth first search from root node of the tree
def breadth_first_search(
level: list[int],
parent: list[list[int]],
max_node: int,
    graph: dict[int, list[int]],
    root: int = 1,
) -> tuple[list[int], list[list[int]]]:
"""
    sets every node's direct parent
parent of root node is set to 0
calculates depth of each node from root node
"""
level[root] = 0
q = queue.Queue(maxsize=max_node)
q.put(root)
while q.qsize() != 0:
u = q.get()
for v in graph[u]:
if level[v] == -1:
level[v] = level[u] + 1
q.put(v)
parent[0][v] = u
return level, parent
def main() -> None:
max_node = 13
# initializing with 0
parent = [[0 for _ in range(max_node + 10)] for _ in range(20)]
# initializing with -1 which means every node is unvisited
level = [-1 for _ in range(max_node + 10)]
graph = {
1: [2, 3, 4],
2: [5],
3: [6, 7],
4: [8],
5: [9, 10],
6: [11],
7: [],
8: [12, 13],
9: [],
10: [],
11: [],
12: [],
13: [],
}
level, parent = breadth_first_search(level, parent, max_node, graph, 1)
parent = create_sparse(max_node, parent)
print("LCA of node 1 and 3 is: ", lowest_common_ancestor(1, 3, level, parent))
print("LCA of node 5 and 6 is: ", lowest_common_ancestor(5, 6, level, parent))
print("LCA of node 7 and 11 is: ", lowest_common_ancestor(7, 11, level, parent))
print("LCA of node 6 and 7 is: ", lowest_common_ancestor(6, 7, level, parent))
print("LCA of node 4 and 12 is: ", lowest_common_ancestor(4, 12, level, parent))
print("LCA of node 8 and 8 is: ", lowest_common_ancestor(8, 8, level, parent))
if __name__ == "__main__":
main()
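# A standalone naive cross-check sketch: the direct-parent map below is taken from the
# same 13-node example tree used in main() above. The LCA is found by collecting u's
# ancestors and walking up from v until one of them is hit.
direct_parent = {1: 0, 2: 1, 3: 1, 4: 1, 5: 2, 6: 3, 7: 3, 8: 4,
                 9: 5, 10: 5, 11: 6, 12: 8, 13: 8}

def naive_lca(u: int, v: int) -> int:
    ancestors = set()
    while u:
        ancestors.add(u)
        u = direct_parent[u]
    while v not in ancestors:
        v = direct_parent[v]
    return v

print(naive_lca(5, 6))   # 1
print(naive_lca(7, 11))  # 3
print(naive_lca(4, 12))  # 4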
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
In this problem, we want to rotate the matrix elements by 90, 180, 270
(counterclockwise)
Discussion in stackoverflow:
https://stackoverflow.com/questions/42519/how-do-you-rotate-a-two-dimensional-array
"""
from __future__ import annotations
def make_matrix(row_size: int = 4) -> list[list]:
"""
>>> make_matrix()
[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
>>> make_matrix(1)
[[1]]
>>> make_matrix(-2)
[[1, 2], [3, 4]]
>>> make_matrix(3)
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> make_matrix() == make_matrix(4)
True
"""
row_size = abs(row_size) or 4
return [[1 + x + y * row_size for x in range(row_size)] for y in range(row_size)]
def rotate_90(matrix: list[list]) -> list[list]:
"""
>>> rotate_90(make_matrix())
[[4, 8, 12, 16], [3, 7, 11, 15], [2, 6, 10, 14], [1, 5, 9, 13]]
>>> rotate_90(make_matrix()) == transpose(reverse_column(make_matrix()))
True
"""
return reverse_row(transpose(matrix))
# OR.. transpose(reverse_column(matrix))
def rotate_180(matrix: list[list]) -> list[list]:
"""
>>> rotate_180(make_matrix())
[[16, 15, 14, 13], [12, 11, 10, 9], [8, 7, 6, 5], [4, 3, 2, 1]]
>>> rotate_180(make_matrix()) == reverse_column(reverse_row(make_matrix()))
True
"""
return reverse_row(reverse_column(matrix))
# OR.. reverse_column(reverse_row(matrix))
def rotate_270(matrix: list[list]) -> list[list]:
"""
>>> rotate_270(make_matrix())
[[13, 9, 5, 1], [14, 10, 6, 2], [15, 11, 7, 3], [16, 12, 8, 4]]
>>> rotate_270(make_matrix()) == transpose(reverse_row(make_matrix()))
True
"""
return reverse_column(transpose(matrix))
# OR.. transpose(reverse_row(matrix))
def transpose(matrix: list[list]) -> list[list]:
matrix[:] = [list(x) for x in zip(*matrix)]
return matrix
def reverse_row(matrix: list[list]) -> list[list]:
matrix[:] = matrix[::-1]
return matrix
def reverse_column(matrix: list[list]) -> list[list]:
matrix[:] = [x[::-1] for x in matrix]
return matrix
def print_matrix(matrix: list[list]) -> None:
for i in matrix:
print(*i)
if __name__ == "__main__":
matrix = make_matrix()
print("\norigin:\n")
print_matrix(matrix)
print("\nrotate 90 counterclockwise:\n")
print_matrix(rotate_90(matrix))
matrix = make_matrix()
print("\norigin:\n")
print_matrix(matrix)
print("\nrotate 180:\n")
print_matrix(rotate_180(matrix))
matrix = make_matrix()
print("\norigin:\n")
print_matrix(matrix)
print("\nrotate 270 counterclockwise:\n")
print_matrix(rotate_270(matrix))
| """
In this problem, we want to rotate the matrix elements by 90, 180, 270
(counterclockwise)
Discussion in stackoverflow:
https://stackoverflow.com/questions/42519/how-do-you-rotate-a-two-dimensional-array
"""
from __future__ import annotations
def make_matrix(row_size: int = 4) -> list[list]:
"""
>>> make_matrix()
[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
>>> make_matrix(1)
[[1]]
>>> make_matrix(-2)
[[1, 2], [3, 4]]
>>> make_matrix(3)
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> make_matrix() == make_matrix(4)
True
"""
row_size = abs(row_size) or 4
return [[1 + x + y * row_size for x in range(row_size)] for y in range(row_size)]
def rotate_90(matrix: list[list]) -> list[list]:
"""
>>> rotate_90(make_matrix())
[[4, 8, 12, 16], [3, 7, 11, 15], [2, 6, 10, 14], [1, 5, 9, 13]]
>>> rotate_90(make_matrix()) == transpose(reverse_column(make_matrix()))
True
"""
return reverse_row(transpose(matrix))
# OR.. transpose(reverse_column(matrix))
def rotate_180(matrix: list[list]) -> list[list]:
"""
>>> rotate_180(make_matrix())
[[16, 15, 14, 13], [12, 11, 10, 9], [8, 7, 6, 5], [4, 3, 2, 1]]
>>> rotate_180(make_matrix()) == reverse_column(reverse_row(make_matrix()))
True
"""
return reverse_row(reverse_column(matrix))
# OR.. reverse_column(reverse_row(matrix))
def rotate_270(matrix: list[list]) -> list[list]:
"""
>>> rotate_270(make_matrix())
[[13, 9, 5, 1], [14, 10, 6, 2], [15, 11, 7, 3], [16, 12, 8, 4]]
>>> rotate_270(make_matrix()) == transpose(reverse_row(make_matrix()))
True
"""
return reverse_column(transpose(matrix))
# OR.. transpose(reverse_row(matrix))
def transpose(matrix: list[list]) -> list[list]:
matrix[:] = [list(x) for x in zip(*matrix)]
return matrix
def reverse_row(matrix: list[list]) -> list[list]:
matrix[:] = matrix[::-1]
return matrix
def reverse_column(matrix: list[list]) -> list[list]:
matrix[:] = [x[::-1] for x in matrix]
return matrix
def print_matrix(matrix: list[list]) -> None:
for i in matrix:
print(*i)
if __name__ == "__main__":
matrix = make_matrix()
print("\norigin:\n")
print_matrix(matrix)
print("\nrotate 90 counterclockwise:\n")
print_matrix(rotate_90(matrix))
matrix = make_matrix()
print("\norigin:\n")
print_matrix(matrix)
print("\nrotate 180:\n")
print_matrix(rotate_180(matrix))
matrix = make_matrix()
print("\norigin:\n")
print_matrix(matrix)
print("\nrotate 270 counterclockwise:\n")
print_matrix(rotate_270(matrix))
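# A one-expression sanity check (sketch): rotating 90 degrees counterclockwise can
# also be written with zip() and a reversal; the expected output matches the
# rotate_90 doctest above.
sample = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print([list(row) for row in zip(*sample)][::-1])
# [[4, 8, 12, 16], [3, 7, 11, 15], [2, 6, 10, 14], [1, 5, 9, 13]]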
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The Fibonacci sequence is defined by the recurrence relation:
Fn = F(n-1) + F(n-2), where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000
digits?
"""
def fibonacci(n: int) -> int:
"""
Computes the Fibonacci number for input n by iterating through n numbers
and creating an array of ints using the Fibonacci formula.
Returns the nth element of the array.
>>> fibonacci(2)
1
>>> fibonacci(3)
2
>>> fibonacci(5)
5
>>> fibonacci(10)
55
>>> fibonacci(12)
144
"""
if n == 1 or type(n) is not int:
return 0
elif n == 2:
return 1
else:
sequence = [0, 1]
for i in range(2, n + 1):
sequence.append(sequence[i - 1] + sequence[i - 2])
return sequence[n]
def fibonacci_digits_index(n: int) -> int:
"""
    Computes Fibonacci numbers for increasing indices, starting from index 3, until
    the number of digits in the result reaches n. Returns the index of the first
    Fibonacci term to have n digits.
>>> fibonacci_digits_index(1000)
4782
>>> fibonacci_digits_index(100)
476
>>> fibonacci_digits_index(50)
237
>>> fibonacci_digits_index(3)
12
"""
digits = 0
index = 2
while digits < n:
index += 1
digits = len(str(fibonacci(index)))
return index
def solution(n: int = 1000) -> int:
"""
Returns the index of the first term in the Fibonacci sequence to contain
n digits.
>>> solution(1000)
4782
>>> solution(100)
476
>>> solution(50)
237
>>> solution(3)
12
"""
return fibonacci_digits_index(n)
if __name__ == "__main__":
print(solution(int(str(input()).strip())))
| """
The Fibonacci sequence is defined by the recurrence relation:
Fn = F(n-1) + F(n-2), where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000
digits?
"""
def fibonacci(n: int) -> int:
"""
Computes the Fibonacci number for input n by iterating through n numbers
and creating an array of ints using the Fibonacci formula.
Returns the nth element of the array.
>>> fibonacci(2)
1
>>> fibonacci(3)
2
>>> fibonacci(5)
5
>>> fibonacci(10)
55
>>> fibonacci(12)
144
"""
if n == 1 or type(n) is not int:
return 0
elif n == 2:
return 1
else:
sequence = [0, 1]
for i in range(2, n + 1):
sequence.append(sequence[i - 1] + sequence[i - 2])
return sequence[n]
def fibonacci_digits_index(n: int) -> int:
"""
    Computes Fibonacci numbers for increasing indices, starting from index 3, until
    the number of digits in the result reaches n. Returns the index of the first
    Fibonacci term to have n digits.
>>> fibonacci_digits_index(1000)
4782
>>> fibonacci_digits_index(100)
476
>>> fibonacci_digits_index(50)
237
>>> fibonacci_digits_index(3)
12
"""
digits = 0
index = 2
while digits < n:
index += 1
digits = len(str(fibonacci(index)))
return index
def solution(n: int = 1000) -> int:
"""
Returns the index of the first term in the Fibonacci sequence to contain
n digits.
>>> solution(1000)
4782
>>> solution(100)
476
>>> solution(50)
237
>>> solution(3)
12
"""
return fibonacci_digits_index(n)
if __name__ == "__main__":
print(solution(int(str(input()).strip())))
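# An alternative sketch that avoids rebuilding the whole sequence for every index:
# keep only the last two terms and count digits as they grow. The name
# first_term_with_digits is illustrative, not part of the solution above.
def first_term_with_digits(digits: int) -> int:
    a, b, index = 1, 1, 2
    while len(str(b)) < digits:
        a, b = b, a + b
        index += 1
    return index

print(first_term_with_digits(3))     # 12
print(first_term_with_digits(1000))  # 4782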
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
- https://en.wikipedia.org/wiki/Euclidean_algorithm
- https://en.wikipedia.org/wiki/Least_common_multiple
"""
def gcd(x: int, y: int) -> int:
"""
Euclidean GCD algorithm (Greatest Common Divisor)
>>> gcd(0, 0)
0
>>> gcd(23, 42)
1
>>> gcd(15, 33)
3
>>> gcd(12345, 67890)
15
"""
return x if y == 0 else gcd(y, x % y)
def lcm(x: int, y: int) -> int:
"""
Least Common Multiple.
Using the property that lcm(a, b) * gcd(a, b) = a*b
>>> lcm(3, 15)
15
>>> lcm(1, 27)
27
>>> lcm(13, 27)
351
>>> lcm(64, 48)
192
"""
return (x * y) // gcd(x, y)
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
"""
g = 1
for i in range(1, n + 1):
g = lcm(g, i)
return g
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
- https://en.wikipedia.org/wiki/Euclidean_algorithm
- https://en.wikipedia.org/wiki/Least_common_multiple
"""
def gcd(x: int, y: int) -> int:
"""
Euclidean GCD algorithm (Greatest Common Divisor)
>>> gcd(0, 0)
0
>>> gcd(23, 42)
1
>>> gcd(15, 33)
3
>>> gcd(12345, 67890)
15
"""
return x if y == 0 else gcd(y, x % y)
def lcm(x: int, y: int) -> int:
"""
Least Common Multiple.
Using the property that lcm(a, b) * gcd(a, b) = a*b
>>> lcm(3, 15)
15
>>> lcm(1, 27)
27
>>> lcm(13, 27)
351
>>> lcm(64, 48)
192
"""
return (x * y) // gcd(x, y)
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
"""
g = 1
for i in range(1, n + 1):
g = lcm(g, i)
return g
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
import math
def default_matrix_multiplication(a: list, b: list) -> list:
"""
Multiplication only for 2x2 matrices
"""
if len(a) != 2 or len(a[0]) != 2 or len(b) != 2 or len(b[0]) != 2:
raise Exception("Matrices are not 2x2")
new_matrix = [
[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
[a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]],
]
return new_matrix
def matrix_addition(matrix_a: list, matrix_b: list):
return [
[matrix_a[row][col] + matrix_b[row][col] for col in range(len(matrix_a[row]))]
for row in range(len(matrix_a))
]
def matrix_subtraction(matrix_a: list, matrix_b: list):
return [
[matrix_a[row][col] - matrix_b[row][col] for col in range(len(matrix_a[row]))]
for row in range(len(matrix_a))
]
def split_matrix(a: list) -> tuple[list, list, list, list]:
"""
Given an even length matrix, returns the top_left, top_right, bot_left, bot_right
    quadrants.
>>> split_matrix([[4,3,2,4],[2,3,1,1],[6,5,4,3],[8,4,1,6]])
([[4, 3], [2, 3]], [[2, 4], [1, 1]], [[6, 5], [8, 4]], [[4, 3], [1, 6]])
>>> split_matrix([
... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6],
... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6]
... ]) # doctest: +NORMALIZE_WHITESPACE
([[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4],
[2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1],
[6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3],
[8, 4, 1, 6]])
"""
if len(a) % 2 != 0 or len(a[0]) % 2 != 0:
raise Exception("Odd matrices are not supported!")
matrix_length = len(a)
mid = matrix_length // 2
top_right = [[a[i][j] for j in range(mid, matrix_length)] for i in range(mid)]
bot_right = [
[a[i][j] for j in range(mid, matrix_length)] for i in range(mid, matrix_length)
]
top_left = [[a[i][j] for j in range(mid)] for i in range(mid)]
bot_left = [[a[i][j] for j in range(mid)] for i in range(mid, matrix_length)]
return top_left, top_right, bot_left, bot_right
def matrix_dimensions(matrix: list) -> tuple[int, int]:
return len(matrix), len(matrix[0])
def print_matrix(matrix: list) -> None:
for i in range(len(matrix)):
print(matrix[i])
def actual_strassen(matrix_a: list, matrix_b: list) -> list:
"""
Recursive function to calculate the product of two matrices, using the Strassen
Algorithm. It only supports even length matrices.
"""
if matrix_dimensions(matrix_a) == (2, 2):
return default_matrix_multiplication(matrix_a, matrix_b)
a, b, c, d = split_matrix(matrix_a)
e, f, g, h = split_matrix(matrix_b)
t1 = actual_strassen(a, matrix_subtraction(f, h))
t2 = actual_strassen(matrix_addition(a, b), h)
t3 = actual_strassen(matrix_addition(c, d), e)
t4 = actual_strassen(d, matrix_subtraction(g, e))
t5 = actual_strassen(matrix_addition(a, d), matrix_addition(e, h))
t6 = actual_strassen(matrix_subtraction(b, d), matrix_addition(g, h))
t7 = actual_strassen(matrix_subtraction(a, c), matrix_addition(e, f))
top_left = matrix_addition(matrix_subtraction(matrix_addition(t5, t4), t2), t6)
top_right = matrix_addition(t1, t2)
bot_left = matrix_addition(t3, t4)
bot_right = matrix_subtraction(matrix_subtraction(matrix_addition(t1, t5), t3), t7)
# construct the new matrix from our 4 quadrants
new_matrix = []
for i in range(len(top_right)):
new_matrix.append(top_left[i] + top_right[i])
for i in range(len(bot_right)):
new_matrix.append(bot_left[i] + bot_right[i])
return new_matrix
def strassen(matrix1: list, matrix2: list) -> list:
"""
>>> strassen([[2,1,3],[3,4,6],[1,4,2],[7,6,7]], [[4,2,3,4],[2,1,1,1],[8,6,4,2]])
[[34, 23, 19, 15], [68, 46, 37, 28], [28, 18, 15, 12], [96, 62, 55, 48]]
>>> strassen([[3,7,5,6,9],[1,5,3,7,8],[1,4,4,5,7]], [[2,4],[5,2],[1,7],[5,5],[7,8]])
[[139, 163], [121, 134], [100, 121]]
"""
if matrix_dimensions(matrix1)[1] != matrix_dimensions(matrix2)[0]:
raise Exception(
f"Unable to multiply these matrices, please check the dimensions. \n"
f"Matrix A:{matrix1} \nMatrix B:{matrix2}"
)
    dimension1 = matrix_dimensions(matrix1)
    dimension2 = matrix_dimensions(matrix2)
maximum = max(max(dimension1), max(dimension2))
maxim = int(math.pow(2, math.ceil(math.log2(maximum))))
new_matrix1 = matrix1
new_matrix2 = matrix2
# Adding zeros to the matrices so that the arrays dimensions are the same and also
# power of 2
for i in range(0, maxim):
if i < dimension1[0]:
for j in range(dimension1[1], maxim):
new_matrix1[i].append(0)
else:
new_matrix1.append([0] * maxim)
if i < dimension2[0]:
for j in range(dimension2[1], maxim):
new_matrix2[i].append(0)
else:
new_matrix2.append([0] * maxim)
final_matrix = actual_strassen(new_matrix1, new_matrix2)
# Removing the additional zeros
for i in range(0, maxim):
if i < dimension1[0]:
for j in range(dimension2[1], maxim):
final_matrix[i].pop()
else:
final_matrix.pop()
return final_matrix
if __name__ == "__main__":
matrix1 = [
[2, 3, 4, 5],
[6, 4, 3, 1],
[2, 3, 6, 7],
[3, 1, 2, 4],
[2, 3, 4, 5],
[6, 4, 3, 1],
[2, 3, 6, 7],
[3, 1, 2, 4],
[2, 3, 4, 5],
[6, 2, 3, 1],
]
matrix2 = [[0, 2, 1, 1], [16, 2, 3, 3], [2, 2, 7, 7], [13, 11, 22, 4]]
print(strassen(matrix1, matrix2))
| from __future__ import annotations
import math
def default_matrix_multiplication(a: list, b: list) -> list:
"""
Multiplication only for 2x2 matrices
"""
if len(a) != 2 or len(a[0]) != 2 or len(b) != 2 or len(b[0]) != 2:
raise Exception("Matrices are not 2x2")
new_matrix = [
[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
[a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]],
]
return new_matrix
def matrix_addition(matrix_a: list, matrix_b: list):
return [
[matrix_a[row][col] + matrix_b[row][col] for col in range(len(matrix_a[row]))]
for row in range(len(matrix_a))
]
def matrix_subtraction(matrix_a: list, matrix_b: list):
return [
[matrix_a[row][col] - matrix_b[row][col] for col in range(len(matrix_a[row]))]
for row in range(len(matrix_a))
]
def split_matrix(a: list) -> tuple[list, list, list, list]:
"""
Given an even length matrix, returns the top_left, top_right, bot_left, bot_right
    quadrants.
>>> split_matrix([[4,3,2,4],[2,3,1,1],[6,5,4,3],[8,4,1,6]])
([[4, 3], [2, 3]], [[2, 4], [1, 1]], [[6, 5], [8, 4]], [[4, 3], [1, 6]])
>>> split_matrix([
... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6],
... [4,3,2,4,4,3,2,4],[2,3,1,1,2,3,1,1],[6,5,4,3,6,5,4,3],[8,4,1,6,8,4,1,6]
... ]) # doctest: +NORMALIZE_WHITESPACE
([[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4],
[2, 3, 1, 1], [6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1],
[6, 5, 4, 3], [8, 4, 1, 6]], [[4, 3, 2, 4], [2, 3, 1, 1], [6, 5, 4, 3],
[8, 4, 1, 6]])
"""
if len(a) % 2 != 0 or len(a[0]) % 2 != 0:
raise Exception("Odd matrices are not supported!")
matrix_length = len(a)
mid = matrix_length // 2
top_right = [[a[i][j] for j in range(mid, matrix_length)] for i in range(mid)]
bot_right = [
[a[i][j] for j in range(mid, matrix_length)] for i in range(mid, matrix_length)
]
top_left = [[a[i][j] for j in range(mid)] for i in range(mid)]
bot_left = [[a[i][j] for j in range(mid)] for i in range(mid, matrix_length)]
return top_left, top_right, bot_left, bot_right
def matrix_dimensions(matrix: list) -> tuple[int, int]:
return len(matrix), len(matrix[0])
def print_matrix(matrix: list) -> None:
for i in range(len(matrix)):
print(matrix[i])
def actual_strassen(matrix_a: list, matrix_b: list) -> list:
"""
Recursive function to calculate the product of two matrices, using the Strassen
Algorithm. It only supports even length matrices.
"""
if matrix_dimensions(matrix_a) == (2, 2):
return default_matrix_multiplication(matrix_a, matrix_b)
a, b, c, d = split_matrix(matrix_a)
e, f, g, h = split_matrix(matrix_b)
t1 = actual_strassen(a, matrix_subtraction(f, h))
t2 = actual_strassen(matrix_addition(a, b), h)
t3 = actual_strassen(matrix_addition(c, d), e)
t4 = actual_strassen(d, matrix_subtraction(g, e))
t5 = actual_strassen(matrix_addition(a, d), matrix_addition(e, h))
t6 = actual_strassen(matrix_subtraction(b, d), matrix_addition(g, h))
t7 = actual_strassen(matrix_subtraction(a, c), matrix_addition(e, f))
top_left = matrix_addition(matrix_subtraction(matrix_addition(t5, t4), t2), t6)
top_right = matrix_addition(t1, t2)
bot_left = matrix_addition(t3, t4)
bot_right = matrix_subtraction(matrix_subtraction(matrix_addition(t1, t5), t3), t7)
# construct the new matrix from our 4 quadrants
new_matrix = []
for i in range(len(top_right)):
new_matrix.append(top_left[i] + top_right[i])
for i in range(len(bot_right)):
new_matrix.append(bot_left[i] + bot_right[i])
return new_matrix
def strassen(matrix1: list, matrix2: list) -> list:
"""
>>> strassen([[2,1,3],[3,4,6],[1,4,2],[7,6,7]], [[4,2,3,4],[2,1,1,1],[8,6,4,2]])
[[34, 23, 19, 15], [68, 46, 37, 28], [28, 18, 15, 12], [96, 62, 55, 48]]
>>> strassen([[3,7,5,6,9],[1,5,3,7,8],[1,4,4,5,7]], [[2,4],[5,2],[1,7],[5,5],[7,8]])
[[139, 163], [121, 134], [100, 121]]
"""
if matrix_dimensions(matrix1)[1] != matrix_dimensions(matrix2)[0]:
raise Exception(
f"Unable to multiply these matrices, please check the dimensions. \n"
f"Matrix A:{matrix1} \nMatrix B:{matrix2}"
)
    dimension1 = matrix_dimensions(matrix1)
    dimension2 = matrix_dimensions(matrix2)
maximum = max(max(dimension1), max(dimension2))
maxim = int(math.pow(2, math.ceil(math.log2(maximum))))
new_matrix1 = matrix1
new_matrix2 = matrix2
# Adding zeros to the matrices so that the arrays dimensions are the same and also
# power of 2
for i in range(0, maxim):
if i < dimension1[0]:
for j in range(dimension1[1], maxim):
new_matrix1[i].append(0)
else:
new_matrix1.append([0] * maxim)
if i < dimension2[0]:
for j in range(dimension2[1], maxim):
new_matrix2[i].append(0)
else:
new_matrix2.append([0] * maxim)
final_matrix = actual_strassen(new_matrix1, new_matrix2)
# Removing the additional zeros
for i in range(0, maxim):
if i < dimension1[0]:
for j in range(dimension2[1], maxim):
final_matrix[i].pop()
else:
final_matrix.pop()
return final_matrix
if __name__ == "__main__":
matrix1 = [
[2, 3, 4, 5],
[6, 4, 3, 1],
[2, 3, 6, 7],
[3, 1, 2, 4],
[2, 3, 4, 5],
[6, 4, 3, 1],
[2, 3, 6, 7],
[3, 1, 2, 4],
[2, 3, 4, 5],
[6, 2, 3, 1],
]
matrix2 = [[0, 2, 1, 1], [16, 2, 3, 3], [2, 2, 7, 7], [13, 11, 22, 4]]
print(strassen(matrix1, matrix2))
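# A standalone naive O(n^3) multiplication (sketch) that can serve as a cross-check
# for strassen(); the small example below matches the leading 2x2 block of the first
# doctest above.
def naive_matmul(a: list, b: list) -> list:
    return [
        [sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
        for i in range(len(a))
    ]

print(naive_matmul([[2, 1, 3], [3, 4, 6]], [[4, 2], [2, 1], [8, 6]]))
# [[34, 23], [68, 46]]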
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
A Trie/Prefix Tree is a kind of search tree used to provide quick lookup
of words/patterns in a set of words. A basic Trie however has O(n^2) space complexity
making it impractical in practice. It however provides O(max(search_string, length of
longest word)) lookup time making it an optimal approach when space is not an issue.
"""
class TrieNode:
def __init__(self):
self.nodes = dict() # Mapping from char to TrieNode
self.is_leaf = False
    def insert_many(self, words: list[str]) -> None:
"""
Inserts a list of words into the Trie
:param words: list of string words
:return: None
"""
for word in words:
self.insert(word)
def insert(self, word: str):
"""
Inserts a word into the Trie
:param word: word to be inserted
:return: None
"""
curr = self
for char in word:
if char not in curr.nodes:
curr.nodes[char] = TrieNode()
curr = curr.nodes[char]
curr.is_leaf = True
def find(self, word: str) -> bool:
"""
Tries to find word in a Trie
:param word: word to look for
:return: Returns True if word is found, False otherwise
"""
curr = self
for char in word:
if char not in curr.nodes:
return False
curr = curr.nodes[char]
return curr.is_leaf
def delete(self, word: str):
"""
Deletes a word in a Trie
:param word: word to delete
:return: None
"""
def _delete(curr: TrieNode, word: str, index: int):
if index == len(word):
# If word does not exist
if not curr.is_leaf:
return False
curr.is_leaf = False
return len(curr.nodes) == 0
char = word[index]
char_node = curr.nodes.get(char)
# If char not in current trie node
if not char_node:
return False
# Flag to check if node can be deleted
delete_curr = _delete(char_node, word, index + 1)
if delete_curr:
del curr.nodes[char]
return len(curr.nodes) == 0
return delete_curr
_delete(self, word, 0)
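# Minimal usage sketch for the class above; the helper name is arbitrary and
# the calls mirror what test_trie() below exercises in more depth.
def _demo_trie_usage() -> bool:
    trie = TrieNode()
    trie.insert_many(["ban", "band", "banana"])
    assert trie.find("band") and not trie.find("ba")  # "ba" is only a prefix
    trie.delete("ban")
    return trie.find("banana") and not trie.find("ban")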
def print_words(node: TrieNode, word: str):
"""
Prints all the words in a Trie
:param node: root node of Trie
:param word: Word variable should be empty at start
:return: None
"""
if node.is_leaf:
print(word, end=" ")
for key, value in node.nodes.items():
print_words(value, word + key)
def test_trie():
words = "banana bananas bandana band apple all beast".split()
root = TrieNode()
root.insert_many(words)
# print_words(root, "")
assert all(root.find(word) for word in words)
assert root.find("banana")
assert not root.find("bandanas")
assert not root.find("apps")
assert root.find("apple")
assert root.find("all")
root.delete("all")
assert not root.find("all")
root.delete("banana")
assert not root.find("banana")
assert root.find("bananas")
return True
def print_results(msg: str, passes: bool) -> None:
print(str(msg), "works!" if passes else "doesn't work :(")
def pytests():
assert test_trie()
def main():
"""
>>> pytests()
"""
print_results("Testing trie functionality", test_trie())
if __name__ == "__main__":
main()
| """
A Trie/Prefix Tree is a kind of search tree used to provide quick lookup
of words/patterns in a set of words. A basic Trie however has O(n^2) space complexity
making it impractical in practice. It however provides O(max(search_string, length of
longest word)) lookup time making it an optimal approach when space is not an issue.
"""
class TrieNode:
def __init__(self):
self.nodes = dict() # Mapping from char to TrieNode
self.is_leaf = False
    def insert_many(self, words: list[str]) -> None:
"""
Inserts a list of words into the Trie
:param words: list of string words
:return: None
"""
for word in words:
self.insert(word)
def insert(self, word: str):
"""
Inserts a word into the Trie
:param word: word to be inserted
:return: None
"""
curr = self
for char in word:
if char not in curr.nodes:
curr.nodes[char] = TrieNode()
curr = curr.nodes[char]
curr.is_leaf = True
def find(self, word: str) -> bool:
"""
Tries to find word in a Trie
:param word: word to look for
:return: Returns True if word is found, False otherwise
"""
curr = self
for char in word:
if char not in curr.nodes:
return False
curr = curr.nodes[char]
return curr.is_leaf
def delete(self, word: str):
"""
Deletes a word in a Trie
:param word: word to delete
:return: None
"""
def _delete(curr: TrieNode, word: str, index: int):
if index == len(word):
# If word does not exist
if not curr.is_leaf:
return False
curr.is_leaf = False
return len(curr.nodes) == 0
char = word[index]
char_node = curr.nodes.get(char)
# If char not in current trie node
if not char_node:
return False
# Flag to check if node can be deleted
delete_curr = _delete(char_node, word, index + 1)
if delete_curr:
del curr.nodes[char]
return len(curr.nodes) == 0
return delete_curr
_delete(self, word, 0)
def print_words(node: TrieNode, word: str):
"""
Prints all the words in a Trie
:param node: root node of Trie
:param word: Word variable should be empty at start
:return: None
"""
if node.is_leaf:
print(word, end=" ")
for key, value in node.nodes.items():
print_words(value, word + key)
def test_trie():
words = "banana bananas bandana band apple all beast".split()
root = TrieNode()
root.insert_many(words)
# print_words(root, "")
assert all(root.find(word) for word in words)
assert root.find("banana")
assert not root.find("bandanas")
assert not root.find("apps")
assert root.find("apple")
assert root.find("all")
root.delete("all")
assert not root.find("all")
root.delete("banana")
assert not root.find("banana")
assert root.find("bananas")
return True
def print_results(msg: str, passes: bool) -> None:
print(str(msg), "works!" if passes else "doesn't work :(")
def pytests():
assert test_trie()
def main():
"""
>>> pytests()
"""
print_results("Testing trie functionality", test_trie())
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
import os
try:
from .build_directory_md import good_file_paths
except ImportError:
from build_directory_md import good_file_paths # type: ignore
filepaths = list(good_file_paths())
assert filepaths, "good_file_paths() failed!"
upper_files = [file for file in filepaths if file != file.lower()]
if upper_files:
print(f"{len(upper_files)} files contain uppercase characters:")
print("\n".join(upper_files) + "\n")
space_files = [file for file in filepaths if " " in file]
if space_files:
print(f"{len(space_files)} files contain space characters:")
print("\n".join(space_files) + "\n")
hyphen_files = [file for file in filepaths if "-" in file]
if hyphen_files:
print(f"{len(hyphen_files)} files contain hyphen characters:")
print("\n".join(hyphen_files) + "\n")
nodir_files = [file for file in filepaths if os.sep not in file]
if nodir_files:
print(f"{len(nodir_files)} files are not in a directory:")
print("\n".join(nodir_files) + "\n")
bad_files = len(upper_files + space_files + hyphen_files + nodir_files)
if bad_files:
import sys
sys.exit(bad_files)
| #!/usr/bin/env python3
import os
try:
from .build_directory_md import good_file_paths
except ImportError:
from build_directory_md import good_file_paths # type: ignore
filepaths = list(good_file_paths())
assert filepaths, "good_file_paths() failed!"
upper_files = [file for file in filepaths if file != file.lower()]
if upper_files:
print(f"{len(upper_files)} files contain uppercase characters:")
print("\n".join(upper_files) + "\n")
space_files = [file for file in filepaths if " " in file]
if space_files:
print(f"{len(space_files)} files contain space characters:")
print("\n".join(space_files) + "\n")
hyphen_files = [file for file in filepaths if "-" in file]
if hyphen_files:
print(f"{len(hyphen_files)} files contain hyphen characters:")
print("\n".join(hyphen_files) + "\n")
nodir_files = [file for file in filepaths if os.sep not in file]
if nodir_files:
print(f"{len(nodir_files)} files are not in a directory:")
print("\n".join(nodir_files) + "\n")
bad_files = len(upper_files + space_files + hyphen_files + nodir_files)
if bad_files:
import sys
sys.exit(bad_files)
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from string import ascii_lowercase, ascii_uppercase
def capitalize(sentence: str) -> str:
"""
This function will capitalize the first letter of a sentence or a word
>>> capitalize("hello world")
'Hello world'
>>> capitalize("123 hello world")
'123 hello world'
>>> capitalize(" hello world")
' hello world'
>>> capitalize("a")
'A'
>>> capitalize("")
''
"""
if not sentence:
return ""
lower_to_upper = {lc: uc for lc, uc in zip(ascii_lowercase, ascii_uppercase)}
return lower_to_upper.get(sentence[0], sentence[0]) + sentence[1:]
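# Cross-check sketch (the helper name is arbitrary): for ASCII input the mapping
# above is equivalent to upper-casing the first character directly.
def _capitalize_builtin(sentence: str) -> str:
    """
    >>> all(_capitalize_builtin(s) == capitalize(s)
    ...     for s in ("hello world", "123 hello world", " hello world", "a", ""))
    True
    """
    return sentence[0].upper() + sentence[1:] if sentence else ""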
if __name__ == "__main__":
from doctest import testmod
testmod()
| from string import ascii_lowercase, ascii_uppercase
def capitalize(sentence: str) -> str:
"""
This function will capitalize the first letter of a sentence or a word
>>> capitalize("hello world")
'Hello world'
>>> capitalize("123 hello world")
'123 hello world'
>>> capitalize(" hello world")
' hello world'
>>> capitalize("a")
'A'
>>> capitalize("")
''
"""
if not sentence:
return ""
lower_to_upper = {lc: uc for lc, uc in zip(ascii_lowercase, ascii_uppercase)}
return lower_to_upper.get(sentence[0], sentence[0]) + sentence[1:]
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 2: https://projecteuler.net/problem=2
Even Fibonacci Numbers
Each new term in the Fibonacci sequence is generated by adding the previous
two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed
four million, find the sum of the even-valued terms.
References:
- https://en.wikipedia.org/wiki/Fibonacci_number
"""
def solution(n: int = 4000000) -> int:
"""
    Returns the sum of all even Fibonacci sequence elements that are lower
    than or equal to n.
>>> solution(10)
10
>>> solution(15)
10
>>> solution(2)
2
>>> solution(1)
0
>>> solution(34)
44
"""
fib = [0, 1]
i = 0
while fib[i] <= n:
fib.append(fib[i] + fib[i + 1])
if fib[i + 2] > n:
break
i += 1
total = 0
for j in range(len(fib) - 1):
if fib[j] % 2 == 0:
total += fib[j]
return total
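# Alternative sketch (the helper name is arbitrary): even Fibonacci numbers are
# every third term (2, 8, 34, ...) and satisfy E(k) = 4*E(k-1) + E(k-2), so the
# sum can be formed without generating the odd terms at all.
def _solution_even_terms_only(limit: int = 4000000) -> int:
    """
    >>> all(_solution_even_terms_only(n) == solution(n)
    ...     for n in (1, 2, 10, 15, 34, 4000000))
    True
    """
    total = 2 if limit >= 2 else 0
    previous, current = 2, 8
    while current <= limit:
        total += current
        previous, current = current, 4 * current + previous
    return total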
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 2: https://projecteuler.net/problem=2
Even Fibonacci Numbers
Each new term in the Fibonacci sequence is generated by adding the previous
two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed
four million, find the sum of the even-valued terms.
References:
- https://en.wikipedia.org/wiki/Fibonacci_number
"""
def solution(n: int = 4000000) -> int:
"""
    Returns the sum of all even Fibonacci sequence elements that are lower
    than or equal to n.
>>> solution(10)
10
>>> solution(15)
10
>>> solution(2)
2
>>> solution(1)
0
>>> solution(34)
44
"""
fib = [0, 1]
i = 0
while fib[i] <= n:
fib.append(fib[i] + fib[i + 1])
if fib[i + 2] > n:
break
i += 1
total = 0
for j in range(len(fib) - 1):
if fib[j] % 2 == 0:
total += fib[j]
return total
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import os
import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent
headers = {"UserAgent": UserAgent().random}
URL = "https://www.mywaifulist.moe/random"
def save_image(image_url: str, image_title: str) -> None:
"""
Saves the image of anime character
"""
image = requests.get(image_url, headers=headers)
with open(image_title, "wb") as file:
file.write(image.content)
def random_anime_character() -> tuple[str, str, str]:
"""
    Returns the Title, Description, and Image Title of a random anime character.
"""
soup = BeautifulSoup(requests.get(URL, headers=headers).text, "html.parser")
title = soup.find("meta", attrs={"property": "og:title"}).attrs["content"]
image_url = soup.find("meta", attrs={"property": "og:image"}).attrs["content"]
description = soup.find("p", id="description").get_text()
_, image_extension = os.path.splitext(os.path.basename(image_url))
image_title = title.strip().replace(" ", "_")
image_title = f"{image_title}{image_extension}"
save_image(image_url, image_title)
return (title, description, image_title)
if __name__ == "__main__":
title, desc, image_title = random_anime_character()
print(f"{title}\n\n{desc}\n\nImage saved : {image_title}")
| import os
import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent
headers = {"UserAgent": UserAgent().random}
URL = "https://www.mywaifulist.moe/random"
def save_image(image_url: str, image_title: str) -> None:
"""
Saves the image of anime character
"""
image = requests.get(image_url, headers=headers)
with open(image_title, "wb") as file:
file.write(image.content)
def random_anime_character() -> tuple[str, str, str]:
"""
    Returns the Title, Description, and Image Title of a random anime character.
"""
soup = BeautifulSoup(requests.get(URL, headers=headers).text, "html.parser")
title = soup.find("meta", attrs={"property": "og:title"}).attrs["content"]
image_url = soup.find("meta", attrs={"property": "og:image"}).attrs["content"]
description = soup.find("p", id="description").get_text()
_, image_extension = os.path.splitext(os.path.basename(image_url))
image_title = title.strip().replace(" ", "_")
image_title = f"{image_title}{image_extension}"
save_image(image_url, image_title)
return (title, description, image_title)
if __name__ == "__main__":
title, desc, image_title = random_anime_character()
print(f"{title}\n\n{desc}\n\nImage saved : {image_title}")
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Information on 2's complement: https://en.wikipedia.org/wiki/Two%27s_complement
def twos_complement(number: int) -> str:
"""
Take in a negative integer 'number'.
Return the two's complement representation of 'number'.
>>> twos_complement(0)
'0b0'
>>> twos_complement(-1)
'0b11'
>>> twos_complement(-5)
'0b1011'
>>> twos_complement(-17)
'0b101111'
>>> twos_complement(-207)
'0b100110001'
>>> twos_complement(1)
Traceback (most recent call last):
...
ValueError: input must be a negative integer
"""
if number > 0:
raise ValueError("input must be a negative integer")
binary_number_length = len(bin(number)[3:])
twos_complement_number = bin(abs(number) - (1 << binary_number_length))[3:]
twos_complement_number = (
(
"1"
+ "0" * (binary_number_length - len(twos_complement_number))
+ twos_complement_number
)
if number < 0
else "0"
)
return "0b" + twos_complement_number
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Information on 2's complement: https://en.wikipedia.org/wiki/Two%27s_complement
def twos_complement(number: int) -> str:
"""
Take in a negative integer 'number'.
Return the two's complement representation of 'number'.
>>> twos_complement(0)
'0b0'
>>> twos_complement(-1)
'0b11'
>>> twos_complement(-5)
'0b1011'
>>> twos_complement(-17)
'0b101111'
>>> twos_complement(-207)
'0b100110001'
>>> twos_complement(1)
Traceback (most recent call last):
...
ValueError: input must be a negative integer
"""
if number > 0:
raise ValueError("input must be a negative integer")
binary_number_length = len(bin(number)[3:])
twos_complement_number = bin(abs(number) - (1 << binary_number_length))[3:]
twos_complement_number = (
(
"1"
+ "0" * (binary_number_length - len(twos_complement_number))
+ twos_complement_number
)
if number < 0
else "0"
)
return "0b" + twos_complement_number
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def oct_to_decimal(oct_string: str) -> int:
"""
    Convert an octal value to its decimal equivalent
>>> oct_to_decimal("12")
10
>>> oct_to_decimal(" 12 ")
10
>>> oct_to_decimal("-45")
-37
>>> oct_to_decimal("2-0Fm")
Traceback (most recent call last):
...
ValueError: Non-octal value was passed to the function
>>> oct_to_decimal("")
Traceback (most recent call last):
...
ValueError: Empty string was passed to the function
>>> oct_to_decimal("19")
Traceback (most recent call last):
...
ValueError: Non-octal value was passed to the function
"""
oct_string = str(oct_string).strip()
if not oct_string:
raise ValueError("Empty string was passed to the function")
is_negative = oct_string[0] == "-"
if is_negative:
oct_string = oct_string[1:]
if not oct_string.isdigit() or not all(0 <= int(char) <= 7 for char in oct_string):
raise ValueError("Non-octal value was passed to the function")
decimal_number = 0
for char in oct_string:
decimal_number = 8 * decimal_number + int(char)
if is_negative:
decimal_number = -decimal_number
return decimal_number
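# Cross-check sketch (the helper name is arbitrary): Python's int(value, 8)
# performs the same octal-to-decimal conversion as the digit loop above.
def _oct_to_decimal_builtin(oct_string: str) -> int:
    """
    >>> all(_oct_to_decimal_builtin(s) == oct_to_decimal(s)
    ...     for s in ("12", " 12 ", "-45", "0", "777"))
    True
    """
    return int(str(oct_string).strip(), 8)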
if __name__ == "__main__":
from doctest import testmod
testmod()
| def oct_to_decimal(oct_string: str) -> int:
"""
    Convert an octal value to its decimal equivalent
>>> oct_to_decimal("12")
10
>>> oct_to_decimal(" 12 ")
10
>>> oct_to_decimal("-45")
-37
>>> oct_to_decimal("2-0Fm")
Traceback (most recent call last):
...
ValueError: Non-octal value was passed to the function
>>> oct_to_decimal("")
Traceback (most recent call last):
...
ValueError: Empty string was passed to the function
>>> oct_to_decimal("19")
Traceback (most recent call last):
...
ValueError: Non-octal value was passed to the function
"""
oct_string = str(oct_string).strip()
if not oct_string:
raise ValueError("Empty string was passed to the function")
is_negative = oct_string[0] == "-"
if is_negative:
oct_string = oct_string[1:]
if not oct_string.isdigit() or not all(0 <= int(char) <= 7 for char in oct_string):
raise ValueError("Non-octal value was passed to the function")
decimal_number = 0
for char in oct_string:
decimal_number = 8 * decimal_number + int(char)
if is_negative:
decimal_number = -decimal_number
return decimal_number
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
@author: MatteoRaso
"""
from math import pi, sqrt
from random import uniform
from statistics import mean
from typing import Callable
def pi_estimator(iterations: int):
"""
An implementation of the Monte Carlo method used to find pi.
1. Draw a 2x2 square centred at (0,0).
2. Inscribe a circle within the square.
3. For each iteration, place a dot anywhere in the square.
a. Record the number of dots within the circle.
4. After all the dots are placed, divide the dots in the circle by the total.
5. Multiply this value by 4 to get your estimate of pi.
6. Print the estimated and numpy value of pi
"""
# A local function to see if a dot lands in the circle.
def is_in_circle(x: float, y: float) -> bool:
distance_from_centre = sqrt((x ** 2) + (y ** 2))
# Our circle has a radius of 1, so a distance
# greater than 1 would land outside the circle.
return distance_from_centre <= 1
# The proportion of guesses that landed in the circle
proportion = mean(
int(is_in_circle(uniform(-1.0, 1.0), uniform(-1.0, 1.0)))
for _ in range(iterations)
)
# The ratio of the area for circle to square is pi/4.
pi_estimate = proportion * 4
print(f"The estimated value of pi is {pi_estimate}")
print(f"The numpy value of pi is {pi}")
print(f"The total error is {abs(pi - pi_estimate)}")
def area_under_curve_estimator(
iterations: int,
function_to_integrate: Callable[[float], float],
min_value: float = 0.0,
max_value: float = 1.0,
) -> float:
"""
An implementation of the Monte Carlo method to find area under
a single variable non-negative real-valued continuous function,
say f(x), where x lies within a continuous bounded interval,
say [min_value, max_value], where min_value and max_value are
finite numbers
1. Let x be a uniformly distributed random variable between min_value to
max_value
2. Expected value of f(x) =
(integrate f(x) from min_value to max_value)/(max_value - min_value)
3. Finding expected value of f(x):
a. Repeatedly draw x from uniform distribution
b. Evaluate f(x) at each of the drawn x values
c. Expected value = average of the function evaluations
4. Estimated value of integral = Expected value * (max_value - min_value)
5. Returns estimated value
"""
return mean(
function_to_integrate(uniform(min_value, max_value)) for _ in range(iterations)
) * (max_value - min_value)
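# Usage sketch (the helper name is arbitrary): estimate the integral of x**2
# over [0, 3], whose exact value is 9. The bound is deliberately loose because
# the estimate is stochastic.
def _example_quadratic_integral(iterations: int = 100_000) -> float:
    estimate = area_under_curve_estimator(iterations, lambda x: x * x, 0.0, 3.0)
    assert abs(estimate - 9.0) < 1.0  # loose sanity bound, not a strict test
    return estimate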
def area_under_line_estimator_check(
iterations: int, min_value: float = 0.0, max_value: float = 1.0
) -> None:
"""
Checks estimation error for area_under_curve_estimator function
for f(x) = x where x lies within min_value to max_value
1. Calls "area_under_curve_estimator" function
2. Compares with the expected value
3. Prints estimated, expected and error value
"""
def identity_function(x: float) -> float:
"""
Represents identity function
        >>> [identity_function(x) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]
[-2.0, -1.0, 0.0, 1.0, 2.0]
"""
return x
estimated_value = area_under_curve_estimator(
iterations, identity_function, min_value, max_value
)
expected_value = (max_value * max_value - min_value * min_value) / 2
print("******************")
print(f"Estimating area under y=x where x varies from {min_value} to {max_value}")
print(f"Estimated value is {estimated_value}")
print(f"Expected value is {expected_value}")
print(f"Total error is {abs(estimated_value - expected_value)}")
print("******************")
def pi_estimator_using_area_under_curve(iterations: int) -> None:
"""
Area under curve y = sqrt(4 - x^2) where x lies in 0 to 2 is equal to pi
"""
def function_to_integrate(x: float) -> float:
"""
Represents semi-circle with radius 2
>>> [function_to_integrate(x) for x in [-2.0, 0.0, 2.0]]
[0.0, 2.0, 0.0]
"""
return sqrt(4.0 - x * x)
estimated_value = area_under_curve_estimator(
iterations, function_to_integrate, 0.0, 2.0
)
print("******************")
print("Estimating pi using area_under_curve_estimator")
print(f"Estimated value is {estimated_value}")
print(f"Expected value is {pi}")
print(f"Total error is {abs(estimated_value - pi)}")
print("******************")
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
@author: MatteoRaso
"""
from math import pi, sqrt
from random import uniform
from statistics import mean
from typing import Callable
def pi_estimator(iterations: int):
"""
An implementation of the Monte Carlo method used to find pi.
1. Draw a 2x2 square centred at (0,0).
2. Inscribe a circle within the square.
3. For each iteration, place a dot anywhere in the square.
a. Record the number of dots within the circle.
4. After all the dots are placed, divide the dots in the circle by the total.
5. Multiply this value by 4 to get your estimate of pi.
6. Print the estimated and numpy value of pi
"""
# A local function to see if a dot lands in the circle.
def is_in_circle(x: float, y: float) -> bool:
distance_from_centre = sqrt((x ** 2) + (y ** 2))
# Our circle has a radius of 1, so a distance
# greater than 1 would land outside the circle.
return distance_from_centre <= 1
# The proportion of guesses that landed in the circle
proportion = mean(
int(is_in_circle(uniform(-1.0, 1.0), uniform(-1.0, 1.0)))
for _ in range(iterations)
)
# The ratio of the area for circle to square is pi/4.
pi_estimate = proportion * 4
print(f"The estimated value of pi is {pi_estimate}")
print(f"The numpy value of pi is {pi}")
print(f"The total error is {abs(pi - pi_estimate)}")
def area_under_curve_estimator(
iterations: int,
function_to_integrate: Callable[[float], float],
min_value: float = 0.0,
max_value: float = 1.0,
) -> float:
"""
An implementation of the Monte Carlo method to find area under
a single variable non-negative real-valued continuous function,
say f(x), where x lies within a continuous bounded interval,
say [min_value, max_value], where min_value and max_value are
finite numbers
1. Let x be a uniformly distributed random variable between min_value to
max_value
2. Expected value of f(x) =
(integrate f(x) from min_value to max_value)/(max_value - min_value)
3. Finding expected value of f(x):
a. Repeatedly draw x from uniform distribution
b. Evaluate f(x) at each of the drawn x values
c. Expected value = average of the function evaluations
4. Estimated value of integral = Expected value * (max_value - min_value)
5. Returns estimated value
"""
return mean(
function_to_integrate(uniform(min_value, max_value)) for _ in range(iterations)
) * (max_value - min_value)
def area_under_line_estimator_check(
iterations: int, min_value: float = 0.0, max_value: float = 1.0
) -> None:
"""
Checks estimation error for area_under_curve_estimator function
for f(x) = x where x lies within min_value to max_value
1. Calls "area_under_curve_estimator" function
2. Compares with the expected value
3. Prints estimated, expected and error value
"""
def identity_function(x: float) -> float:
"""
Represents identity function
        >>> [identity_function(x) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]
[-2.0, -1.0, 0.0, 1.0, 2.0]
"""
return x
estimated_value = area_under_curve_estimator(
iterations, identity_function, min_value, max_value
)
expected_value = (max_value * max_value - min_value * min_value) / 2
print("******************")
print(f"Estimating area under y=x where x varies from {min_value} to {max_value}")
print(f"Estimated value is {estimated_value}")
print(f"Expected value is {expected_value}")
print(f"Total error is {abs(estimated_value - expected_value)}")
print("******************")
def pi_estimator_using_area_under_curve(iterations: int) -> None:
"""
Area under curve y = sqrt(4 - x^2) where x lies in 0 to 2 is equal to pi
"""
def function_to_integrate(x: float) -> float:
"""
Represents semi-circle with radius 2
>>> [function_to_integrate(x) for x in [-2.0, 0.0, 2.0]]
[0.0, 2.0, 0.0]
"""
return sqrt(4.0 - x * x)
estimated_value = area_under_curve_estimator(
iterations, function_to_integrate, 0.0, 2.0
)
print("******************")
print("Estimating pi using area_under_curve_estimator")
print(f"Estimated value is {estimated_value}")
print(f"Expected value is {pi}")
print(f"Total error is {abs(estimated_value - pi)}")
print("******************")
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Implementation of Bilateral filter
Inputs:
img: A 2d image with values in between 0 and 1
    spatial_variance: variance in the spatial (space) dimension.
    intensity_variance: variance in intensity.
    kernel_size: kernel size (must be an odd number)
Output:
    img: A 2d zero-padded image with values in between 0 and 1
"""
import math
import sys
import cv2
import numpy as np
def vec_gaussian(img: np.ndarray, variance: float) -> np.ndarray:
# For applying gaussian function for each element in matrix.
sigma = math.sqrt(variance)
cons = 1 / (sigma * math.sqrt(2 * math.pi))
return cons * np.exp(-((img / sigma) ** 2) * 0.5)
def get_slice(img: np.ndarray, x: int, y: int, kernel_size: int) -> np.ndarray:
half = kernel_size // 2
return img[x - half : x + half + 1, y - half : y + half + 1]
def get_gauss_kernel(kernel_size: int, spatial_variance: float) -> np.ndarray:
# Creates a gaussian kernel of given dimension.
arr = np.zeros((kernel_size, kernel_size))
for i in range(0, kernel_size):
for j in range(0, kernel_size):
arr[i, j] = math.sqrt(
abs(i - kernel_size // 2) ** 2 + abs(j - kernel_size // 2) ** 2
)
return vec_gaussian(arr, spatial_variance)
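# For reference, get_gauss_kernel(3, 1.0) yields approximately the symmetric
# kernel below, peaking at the centre pixel; each value is
# exp(-d**2 / 2) / sqrt(2 * pi) for the pixel's distance d from the centre:
#   0.147  0.242  0.147
#   0.242  0.399  0.242
#   0.147  0.242  0.147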
def bilateral_filter(
img: np.ndarray,
spatial_variance: float,
intensity_variance: float,
kernel_size: int,
) -> np.ndarray:
img2 = np.zeros(img.shape)
gaussKer = get_gauss_kernel(kernel_size, spatial_variance)
sizeX, sizeY = img.shape
for i in range(kernel_size // 2, sizeX - kernel_size // 2):
for j in range(kernel_size // 2, sizeY - kernel_size // 2):
imgS = get_slice(img, i, j, kernel_size)
imgI = imgS - imgS[kernel_size // 2, kernel_size // 2]
imgIG = vec_gaussian(imgI, intensity_variance)
weights = np.multiply(gaussKer, imgIG)
vals = np.multiply(imgS, weights)
val = np.sum(vals) / np.sum(weights)
img2[i, j] = val
return img2
def parse_args(args: list) -> tuple:
filename = args[1] if args[1:] else "../image_data/lena.jpg"
spatial_variance = float(args[2]) if args[2:] else 1.0
intensity_variance = float(args[3]) if args[3:] else 1.0
if args[4:]:
kernel_size = int(args[4])
kernel_size = kernel_size + abs(kernel_size % 2 - 1)
else:
kernel_size = 5
return filename, spatial_variance, intensity_variance, kernel_size
if __name__ == "__main__":
filename, spatial_variance, intensity_variance, kernel_size = parse_args(sys.argv)
img = cv2.imread(filename, 0)
cv2.imshow("input image", img)
out = img / 255
out = out.astype("float32")
out = bilateral_filter(out, spatial_variance, intensity_variance, kernel_size)
out = out * 255
out = np.uint8(out)
cv2.imshow("output image", out)
cv2.waitKey(0)
cv2.destroyAllWindows()
| """
Implementation of Bilateral filter
Inputs:
img: A 2d image with values in between 0 and 1
    spatial_variance: variance in the spatial (space) dimension.
    intensity_variance: variance in intensity.
    kernel_size: kernel size (must be an odd number)
Output:
    img: A 2d zero-padded image with values in between 0 and 1
"""
import math
import sys
import cv2
import numpy as np
def vec_gaussian(img: np.ndarray, variance: float) -> np.ndarray:
# For applying gaussian function for each element in matrix.
sigma = math.sqrt(variance)
cons = 1 / (sigma * math.sqrt(2 * math.pi))
return cons * np.exp(-((img / sigma) ** 2) * 0.5)
def get_slice(img: np.ndarray, x: int, y: int, kernel_size: int) -> np.ndarray:
half = kernel_size // 2
return img[x - half : x + half + 1, y - half : y + half + 1]
def get_gauss_kernel(kernel_size: int, spatial_variance: float) -> np.ndarray:
# Creates a gaussian kernel of given dimension.
arr = np.zeros((kernel_size, kernel_size))
for i in range(0, kernel_size):
for j in range(0, kernel_size):
arr[i, j] = math.sqrt(
abs(i - kernel_size // 2) ** 2 + abs(j - kernel_size // 2) ** 2
)
return vec_gaussian(arr, spatial_variance)
def bilateral_filter(
img: np.ndarray,
spatial_variance: float,
intensity_variance: float,
kernel_size: int,
) -> np.ndarray:
img2 = np.zeros(img.shape)
gaussKer = get_gauss_kernel(kernel_size, spatial_variance)
sizeX, sizeY = img.shape
for i in range(kernel_size // 2, sizeX - kernel_size // 2):
for j in range(kernel_size // 2, sizeY - kernel_size // 2):
imgS = get_slice(img, i, j, kernel_size)
imgI = imgS - imgS[kernel_size // 2, kernel_size // 2]
imgIG = vec_gaussian(imgI, intensity_variance)
weights = np.multiply(gaussKer, imgIG)
vals = np.multiply(imgS, weights)
val = np.sum(vals) / np.sum(weights)
img2[i, j] = val
return img2
def parse_args(args: list) -> tuple:
filename = args[1] if args[1:] else "../image_data/lena.jpg"
spatial_variance = float(args[2]) if args[2:] else 1.0
intensity_variance = float(args[3]) if args[3:] else 1.0
if args[4:]:
kernel_size = int(args[4])
kernel_size = kernel_size + abs(kernel_size % 2 - 1)
else:
kernel_size = 5
return filename, spatial_variance, intensity_variance, kernel_size
if __name__ == "__main__":
filename, spatial_variance, intensity_variance, kernel_size = parse_args(sys.argv)
img = cv2.imread(filename, 0)
cv2.imshow("input image", img)
out = img / 255
out = out.astype("float32")
out = bilateral_filter(out, spatial_variance, intensity_variance, kernel_size)
out = out * 255
out = np.uint8(out)
cv2.imshow("output image", out)
cv2.waitKey(0)
cv2.destroyAllWindows()
| -1 |
TheAlgorithms/Python | 4,617 | [mypy] Fix type annotations for maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-16T14:48:38Z" | "2021-08-18T10:45:07Z" | 4545270ace03411ec861361329345a36195b881d | af0810fca133dde19b39fc7735572b6989ea269b | [mypy] Fix type annotations for maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import requests
APPID = "" # <-- Put your OpenWeatherMap appid here!
URL_BASE = "http://api.openweathermap.org/data/2.5/"
def current_weather(q: str = "Chicago", appid: str = APPID) -> dict:
"""https://openweathermap.org/api"""
return requests.get(URL_BASE + "weather", params=locals()).json()
def weather_forecast(q: str = "Kolkata, India", appid: str = APPID) -> dict:
"""https://openweathermap.org/forecast5"""
return requests.get(URL_BASE + "forecast", params=locals()).json()
def weather_onecall(lat: float = 55.68, lon: float = 12.57, appid: str = APPID) -> dict:
"""https://openweathermap.org/api/one-call-api"""
return requests.get(URL_BASE + "onecall", params=locals()).json()
if __name__ == "__main__":
from pprint import pprint
while True:
location = input("Enter a location:").strip()
if location:
pprint(current_weather(location))
else:
break
| import requests
APPID = "" # <-- Put your OpenWeatherMap appid here!
URL_BASE = "http://api.openweathermap.org/data/2.5/"
def current_weather(q: str = "Chicago", appid: str = APPID) -> dict:
"""https://openweathermap.org/api"""
return requests.get(URL_BASE + "weather", params=locals()).json()
def weather_forecast(q: str = "Kolkata, India", appid: str = APPID) -> dict:
"""https://openweathermap.org/forecast5"""
return requests.get(URL_BASE + "forecast", params=locals()).json()
def weather_onecall(lat: float = 55.68, lon: float = 12.57, appid: str = APPID) -> dict:
"""https://openweathermap.org/api/one-call-api"""
return requests.get(URL_BASE + "onecall", params=locals()).json()
if __name__ == "__main__":
from pprint import pprint
while True:
location = input("Enter a location:").strip()
if location:
pprint(current_weather(location))
else:
break
| -1 |
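The three wrappers above simply forward their keyword arguments through locals() as the HTTP query string. A minimal usage sketch (it assumes a real API key has been pasted into APPID; otherwise OpenWeatherMap returns an error payload, and no particular response schema is assumed here):

chicago = current_weather("Chicago")
# Inspect the top-level keys rather than assuming specific response fields.
print(sorted(chicago.keys()))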
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Test cases:
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make in Indian Currency: 987
Following is minimal change for 987 :
500 100 100 100 100 50 20 10 5 2
Do you want to enter your denominations ? (Y/N) :Y
Enter number of denomination:10
1
5
10
20
50
100
200
500
1000
2000
Enter the change you want to make: 18745
Following is minimal change for 18745 :
2000 2000 2000 2000 2000 2000 2000 2000 2000 500 200 20 20 5
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make: 0
The total value cannot be zero or negative.
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make: -98
The total value cannot be zero or negative.
Do you want to enter your denominations ? (Y/N) :Y
Enter number of denomination:5
1
5
100
500
1000
Enter the change you want to make: 456
Following is minimal change for 456 :
100 100 100 100 5 5 5 5 5 5 5 5 5 5 5 1
"""
def find_minimum_change(denominations: list[int], value: int) -> list[int]:
"""
Find the minimum change from the given denominations and value
>>> find_minimum_change([1, 5, 10, 20, 50, 100, 200, 500, 1000,2000], 18745)
[2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 500, 200, 20, 20, 5]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 987)
[500, 100, 100, 100, 100, 50, 20, 10, 5, 2]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 0)
[]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], -98)
[]
>>> find_minimum_change([1, 5, 100, 500, 1000], 456)
[100, 100, 100, 100, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1]
"""
total_value = int(value)
# Initialize Result
answer = []
# Traverse through all denomination
for denomination in reversed(denominations):
# Find denominations
while int(total_value) >= int(denomination):
total_value -= int(denomination)
answer.append(denomination) # Append the "answers" array
return answer
# Driver Code
if __name__ == "__main__":
denominations = list()
value = 0
if (
input("Do you want to enter your denominations ? (yY/n): ").strip().lower()
== "y"
):
n = int(input("Enter the number of denominations you want to add: ").strip())
for i in range(0, n):
denominations.append(int(input(f"Denomination {i}: ").strip()))
value = input("Enter the change you want to make in Indian Currency: ").strip()
else:
# All denominations of Indian Currency if user does not enter
denominations = [1, 2, 5, 10, 20, 50, 100, 500, 2000]
value = input("Enter the change you want to make: ").strip()
if int(value) == 0 or int(value) < 0:
print("The total value cannot be zero or negative.")
else:
print(f"Following is minimal change for {value}: ")
answer = find_minimum_change(denominations, value)
# Print result
for i in range(len(answer)):
print(answer[i], end=" ")
| """
Test cases:
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make in Indian Currency: 987
Following is minimal change for 987 :
500 100 100 100 100 50 20 10 5 2
Do you want to enter your denominations ? (Y/N) :Y
Enter number of denomination:10
1
5
10
20
50
100
200
500
1000
2000
Enter the change you want to make: 18745
Following is minimal change for 18745 :
2000 2000 2000 2000 2000 2000 2000 2000 2000 500 200 20 20 5
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make: 0
The total value cannot be zero or negative.
Do you want to enter your denominations ? (Y/N) :N
Enter the change you want to make: -98
The total value cannot be zero or negative.
Do you want to enter your denominations ? (Y/N) :Y
Enter number of denomination:5
1
5
100
500
1000
Enter the change you want to make: 456
Following is minimal change for 456 :
100 100 100 100 5 5 5 5 5 5 5 5 5 5 5 1
"""
def find_minimum_change(denominations: list[int], value: str) -> list[int]:
"""
Find the minimum change from the given denominations and value
>>> find_minimum_change([1, 5, 10, 20, 50, 100, 200, 500, 1000,2000], 18745)
[2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 500, 200, 20, 20, 5]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 987)
[500, 100, 100, 100, 100, 50, 20, 10, 5, 2]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 0)
[]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], -98)
[]
>>> find_minimum_change([1, 5, 100, 500, 1000], 456)
[100, 100, 100, 100, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1]
"""
total_value = int(value)
# Initialize Result
answer = []
# Traverse through all denomination
for denomination in reversed(denominations):
# Find denominations
while int(total_value) >= int(denomination):
total_value -= int(denomination)
answer.append(denomination) # Append the "answers" array
return answer
# Driver Code
if __name__ == "__main__":
denominations = list()
value = "0"
if (
input("Do you want to enter your denominations ? (yY/n): ").strip().lower()
== "y"
):
n = int(input("Enter the number of denominations you want to add: ").strip())
for i in range(0, n):
denominations.append(int(input(f"Denomination {i}: ").strip()))
value = input("Enter the change you want to make in Indian Currency: ").strip()
else:
# All denominations of Indian Currency if user does not enter
denominations = [1, 2, 5, 10, 20, 50, 100, 500, 2000]
value = input("Enter the change you want to make: ").strip()
if int(value) == 0 or int(value) < 0:
print("The total value cannot be zero or negative.")
else:
print(f"Following is minimal change for {value}: ")
answer = find_minimum_change(denominations, value)
# Print result
for i in range(len(answer)):
print(answer[i], end=" ")
| 1 |
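The only change between the two versions is the annotation of value (int -> str) plus the matching value = "0" default, which mirrors the driver code passing the raw input() string; int(value) inside the function accepts either form. Worth noting as well that the greedy loop is only guaranteed to be optimal for canonical coin systems such as the Indian denominations used here; a small counterexample sketch with made-up denominations:

# Greedy returns three coins although two (3 + 3) would suffice.
print(find_minimum_change([1, 3, 4], 6))   # [4, 1, 1]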
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Given an array of integers and another integer target,
we are required to find a triplet from the array such that its sum is equal to
the target.
"""
from __future__ import annotations
from itertools import permutations
from random import randint
from timeit import repeat
def make_dataset() -> tuple[list[int], int]:
arr = [randint(-1000, 1000) for i in range(10)]
r = randint(-5000, 5000)
return (arr, r)
dataset = make_dataset()
def triplet_sum1(arr: list[int], target: int) -> tuple[int, int, int]:
"""
Returns a triplet in the array with sum equal to target,
else (0, 0, 0).
>>> triplet_sum1([13, 29, 7, 23, 5], 35)
(5, 7, 23)
>>> triplet_sum1([37, 9, 19, 50, 44], 65)
(9, 19, 37)
>>> arr = [6, 47, 27, 1, 15]
>>> target = 11
>>> triplet_sum1(arr, target)
(0, 0, 0)
"""
for triplet in permutations(arr, 3):
if sum(triplet) == target:
return tuple(sorted(triplet))
return (0, 0, 0)
def triplet_sum2(arr: list[int], target: int) -> tuple[int, int, int]:
"""
Returns a triplet in the array with sum equal to target,
else (0, 0, 0).
>>> triplet_sum2([13, 29, 7, 23, 5], 35)
(5, 7, 23)
>>> triplet_sum2([37, 9, 19, 50, 44], 65)
(9, 19, 37)
>>> arr = [6, 47, 27, 1, 15]
>>> target = 11
>>> triplet_sum2(arr, target)
(0, 0, 0)
"""
arr.sort()
n = len(arr)
for i in range(n - 1):
left, right = i + 1, n - 1
while left < right:
if arr[i] + arr[left] + arr[right] == target:
return (arr[i], arr[left], arr[right])
elif arr[i] + arr[left] + arr[right] < target:
left += 1
elif arr[i] + arr[left] + arr[right] > target:
right -= 1
return (0, 0, 0)
def solution_times() -> tuple[float, float]:
setup_code = """
from __main__ import dataset, triplet_sum1, triplet_sum2
"""
test_code1 = """
triplet_sum1(*dataset)
"""
test_code2 = """
triplet_sum2(*dataset)
"""
times1 = repeat(setup=setup_code, stmt=test_code1, repeat=5, number=10000)
times2 = repeat(setup=setup_code, stmt=test_code2, repeat=5, number=10000)
return (min(times1), min(times2))
if __name__ == "__main__":
from doctest import testmod
testmod()
times = solution_times()
print(f"The time for naive implementation is {times[0]}.")
print(f"The time for optimized implementation is {times[1]}.")
| """
Given an array of integers and another integer target,
we are required to find a triplet from the array such that its sum is equal to
the target.
"""
from __future__ import annotations
from itertools import permutations
from random import randint
from timeit import repeat
def make_dataset() -> tuple[list[int], int]:
arr = [randint(-1000, 1000) for i in range(10)]
r = randint(-5000, 5000)
return (arr, r)
dataset = make_dataset()
def triplet_sum1(arr: list[int], target: int) -> tuple[int, ...]:
"""
Returns a triplet in the array with sum equal to target,
else (0, 0, 0).
>>> triplet_sum1([13, 29, 7, 23, 5], 35)
(5, 7, 23)
>>> triplet_sum1([37, 9, 19, 50, 44], 65)
(9, 19, 37)
>>> arr = [6, 47, 27, 1, 15]
>>> target = 11
>>> triplet_sum1(arr, target)
(0, 0, 0)
"""
for triplet in permutations(arr, 3):
if sum(triplet) == target:
return tuple(sorted(triplet))
return (0, 0, 0)
def triplet_sum2(arr: list[int], target: int) -> tuple[int, int, int]:
"""
Returns a triplet in the array with sum equal to target,
else (0, 0, 0).
>>> triplet_sum2([13, 29, 7, 23, 5], 35)
(5, 7, 23)
>>> triplet_sum2([37, 9, 19, 50, 44], 65)
(9, 19, 37)
>>> arr = [6, 47, 27, 1, 15]
>>> target = 11
>>> triplet_sum2(arr, target)
(0, 0, 0)
"""
arr.sort()
n = len(arr)
for i in range(n - 1):
left, right = i + 1, n - 1
while left < right:
if arr[i] + arr[left] + arr[right] == target:
return (arr[i], arr[left], arr[right])
elif arr[i] + arr[left] + arr[right] < target:
left += 1
elif arr[i] + arr[left] + arr[right] > target:
right -= 1
return (0, 0, 0)
def solution_times() -> tuple[float, float]:
setup_code = """
from __main__ import dataset, triplet_sum1, triplet_sum2
"""
test_code1 = """
triplet_sum1(*dataset)
"""
test_code2 = """
triplet_sum2(*dataset)
"""
times1 = repeat(setup=setup_code, stmt=test_code1, repeat=5, number=10000)
times2 = repeat(setup=setup_code, stmt=test_code2, repeat=5, number=10000)
return (min(times1), min(times2))
if __name__ == "__main__":
from doctest import testmod
testmod()
times = solution_times()
print(f"The time for naive implementation is {times[0]}.")
print(f"The time for optimized implementation is {times[1]}.")
| 1 |
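The mypy fix here widens triplet_sum1's return annotation to tuple[int, ...], because tuple(sorted(triplet)) is a variable-length tuple as far as the type checker can tell. A hedged alternative that keeps the fixed three-element annotation would be to unpack explicitly; a sketch reusing the module's imports, not part of the PR:

def triplet_sum1_typed(arr: list[int], target: int) -> tuple[int, int, int]:
    for triplet in permutations(arr, 3):
        if sum(triplet) == target:
            a, b, c = sorted(triplet)  # unpacking yields a fixed-length result for mypy
            return (a, b, c)
    return (0, 0, 0)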
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Given an array of integers, return indices of the two numbers such that they add up to
a specific target.
You may assume that each input would have exactly one solution, and you may not use the
same element twice.
Example:
Given nums = [2, 7, 11, 15], target = 9,
Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].
"""
from __future__ import annotations
def two_sum(nums: list[int], target: int) -> list[int]:
"""
>>> two_sum([2, 7, 11, 15], 9)
[0, 1]
>>> two_sum([15, 2, 11, 7], 13)
[1, 2]
>>> two_sum([2, 7, 11, 15], 17)
[0, 3]
>>> two_sum([7, 15, 11, 2], 18)
[0, 2]
>>> two_sum([2, 7, 11, 15], 26)
[2, 3]
>>> two_sum([2, 7, 11, 15], 8)
[]
>>> two_sum([3 * i for i in range(10)], 19)
[]
"""
chk_map = {}
for index, val in enumerate(nums):
compl = target - val
if compl in chk_map:
return [chk_map[compl], index]
chk_map[val] = index
return []
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{two_sum([2, 7, 11, 15], 9) = }")
| """
Given an array of integers, return indices of the two numbers such that they add up to
a specific target.
You may assume that each input would have exactly one solution, and you may not use the
same element twice.
Example:
Given nums = [2, 7, 11, 15], target = 9,
Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].
"""
from __future__ import annotations
def two_sum(nums: list[int], target: int) -> list[int]:
"""
>>> two_sum([2, 7, 11, 15], 9)
[0, 1]
>>> two_sum([15, 2, 11, 7], 13)
[1, 2]
>>> two_sum([2, 7, 11, 15], 17)
[0, 3]
>>> two_sum([7, 15, 11, 2], 18)
[0, 2]
>>> two_sum([2, 7, 11, 15], 26)
[2, 3]
>>> two_sum([2, 7, 11, 15], 8)
[]
>>> two_sum([3 * i for i in range(10)], 19)
[]
"""
chk_map: dict[int, int] = {}
for index, val in enumerate(nums):
compl = target - val
if compl in chk_map:
return [chk_map[compl], index]
chk_map[val] = index
return []
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{two_sum([2, 7, 11, 15], 9) = }")
| 1 |
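The annotated chk_map: dict[int, int] = {} is what satisfies mypy, since a bare {} gives it no key/value types to infer. A short trace of the hash-map pass for two_sum([2, 7, 11, 15], 9):

# index 0: val = 2, compl = 7, 7 not in {}    -> chk_map == {2: 0}
# index 1: val = 7, compl = 2, 2 in chk_map   -> returns [chk_map[2], 1] == [0, 1]
assert two_sum([2, 7, 11, 15], 9) == [0, 1]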
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
from .number_theory.prime_numbers import next_prime
class HashTable:
"""
Basic Hash Table example with open addressing and linear probing
"""
def __init__(self, size_table, charge_factor=None, lim_charge=None):
self.size_table = size_table
self.values = [None] * self.size_table
self.lim_charge = 0.75 if lim_charge is None else lim_charge
self.charge_factor = 1 if charge_factor is None else charge_factor
self.__aux_list = []
self._keys = {}
def keys(self):
return self._keys
def balanced_factor(self):
return sum([1 for slot in self.values if slot is not None]) / (
self.size_table * self.charge_factor
)
def hash_function(self, key):
return key % self.size_table
def _step_by_step(self, step_ord):
print(f"step {step_ord}")
print([i for i in range(len(self.values))])
print(self.values)
def bulk_insert(self, values):
i = 1
self.__aux_list = values
for value in values:
self.insert_data(value)
self._step_by_step(i)
i += 1
def _set_value(self, key, data):
self.values[key] = data
self._keys[key] = data
def _collision_resolution(self, key, data=None):
new_key = self.hash_function(key + 1)
while self.values[new_key] is not None and self.values[new_key] != key:
if self.values.count(None) > 0:
new_key = self.hash_function(new_key + 1)
else:
new_key = None
break
return new_key
def rehashing(self):
survivor_values = [value for value in self.values if value is not None]
self.size_table = next_prime(self.size_table, factor=2)
self._keys.clear()
self.values = [None] * self.size_table # hell's pointers D: don't DRY ;/
for value in survivor_values:
self.insert_data(value)
def insert_data(self, data):
key = self.hash_function(data)
if self.values[key] is None:
self._set_value(key, data)
elif self.values[key] == data:
pass
else:
collision_resolution = self._collision_resolution(key, data)
if collision_resolution is not None:
self._set_value(collision_resolution, data)
else:
self.rehashing()
self.insert_data(data)
| #!/usr/bin/env python3
from .number_theory.prime_numbers import next_prime
class HashTable:
"""
Basic Hash Table example with open addressing and linear probing
"""
def __init__(self, size_table, charge_factor=None, lim_charge=None):
self.size_table = size_table
self.values = [None] * self.size_table
self.lim_charge = 0.75 if lim_charge is None else lim_charge
self.charge_factor = 1 if charge_factor is None else charge_factor
self.__aux_list = []
self._keys = {}
def keys(self):
return self._keys
def balanced_factor(self):
return sum([1 for slot in self.values if slot is not None]) / (
self.size_table * self.charge_factor
)
def hash_function(self, key):
return key % self.size_table
def _step_by_step(self, step_ord):
print(f"step {step_ord}")
print([i for i in range(len(self.values))])
print(self.values)
def bulk_insert(self, values):
i = 1
self.__aux_list = values
for value in values:
self.insert_data(value)
self._step_by_step(i)
i += 1
def _set_value(self, key, data):
self.values[key] = data
self._keys[key] = data
def _collision_resolution(self, key, data=None):
new_key = self.hash_function(key + 1)
while self.values[new_key] is not None and self.values[new_key] != key:
if self.values.count(None) > 0:
new_key = self.hash_function(new_key + 1)
else:
new_key = None
break
return new_key
def rehashing(self):
survivor_values = [value for value in self.values if value is not None]
self.size_table = next_prime(self.size_table, factor=2)
self._keys.clear()
self.values = [None] * self.size_table # hell's pointers D: don't DRY ;/
for value in survivor_values:
self.insert_data(value)
def insert_data(self, data):
key = self.hash_function(data)
if self.values[key] is None:
self._set_value(key, data)
elif self.values[key] == data:
pass
else:
collision_resolution = self._collision_resolution(key, data)
if collision_resolution is not None:
self._set_value(collision_resolution, data)
else:
self.rehashing()
self.insert_data(data)
| -1 |
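A short usage sketch for the open-addressing table above (HashTable has to be imported from its package, since the module itself relies on a relative import for next_prime; the slot arithmetic in the comments follows hash_function, i.e. key % size_table):

table = HashTable(3)
table.insert_data(17)   # 17 % 3 == 2 -> stored in slot 2
table.insert_data(18)   # 18 % 3 == 0 -> stored in slot 0
table.insert_data(99)   # 99 % 3 == 0 -> collision, linear probing places it in slot 1
print(table.values)     # [18, 99, 17]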
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # A complete working Python program to demonstrate all
# stack operations using a doubly linked list
class Node:
def __init__(self, data):
self.data = data # Assign data
self.next = None # Initialize next as null
self.prev = None # Initialize prev as null
class Stack:
"""
>>> stack = Stack()
>>> stack.is_empty()
True
>>> stack.print_stack()
stack elements are:
>>> for i in range(4):
... stack.push(i)
...
>>> stack.is_empty()
False
>>> stack.print_stack()
stack elements are:
3->2->1->0->
>>> stack.top()
3
>>> len(stack)
4
>>> stack.pop()
3
>>> stack.print_stack()
stack elements are:
2->1->0->
"""
def __init__(self):
self.head = None
def push(self, data):
"""add a Node to the stack"""
if self.head is None:
self.head = Node(data)
else:
new_node = Node(data)
self.head.prev = new_node
new_node.next = self.head
new_node.prev = None
self.head = new_node
def pop(self):
"""pop the top element off the stack"""
if self.head is None:
return None
else:
temp = self.head.data
self.head = self.head.next
self.head.prev = None
return temp
def top(self):
"""return the top element of the stack"""
return self.head.data
def __len__(self):
temp = self.head
count = 0
while temp is not None:
count += 1
temp = temp.next
return count
def is_empty(self):
return self.head is None
def print_stack(self):
print("stack elements are:")
temp = self.head
while temp is not None:
print(temp.data, end="->")
temp = temp.next
# Code execution starts here
if __name__ == "__main__":
# Start with the empty stack
stack = Stack()
# Insert 4 at the beginning. So stack becomes 4->None
print("Stack operations using Doubly LinkedList")
stack.push(4)
# Insert 5 at the beginning. So stack becomes 4->5->None
stack.push(5)
# Insert 6 at the beginning. So stack becomes 4->5->6->None
stack.push(6)
# Insert 7 at the beginning. So stack becomes 4->5->6->7->None
stack.push(7)
# Print the stack
stack.print_stack()
# Print the top element
print("\nTop element is ", stack.top())
# Print the stack size
print("Size of the stack is ", len(stack))
# pop the top element
stack.pop()
# pop the top element
stack.pop()
# two elements have now been popped off
stack.print_stack()
# Print True if the stack is empty else False
print("\nstack is empty:", stack.is_empty())
| # A complete working Python program to demonstrate all
# stack operations using a doubly linked list
class Node:
def __init__(self, data):
self.data = data # Assign data
self.next = None # Initialize next as null
self.prev = None # Initialize prev as null
class Stack:
"""
>>> stack = Stack()
>>> stack.is_empty()
True
>>> stack.print_stack()
stack elements are:
>>> for i in range(4):
... stack.push(i)
...
>>> stack.is_empty()
False
>>> stack.print_stack()
stack elements are:
3->2->1->0->
>>> stack.top()
3
>>> len(stack)
4
>>> stack.pop()
3
>>> stack.print_stack()
stack elements are:
2->1->0->
"""
def __init__(self):
self.head = None
def push(self, data):
"""add a Node to the stack"""
if self.head is None:
self.head = Node(data)
else:
new_node = Node(data)
self.head.prev = new_node
new_node.next = self.head
new_node.prev = None
self.head = new_node
def pop(self):
"""pop the top element off the stack"""
if self.head is None:
return None
else:
temp = self.head.data
self.head = self.head.next
self.head.prev = None
return temp
def top(self):
"""return the top element of the stack"""
return self.head.data
def __len__(self):
temp = self.head
count = 0
while temp is not None:
count += 1
temp = temp.next
return count
def is_empty(self):
return self.head is None
def print_stack(self):
print("stack elements are:")
temp = self.head
while temp is not None:
print(temp.data, end="->")
temp = temp.next
# Code execution starts here
if __name__ == "__main__":
# Start with the empty stack
stack = Stack()
# Insert 4 at the beginning. So stack becomes 4->None
print("Stack operations using Doubly LinkedList")
stack.push(4)
# Insert 5 at the beginning. So stack becomes 4->5->None
stack.push(5)
# Insert 6 at the beginning. So stack becomes 4->5->6->None
stack.push(6)
# Insert 7 at the beginning. So stack becomes 4->5->6->7->None
stack.push(7)
# Print the stack
stack.print_stack()
# Print the top element
print("\nTop element is ", stack.top())
# Print the stack size
print("Size of the stack is ", len(stack))
# pop the top element
stack.pop()
# pop the top element
stack.pop()
# two elements have now been popped off
stack.print_stack()
# Print True if the stack is empty else False
print("\nstack is empty:", stack.is_empty())
| -1 |
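One edge case worth flagging in the listing above: pop() assigns self.head.prev = None after advancing the head, so popping the last remaining element raises AttributeError once head becomes None (the demo never hits this because it pushes four items and pops only two). A hedged sketch of a guarded variant, not part of the original file:

def safe_pop(stack: Stack):
    """Pop that also tolerates emptying the stack."""
    if stack.head is None:
        return None
    data = stack.head.data
    stack.head = stack.head.next
    if stack.head is not None:  # only clear prev when a node remains
        stack.head.prev = None
    return data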
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The A* algorithm combines features of uniform-cost search and pure
heuristic search to efficiently compute optimal solutions.
A* algorithm is a best-first search algorithm in which the cost
associated with a node is f(n) = g(n) + h(n),
where g(n) is the cost of the path from the initial state to node n and
h(n) is the heuristic estimate of the cost of a path
from node n to a goal. The A* algorithm introduces a heuristic into a
regular graph-searching algorithm,
essentially planning ahead at each step so a more optimal decision
is made. A* is also known as the algorithm with brains.
"""
import numpy as np
class Cell:
"""
Class cell represents a cell in the world, which has the properties:
position : The position of the cell, represented by a tuple of x and y
coordinates, initially set to (0, 0)
parent : This contains the parent cell object visited
before arriving at this cell
g,h,f : The parameters for constructing the heuristic function,
which can be any function; for simplicity the squared
straight-line distance is used
"""
def __init__(self):
self.position = (0, 0)
self.parent = None
self.g = 0
self.h = 0
self.f = 0
"""
overrides equals method because otherwise cell assign will give
wrong results
"""
def __eq__(self, cell):
return self.position == cell.position
def showcell(self):
print(self.position)
class Gridworld:
"""
Gridworld class represents the external world, here an M*M grid
(matrix)
world_size: create a numpy array of the given world_size; default is (5, 5)
"""
def __init__(self, world_size=(5, 5)):
self.w = np.zeros(world_size)
self.world_x_limit = world_size[0]
self.world_y_limit = world_size[1]
def show(self):
print(self.w)
def get_neigbours(self, cell):
"""
Return the neighbours of cell
"""
neughbour_cord = [
(-1, -1),
(-1, 0),
(-1, 1),
(0, -1),
(0, 1),
(1, -1),
(1, 0),
(1, 1),
]
current_x = cell.position[0]
current_y = cell.position[1]
neighbours = []
for n in neughbour_cord:
x = current_x + n[0]
y = current_y + n[1]
if 0 <= x < self.world_x_limit and 0 <= y < self.world_y_limit:
c = Cell()
c.position = (x, y)
c.parent = cell
neighbours.append(c)
return neighbours
def astar(world, start, goal):
"""
Implementation of the A* algorithm
world : Object of the world (a Gridworld instance)
start : Object of the cell as start position
goal : Object of the cell as goal position
>>> p = Gridworld()
>>> start = Cell()
>>> start.position = (0,0)
>>> goal = Cell()
>>> goal.position = (4,4)
>>> astar(p, start, goal)
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
"""
_open = []
_closed = []
_open.append(start)
while _open:
min_f = np.argmin([n.f for n in _open])
current = _open[min_f]
_closed.append(_open.pop(min_f))
if current == goal:
break
for n in world.get_neigbours(current):
for c in _closed:
if c == n:
continue
n.g = current.g + 1
x1, y1 = n.position
x2, y2 = goal.position
n.h = (y2 - y1) ** 2 + (x2 - x1) ** 2
n.f = n.h + n.g
for c in _open:
if c == n and c.f < n.f:
continue
_open.append(n)
path = []
while current.parent is not None:
path.append(current.position)
current = current.parent
path.append(current.position)
return path[::-1]
if __name__ == "__main__":
world = Gridworld()
# start position and goal
start = Cell()
start.position = (0, 0)
goal = Cell()
goal.position = (4, 4)
print(f"path from {start.position} to {goal.position}")
s = astar(world, start, goal)
# Just for visual reasons
for i in s:
world.w[i] = 1
print(world.w)
| """
The A* algorithm combines features of uniform-cost search and pure
heuristic search to efficiently compute optimal solutions.
A* algorithm is a best-first search algorithm in which the cost
associated with a node is f(n) = g(n) + h(n),
where g(n) is the cost of the path from the initial state to node n and
h(n) is the heuristic estimate of the cost of a path
from node n to a goal. The A* algorithm introduces a heuristic into a
regular graph-searching algorithm,
essentially planning ahead at each step so a more optimal decision
is made. A* is also known as the algorithm with brains.
"""
import numpy as np
class Cell:
"""
Class cell represents a cell in the world, which has the properties:
position : The position of the cell, represented by a tuple of x and y
coordinates, initially set to (0, 0)
parent : This contains the parent cell object visited
before arriving at this cell
g,h,f : The parameters for constructing the heuristic function,
which can be any function; for simplicity the squared
straight-line distance is used
"""
def __init__(self):
self.position = (0, 0)
self.parent = None
self.g = 0
self.h = 0
self.f = 0
"""
overrides equals method because otherwise cell assign will give
wrong results
"""
def __eq__(self, cell):
return self.position == cell.position
def showcell(self):
print(self.position)
class Gridworld:
"""
Gridworld class represents the external world, here an M*M grid
(matrix)
world_size: create a numpy array of the given world_size; default is (5, 5)
"""
def __init__(self, world_size=(5, 5)):
self.w = np.zeros(world_size)
self.world_x_limit = world_size[0]
self.world_y_limit = world_size[1]
def show(self):
print(self.w)
def get_neigbours(self, cell):
"""
Return the neighbours of cell
"""
neughbour_cord = [
(-1, -1),
(-1, 0),
(-1, 1),
(0, -1),
(0, 1),
(1, -1),
(1, 0),
(1, 1),
]
current_x = cell.position[0]
current_y = cell.position[1]
neighbours = []
for n in neughbour_cord:
x = current_x + n[0]
y = current_y + n[1]
if 0 <= x < self.world_x_limit and 0 <= y < self.world_y_limit:
c = Cell()
c.position = (x, y)
c.parent = cell
neighbours.append(c)
return neighbours
def astar(world, start, goal):
"""
Implementation of the A* algorithm
world : Object of the world (a Gridworld instance)
start : Object of the cell as start position
goal : Object of the cell as goal position
>>> p = Gridworld()
>>> start = Cell()
>>> start.position = (0,0)
>>> goal = Cell()
>>> goal.position = (4,4)
>>> astar(p, start, goal)
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
"""
_open = []
_closed = []
_open.append(start)
while _open:
min_f = np.argmin([n.f for n in _open])
current = _open[min_f]
_closed.append(_open.pop(min_f))
if current == goal:
break
for n in world.get_neigbours(current):
for c in _closed:
if c == n:
continue
n.g = current.g + 1
x1, y1 = n.position
x2, y2 = goal.position
n.h = (y2 - y1) ** 2 + (x2 - x1) ** 2
n.f = n.h + n.g
for c in _open:
if c == n and c.f < n.f:
continue
_open.append(n)
path = []
while current.parent is not None:
path.append(current.position)
current = current.parent
path.append(current.position)
return path[::-1]
if __name__ == "__main__":
world = Gridworld()
# start position and goal
start = Cell()
start.position = (0, 0)
goal = Cell()
goal.position = (4, 4)
print(f"path from {start.position} to {goal.position}")
s = astar(world, start, goal)
# Just for visual reasons
for i in s:
world.w[i] = 1
print(world.w)
| -1 |
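The heuristic assigned in astar above is the squared straight-line distance, (y2 - y1)**2 + (x2 - x1)**2. Two alternatives that are commonly used on grids, sketched here only for comparison (these helpers are illustrative and not part of the module):

def manhattan(p: tuple[int, int], q: tuple[int, int]) -> int:
    # Admissible when movement is limited to the four axis directions.
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chebyshev(p: tuple[int, int], q: tuple[int, int]) -> int:
    # Admissible for the 8-neighbour, unit-step movement used by Gridworld.
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))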
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 9: https://projecteuler.net/problem=9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product a*b*c.
References:
- https://en.wikipedia.org/wiki/Pythagorean_triple
"""
def solution() -> int:
"""
Returns the product of a,b,c which are Pythagorean Triplet that satisfies
the following:
1. a**2 + b**2 = c**2
2. a + b + c = 1000
# The code below has been commented due to slow execution affecting Travis.
# >>> solution()
# 31875000
"""
return [
a * b * (1000 - a - b)
for a in range(1, 999)
for b in range(a, 999)
if (a * a + b * b == (1000 - a - b) ** 2)
][0]
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 9: https://projecteuler.net/problem=9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product a*b*c.
References:
- https://en.wikipedia.org/wiki/Pythagorean_triple
"""
def solution() -> int:
"""
Returns the product of a,b,c which are Pythagorean Triplet that satisfies
the following:
1. a**2 + b**2 = c**2
2. a + b + c = 1000
# The code below has been commented due to slow execution affecting Travis.
# >>> solution()
# 31875000
"""
return [
a * b * (1000 - a - b)
for a in range(1, 999)
for b in range(a, 999)
if (a * a + b * b == (1000 - a - b) ** 2)
][0]
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
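Eliminating c from a + b + c = 1000 and a^2 + b^2 = c^2 gives b = 1000*(500 - a)/(1000 - a), so a single loop over a is enough. A sketch of that reduction (it returns the same 31875000, just with far fewer iterations than the double comprehension above):

def solution_algebraic() -> int:
    for a in range(1, 334):  # a < b < c forces a < 1000 / 3
        numerator, denominator = 1000 * (500 - a), 1000 - a
        if numerator % denominator == 0:  # b must be an integer
            b = numerator // denominator
            c = 1000 - a - b
            return a * b * c
    return -1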
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://en.wikipedia.org/wiki/B%C3%A9zier_curve
# https://www.tutorialspoint.com/computer_graphics/computer_graphics_curves.htm
from __future__ import annotations
from scipy.special import comb # type: ignore
class BezierCurve:
"""
Bezier curve is a weighted sum of a set of control points.
Generate Bezier curves from a given set of control points.
This implementation works only for 2d coordinates in the xy plane.
"""
def __init__(self, list_of_points: list[tuple[float, float]]):
"""
list_of_points: Control points in the xy plane on which to interpolate. These
points control the behavior (shape) of the Bezier curve.
"""
self.list_of_points = list_of_points
# Degree determines the flexibility of the curve.
# Degree = 1 will produce a straight line.
self.degree = len(list_of_points) - 1
def basis_function(self, t: float) -> list[float]:
"""
The basis function determines the weight of each control point at time t.
t: time value between 0 and 1 inclusive at which to evaluate the basis of
the curve.
returns the list of basis function weights (one weight per control point) at time t
>>> curve = BezierCurve([(1,1), (1,2)])
>>> curve.basis_function(0)
[1.0, 0.0]
>>> curve.basis_function(1)
[0.0, 1.0]
"""
assert 0 <= t <= 1, "Time t must be between 0 and 1."
output_values: list[float] = []
for i in range(len(self.list_of_points)):
# basis function for each i
output_values.append(
comb(self.degree, i) * ((1 - t) ** (self.degree - i)) * (t ** i)
)
# the basis must sum up to 1 for it to produce a valid Bezier curve.
assert round(sum(output_values), 5) == 1
return output_values
def bezier_curve_function(self, t: float) -> tuple[float, float]:
"""
The function to produce the values of the Bezier curve at time t.
t: the value of time t at which to evaluate the Bezier function
Returns the x, y coordinates of the Bezier curve at time t.
The first point in the curve is when t = 0.
The last point in the curve is when t = 1.
>>> curve = BezierCurve([(1,1), (1,2)])
>>> curve.bezier_curve_function(0)
(1.0, 1.0)
>>> curve.bezier_curve_function(1)
(1.0, 2.0)
"""
assert 0 <= t <= 1, "Time t must be between 0 and 1."
basis_function = self.basis_function(t)
x = 0.0
y = 0.0
for i in range(len(self.list_of_points)):
# For all points, sum up the product of i-th basis function and i-th point.
x += basis_function[i] * self.list_of_points[i][0]
y += basis_function[i] * self.list_of_points[i][1]
return (x, y)
def plot_curve(self, step_size: float = 0.01):
"""
Plots the Bezier curve using matplotlib plotting capabilities.
step_size: defines the step(s) at which to evaluate the Bezier curve.
The smaller the step size, the finer the curve produced.
"""
from matplotlib import pyplot as plt # type: ignore
to_plot_x: list[float] = [] # x coordinates of points to plot
to_plot_y: list[float] = [] # y coordinates of points to plot
t = 0.0
while t <= 1:
value = self.bezier_curve_function(t)
to_plot_x.append(value[0])
to_plot_y.append(value[1])
t += step_size
x = [i[0] for i in self.list_of_points]
y = [i[1] for i in self.list_of_points]
plt.plot(
to_plot_x,
to_plot_y,
color="blue",
label="Curve of Degree " + str(self.degree),
)
plt.scatter(x, y, color="red", label="Control Points")
plt.legend()
plt.show()
if __name__ == "__main__":
import doctest
doctest.testmod()
BezierCurve([(1, 2), (3, 5)]).plot_curve() # degree 1
BezierCurve([(0, 0), (5, 5), (5, 0)]).plot_curve() # degree 2
BezierCurve([(0, 0), (5, 5), (5, 0), (2.5, -2.5)]).plot_curve() # degree 3
| # https://en.wikipedia.org/wiki/B%C3%A9zier_curve
# https://www.tutorialspoint.com/computer_graphics/computer_graphics_curves.htm
from __future__ import annotations
from scipy.special import comb # type: ignore
class BezierCurve:
"""
Bezier curve is a weighted sum of a set of control points.
Generate Bezier curves from a given set of control points.
This implementation works only for 2d coordinates in the xy plane.
"""
def __init__(self, list_of_points: list[tuple[float, float]]):
"""
list_of_points: Control points in the xy plane on which to interpolate. These
points control the behavior (shape) of the Bezier curve.
"""
self.list_of_points = list_of_points
# Degree determines the flexibility of the curve.
# Degree = 1 will produce a straight line.
self.degree = len(list_of_points) - 1
def basis_function(self, t: float) -> list[float]:
"""
The basis function determines the weight of each control point at time t.
t: time value between 0 and 1 inclusive at which to evaluate the basis of
the curve.
returns the list of basis function weights (one weight per control point) at time t
>>> curve = BezierCurve([(1,1), (1,2)])
>>> curve.basis_function(0)
[1.0, 0.0]
>>> curve.basis_function(1)
[0.0, 1.0]
"""
assert 0 <= t <= 1, "Time t must be between 0 and 1."
output_values: list[float] = []
for i in range(len(self.list_of_points)):
# basis function for each i
output_values.append(
comb(self.degree, i) * ((1 - t) ** (self.degree - i)) * (t ** i)
)
# the basis must sum up to 1 for it to produce a valid Bezier curve.
assert round(sum(output_values), 5) == 1
return output_values
def bezier_curve_function(self, t: float) -> tuple[float, float]:
"""
The function to produce the values of the Bezier curve at time t.
t: the value of time t at which to evaluate the Bezier function
Returns the x, y coordinates of the Bezier curve at time t.
The first point in the curve is when t = 0.
The last point in the curve is when t = 1.
>>> curve = BezierCurve([(1,1), (1,2)])
>>> curve.bezier_curve_function(0)
(1.0, 1.0)
>>> curve.bezier_curve_function(1)
(1.0, 2.0)
"""
assert 0 <= t <= 1, "Time t must be between 0 and 1."
basis_function = self.basis_function(t)
x = 0.0
y = 0.0
for i in range(len(self.list_of_points)):
# For all points, sum up the product of i-th basis function and i-th point.
x += basis_function[i] * self.list_of_points[i][0]
y += basis_function[i] * self.list_of_points[i][1]
return (x, y)
def plot_curve(self, step_size: float = 0.01):
"""
Plots the Bezier curve using matplotlib plotting capabilities.
step_size: defines the step(s) at which to evaluate the Bezier curve.
The smaller the step size, the finer the curve produced.
"""
from matplotlib import pyplot as plt # type: ignore
to_plot_x: list[float] = [] # x coordinates of points to plot
to_plot_y: list[float] = [] # y coordinates of points to plot
t = 0.0
while t <= 1:
value = self.bezier_curve_function(t)
to_plot_x.append(value[0])
to_plot_y.append(value[1])
t += step_size
x = [i[0] for i in self.list_of_points]
y = [i[1] for i in self.list_of_points]
plt.plot(
to_plot_x,
to_plot_y,
color="blue",
label="Curve of Degree " + str(self.degree),
)
plt.scatter(x, y, color="red", label="Control Points")
plt.legend()
plt.show()
if __name__ == "__main__":
import doctest
doctest.testmod()
BezierCurve([(1, 2), (3, 5)]).plot_curve() # degree 1
BezierCurve([(0, 0), (5, 5), (5, 0)]).plot_curve() # degree 2
BezierCurve([(0, 0), (5, 5), (5, 0), (2.5, -2.5)]).plot_curve() # degree 3
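    # A minimal illustrative sketch, assuming the module above; the names
    # quadratic and t_sample are placeholders chosen here for demonstration.
    # Curve points can be evaluated directly, without plotting anything.
    quadratic = BezierCurve([(0, 0), (5, 5), (5, 0)])
    for t_sample in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"t={t_sample}: {quadratic.bezier_curve_function(t_sample)}")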
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def lower(word: str) -> str:
"""
Will convert the entire string to lowercase letters
>>> lower("wow")
'wow'
>>> lower("HellZo")
'hellzo'
>>> lower("WHAT")
'what'
>>> lower("wh[]32")
'wh[]32'
>>> lower("whAT")
'what'
"""
    # Convert each character to its ASCII code; if it is an uppercase letter
    # (between "A" and "Z"), add 32 to shift it to the corresponding lowercase
    # letter, otherwise leave the character unchanged.
return "".join(chr(ord(char) + 32) if "A" <= char <= "Z" else char for char in word)
if __name__ == "__main__":
from doctest import testmod
testmod()
| def lower(word: str) -> str:
"""
Will convert the entire string to lowercase letters
>>> lower("wow")
'wow'
>>> lower("HellZo")
'hellzo'
>>> lower("WHAT")
'what'
>>> lower("wh[]32")
'wh[]32'
>>> lower("whAT")
'what'
"""
    # Convert each character to its ASCII code; if it is an uppercase letter
    # (between "A" and "Z"), add 32 to shift it to the corresponding lowercase
    # letter, otherwise leave the character unchanged.
return "".join(chr(ord(char) + 32) if "A" <= char <= "Z" else char for char in word)
if __name__ == "__main__":
from doctest import testmod
testmod()
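    # A minimal illustrative check, assuming the lower() function above: the
    # shift of 32 comes from the ASCII layout (ord("A") == 65, ord("a") == 97),
    # and the built-in str.lower() provides a handy cross-check.
    assert ord("a") - ord("A") == 32
    assert lower("HellZo") == "HellZo".lower() == "hellzo"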
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
A bag contains one red disc and one blue disc. In a game of chance a player takes a
disc at random and its colour is noted. After each turn the disc is returned to the
bag, an extra red disc is added, and another disc is taken at random.
The player pays £1 to play and wins if they have taken more blue discs than red
discs at the end of the game.
If the game is played for four turns, the probability of a player winning is exactly
11/120, and so the maximum prize fund the banker should allocate for winning in this
game would be £10 before they would expect to incur a loss. Note that any payout will
be a whole number of pounds and also includes the original £1 paid to play the game,
so in the example given the player actually wins £9.
Find the maximum prize fund that should be allocated to a single game in which
fifteen turns are played.
Solution:
For each 15-disc sequence of red and blue for which there are more blue than red,
we calculate the probability of that sequence and add it to the total probability
of the player winning. The inverse of this probability gives an upper bound for
the prize if the banker wants to avoid an expected loss.
"""
from itertools import product
def solution(num_turns: int = 15) -> int:
"""
Find the maximum prize fund that should be allocated to a single game in which
fifteen turns are played.
>>> solution(4)
10
>>> solution(10)
225
"""
total_prob: float = 0.0
prob: float
num_blue: int
num_red: int
ind: int
col: int
series: tuple[int, ...]
for series in product(range(2), repeat=num_turns):
num_blue = series.count(1)
num_red = num_turns - num_blue
if num_red >= num_blue:
continue
prob = 1.0
for ind, col in enumerate(series, 2):
if col == 0:
prob *= (ind - 1) / ind
else:
prob *= 1 / ind
total_prob += prob
return int(1 / total_prob)
if __name__ == "__main__":
print(f"{solution() = }")
| """
A bag contains one red disc and one blue disc. In a game of chance a player takes a
disc at random and its colour is noted. After each turn the disc is returned to the
bag, an extra red disc is added, and another disc is taken at random.
The player pays £1 to play and wins if they have taken more blue discs than red
discs at the end of the game.
If the game is played for four turns, the probability of a player winning is exactly
11/120, and so the maximum prize fund the banker should allocate for winning in this
game would be £10 before they would expect to incur a loss. Note that any payout will
be a whole number of pounds and also includes the original £1 paid to play the game,
so in the example given the player actually wins £9.
Find the maximum prize fund that should be allocated to a single game in which
fifteen turns are played.
Solution:
For each 15-disc sequence of red and blue for which there are more blue than red,
we calculate the probability of that sequence and add it to the total probability
of the player winning. The inverse of this probability gives an upper bound for
the prize if the banker wants to avoid an expected loss.
"""
from itertools import product
def solution(num_turns: int = 15) -> int:
"""
Find the maximum prize fund that should be allocated to a single game in which
fifteen turns are played.
>>> solution(4)
10
>>> solution(10)
225
"""
total_prob: float = 0.0
prob: float
num_blue: int
num_red: int
ind: int
col: int
series: tuple[int, ...]
for series in product(range(2), repeat=num_turns):
num_blue = series.count(1)
num_red = num_turns - num_blue
if num_red >= num_blue:
continue
prob = 1.0
for ind, col in enumerate(series, 2):
if col == 0:
prob *= (ind - 1) / ind
else:
prob *= 1 / ind
total_prob += prob
return int(1 / total_prob)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from math import asin, atan, cos, radians, sin, sqrt, tan
def haversine_distance(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
"""
Calculate great circle distance between two points in a sphere,
given longitudes and latitudes https://en.wikipedia.org/wiki/Haversine_formula
We know that the globe is "sort of" spherical, so a path between two points
isn't exactly a straight line. We need to account for the Earth's curvature
when calculating distance from point A to B. This effect is negligible for
small distances but adds up as distance increases. The Haversine method treats
the earth as a sphere which allows us to "project" the two points A and B
onto the surface of that sphere and approximate the spherical distance between
them. Since the Earth is not a perfect sphere, other methods which model the
Earth's ellipsoidal nature are more accurate but a quick and modifiable
computation like Haversine can be handy for shorter range distances.
Args:
lat1, lon1: latitude and longitude of coordinate 1
lat2, lon2: latitude and longitude of coordinate 2
Returns:
geographical distance between two points in metres
>>> from collections import namedtuple
>>> point_2d = namedtuple("point_2d", "lat lon")
>>> SAN_FRANCISCO = point_2d(37.774856, -122.424227)
>>> YOSEMITE = point_2d(37.864742, -119.537521)
>>> f"{haversine_distance(*SAN_FRANCISCO, *YOSEMITE):0,.0f} meters"
'254,352 meters'
"""
# CONSTANTS per WGS84 https://en.wikipedia.org/wiki/World_Geodetic_System
# Distance in metres(m)
AXIS_A = 6378137.0
AXIS_B = 6356752.314245
RADIUS = 6378137
# Equation parameters
# Equation https://en.wikipedia.org/wiki/Haversine_formula#Formulation
flattening = (AXIS_A - AXIS_B) / AXIS_A
phi_1 = atan((1 - flattening) * tan(radians(lat1)))
phi_2 = atan((1 - flattening) * tan(radians(lat2)))
lambda_1 = radians(lon1)
lambda_2 = radians(lon2)
# Equation
sin_sq_phi = sin((phi_2 - phi_1) / 2)
sin_sq_lambda = sin((lambda_2 - lambda_1) / 2)
# Square both values
sin_sq_phi *= sin_sq_phi
sin_sq_lambda *= sin_sq_lambda
h_value = sqrt(sin_sq_phi + (cos(phi_1) * cos(phi_2) * sin_sq_lambda))
return 2 * RADIUS * asin(h_value)
if __name__ == "__main__":
import doctest
doctest.testmod()
| from math import asin, atan, cos, radians, sin, sqrt, tan
def haversine_distance(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
"""
Calculate great circle distance between two points in a sphere,
given longitudes and latitudes https://en.wikipedia.org/wiki/Haversine_formula
We know that the globe is "sort of" spherical, so a path between two points
isn't exactly a straight line. We need to account for the Earth's curvature
when calculating distance from point A to B. This effect is negligible for
small distances but adds up as distance increases. The Haversine method treats
the earth as a sphere which allows us to "project" the two points A and B
onto the surface of that sphere and approximate the spherical distance between
them. Since the Earth is not a perfect sphere, other methods which model the
Earth's ellipsoidal nature are more accurate but a quick and modifiable
computation like Haversine can be handy for shorter range distances.
Args:
lat1, lon1: latitude and longitude of coordinate 1
lat2, lon2: latitude and longitude of coordinate 2
Returns:
geographical distance between two points in metres
>>> from collections import namedtuple
>>> point_2d = namedtuple("point_2d", "lat lon")
>>> SAN_FRANCISCO = point_2d(37.774856, -122.424227)
>>> YOSEMITE = point_2d(37.864742, -119.537521)
>>> f"{haversine_distance(*SAN_FRANCISCO, *YOSEMITE):0,.0f} meters"
'254,352 meters'
"""
# CONSTANTS per WGS84 https://en.wikipedia.org/wiki/World_Geodetic_System
# Distance in metres(m)
AXIS_A = 6378137.0
AXIS_B = 6356752.314245
RADIUS = 6378137
# Equation parameters
# Equation https://en.wikipedia.org/wiki/Haversine_formula#Formulation
flattening = (AXIS_A - AXIS_B) / AXIS_A
phi_1 = atan((1 - flattening) * tan(radians(lat1)))
phi_2 = atan((1 - flattening) * tan(radians(lat2)))
lambda_1 = radians(lon1)
lambda_2 = radians(lon2)
# Equation
sin_sq_phi = sin((phi_2 - phi_1) / 2)
sin_sq_lambda = sin((lambda_2 - lambda_1) / 2)
# Square both values
sin_sq_phi *= sin_sq_phi
sin_sq_lambda *= sin_sq_lambda
h_value = sqrt(sin_sq_phi + (cos(phi_1) * cos(phi_2) * sin_sq_lambda))
return 2 * RADIUS * asin(h_value)
if __name__ == "__main__":
import doctest
doctest.testmod()
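# A minimal illustrative sketch, assuming the imports at the top of this file:
# the textbook haversine formula on a perfect sphere of mean radius 6371 km,
# for comparison with the flattening-corrected version above. The name
# spherical_haversine is a placeholder chosen for demonstration.
def spherical_haversine(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    mean_radius = 6371000  # mean Earth radius in metres
    phi_1, phi_2 = radians(lat1), radians(lat2)
    half_d_phi = radians(lat2 - lat1) / 2
    half_d_lambda = radians(lon2 - lon1) / 2
    h_value = sin(half_d_phi) ** 2 + cos(phi_1) * cos(phi_2) * sin(half_d_lambda) ** 2
    return 2 * mean_radius * asin(sqrt(h_value))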
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 80: https://projecteuler.net/problem=80
Author: Sandeep Gupta
Problem statement: For the first one hundred natural numbers, find the total of
the digital sums of the first one hundred decimal digits for all the irrational
square roots.
Time: 5 October 2020, 18:30
"""
import decimal
def solution() -> int:
"""
    To evaluate the sum, use Python's decimal module to compute each square
    root to just over 100 significant digits. It is important to keep a few
    extra places beyond the 100 that are summed, otherwise rounding errors
    could distort the final digit sum.
>>> solution()
40886
"""
answer = 0
decimal_context = decimal.Context(prec=105)
for i in range(2, 100):
number = decimal.Decimal(i)
sqrt_number = number.sqrt(decimal_context)
if len(str(sqrt_number)) > 1:
answer += int(str(sqrt_number)[0])
sqrt_number = str(sqrt_number)[2:101]
answer += sum([int(x) for x in sqrt_number])
return answer
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Project Euler Problem 80: https://projecteuler.net/problem=80
Author: Sandeep Gupta
Problem statement: For the first one hundred natural numbers, find the total of
the digital sums of the first one hundred decimal digits for all the irrational
square roots.
Time: 5 October 2020, 18:30
"""
import decimal
def solution() -> int:
"""
    To evaluate the sum, use Python's decimal module to compute each square
    root to just over 100 significant digits. It is important to keep a few
    extra places beyond the 100 that are summed, otherwise rounding errors
    could distort the final digit sum.
>>> solution()
40886
"""
answer = 0
decimal_context = decimal.Context(prec=105)
for i in range(2, 100):
number = decimal.Decimal(i)
sqrt_number = number.sqrt(decimal_context)
if len(str(sqrt_number)) > 1:
answer += int(str(sqrt_number)[0])
sqrt_number = str(sqrt_number)[2:101]
answer += sum([int(x) for x in sqrt_number])
return answer
if __name__ == "__main__":
import doctest
doctest.testmod()
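# A minimal illustrative sketch, assuming the problem statement above: the digit
# sums can also be obtained with pure integer arithmetic by taking math.isqrt of
# n * 10**200, whose leading 100 digits are the first 100 significant digits of
# sqrt(n). The name solution_with_isqrt is a placeholder chosen for demonstration.
from math import isqrt


def solution_with_isqrt() -> int:
    total = 0
    for n in range(1, 100):
        if isqrt(n) ** 2 == n:  # perfect squares have rational roots; skip them
            continue
        first_100_digits = str(isqrt(n * 10**200))[:100]
        total += sum(int(digit) for digit in first_100_digits)
    return total  # expected to agree with solution()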
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Linear Discriminant Analysis
Assumptions About Data :
    1. The input variables have a gaussian distribution.
    2. The variance calculated for each input variable by class grouping is the
same.
3. The mix of classes in your training set is representative of the problem.
Learning The Model :
The LDA model requires the estimation of statistics from the training data :
1. Mean of each input value for each class.
    2. Probability of an instance belonging to each class.
3. Covariance for the input data for each class
Calculate the class means :
mean(x) = 1/n ( for i = 1 to i = n --> sum(xi))
Calculate the class probabilities :
P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))
Calculate the variance :
We can calculate the variance for dataset in two steps :
1. Calculate the squared difference for each input variable from the
group mean.
2. Calculate the mean of the squared difference.
------------------------------------------------
Squared_Difference = (x - mean(k)) ** 2
Variance = (1 / (count(x) - count(classes))) *
(for i = 1 to i = n --> sum(Squared_Difference(xi)))
Making Predictions :
discriminant(x) = x * (mean / variance) -
((mean ** 2) / (2 * variance)) + Ln(probability)
---------------------------------------------------------------------------
After calculating the discriminant value for each class, the class with the
largest discriminant value is taken as the prediction.
Author: @EverLookNeverSee
"""
from math import log
from os import name, system
from random import gauss, seed
from typing import Callable, TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
    Generate gaussian distribution instances based on given mean and standard deviation
    :param mean: mean value of class
    :param std_dev: value of standard deviation entered by the user or its default value
    :param instance_count: instance number of class
    :return: a list containing generated values based on given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags for detecting classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
    :param items: items that relate to a specific class (data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
    Calculate the probability that a given instance belongs to a particular class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y:a list containing initial Y values generated by 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str = None,
) -> num:
"""
    Ask the user for a value and validate that it fulfills a condition.
:input_type: user input expected type of value
:input_msg: message to show user in the screen
:err_msg: message to show in the screen in case of error
:condition: function that represents the condition that user input is valid.
:default: Default value in case the user does not type anything
:return: user's input
"""
    while True:
        # read the raw text first so the error message can reference it even
        # when the conversion to input_type fails
        raw_value = input(input_msg).strip() or default
        try:
            user_input = input_type(raw_value)
            if condition(user_input):
                return user_input
            else:
                print(f"{user_input}: {err_msg}")
                continue
        except (ValueError, TypeError):
            print(
                f"{raw_value}: Incorrect input type, expected {input_type.__name__!r}"
            )
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
        # Trying to get the number of instances in each class and their means to
        # generate the dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in classes in separated line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out mean values of classes separated line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detecting corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over number of elements in 'actual_means' list and print
# out them in separated line
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over number of elements in 'probabilities' list and print
# out them in separated line
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the values of variance for each class
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear")
if __name__ == "__main__":
main()
| """
Linear Discriminant Analysis
Assumptions About Data :
    1. The input variables have a gaussian distribution.
    2. The variance calculated for each input variable by class grouping is the
same.
3. The mix of classes in your training set is representative of the problem.
Learning The Model :
The LDA model requires the estimation of statistics from the training data :
1. Mean of each input value for each class.
    2. Probability of an instance belonging to each class.
3. Covariance for the input data for each class
Calculate the class means :
mean(x) = 1/n ( for i = 1 to i = n --> sum(xi))
Calculate the class probabilities :
P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))
Calculate the variance :
We can calculate the variance for dataset in two steps :
1. Calculate the squared difference for each input variable from the
group mean.
2. Calculate the mean of the squared difference.
------------------------------------------------
Squared_Difference = (x - mean(k)) ** 2
Variance = (1 / (count(x) - count(classes))) *
(for i = 1 to i = n --> sum(Squared_Difference(xi)))
Making Predictions :
discriminant(x) = x * (mean / variance) -
((mean ** 2) / (2 * variance)) + Ln(probability)
---------------------------------------------------------------------------
After calculating the discriminant value for each class, the class with the
largest discriminant value is taken as the prediction.
Author: @EverLookNeverSee
"""
from math import log
from os import name, system
from random import gauss, seed
from typing import Callable, TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
    Generate gaussian distribution instances based on given mean and standard deviation
    :param mean: mean value of class
    :param std_dev: value of standard deviation entered by the user or its default value
    :param instance_count: instance number of class
    :return: a list containing generated values based on given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags for detecting classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
    :param items: items that relate to a specific class (data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
    Calculate the probability that a given instance belongs to a particular class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y:a list containing initial Y values generated by 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str = None,
) -> num:
"""
    Ask the user for a value and validate that it fulfills a condition.
:input_type: user input expected type of value
:input_msg: message to show user in the screen
:err_msg: message to show in the screen in case of error
:condition: function that represents the condition that user input is valid.
:default: Default value in case the user does not type anything
:return: user's input
"""
    while True:
        # read the raw text first so the error message can reference it even
        # when the conversion to input_type fails
        raw_value = input(input_msg).strip() or default
        try:
            user_input = input_type(raw_value)
            if condition(user_input):
                return user_input
            else:
                print(f"{user_input}: {err_msg}")
                continue
        except (ValueError, TypeError):
            print(
                f"{raw_value}: Incorrect input type, expected {input_type.__name__!r}"
            )
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
        # Trying to get the number of instances in each class and their means to
        # generate the dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in classes in separated line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out mean values of classes separated line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detecting corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over number of elements in 'actual_means' list and print
# out them in separated line
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over number of elements in 'probabilities' list and print
# out them in separated line
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the values of variance for each class
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear")
if __name__ == "__main__":
main()
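# A minimal illustrative sketch, assuming the functions defined above: a
# non-interactive run with three example classes instead of the prompts in
# main(). The name demo and all numeric values are placeholders chosen for
# demonstration.
def demo() -> None:
    counts = [20, 20, 20]
    class_means = [5.0, 10.0, 15.0]
    x = [gaussian_distribution(m, 1.0, c) for m, c in zip(class_means, counts)]
    y = y_generator(len(counts), counts)
    actual_means = [calculate_mean(counts[k], x[k]) for k in range(len(counts))]
    probabilities = [calculate_probabilities(c, sum(counts)) for c in counts]
    variance = calculate_variance(x, actual_means, sum(counts))
    predictions = predict_y_values(x, actual_means, variance, probabilities)
    print(f"Demo accuracy: {accuracy(y, predictions)}%")  # call demo() to run it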
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 206: https://projecteuler.net/problem=206
Find the unique positive integer whose square has the form 1_2_3_4_5_6_7_8_9_0,
where each “_” is a single digit.
-----
Instead of computing every single permutation of that number and going
through a 10^9 search space, we can narrow it down considerably.
If the square ends in a 0, then the square root must also end in a 0. Thus,
the last missing digit must be 0 and the square root is a multiple of 10.
We can narrow the search space down to the first 8 digits and multiply the
result of that by 10 at the end.
Now the last digit is a 9, which can only happen if the square root ends
in a 3 or 7. From this point, we can try one of two different methods to find
the answer:
1. Start at the lowest possible base number whose square would be in the
format, and count up. The base we would start at is 101010103, whose square is
the closest number to 10203040506070809. Alternate counting up by 4 and 6 so
the last digit of the base is always a 3 or 7.
2. Start at the highest possible base number whose square would be in the
format, and count down. That base would be 138902663, whose square is the
closest number to 19293949596979899. Alternate counting down by 6 and 4 so the
last digit of the base is always a 3 or 7.
The solution does option 2 because the answer happens to be much closer to the
starting point.
"""
def is_square_form(num: int) -> bool:
"""
Determines if num is in the form 1_2_3_4_5_6_7_8_9
>>> is_square_form(1)
False
>>> is_square_form(112233445566778899)
True
>>> is_square_form(123456789012345678)
False
"""
digit = 9
while num > 0:
if num % 10 != digit:
return False
num //= 100
digit -= 1
return True
def solution() -> int:
"""
Returns the first integer whose square is of the form 1_2_3_4_5_6_7_8_9_0
"""
num = 138902663
while not is_square_form(num * num):
if num % 10 == 3:
num -= 6 # (3 - 6) % 10 = 7
else:
num -= 4 # (7 - 4) % 10 = 3
return num * 10
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 206: https://projecteuler.net/problem=206
Find the unique positive integer whose square has the form 1_2_3_4_5_6_7_8_9_0,
where each “_” is a single digit.
-----
Instead of computing every single permutation of that number and going
through a 10^9 search space, we can narrow it down considerably.
If the square ends in a 0, then the square root must also end in a 0. Thus,
the last missing digit must be 0 and the square root is a multiple of 10.
We can narrow the search space down to the first 8 digits and multiply the
result of that by 10 at the end.
Now the last digit is a 9, which can only happen if the square root ends
in a 3 or 7. From this point, we can try one of two different methods to find
the answer:
1. Start at the lowest possible base number whose square would be in the
format, and count up. The base we would start at is 101010103, whose square is
the closest number to 10203040506070809. Alternate counting up by 4 and 6 so
the last digit of the base is always a 3 or 7.
2. Start at the highest possible base number whose square would be in the
format, and count down. That base would be 138902663, whose square is the
closest number to 19293949596979899. Alternate counting down by 6 and 4 so the
last digit of the base is always a 3 or 7.
The solution does option 2 because the answer happens to be much closer to the
starting point.
"""
def is_square_form(num: int) -> bool:
"""
Determines if num is in the form 1_2_3_4_5_6_7_8_9
>>> is_square_form(1)
False
>>> is_square_form(112233445566778899)
True
>>> is_square_form(123456789012345678)
False
"""
digit = 9
while num > 0:
if num % 10 != digit:
return False
num //= 100
digit -= 1
return True
def solution() -> int:
"""
Returns the first integer whose square is of the form 1_2_3_4_5_6_7_8_9_0
"""
num = 138902663
while not is_square_form(num * num):
if num % 10 == 3:
num -= 6 # (3 - 6) % 10 = 7
else:
num -= 4 # (7 - 4) % 10 = 3
return num * 10
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Binary Exponentiation."""
# Author : Junth Basnet
# Time Complexity : O(log n)
def binary_exponentiation(a: int, n: int) -> int:
    # Recursive fast power: peel off one factor of a when n is odd,
    # otherwise square the result of the half-sized subproblem.
    if n == 0:
        return 1
    elif n % 2 == 1:
        return binary_exponentiation(a, n - 1) * a
    else:
        b = binary_exponentiation(a, n // 2)
        return b * b
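# A minimal iterative sketch of the same idea (added for illustration; this
# helper is not part of the original module): square the base and halve the
# exponent until the exponent reaches zero.
def binary_exponentiation_iterative(a: int, n: int) -> int:
    result = 1
    while n > 0:
        if n % 2 == 1:  # the lowest bit of the exponent is set
            result *= a
        a *= a  # square the base
        n //= 2  # drop the lowest bit of the exponent
    return result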
if __name__ == "__main__":
try:
BASE = int(input("Enter Base : ").strip())
POWER = int(input("Enter Power : ").strip())
    except ValueError:
        print("Invalid literal for integer")
    else:
        RESULT = binary_exponentiation(BASE, POWER)
        print(f"{BASE}^({POWER}) : {RESULT}")
| """Binary Exponentiation."""
# Author : Junth Basnet
# Time Complexity : O(log n)
def binary_exponentiation(a: int, n: int) -> int:
    # Recursive fast power: peel off one factor of a when n is odd,
    # otherwise square the result of the half-sized subproblem.
    if n == 0:
        return 1
    elif n % 2 == 1:
        return binary_exponentiation(a, n - 1) * a
    else:
        b = binary_exponentiation(a, n // 2)
        return b * b
if __name__ == "__main__":
try:
BASE = int(input("Enter Base : ").strip())
POWER = int(input("Enter Power : ").strip())
    except ValueError:
        print("Invalid literal for integer")
    else:
        RESULT = binary_exponentiation(BASE, POWER)
        print(f"{BASE}^({POWER}) : {RESULT}")
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
- https://en.wikipedia.org/wiki/Euclidean_algorithm
- https://en.wikipedia.org/wiki/Least_common_multiple
"""
def gcd(x: int, y: int) -> int:
"""
Euclidean GCD algorithm (Greatest Common Divisor)
>>> gcd(0, 0)
0
>>> gcd(23, 42)
1
>>> gcd(15, 33)
3
>>> gcd(12345, 67890)
15
"""
return x if y == 0 else gcd(y, x % y)
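# Worked trace of the recursion above (added for illustration): gcd(12345, 67890)
# -> gcd(67890, 12345) -> gcd(12345, 6165) -> gcd(6165, 15) -> gcd(15, 0) -> 15,
# matching the doctest.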
def lcm(x: int, y: int) -> int:
"""
Least Common Multiple.
Using the property that lcm(a, b) * gcd(a, b) = a*b
>>> lcm(3, 15)
15
>>> lcm(1, 27)
27
>>> lcm(13, 27)
351
>>> lcm(64, 48)
192
"""
return (x * y) // gcd(x, y)
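# Worked example of the identity above (added for illustration):
# lcm(64, 48) = (64 * 48) // gcd(64, 48) = 3072 // 16 = 192, matching the doctest.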
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
"""
g = 1
for i in range(1, n + 1):
g = lcm(g, i)
return g
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
- https://en.wikipedia.org/wiki/Euclidean_algorithm
- https://en.wikipedia.org/wiki/Least_common_multiple
"""
def gcd(x: int, y: int) -> int:
"""
Euclidean GCD algorithm (Greatest Common Divisor)
>>> gcd(0, 0)
0
>>> gcd(23, 42)
1
>>> gcd(15, 33)
3
>>> gcd(12345, 67890)
15
"""
return x if y == 0 else gcd(y, x % y)
def lcm(x: int, y: int) -> int:
"""
Least Common Multiple.
Using the property that lcm(a, b) * gcd(a, b) = a*b
>>> lcm(3, 15)
15
>>> lcm(1, 27)
27
>>> lcm(13, 27)
351
>>> lcm(64, 48)
192
"""
return (x * y) // gcd(x, y)
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
"""
g = 1
for i in range(1, n + 1):
g = lcm(g, i)
return g
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
def double_linear_search(array: list[int], search_item: int) -> int:
"""
Iterate through the array from both sides to find the index of search_item.
:param array: the array to be searched
:param search_item: the item to be searched
:return: the index of search_item, if search_item is in array, else -1
Examples:
>>> double_linear_search([1, 5, 5, 10], 1)
0
>>> double_linear_search([1, 5, 5, 10], 5)
1
>>> double_linear_search([1, 5, 5, 10], 100)
-1
>>> double_linear_search([1, 5, 5, 10], 10)
3
"""
# define the start and end index of the given array
start_ind, end_ind = 0, len(array) - 1
while start_ind <= end_ind:
if array[start_ind] == search_item:
return start_ind
elif array[end_ind] == search_item:
return end_ind
else:
start_ind += 1
end_ind -= 1
# returns -1 if search_item is not found in array
return -1
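# Note (added for illustration): because the scan closes in from both ends, at
# most len(array) // 2 + 1 iterations are needed; in the call below the match at
# index 40 is found while the right pointer has only walked back to index 59.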
if __name__ == "__main__":
print(double_linear_search(list(range(100)), 40))
| from __future__ import annotations
def double_linear_search(array: list[int], search_item: int) -> int:
"""
Iterate through the array from both sides to find the index of search_item.
:param array: the array to be searched
:param search_item: the item to be searched
:return: the index of search_item, if search_item is in array, else -1
Examples:
>>> double_linear_search([1, 5, 5, 10], 1)
0
>>> double_linear_search([1, 5, 5, 10], 5)
1
>>> double_linear_search([1, 5, 5, 10], 100)
-1
>>> double_linear_search([1, 5, 5, 10], 10)
3
"""
# define the start and end index of the given array
start_ind, end_ind = 0, len(array) - 1
while start_ind <= end_ind:
if array[start_ind] == search_item:
return start_ind
elif array[end_ind] == search_item:
return end_ind
else:
start_ind += 1
end_ind -= 1
# returns -1 if search_item is not found in array
return -1
if __name__ == "__main__":
print(double_linear_search(list(range(100)), 40))
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Breath First Search (BFS) can be used when finding the shortest path
from a given source node to a target node in an unweighted graph.
"""
from __future__ import annotations
from typing import Optional
graph = {
"A": ["B", "C", "E"],
"B": ["A", "D", "E"],
"C": ["A", "F", "G"],
"D": ["B"],
"E": ["A", "B", "D"],
"F": ["C"],
"G": ["C"],
}
class Graph:
def __init__(self, graph: dict[str, list[str]], source_vertex: str) -> None:
"""
Graph is implemented as a dictionary of adjacency lists. Also, the
source vertex has to be defined upon initialization.
"""
self.graph = graph
# mapping node to its parent in resulting breadth first tree
self.parent: dict[str, Optional[str]] = {}
self.source_vertex = source_vertex
def breath_first_search(self) -> None:
"""
This function is a helper for running breadth first search on this graph.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
>>> g.parent
{'G': None, 'C': 'G', 'A': 'C', 'F': 'C', 'B': 'A', 'E': 'A', 'D': 'B'}
"""
visited = {self.source_vertex}
self.parent[self.source_vertex] = None
queue = [self.source_vertex] # first in first out queue
while queue:
vertex = queue.pop(0)
for adjacent_vertex in self.graph[vertex]:
if adjacent_vertex not in visited:
visited.add(adjacent_vertex)
self.parent[adjacent_vertex] = vertex
queue.append(adjacent_vertex)
def shortest_path(self, target_vertex: str) -> str:
"""
This shortest path function returns a string, describing the result:
1.) No path is found. The string is a human readable message to indicate this.
2.) The shortest path is found. The string is in the form
`v1(->v2->v3->...->vn)`, where v1 is the source vertex and vn is the target
vertex; the parenthesised part appears only when the target differs from the source.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
Case 1 - No path is found.
>>> g.shortest_path("Foo")
'No path from vertex:G to vertex:Foo'
Case 2 - The path is found.
>>> g.shortest_path("D")
'G->C->A->B->D'
>>> g.shortest_path("G")
'G'
"""
if target_vertex == self.source_vertex:
return self.source_vertex
target_vertex_parent = self.parent.get(target_vertex)
if target_vertex_parent is None:
return f"No path from vertex:{self.source_vertex} to vertex:{target_vertex}"
return self.shortest_path(target_vertex_parent) + f"->{target_vertex}"
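# A minimal free-function sketch of the same traversal using collections.deque
# instead of list.pop(0) (added for illustration; this helper is not part of
# the original module).
def bfs_parents(adjacency: dict[str, list[str]], source: str) -> dict[str, Optional[str]]:
    """Return the breadth-first tree as a vertex -> parent mapping.
    >>> bfs_parents(graph, "G")["D"]
    'B'
    """
    from collections import deque

    parents: dict[str, Optional[str]] = {source: None}
    queue = deque([source])
    while queue:
        vertex = queue.popleft()
        for neighbour in adjacency[vertex]:
            if neighbour not in parents:
                parents[neighbour] = vertex
                queue.append(neighbour)
    return parents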
if __name__ == "__main__":
g = Graph(graph, "G")
g.breath_first_search()
print(g.shortest_path("D"))
print(g.shortest_path("G"))
print(g.shortest_path("Foo"))
| """Breath First Search (BFS) can be used when finding the shortest path
from a given source node to a target node in an unweighted graph.
"""
from __future__ import annotations
from typing import Optional
graph = {
"A": ["B", "C", "E"],
"B": ["A", "D", "E"],
"C": ["A", "F", "G"],
"D": ["B"],
"E": ["A", "B", "D"],
"F": ["C"],
"G": ["C"],
}
class Graph:
def __init__(self, graph: dict[str, list[str]], source_vertex: str) -> None:
"""
Graph is implemented as a dictionary of adjacency lists. Also, the
source vertex has to be defined upon initialization.
"""
self.graph = graph
# mapping node to its parent in resulting breadth first tree
self.parent: dict[str, Optional[str]] = {}
self.source_vertex = source_vertex
def breath_first_search(self) -> None:
"""
This function is a helper for running breadth first search on this graph.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
>>> g.parent
{'G': None, 'C': 'G', 'A': 'C', 'F': 'C', 'B': 'A', 'E': 'A', 'D': 'B'}
"""
visited = {self.source_vertex}
self.parent[self.source_vertex] = None
queue = [self.source_vertex] # first in first out queue
while queue:
vertex = queue.pop(0)
for adjacent_vertex in self.graph[vertex]:
if adjacent_vertex not in visited:
visited.add(adjacent_vertex)
self.parent[adjacent_vertex] = vertex
queue.append(adjacent_vertex)
def shortest_path(self, target_vertex: str) -> str:
"""
This shortest path function returns a string, describing the result:
1.) No path is found. The string is a human readable message to indicate this.
2.) The shortest path is found. The string is in the form
`v1(->v2->v3->...->vn)`, where v1 is the source vertex and vn is the target
vertex; the parenthesised part appears only when the target differs from the source.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
Case 1 - No path is found.
>>> g.shortest_path("Foo")
'No path from vertex:G to vertex:Foo'
Case 2 - The path is found.
>>> g.shortest_path("D")
'G->C->A->B->D'
>>> g.shortest_path("G")
'G'
"""
if target_vertex == self.source_vertex:
return self.source_vertex
target_vertex_parent = self.parent.get(target_vertex)
if target_vertex_parent is None:
return f"No path from vertex:{self.source_vertex} to vertex:{target_vertex}"
return self.shortest_path(target_vertex_parent) + f"->{target_vertex}"
if __name__ == "__main__":
g = Graph(graph, "G")
g.breath_first_search()
print(g.shortest_path("D"))
print(g.shortest_path("G"))
print(g.shortest_path("Foo"))
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def gcd(a: int, b: int) -> int:
while a != 0:
a, b = b % a, a
return b
def find_mod_inverse(a: int, m: int) -> int:
if gcd(a, m) != 1:
raise ValueError(f"mod inverse of {a!r} and {m!r} does not exist")
u1, u2, u3 = 1, 0, a
v1, v2, v3 = 0, 1, m
while v3 != 0:
q = u3 // v3
v1, v2, v3, u1, u2, u3 = (u1 - q * v1), (u2 - q * v2), (u3 - q * v3), v1, v2, v3
return u1 % m
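# Usage sketch (added for illustration; not part of the original module):
# 4 is the inverse of 3 modulo 11, because 3 * 4 = 12 == 1 (mod 11).
if __name__ == "__main__":
    assert find_mod_inverse(3, 11) == 4
    assert (3 * find_mod_inverse(3, 11)) % 11 == 1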
| def gcd(a: int, b: int) -> int:
while a != 0:
a, b = b % a, a
return b
def find_mod_inverse(a: int, m: int) -> int:
if gcd(a, m) != 1:
raise ValueError(f"mod inverse of {a!r} and {m!r} does not exist")
u1, u2, u3 = 1, 0, a
v1, v2, v3 = 0, 1, m
while v3 != 0:
q = u3 // v3
v1, v2, v3, u1, u2, u3 = (u1 - q * v1), (u2 - q * v2), (u3 - q * v3), v1, v2, v3
return u1 % m
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Peak signal-to-noise ratio - PSNR
https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
Source:
https://tutorials.techonical.com/how-to-calculate-psnr-value-of-two-images-using-python
"""
import math
import os
import cv2
import numpy as np
def psnr(original, contrast):
mse = np.mean((original - contrast) ** 2)
if mse == 0:
return 100
PIXEL_MAX = 255.0
PSNR = 20 * math.log10(PIXEL_MAX / math.sqrt(mse))
return PSNR
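# Illustrative helper (added; not part of the original script): PSNR of a
# constant image against a copy shifted by one grey level, no image files needed.
def psnr_synthetic_example() -> float:
    """
    >>> round(psnr_synthetic_example(), 2)
    48.13
    """
    original = np.full((8, 8), 100.0)
    contrast = original + 1.0  # mse == 1.0, so PSNR == 20 * log10(255 / 1)
    return psnr(original, contrast)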
def main():
dir_path = os.path.dirname(os.path.realpath(__file__))
# Loading images (original image and compressed image)
original = cv2.imread(os.path.join(dir_path, "image_data/original_image.png"))
contrast = cv2.imread(os.path.join(dir_path, "image_data/compressed_image.png"), 1)
original2 = cv2.imread(os.path.join(dir_path, "image_data/PSNR-example-base.png"))
contrast2 = cv2.imread(
os.path.join(dir_path, "image_data/PSNR-example-comp-10.jpg"), 1
)
# Value expected: 29.73dB
print("-- First Test --")
print(f"PSNR value is {psnr(original, contrast)} dB")
# # Value expected: 31.53dB (Wikipedia Example)
print("\n-- Second Test --")
print(f"PSNR value is {psnr(original2, contrast2)} dB")
if __name__ == "__main__":
main()
| """
Peak signal-to-noise ratio - PSNR
https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
Source:
https://tutorials.techonical.com/how-to-calculate-psnr-value-of-two-images-using-python
"""
import math
import os
import cv2
import numpy as np
def psnr(original, contrast):
mse = np.mean((original - contrast) ** 2)
if mse == 0:
return 100
PIXEL_MAX = 255.0
PSNR = 20 * math.log10(PIXEL_MAX / math.sqrt(mse))
return PSNR
def main():
dir_path = os.path.dirname(os.path.realpath(__file__))
# Loading images (original image and compressed image)
original = cv2.imread(os.path.join(dir_path, "image_data/original_image.png"))
contrast = cv2.imread(os.path.join(dir_path, "image_data/compressed_image.png"), 1)
original2 = cv2.imread(os.path.join(dir_path, "image_data/PSNR-example-base.png"))
contrast2 = cv2.imread(
os.path.join(dir_path, "image_data/PSNR-example-comp-10.jpg"), 1
)
# Value expected: 29.73dB
print("-- First Test --")
print(f"PSNR value is {psnr(original, contrast)} dB")
# # Value expected: 31.53dB (Wikipedia Example)
print("\n-- Second Test --")
print(f"PSNR value is {psnr(original2, contrast2)} dB")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 86: https://projecteuler.net/problem=86
A spider, S, sits in one corner of a cuboid room, measuring 6 by 5 by 3, and a fly, F,
sits in the opposite corner. By travelling on the surfaces of the room the shortest
"straight line" distance from S to F is 10 and the path is shown on the diagram.

However, there are up to three "shortest" path candidates for any given cuboid and the
shortest route doesn't always have integer length.
It can be shown that there are exactly 2060 distinct cuboids, ignoring rotations, with
integer dimensions, up to a maximum size of M by M by M, for which the shortest route
has integer length when M = 100. This is the least value of M for which the number of
solutions first exceeds two thousand; the number of solutions when M = 99 is 1975.
Find the least value of M such that the number of solutions first exceeds one million.
Solution:
Label the 3 side-lengths of the cuboid a,b,c such that 1 <= a <= b <= c <= M.
By conceptually "opening up" the cuboid and laying out its faces on a plane,
it can be seen that the shortest distance between 2 opposite corners is
sqrt((a+b)^2 + c^2). This distance is an integer if and only if (a+b),c make up
the first 2 sides of a pythagorean triplet.
The second useful insight is rather than calculate the number of cuboids
with integral shortest distance for each maximum cuboid side-length M,
we can calculate this number iteratively each time we increase M, as follows.
The set of cuboids satisfying this property with maximum side-length M-1 is a
subset of the cuboids satisfying the property with maximum side-length M
(since any cuboids with side lengths <= M-1 are also <= M). To calculate the
number of cuboids in the larger set (corresponding to M) we need only consider
the cuboids which have at least one side of length M. Since we have ordered the
side lengths a <= b <= c, we can assume that c = M. Then we just need to count
the number of pairs a,b satisfying the conditions:
sqrt((a+b)^2 + M^2) is integer
1 <= a <= b <= M
To count the number of pairs (a,b) satisfying these conditions, write d = a+b.
Now we have:
1 <= a <= b <= M => 2 <= d <= 2*M
we can actually make the second inequality strict,
since d = 2*M => d^2 + M^2 = 5M^2
=> shortest distance = M * sqrt(5)
=> not integral.
a + b = d => b = d - a
and a <= b
=> a <= d/2
also a <= M
=> a <= min(M, d//2)
a + b = d => a = d - b
and b <= M
=> a >= d - M
also a >= 1
=> a >= max(1, d - M)
So a is in range(max(1, d - M), min(M, d // 2) + 1)
For a given d, the number of cuboids satisfying the required property with c = M
and a + b = d is the length of this range, which is
min(M, d // 2) + 1 - max(1, d - M).
In the code below, d is sum_shortest_sides
and M is max_cuboid_size.
"""
from math import sqrt
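# Worked check of the unfolding argument above (added for illustration, not
# part of the original solution): for the 6 x 5 x 3 room from the problem
# statement, with a, b, c = 3, 5, 6, the shortest route is
# sqrt((3 + 5) ** 2 + 6 ** 2) = sqrt(100) = 10, matching the diagram.
assert sqrt((3 + 5) ** 2 + 6 ** 2) == 10.0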
def solution(limit: int = 1000000) -> int:
"""
Return the least value of M such that there are more than one million cuboids
of side lengths 1 <= a,b,c <= M such that the shortest distance between two
opposite vertices of the cuboid is integral.
>>> solution(100)
24
>>> solution(1000)
72
>>> solution(2000)
100
>>> solution(20000)
288
"""
num_cuboids: int = 0
max_cuboid_size: int = 0
sum_shortest_sides: int
while num_cuboids <= limit:
max_cuboid_size += 1
for sum_shortest_sides in range(2, 2 * max_cuboid_size + 1):
if sqrt(sum_shortest_sides ** 2 + max_cuboid_size ** 2).is_integer():
num_cuboids += (
min(max_cuboid_size, sum_shortest_sides // 2)
- max(1, sum_shortest_sides - max_cuboid_size)
+ 1
)
return max_cuboid_size
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 86: https://projecteuler.net/problem=86
A spider, S, sits in one corner of a cuboid room, measuring 6 by 5 by 3, and a fly, F,
sits in the opposite corner. By travelling on the surfaces of the room the shortest
"straight line" distance from S to F is 10 and the path is shown on the diagram.

However, there are up to three "shortest" path candidates for any given cuboid and the
shortest route doesn't always have integer length.
It can be shown that there are exactly 2060 distinct cuboids, ignoring rotations, with
integer dimensions, up to a maximum size of M by M by M, for which the shortest route
has integer length when M = 100. This is the least value of M for which the number of
solutions first exceeds two thousand; the number of solutions when M = 99 is 1975.
Find the least value of M such that the number of solutions first exceeds one million.
Solution:
Label the 3 side-lengths of the cuboid a,b,c such that 1 <= a <= b <= c <= M.
By conceptually "opening up" the cuboid and laying out its faces on a plane,
it can be seen that the shortest distance between 2 opposite corners is
sqrt((a+b)^2 + c^2). This distance is an integer if and only if (a+b),c make up
the first 2 sides of a pythagorean triplet.
The second useful insight is rather than calculate the number of cuboids
with integral shortest distance for each maximum cuboid side-length M,
we can calculate this number iteratively each time we increase M, as follows.
The set of cuboids satisfying this property with maximum side-length M-1 is a
subset of the cuboids satisfying the property with maximum side-length M
(since any cuboids with side lengths <= M-1 are also <= M). To calculate the
number of cuboids in the larger set (corresponding to M) we need only consider
the cuboids which have at least one side of length M. Since we have ordered the
side lengths a <= b <= c, we can assume that c = M. Then we just need to count
the number of pairs a,b satisfying the conditions:
sqrt((a+b)^2 + M^2) is integer
1 <= a <= b <= M
To count the number of pairs (a,b) satisfying these conditions, write d = a+b.
Now we have:
1 <= a <= b <= M => 2 <= d <= 2*M
we can actually make the second inequality strict,
since d = 2*M => d^2 + M^2 = 5M^2
=> shortest distance = M * sqrt(5)
=> not integral.
a + b = d => b = d - a
and a <= b
=> a <= d/2
also a <= M
=> a <= min(M, d//2)
a + b = d => a = d - b
and b <= M
=> a >= d - M
also a >= 1
=> a >= max(1, d - M)
So a is in range(max(1, d - M), min(M, d // 2) + 1)
For a given d, the number of cuboids satisfying the required property with c = M
and a + b = d is the length of this range, which is
min(M, d // 2) + 1 - max(1, d - M).
In the code below, d is sum_shortest_sides
and M is max_cuboid_size.
"""
from math import sqrt
def solution(limit: int = 1000000) -> int:
"""
Return the least value of M such that there are more than one million cuboids
of side lengths 1 <= a,b,c <= M such that the shortest distance between two
opposite vertices of the cuboid is integral.
>>> solution(100)
24
>>> solution(1000)
72
>>> solution(2000)
100
>>> solution(20000)
288
"""
num_cuboids: int = 0
max_cuboid_size: int = 0
sum_shortest_sides: int
while num_cuboids <= limit:
max_cuboid_size += 1
for sum_shortest_sides in range(2, 2 * max_cuboid_size + 1):
if sqrt(sum_shortest_sides ** 2 + max_cuboid_size ** 2).is_integer():
num_cuboids += (
min(max_cuboid_size, sum_shortest_sides // 2)
- max(1, sum_shortest_sides - max_cuboid_size)
+ 1
)
return max_cuboid_size
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of the Harmonic Series algorithm
https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)
For doctests run the following command:
python -m doctest -v harmonic_series.py
or
python3 -m doctest -v harmonic_series.py
For manual testing run:
python3 harmonic_series.py
"""
def harmonic_series(n_term: str) -> list:
"""Pure Python implementation of Harmonic Series algorithm
:param n_term: The last (nth) term of Harmonic Series
:return: The Harmonic Series starting from 1 to last (nth) term
Examples:
>>> harmonic_series(5)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(5.0)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(5.1)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(-5)
[]
>>> harmonic_series(0)
[]
>>> harmonic_series(1)
['1']
"""
if n_term == "":
return n_term
series = []
for temp in range(int(n_term)):
series.append(f"1/{temp + 1}" if series else "1")
return series
if __name__ == "__main__":
nth_term = input("Enter the last number (nth term) of the Harmonic Series")
print("Formula of Harmonic Series => 1+1/2+1/3 ..... 1/n")
print(harmonic_series(nth_term))
| """
This is a pure Python implementation of the Harmonic Series algorithm
https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)
For doctests run the following command:
python -m doctest -v harmonic_series.py
or
python3 -m doctest -v harmonic_series.py
For manual testing run:
python3 harmonic_series.py
"""
def harmonic_series(n_term: str) -> list:
"""Pure Python implementation of Harmonic Series algorithm
:param n_term: The last (nth) term of Harmonic Series
:return: The Harmonic Series starting from 1 to last (nth) term
Examples:
>>> harmonic_series(5)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(5.0)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(5.1)
['1', '1/2', '1/3', '1/4', '1/5']
>>> harmonic_series(-5)
[]
>>> harmonic_series(0)
[]
>>> harmonic_series(1)
['1']
"""
if n_term == "":
return n_term
series = []
for temp in range(int(n_term)):
series.append(f"1/{temp + 1}" if series else "1")
return series
if __name__ == "__main__":
nth_term = input("Enter the last number (nth term) of the Harmonic Series")
print("Formula of Harmonic Series => 1+1/2+1/3 ..... 1/n")
print(harmonic_series(nth_term))
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The nth term of the sequence of triangle numbers is given by, tn = ½n(n+1); so
the first ten triangle numbers are:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
By converting each letter in a word to a number corresponding to its
alphabetical position and adding these values we form a word value. For example,
the word value for SKY is 19 + 11 + 25 = 55 = t10. If the word value is a
triangle number then we shall call the word a triangle word.
Using words.txt (right click and 'Save Link/Target As...'), a 16K text file
containing nearly two-thousand common English words, how many are triangle
words?
"""
import os
# Precomputes a list of the first 100 triangular numbers
TRIANGULAR_NUMBERS = [int(0.5 * n * (n + 1)) for n in range(1, 101)]
def solution():
"""
Finds the number of triangle words in the words file.
>>> solution()
162
"""
script_dir = os.path.dirname(os.path.realpath(__file__))
words_file_path = os.path.join(script_dir, "words.txt")
words = ""
with open(words_file_path) as f:
words = f.readline()
words = list(map(lambda word: word.strip('"'), words.strip("\r\n").split(",")))
words = list(
filter(
lambda word: word in TRIANGULAR_NUMBERS,
map(lambda word: sum(map(lambda x: ord(x) - 64, word)), words),
)
)
return len(words)
if __name__ == "__main__":
print(solution())
| """
The nth term of the sequence of triangle numbers is given by, tn = ½n(n+1); so
the first ten triangle numbers are:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
By converting each letter in a word to a number corresponding to its
alphabetical position and adding these values we form a word value. For example,
the word value for SKY is 19 + 11 + 25 = 55 = t10. If the word value is a
triangle number then we shall call the word a triangle word.
Using words.txt (right click and 'Save Link/Target As...'), a 16K text file
containing nearly two-thousand common English words, how many are triangle
words?
"""
import os
# Precomputes a list of the first 100 triangular numbers
TRIANGULAR_NUMBERS = [int(0.5 * n * (n + 1)) for n in range(1, 101)]
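# Worked example from the problem statement: "SKY" has word value
# (ord("S") - 64) + (ord("K") - 64) + (ord("Y") - 64) = 19 + 11 + 25 = 55,
# which is TRIANGULAR_NUMBERS[9] (t10), so SKY is a triangle word.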
def solution():
"""
Finds the number of triangle words in the words file.
>>> solution()
162
"""
script_dir = os.path.dirname(os.path.realpath(__file__))
words_file_path = os.path.join(script_dir, "words.txt")
words = ""
with open(words_file_path) as f:
words = f.readline()
words = list(map(lambda word: word.strip('"'), words.strip("\r\n").split(",")))
words = list(
filter(
lambda word: word in TRIANGULAR_NUMBERS,
map(lambda word: sum(map(lambda x: ord(x) - 64, word)), words),
)
)
return len(words)
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from decimal import Decimal, getcontext
from math import ceil, factorial
def pi(precision: int) -> str:
"""
The Chudnovsky algorithm is a fast method for calculating the digits of PI,
based on Ramanujan’s PI formulae.
https://en.wikipedia.org/wiki/Chudnovsky_algorithm
PI = constant_term / ((multinomial_term * linear_term) / exponential_term)
where constant_term = 426880 * sqrt(10005)
The linear_term and the exponential_term can be defined iteratively as follows:
L_k+1 = L_k + 545140134 where L_0 = 13591409
X_k+1 = X_k * -262537412640768000 where X_0 = 1
The multinomial_term is defined as follows:
6k! / ((3k)! * (k!) ^ 3)
where k is the k_th iteration.
This algorithm correctly calculates around 14 digits of PI per iteration
>>> pi(10)
'3.14159265'
>>> pi(100)
'3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706'
>>> pi('hello')
Traceback (most recent call last):
...
TypeError: Undefined for non-integers
>>> pi(-1)
Traceback (most recent call last):
...
ValueError: Undefined for non-natural numbers
"""
if not isinstance(precision, int):
raise TypeError("Undefined for non-integers")
elif precision < 1:
raise ValueError("Undefined for non-natural numbers")
getcontext().prec = precision
num_iterations = ceil(precision / 14)
constant_term = 426880 * Decimal(10005).sqrt()
exponential_term = 1
linear_term = 13591409
partial_sum = Decimal(linear_term)
for k in range(1, num_iterations):
multinomial_term = factorial(6 * k) // (factorial(3 * k) * factorial(k) ** 3)
linear_term += 545140134
exponential_term *= -262537412640768000
partial_sum += Decimal(multinomial_term * linear_term) / exponential_term
return str(constant_term / partial_sum)[:-1]
if __name__ == "__main__":
n = 50
print(f"The first {n} digits of pi is: {pi(n)}")
| from decimal import Decimal, getcontext
from math import ceil, factorial
def pi(precision: int) -> str:
"""
The Chudnovsky algorithm is a fast method for calculating the digits of PI,
based on Ramanujan’s PI formulae.
https://en.wikipedia.org/wiki/Chudnovsky_algorithm
PI = constant_term / ((multinomial_term * linear_term) / exponential_term)
where constant_term = 426880 * sqrt(10005)
The linear_term and the exponential_term can be defined iteratively as follows:
L_k+1 = L_k + 545140134 where L_0 = 13591409
X_k+1 = X_k * -262537412640768000 where X_0 = 1
The multinomial_term is defined as follows:
6k! / ((3k)! * (k!) ^ 3)
where k is the k_th iteration.
This algorithm correctly calculates around 14 digits of PI per iteration
>>> pi(10)
'3.14159265'
>>> pi(100)
'3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706'
>>> pi('hello')
Traceback (most recent call last):
...
TypeError: Undefined for non-integers
>>> pi(-1)
Traceback (most recent call last):
...
ValueError: Undefined for non-natural numbers
"""
if not isinstance(precision, int):
raise TypeError("Undefined for non-integers")
elif precision < 1:
raise ValueError("Undefined for non-natural numbers")
getcontext().prec = precision
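# As noted in the docstring, each iteration yields roughly 14 correct digits,
# so ceil(precision / 14) iterations are enough for the requested precision.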
num_iterations = ceil(precision / 14)
constant_term = 426880 * Decimal(10005).sqrt()
exponential_term = 1
linear_term = 13591409
partial_sum = Decimal(linear_term)
for k in range(1, num_iterations):
multinomial_term = factorial(6 * k) // (factorial(3 * k) * factorial(k) ** 3)
linear_term += 545140134
exponential_term *= -262537412640768000
partial_sum += Decimal(multinomial_term * linear_term) / exponential_term
return str(constant_term / partial_sum)[:-1]
if __name__ == "__main__":
n = 50
print(f"The first {n} digits of pi is: {pi(n)}")
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This script demonstrates the implementation of the Softmax function.
It's a function that takes as input a vector of K real numbers and normalizes
it into a probability distribution consisting of K probabilities proportional
to the exponentials of the input numbers. After softmax, the elements of the
vector always sum up to 1.
Script inspired by its corresponding Wikipedia article
https://en.wikipedia.org/wiki/Softmax_function
"""
import numpy as np
def softmax(vector):
"""
Implements the softmax function
Parameters:
vector (np.array,list,tuple): A numpy array of shape (1,n)
consisting of real values or a similar list,tuple
Returns:
softmax_vec (np.array): The input numpy array after applying
softmax.
The softmax vector adds up to one. We ceil the sum in the first doctest
below to mitigate floating point imprecision
>>> np.ceil(np.sum(softmax([1,2,3,4])))
1.0
>>> vec = np.array([5,5])
>>> softmax(vec)
array([0.5, 0.5])
>>> softmax([0])
array([1.])
"""
# Calculate e^x for each x in your vector where e is Euler's
# number (approximately 2.718)
exponent_vector = np.exp(vector)
# Add up all the exponentials
sum_of_exponents = np.sum(exponent_vector)
# Divide every exponent by the sum of all exponents
softmax_vector = exponent_vector / sum_of_exponents
return softmax_vector
if __name__ == "__main__":
print(softmax((0,)))
| """
This script demonstrates the implementation of the Softmax function.
It's a function that takes as input a vector of K real numbers and normalizes
it into a probability distribution consisting of K probabilities proportional
to the exponentials of the input numbers. After softmax, the elements of the
vector always sum up to 1.
Script inspired by its corresponding Wikipedia article
https://en.wikipedia.org/wiki/Softmax_function
"""
import numpy as np
def softmax(vector):
"""
Implements the softmax function
Parameters:
vector (np.array,list,tuple): A numpy array of shape (1,n)
consisting of real values or a similar list,tuple
Returns:
softmax_vec (np.array): The input numpy array after applying
softmax.
The softmax vector adds up to one. We ceil the sum in the first doctest
below to mitigate floating point imprecision
>>> np.ceil(np.sum(softmax([1,2,3,4])))
1.0
>>> vec = np.array([5,5])
>>> softmax(vec)
array([0.5, 0.5])
>>> softmax([0])
array([1.])
"""
# Calculate e^x for each x in your vector where e is Euler's
# number (approximately 2.718)
exponent_vector = np.exp(vector)
# Add up all the exponentials
sum_of_exponents = np.sum(exponent_vector)
# Divide every exponent by the sum of all exponents
softmax_vector = exponent_vector / sum_of_exponents
return softmax_vector
if __name__ == "__main__":
print(softmax((0,)))
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is used to convert the currency using the Amdoren Currency API
https://www.amdoren.com
"""
import os
import requests
URL_BASE = "https://www.amdoren.com/api/currency.php"
TESTING = os.getenv("CI", False)
API_KEY = os.getenv("AMDOREN_API_KEY", "")
if not API_KEY and not TESTING:
raise KeyError("Please put your API key in an environment variable.")
# Currency and their description
list_of_currencies = """
AED United Arab Emirates Dirham
AFN Afghan Afghani
ALL Albanian Lek
AMD Armenian Dram
ANG Netherlands Antillean Guilder
AOA Angolan Kwanza
ARS Argentine Peso
AUD Australian Dollar
AWG Aruban Florin
AZN Azerbaijani Manat
BAM Bosnia & Herzegovina Convertible Mark
BBD Barbadian Dollar
BDT Bangladeshi Taka
BGN Bulgarian Lev
BHD Bahraini Dinar
BIF Burundian Franc
BMD Bermudian Dollar
BND Brunei Dollar
BOB Bolivian Boliviano
BRL Brazilian Real
BSD Bahamian Dollar
BTN Bhutanese Ngultrum
BWP Botswana Pula
BYN Belarus Ruble
BZD Belize Dollar
CAD Canadian Dollar
CDF Congolese Franc
CHF Swiss Franc
CLP Chilean Peso
CNY Chinese Yuan
COP Colombian Peso
CRC Costa Rican Colon
CUC Cuban Convertible Peso
CVE Cape Verdean Escudo
CZK Czech Republic Koruna
DJF Djiboutian Franc
DKK Danish Krone
DOP Dominican Peso
DZD Algerian Dinar
EGP Egyptian Pound
ERN Eritrean Nakfa
ETB Ethiopian Birr
EUR Euro
FJD Fiji Dollar
GBP British Pound Sterling
GEL Georgian Lari
GHS Ghanaian Cedi
GIP Gibraltar Pound
GMD Gambian Dalasi
GNF Guinea Franc
GTQ Guatemalan Quetzal
GYD Guyanaese Dollar
HKD Hong Kong Dollar
HNL Honduran Lempira
HRK Croatian Kuna
HTG Haiti Gourde
HUF Hungarian Forint
IDR Indonesian Rupiah
ILS Israeli Shekel
INR Indian Rupee
IQD Iraqi Dinar
IRR Iranian Rial
ISK Icelandic Krona
JMD Jamaican Dollar
JOD Jordanian Dinar
JPY Japanese Yen
KES Kenyan Shilling
KGS Kyrgystani Som
KHR Cambodian Riel
KMF Comorian Franc
KPW North Korean Won
KRW South Korean Won
KWD Kuwaiti Dinar
KYD Cayman Islands Dollar
KZT Kazakhstan Tenge
LAK Laotian Kip
LBP Lebanese Pound
LKR Sri Lankan Rupee
LRD Liberian Dollar
LSL Lesotho Loti
LYD Libyan Dinar
MAD Moroccan Dirham
MDL Moldovan Leu
MGA Malagasy Ariary
MKD Macedonian Denar
MMK Myanma Kyat
MNT Mongolian Tugrik
MOP Macau Pataca
MRO Mauritanian Ouguiya
MUR Mauritian Rupee
MVR Maldivian Rufiyaa
MWK Malawi Kwacha
MXN Mexican Peso
MYR Malaysian Ringgit
MZN Mozambican Metical
NAD Namibian Dollar
NGN Nigerian Naira
NIO Nicaragua Cordoba
NOK Norwegian Krone
NPR Nepalese Rupee
NZD New Zealand Dollar
OMR Omani Rial
PAB Panamanian Balboa
PEN Peruvian Nuevo Sol
PGK Papua New Guinean Kina
PHP Philippine Peso
PKR Pakistani Rupee
PLN Polish Zloty
PYG Paraguayan Guarani
QAR Qatari Riyal
RON Romanian Leu
RSD Serbian Dinar
RUB Russian Ruble
RWF Rwanda Franc
SAR Saudi Riyal
SBD Solomon Islands Dollar
SCR Seychellois Rupee
SDG Sudanese Pound
SEK Swedish Krona
SGD Singapore Dollar
SHP Saint Helena Pound
SLL Sierra Leonean Leone
SOS Somali Shilling
SRD Surinamese Dollar
SSP South Sudanese Pound
STD Sao Tome and Principe Dobra
SYP Syrian Pound
SZL Swazi Lilangeni
THB Thai Baht
TJS Tajikistan Somoni
TMT Turkmenistani Manat
TND Tunisian Dinar
TOP Tonga Paanga
TRY Turkish Lira
TTD Trinidad and Tobago Dollar
TWD New Taiwan Dollar
TZS Tanzanian Shilling
UAH Ukrainian Hryvnia
UGX Ugandan Shilling
USD United States Dollar
UYU Uruguayan Peso
UZS Uzbekistan Som
VEF Venezuelan Bolivar
VND Vietnamese Dong
VUV Vanuatu Vatu
WST Samoan Tala
XAF Central African CFA franc
XCD East Caribbean Dollar
XOF West African CFA franc
XPF CFP Franc
YER Yemeni Rial
ZAR South African Rand
ZMW Zambian Kwacha
"""
def convert_currency(
from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = API_KEY
) -> str:
"""https://www.amdoren.com/currency-api/"""
params = locals()
params["from"] = params.pop("from_")
res = requests.get(URL_BASE, params=params).json()
return str(res["amount"]) if res["error"] == 0 else res["error_message"]
if __name__ == "__main__":
print(
convert_currency(
input("Enter from currency: ").strip(),
input("Enter to currency: ").strip(),
float(input("Enter the amount: ").strip()),
)
)
| """
This is used to convert the currency using the Amdoren Currency API
https://www.amdoren.com
"""
import os
import requests
URL_BASE = "https://www.amdoren.com/api/currency.php"
TESTING = os.getenv("CI", False)
API_KEY = os.getenv("AMDOREN_API_KEY", "")
if not API_KEY and not TESTING:
raise KeyError("Please put your API key in an environment variable.")
# Currency and their description
list_of_currencies = """
AED United Arab Emirates Dirham
AFN Afghan Afghani
ALL Albanian Lek
AMD Armenian Dram
ANG Netherlands Antillean Guilder
AOA Angolan Kwanza
ARS Argentine Peso
AUD Australian Dollar
AWG Aruban Florin
AZN Azerbaijani Manat
BAM Bosnia & Herzegovina Convertible Mark
BBD Barbadian Dollar
BDT Bangladeshi Taka
BGN Bulgarian Lev
BHD Bahraini Dinar
BIF Burundian Franc
BMD Bermudian Dollar
BND Brunei Dollar
BOB Bolivian Boliviano
BRL Brazilian Real
BSD Bahamian Dollar
BTN Bhutanese Ngultrum
BWP Botswana Pula
BYN Belarus Ruble
BZD Belize Dollar
CAD Canadian Dollar
CDF Congolese Franc
CHF Swiss Franc
CLP Chilean Peso
CNY Chinese Yuan
COP Colombian Peso
CRC Costa Rican Colon
CUC Cuban Convertible Peso
CVE Cape Verdean Escudo
CZK Czech Republic Koruna
DJF Djiboutian Franc
DKK Danish Krone
DOP Dominican Peso
DZD Algerian Dinar
EGP Egyptian Pound
ERN Eritrean Nakfa
ETB Ethiopian Birr
EUR Euro
FJD Fiji Dollar
GBP British Pound Sterling
GEL Georgian Lari
GHS Ghanaian Cedi
GIP Gibraltar Pound
GMD Gambian Dalasi
GNF Guinea Franc
GTQ Guatemalan Quetzal
GYD Guyanaese Dollar
HKD Hong Kong Dollar
HNL Honduran Lempira
HRK Croatian Kuna
HTG Haiti Gourde
HUF Hungarian Forint
IDR Indonesian Rupiah
ILS Israeli Shekel
INR Indian Rupee
IQD Iraqi Dinar
IRR Iranian Rial
ISK Icelandic Krona
JMD Jamaican Dollar
JOD Jordanian Dinar
JPY Japanese Yen
KES Kenyan Shilling
KGS Kyrgystani Som
KHR Cambodian Riel
KMF Comorian Franc
KPW North Korean Won
KRW South Korean Won
KWD Kuwaiti Dinar
KYD Cayman Islands Dollar
KZT Kazakhstan Tenge
LAK Laotian Kip
LBP Lebanese Pound
LKR Sri Lankan Rupee
LRD Liberian Dollar
LSL Lesotho Loti
LYD Libyan Dinar
MAD Moroccan Dirham
MDL Moldovan Leu
MGA Malagasy Ariary
MKD Macedonian Denar
MMK Myanma Kyat
MNT Mongolian Tugrik
MOP Macau Pataca
MRO Mauritanian Ouguiya
MUR Mauritian Rupee
MVR Maldivian Rufiyaa
MWK Malawi Kwacha
MXN Mexican Peso
MYR Malaysian Ringgit
MZN Mozambican Metical
NAD Namibian Dollar
NGN Nigerian Naira
NIO Nicaragua Cordoba
NOK Norwegian Krone
NPR Nepalese Rupee
NZD New Zealand Dollar
OMR Omani Rial
PAB Panamanian Balboa
PEN Peruvian Nuevo Sol
PGK Papua New Guinean Kina
PHP Philippine Peso
PKR Pakistani Rupee
PLN Polish Zloty
PYG Paraguayan Guarani
QAR Qatari Riyal
RON Romanian Leu
RSD Serbian Dinar
RUB Russian Ruble
RWF Rwanda Franc
SAR Saudi Riyal
SBD Solomon Islands Dollar
SCR Seychellois Rupee
SDG Sudanese Pound
SEK Swedish Krona
SGD Singapore Dollar
SHP Saint Helena Pound
SLL Sierra Leonean Leone
SOS Somali Shilling
SRD Surinamese Dollar
SSP South Sudanese Pound
STD Sao Tome and Principe Dobra
SYP Syrian Pound
SZL Swazi Lilangeni
THB Thai Baht
TJS Tajikistan Somoni
TMT Turkmenistani Manat
TND Tunisian Dinar
TOP Tonga Paanga
TRY Turkish Lira
TTD Trinidad and Tobago Dollar
TWD New Taiwan Dollar
TZS Tanzanian Shilling
UAH Ukrainian Hryvnia
UGX Ugandan Shilling
USD United States Dollar
UYU Uruguayan Peso
UZS Uzbekistan Som
VEF Venezuelan Bolivar
VND Vietnamese Dong
VUV Vanuatu Vatu
WST Samoan Tala
XAF Central African CFA franc
XCD East Caribbean Dollar
XOF West African CFA franc
XPF CFP Franc
YER Yemeni Rial
ZAR South African Rand
ZMW Zambian Kwacha
"""
def convert_currency(
from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = API_KEY
) -> str:
"""https://www.amdoren.com/currency-api/"""
params = locals()
params["from"] = params.pop("from_")
res = requests.get(URL_BASE, params=params).json()
return str(res["amount"]) if res["error"] == 0 else res["error_message"]
if __name__ == "__main__":
print(
convert_currency(
input("Enter from currency: ").strip(),
input("Enter to currency: ").strip(),
float(input("Enter the amount: ").strip()),
)
)
| -1 |
TheAlgorithms/Python | 4,613 | Fix mypy error at maths | Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| imp2002 | "2021-08-15T04:46:15Z" | "2021-08-15T19:15:53Z" | 032999f36ed6eef61752e6bc5e399020988b06bd | d009cea391414bfef17520ba6b64e4c2d97163ed | Fix mypy error at maths. Related issue #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Reverse Polish Notation, also known as Polish postfix notation
or simply postfix notation.
https://en.wikipedia.org/wiki/Reverse_Polish_notation
Classic examples of simple stack implementations
Valid operators are +, -, *, /.
Each operand may be an integer or another expression.
"""
def evaluate_postfix(postfix_notation: list) -> int:
"""
>>> evaluate_postfix(["2", "1", "+", "3", "*"])
9
>>> evaluate_postfix(["4", "13", "5", "/", "+"])
6
>>> evaluate_postfix([])
0
"""
if not postfix_notation:
return 0
operations = {"+", "-", "*", "/"}
stack = []
for token in postfix_notation:
if token in operations:
b, a = stack.pop(), stack.pop()
if token == "+":
stack.append(a + b)
elif token == "-":
stack.append(a - b)
elif token == "*":
stack.append(a * b)
else:
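# Integer division that truncates toward zero: when a and b have opposite
# signs and do not divide evenly, floor division (//) is one less than
# truncation, so 1 is added back.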
if a * b < 0 and a % b != 0:
stack.append(a // b + 1)
else:
stack.append(a // b)
else:
stack.append(int(token))
return stack.pop()
if __name__ == "__main__":
import doctest
doctest.testmod()
| from typing import Any, List
"""
Reverse Polish Notation, also known as Polish postfix notation
or simply postfix notation.
https://en.wikipedia.org/wiki/Reverse_Polish_notation
Classic examples of simple stack implementations
Valid operators are +, -, *, /.
Each operand may be an integer or another expression.
"""
def evaluate_postfix(postfix_notation: list) -> int:
"""
>>> evaluate_postfix(["2", "1", "+", "3", "*"])
9
>>> evaluate_postfix(["4", "13", "5", "/", "+"])
6
>>> evaluate_postfix([])
0
"""
if not postfix_notation:
return 0
operations = {"+", "-", "*", "/"}
stack: List[Any] = []
for token in postfix_notation:
if token in operations:
b, a = stack.pop(), stack.pop()
if token == "+":
stack.append(a + b)
elif token == "-":
stack.append(a - b)
elif token == "*":
stack.append(a * b)
else:
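# Integer division that truncates toward zero: when a and b have opposite
# signs and do not divide evenly, floor division (//) is one less than
# truncation, so 1 is added back.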
if a * b < 0 and a % b != 0:
stack.append(a // b + 1)
else:
stack.append(a // b)
else:
stack.append(int(token))
return stack.pop()
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """ A Stack using a linked list like structure """
from typing import Any
class Node:
def __init__(self, data):
self.data = data
self.next = None
def __str__(self):
return f"{self.data}"
class LinkedStack:
"""
Linked List Stack implementing push (to top),
pop (from top) and is_empty
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(5)
>>> stack.push(9)
>>> stack.push('python')
>>> stack.is_empty()
False
>>> stack.pop()
'python'
>>> stack.push('algorithms')
>>> stack.pop()
'algorithms'
>>> stack.pop()
9
>>> stack.pop()
5
>>> stack.is_empty()
True
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
"""
def __init__(self) -> None:
self.top = None
def __iter__(self):
node = self.top
while node:
yield node.data
node = node.next
def __str__(self):
"""
>>> stack = LinkedStack()
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> str(stack)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self):
"""
>>> stack = LinkedStack()
>>> len(stack) == 0
True
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> len(stack) == 3
True
"""
return len(tuple(iter(self)))
def is_empty(self) -> bool:
"""
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(1)
>>> stack.is_empty()
False
"""
return self.top is None
def push(self, item: Any) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Python")
>>> stack.push("Java")
>>> stack.push("C")
>>> str(stack)
'C->Java->Python'
"""
node = Node(item)
if not self.is_empty():
node.next = self.top
self.top = node
def pop(self) -> Any:
"""
>>> stack = LinkedStack()
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> stack.pop() == 'a'
True
>>> stack.pop() == 'b'
True
>>> stack.pop() == 'c'
True
"""
if self.is_empty():
raise IndexError("pop from empty stack")
assert isinstance(self.top, Node)
pop_node = self.top
self.top = self.top.next
return pop_node.data
def peek(self) -> Any:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> stack.peek()
'Python'
"""
if self.is_empty():
raise IndexError("peek from empty stack")
return self.top.data
def clear(self) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> str(stack)
'Python->C->Java'
>>> stack.clear()
>>> len(stack) == 0
True
"""
self.top = None
if __name__ == "__main__":
from doctest import testmod
testmod()
| """ A Stack using a linked list like structure """
from typing import Any, Optional
class Node:
def __init__(self, data):
self.data = data
self.next = None
def __str__(self):
return f"{self.data}"
class LinkedStack:
"""
Linked List Stack implementing push (to top),
pop (from top) and is_empty
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(5)
>>> stack.push(9)
>>> stack.push('python')
>>> stack.is_empty()
False
>>> stack.pop()
'python'
>>> stack.push('algorithms')
>>> stack.pop()
'algorithms'
>>> stack.pop()
9
>>> stack.pop()
5
>>> stack.is_empty()
True
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
"""
def __init__(self) -> None:
self.top: Optional[Node] = None
def __iter__(self):
node = self.top
while node:
yield node.data
node = node.next
def __str__(self):
"""
>>> stack = LinkedStack()
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> str(stack)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self):
"""
>>> stack = LinkedStack()
>>> len(stack) == 0
True
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> len(stack) == 3
True
"""
return len(tuple(iter(self)))
def is_empty(self) -> bool:
"""
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(1)
>>> stack.is_empty()
False
"""
return self.top is None
def push(self, item: Any) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Python")
>>> stack.push("Java")
>>> stack.push("C")
>>> str(stack)
'C->Java->Python'
"""
node = Node(item)
if not self.is_empty():
node.next = self.top
self.top = node
def pop(self) -> Any:
"""
>>> stack = LinkedStack()
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> stack.pop() == 'a'
True
>>> stack.pop() == 'b'
True
>>> stack.pop() == 'c'
True
"""
if self.is_empty():
raise IndexError("pop from empty stack")
assert isinstance(self.top, Node)
pop_node = self.top
self.top = self.top.next
return pop_node.data
def peek(self) -> Any:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> stack.peek()
'Python'
"""
if self.is_empty():
raise IndexError("peek from empty stack")
assert self.top is not None
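# The assert narrows self.top from Optional[Node] to Node for static type
# checkers like mypy; is_empty() has already guaranteed it is not None here.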
return self.top.data
def clear(self) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> str(stack)
'Python->C->Java'
>>> stack.clear()
>>> len(stack) == 0
True
"""
self.top = None
if __name__ == "__main__":
from doctest import testmod
testmod()
| 1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| class StackOverflowError(BaseException):
pass
class Stack:
"""A stack is an abstract data type that serves as a collection of
elements with two principal operations: push() and pop(). push() adds an
element to the top of the stack, and pop() removes an element from the top
of a stack. The order in which elements come off a stack is
Last In, First Out (LIFO).
https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
"""
def __init__(self, limit: int = 10):
self.stack = []
self.limit = limit
def __bool__(self) -> bool:
return bool(self.stack)
def __str__(self) -> str:
return str(self.stack)
def push(self, data):
"""Push an element to the top of the stack."""
if len(self.stack) >= self.limit:
raise StackOverflowError
self.stack.append(data)
def pop(self):
"""Pop an element off of the top of the stack."""
return self.stack.pop()
def peek(self):
"""Peek at the top-most element of the stack."""
return self.stack[-1]
def is_empty(self) -> bool:
"""Check if a stack is empty."""
return not bool(self.stack)
def is_full(self) -> bool:
return self.size() == self.limit
def size(self) -> int:
"""Return the size of the stack."""
return len(self.stack)
def __contains__(self, item) -> bool:
"""Check if item is in stack"""
return item in self.stack
def test_stack() -> None:
"""
>>> test_stack()
"""
stack = Stack(10)
assert bool(stack) is False
assert stack.is_empty() is True
assert stack.is_full() is False
assert str(stack) == "[]"
try:
_ = stack.pop()
assert False # This should not happen
except IndexError:
assert True # This should happen
try:
_ = stack.peek()
assert False # This should not happen
except IndexError:
assert True # This should happen
for i in range(10):
assert stack.size() == i
stack.push(i)
assert bool(stack) is True
assert stack.is_empty() is False
assert stack.is_full() is True
assert str(stack) == str(list(range(10)))
assert stack.pop() == 9
assert stack.peek() == 8
stack.push(100)
assert str(stack) == str([0, 1, 2, 3, 4, 5, 6, 7, 8, 100])
try:
stack.push(200)
assert False # This should not happen
except StackOverflowError:
assert True # This should happen
assert stack.is_empty() is False
assert stack.size() == 10
assert 5 in stack
assert 55 not in stack
if __name__ == "__main__":
test_stack()
| from typing import List
class StackOverflowError(BaseException):
pass
class Stack:
"""A stack is an abstract data type that serves as a collection of
elements with two principal operations: push() and pop(). push() adds an
element to the top of the stack, and pop() removes an element from the top
of a stack. The order in which elements come off a stack is
Last In, First Out (LIFO).
https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
"""
def __init__(self, limit: int = 10):
self.stack: List[int] = []
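# Annotated as List[int] to satisfy mypy; test_stack() below only pushes
# integers, although push() itself does not restrict the element type.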
self.limit = limit
def __bool__(self) -> bool:
return bool(self.stack)
def __str__(self) -> str:
return str(self.stack)
def push(self, data):
"""Push an element to the top of the stack."""
if len(self.stack) >= self.limit:
raise StackOverflowError
self.stack.append(data)
def pop(self):
"""Pop an element off of the top of the stack."""
return self.stack.pop()
def peek(self):
"""Peek at the top-most element of the stack."""
return self.stack[-1]
def is_empty(self) -> bool:
"""Check if a stack is empty."""
return not bool(self.stack)
def is_full(self) -> bool:
return self.size() == self.limit
def size(self) -> int:
"""Return the size of the stack."""
return len(self.stack)
def __contains__(self, item) -> bool:
"""Check if item is in stack"""
return item in self.stack
def test_stack() -> None:
"""
>>> test_stack()
"""
stack = Stack(10)
assert bool(stack) is False
assert stack.is_empty() is True
assert stack.is_full() is False
assert str(stack) == "[]"
try:
_ = stack.pop()
assert False # This should not happen
except IndexError:
assert True # This should happen
try:
_ = stack.peek()
assert False # This should not happen
except IndexError:
assert True # This should happen
for i in range(10):
assert stack.size() == i
stack.push(i)
assert bool(stack) is True
assert stack.is_empty() is False
assert stack.is_full() is True
assert str(stack) == str(list(range(10)))
assert stack.pop() == 9
assert stack.peek() == 8
stack.push(100)
assert str(stack) == str([0, 1, 2, 3, 4, 5, 6, 7, 8, 100])
try:
stack.push(200)
assert False # This should not happen
except StackOverflowError:
assert True # This should happen
assert stack.is_empty() is False
assert stack.size() == 10
assert 5 in stack
assert 55 not in stack
if __name__ == "__main__":
test_stack()
| 1 |
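Side note on the record above: the annotated version of stack.py only types the backing list (self.stack: List[int] = []), while push(), pop() and peek() stay unannotated. The sketch below is a hypothetical, fully annotated generic variant written for illustration only — the class name TypedStack and the use of OverflowError are my own assumptions, not part of the PR.
from typing import Generic, List, TypeVar
T = TypeVar("T")
class TypedStack(Generic[T]):
    """Hypothetical fully annotated stack; illustrative sketch only."""
    def __init__(self, limit: int = 10) -> None:
        self.stack: List[T] = []  # same annotation pattern as in the record above
        self.limit = limit
    def push(self, data: T) -> None:
        if len(self.stack) >= self.limit:
            raise OverflowError("stack is full")
        self.stack.append(data)
    def pop(self) -> T:
        return self.stack.pop()  # raises IndexError when empty
    def peek(self) -> T:
        return self.stack[-1]
if __name__ == "__main__":
    s: TypedStack[int] = TypedStack(limit=3)
    s.push(1)
    s.push(2)
    assert s.pop() == 2 and s.peek() == 1
With full annotations, mypy can reject calls such as s.push("x") on a TypedStack[int], which is the kind of checking the annotation added in the record enables for the backing list.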
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Divide and Conquer algorithm
def find_min(nums, left, right):
"""
find min value in list
:param nums: contains elements
:param left: index of first element
:param right: index of last element
:return: min in nums
>>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
>>> find_min(nums, 0, len(nums) - 1) == min(nums)
True
"""
if left == right:
return nums[left]
mid = (left + right) >> 1 # the middle
left_min = find_min(nums, left, mid) # find min in range[left, mid]
right_min = find_min(nums, mid + 1, right) # find min in range[mid + 1, right]
return left_min if left_min <= right_min else right_min
if __name__ == "__main__":
nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
assert find_min(nums, 0, len(nums) - 1) == 1
| # Divide and Conquer algorithm
def find_min(nums, left, right):
"""
find min value in list
:param nums: contains elements
:param left: index of first element
:param right: index of last element
:return: min in nums
>>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
>>> find_min(nums, 0, len(nums) - 1) == min(nums)
True
"""
if left == right:
return nums[left]
mid = (left + right) >> 1 # the middle
left_min = find_min(nums, left, mid) # find min in range[left, mid]
right_min = find_min(nums, mid + 1, right) # find min in range[mid + 1, right]
return left_min if left_min <= right_min else right_min
if __name__ == "__main__":
nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
assert find_min(nums, 0, len(nums) - 1) == 1
| -1 |
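The find_min in the record above halves the index range at mid = (left + right) >> 1 and keeps the smaller of the two recursive results. The small sketch below is my own rewrite of the same recursion with an added depth parameter and a print, so the call tree of the divide step is visible; it is for illustration, not part of the record.
def find_min_traced(nums: list, left: int, right: int, depth: int = 0) -> int:
    # Print the slice handled by this call so the divide step is visible.
    print("  " * depth + f"find_min({nums[left:right + 1]})")
    if left == right:
        return nums[left]
    mid = (left + right) >> 1
    return min(
        find_min_traced(nums, left, mid, depth + 1),
        find_min_traced(nums, mid + 1, right, depth + 1),
    )
if __name__ == "__main__":
    assert find_min_traced([1, 3, 5, 2], 0, 3) == 1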
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import os
import sys
import time
from . import transposition_cipher as transCipher
def main() -> None:
inputFile = "Prehistoric Men.txt"
outputFile = "Output.txt"
key = int(input("Enter key: "))
mode = input("Encrypt/Decrypt [e/d]: ")
if not os.path.exists(inputFile):
print("File %s does not exist. Quitting..." % inputFile)
sys.exit()
if os.path.exists(outputFile):
print("Overwrite %s? [y/n]" % outputFile)
response = input("> ")
if not response.lower().startswith("y"):
sys.exit()
startTime = time.time()
if mode.lower().startswith("e"):
with open(inputFile) as f:
content = f.read()
translated = transCipher.encryptMessage(key, content)
elif mode.lower().startswith("d"):
with open(outputFile) as f:
content = f.read()
translated = transCipher.decryptMessage(key, content)
with open(outputFile, "w") as outputObj:
outputObj.write(translated)
totalTime = round(time.time() - startTime, 2)
print(("Done (", totalTime, "seconds )"))
if __name__ == "__main__":
main()
| import os
import sys
import time
from . import transposition_cipher as transCipher
def main() -> None:
inputFile = "Prehistoric Men.txt"
outputFile = "Output.txt"
key = int(input("Enter key: "))
mode = input("Encrypt/Decrypt [e/d]: ")
if not os.path.exists(inputFile):
print("File %s does not exist. Quitting..." % inputFile)
sys.exit()
if os.path.exists(outputFile):
print("Overwrite %s? [y/n]" % outputFile)
response = input("> ")
if not response.lower().startswith("y"):
sys.exit()
startTime = time.time()
if mode.lower().startswith("e"):
with open(inputFile) as f:
content = f.read()
translated = transCipher.encryptMessage(key, content)
elif mode.lower().startswith("d"):
with open(outputFile) as f:
content = f.read()
translated = transCipher.decryptMessage(key, content)
with open(outputFile, "w") as outputObj:
outputObj.write(translated)
totalTime = round(time.time() - startTime, 2)
print(("Done (", totalTime, "seconds )"))
if __name__ == "__main__":
main()
| -1 |
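The main() above delegates to transCipher.encryptMessage / transCipher.decryptMessage, which are not shown in this record. As a rough sketch only — the actual transposition_cipher module may differ in naming and details — a columnar transposition pair could look like this:
import math
def encrypt_message(key: int, message: str) -> str:
    # Column `col` collects every key-th character starting at offset col.
    cipher_text = [""] * key
    for col in range(key):
        pointer = col
        while pointer < len(message):
            cipher_text[col] += message[pointer]
            pointer += key
    return "".join(cipher_text)
def decrypt_message(key: int, message: str) -> str:
    # Rebuild the grid: num_cols columns by key rows, minus the unused boxes.
    num_cols = math.ceil(len(message) / key)
    num_rows = key
    num_shaded_boxes = (num_cols * num_rows) - len(message)
    plain_text = [""] * num_cols
    col = row = 0
    for symbol in message:
        plain_text[col] += symbol
        col += 1
        if col == num_cols or (
            col == num_cols - 1 and row >= num_rows - num_shaded_boxes
        ):
            col = 0
            row += 1
    return "".join(plain_text)
if __name__ == "__main__":
    assert decrypt_message(3, encrypt_message(3, "abcdefg")) == "abcdefg"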
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 9: https://projecteuler.net/problem=9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product a*b*c.
References:
- https://en.wikipedia.org/wiki/Pythagorean_triple
"""
def solution(n: int = 1000) -> int:
"""
Return the product of a,b,c which form a Pythagorean triplet that satisfies
the following:
1. a < b < c
2. a**2 + b**2 = c**2
3. a + b + c = n
>>> solution(36)
1620
>>> solution(126)
66780
"""
product = -1
candidate = 0
for a in range(1, n // 3):
# Solving the two equations a**2+b**2=c**2 and a+b+c=N eliminating c
b = (n * n - 2 * a * n) // (2 * n - 2 * a)
c = n - a - b
if c * c == (a * a + b * b):
candidate = a * b * c
if candidate >= product:
product = candidate
return product
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 9: https://projecteuler.net/problem=9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product a*b*c.
References:
- https://en.wikipedia.org/wiki/Pythagorean_triple
"""
def solution(n: int = 1000) -> int:
"""
Return the product of a,b,c which form a Pythagorean triplet that satisfies
the following:
1. a < b < c
2. a**2 + b**2 = c**2
3. a + b + c = n
>>> solution(36)
1620
>>> solution(126)
66780
"""
product = -1
candidate = 0
for a in range(1, n // 3):
# Solving the two equations a**2+b**2=c**2 and a+b+c=N eliminating c
b = (n * n - 2 * a * n) // (2 * n - 2 * a)
c = n - a - b
if c * c == (a * a + b * b):
candidate = a * b * c
if candidate >= product:
product = candidate
return product
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
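On the comment "Solving the two equations a**2+b**2=c**2 and a+b+c=N eliminating c" in the record above: substituting c = n - a - b into a^2 + b^2 = c^2 gives a^2 + b^2 = n^2 + a^2 + b^2 - 2an - 2bn + 2ab, so 0 = n^2 - 2an - 2bn + 2ab and therefore b = (n^2 - 2an) / (2n - 2a), which is exactly the line b = (n * n - 2 * a * n) // (2 * n - 2 * a). A brute-force cross-check — my own sketch, only sensible for small n — confirms the closed form:
def triplet_product_bruteforce(n: int = 1000) -> int:
    # O(n^2) scan over a < b with c = n - a - b; returns -1 if no triplet exists.
    best = -1
    for a in range(1, n // 3):
        for b in range(a + 1, (n - a) // 2 + 1):
            c = n - a - b
            if a * a + b * b == c * c:
                best = max(best, a * b * c)
    return best
if __name__ == "__main__":
    assert triplet_product_bruteforce(36) == 1620
    assert triplet_product_bruteforce(1000) == 31875000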
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Breadth-first search shortest path implementations.
doctest:
python -m doctest -v bfs_shortest_path.py
Manual test:
python bfs_shortest_path.py
"""
graph = {
"A": ["B", "C", "E"],
"B": ["A", "D", "E"],
"C": ["A", "F", "G"],
"D": ["B"],
"E": ["A", "B", "D"],
"F": ["C"],
"G": ["C"],
}
def bfs_shortest_path(graph: dict, start, goal) -> str:
"""Find shortest path between `start` and `goal` nodes.
Args:
graph (dict): node/list of neighboring nodes key/value pairs.
start: start node.
goal: target node.
Returns:
Shortest path between `start` and `goal` nodes as a string of nodes.
'Not found' string if no path found.
Example:
>>> bfs_shortest_path(graph, "G", "D")
['G', 'C', 'A', 'B', 'D']
"""
# keep track of explored nodes
explored = set()
# keep track of all the paths to be checked
queue = [[start]]
# return path if start is goal
if start == goal:
return "That was easy! Start = goal"
# keeps looping until all possible paths have been checked
while queue:
# pop the first path from the queue
path = queue.pop(0)
# get the last node from the path
node = path[-1]
if node not in explored:
neighbours = graph[node]
# go through all neighbour nodes, construct a new path and
# push it into the queue
for neighbour in neighbours:
new_path = list(path)
new_path.append(neighbour)
queue.append(new_path)
# return path if neighbour is goal
if neighbour == goal:
return new_path
# mark node as explored
explored.add(node)
# in case there's no path between the 2 nodes
return "So sorry, but a connecting path doesn't exist :("
def bfs_shortest_path_distance(graph: dict, start, target) -> int:
"""Find shortest path distance between `start` and `target` nodes.
Args:
graph: node/list of neighboring nodes key/value pairs.
start: node to start search from.
target: node to search for.
Returns:
Number of edges in shortest path between `start` and `target` nodes.
-1 if no path exists.
Example:
>>> bfs_shortest_path_distance(graph, "G", "D")
4
>>> bfs_shortest_path_distance(graph, "A", "A")
0
>>> bfs_shortest_path_distance(graph, "A", "H")
-1
"""
if not graph or start not in graph or target not in graph:
return -1
if start == target:
return 0
queue = [start]
visited = set(start)
# Keep tab on distances from `start` node.
dist = {start: 0, target: -1}
while queue:
node = queue.pop(0)
if node == target:
dist[target] = (
dist[node] if dist[target] == -1 else min(dist[target], dist[node])
)
for adjacent in graph[node]:
if adjacent not in visited:
visited.add(adjacent)
queue.append(adjacent)
dist[adjacent] = dist[node] + 1
return dist[target]
if __name__ == "__main__":
print(bfs_shortest_path(graph, "G", "D")) # returns ['G', 'C', 'A', 'B', 'D']
print(bfs_shortest_path_distance(graph, "G", "D")) # returns 4
| """Breadth-first search shortest path implementations.
doctest:
python -m doctest -v bfs_shortest_path.py
Manual test:
python bfs_shortest_path.py
"""
graph = {
"A": ["B", "C", "E"],
"B": ["A", "D", "E"],
"C": ["A", "F", "G"],
"D": ["B"],
"E": ["A", "B", "D"],
"F": ["C"],
"G": ["C"],
}
def bfs_shortest_path(graph: dict, start, goal) -> str:
"""Find shortest path between `start` and `goal` nodes.
Args:
graph (dict): node/list of neighboring nodes key/value pairs.
start: start node.
goal: target node.
Returns:
Shortest path between `start` and `goal` nodes as a string of nodes.
'Not found' string if no path found.
Example:
>>> bfs_shortest_path(graph, "G", "D")
['G', 'C', 'A', 'B', 'D']
"""
# keep track of explored nodes
explored = set()
# keep track of all the paths to be checked
queue = [[start]]
# return path if start is goal
if start == goal:
return "That was easy! Start = goal"
# keeps looping until all possible paths have been checked
while queue:
# pop the first path from the queue
path = queue.pop(0)
# get the last node from the path
node = path[-1]
if node not in explored:
neighbours = graph[node]
# go through all neighbour nodes, construct a new path and
# push it into the queue
for neighbour in neighbours:
new_path = list(path)
new_path.append(neighbour)
queue.append(new_path)
# return path if neighbour is goal
if neighbour == goal:
return new_path
# mark node as explored
explored.add(node)
# in case there's no path between the 2 nodes
return "So sorry, but a connecting path doesn't exist :("
def bfs_shortest_path_distance(graph: dict, start, target) -> int:
"""Find shortest path distance between `start` and `target` nodes.
Args:
graph: node/list of neighboring nodes key/value pairs.
start: node to start search from.
target: node to search for.
Returns:
Number of edges in shortest path between `start` and `target` nodes.
-1 if no path exists.
Example:
>>> bfs_shortest_path_distance(graph, "G", "D")
4
>>> bfs_shortest_path_distance(graph, "A", "A")
0
>>> bfs_shortest_path_distance(graph, "A", "H")
-1
"""
if not graph or start not in graph or target not in graph:
return -1
if start == target:
return 0
queue = [start]
visited = set(start)
# Keep tab on distances from `start` node.
dist = {start: 0, target: -1}
while queue:
node = queue.pop(0)
if node == target:
dist[target] = (
dist[node] if dist[target] == -1 else min(dist[target], dist[node])
)
for adjacent in graph[node]:
if adjacent not in visited:
visited.add(adjacent)
queue.append(adjacent)
dist[adjacent] = dist[node] + 1
return dist[target]
if __name__ == "__main__":
print(bfs_shortest_path(graph, "G", "D")) # returns ['G', 'C', 'A', 'B', 'D']
print(bfs_shortest_path_distance(graph, "G", "D")) # returns 4
| -1 |
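A note on the record above: both functions dequeue with queue.pop(0), which is O(n) on a Python list, and bfs_shortest_path_distance initialises visited as set(start), which would split a multi-character node name into its characters (harmless here because all nodes are single letters). The variant below is my own rewrite of the distance computation, not part of the record, using collections.deque for O(1) dequeues and a {start} literal for the visited set:
from collections import deque
def bfs_distance(graph: dict, start, target) -> int:
    # Breadth-first search over an unweighted graph; returns edge count or -1.
    if start not in graph or target not in graph:
        return -1
    if start == target:
        return 0
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        node, dist = queue.popleft()
        for adjacent in graph[node]:
            if adjacent == target:
                return dist + 1
            if adjacent not in visited:
                visited.add(adjacent)
                queue.append((adjacent, dist + 1))
    return -1
if __name__ == "__main__":
    demo_graph = {
        "A": ["B", "C", "E"], "B": ["A", "D", "E"], "C": ["A", "F", "G"],
        "D": ["B"], "E": ["A", "B", "D"], "F": ["C"], "G": ["C"],
    }
    assert bfs_distance(demo_graph, "G", "D") == 4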
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 203: https://projecteuler.net/problem=203
The binomial coefficients (n k) can be arranged in triangular form, Pascal's
triangle, like this:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
.........
It can be seen that the first eight rows of Pascal's triangle contain twelve
distinct numbers: 1, 2, 3, 4, 5, 6, 7, 10, 15, 20, 21 and 35.
A positive integer n is called squarefree if no square of a prime divides n.
Of the twelve distinct numbers in the first eight rows of Pascal's triangle,
all except 4 and 20 are squarefree. The sum of the distinct squarefree numbers
in the first eight rows is 105.
Find the sum of the distinct squarefree numbers in the first 51 rows of
Pascal's triangle.
References:
- https://en.wikipedia.org/wiki/Pascal%27s_triangle
"""
import math
from typing import List, Set
def get_pascal_triangle_unique_coefficients(depth: int) -> Set[int]:
"""
Returns the unique coefficients of a Pascal's triangle of depth "depth".
The coefficients of this triangle are symmetric. A further improvement to this
method could be to calculate the coefficients once per level. Nonetheless,
the current implementation is fast enough for the original problem.
>>> get_pascal_triangle_unique_coefficients(1)
{1}
>>> get_pascal_triangle_unique_coefficients(2)
{1}
>>> get_pascal_triangle_unique_coefficients(3)
{1, 2}
>>> get_pascal_triangle_unique_coefficients(8)
{1, 2, 3, 4, 5, 6, 7, 35, 10, 15, 20, 21}
"""
coefficients = {1}
previous_coefficients = [1]
for step in range(2, depth + 1):
coefficients_begins_one = previous_coefficients + [0]
coefficients_ends_one = [0] + previous_coefficients
previous_coefficients = []
for x, y in zip(coefficients_begins_one, coefficients_ends_one):
coefficients.add(x + y)
previous_coefficients.append(x + y)
return coefficients
def get_primes_squared(max_number: int) -> List[int]:
"""
Calculates all primes between 2 and round(sqrt(max_number)) and returns
them squared up.
>>> get_primes_squared(2)
[]
>>> get_primes_squared(4)
[4]
>>> get_primes_squared(10)
[4, 9]
>>> get_primes_squared(100)
[4, 9, 25, 49]
"""
max_prime = round(math.sqrt(max_number))
non_primes = set()
primes = []
for num in range(2, max_prime + 1):
if num in non_primes:
continue
counter = 2
while num * counter <= max_prime:
non_primes.add(num * counter)
counter += 1
primes.append(num ** 2)
return primes
def get_squared_primes_to_use(
num_to_look: int, squared_primes: List[int], previous_index: int
) -> int:
"""
Returns an int indicating the last index on which squares of primes
in primes are lower than num_to_look.
This method supposes that squared_primes is sorted in ascending order and that
each num_to_look is provided in ascending order as well. Under these
assumptions, it needs a previous_index parameter that tells what was
the index returned by the method for the previous num_to_look.
If all the elements in squared_primes are greater than num_to_look, then the
method returns -1.
>>> get_squared_primes_to_use(1, [4, 9, 16, 25], 0)
-1
>>> get_squared_primes_to_use(4, [4, 9, 16, 25], 0)
1
>>> get_squared_primes_to_use(16, [4, 9, 16, 25], 1)
3
"""
idx = max(previous_index, 0)
while idx < len(squared_primes) and squared_primes[idx] <= num_to_look:
idx += 1
if idx == 0 and squared_primes[idx] > num_to_look:
return -1
if idx == len(squared_primes) and squared_primes[-1] > num_to_look:
return -1
return idx
def get_squarefree(
unique_coefficients: Set[int], squared_primes: List[int]
) -> Set[int]:
"""
Calculates the squarefree numbers inside unique_coefficients given a
list of squares of primes.
Based on the definition of a non-squarefree number, then any non-squarefree
n can be decomposed as n = p*p*r, where p is a positive prime number and r
is a positive integer.
Under the previous formula, any coefficient that is lower than p*p is
squarefree as r cannot be negative. On the contrary, if any r exists such
that n = p*p*r, then the number is non-squarefree.
>>> get_squarefree({1}, [])
set()
>>> get_squarefree({1, 2}, [])
set()
>>> get_squarefree({1, 2, 3, 4, 5, 6, 7, 35, 10, 15, 20, 21}, [4, 9, 25])
{1, 2, 3, 5, 6, 7, 35, 10, 15, 21}
"""
if len(squared_primes) == 0:
return set()
non_squarefrees = set()
prime_squared_idx = 0
for num in sorted(unique_coefficients):
prime_squared_idx = get_squared_primes_to_use(
num, squared_primes, prime_squared_idx
)
if prime_squared_idx == -1:
continue
if any(num % prime == 0 for prime in squared_primes[:prime_squared_idx]):
non_squarefrees.add(num)
return unique_coefficients.difference(non_squarefrees)
def solution(n: int = 51) -> int:
"""
Returns the sum of squarefrees for a given Pascal's Triangle of depth n.
>>> solution(1)
0
>>> solution(8)
105
>>> solution(9)
175
"""
unique_coefficients = get_pascal_triangle_unique_coefficients(n)
primes = get_primes_squared(max(unique_coefficients))
squarefrees = get_squarefree(unique_coefficients, primes)
return sum(squarefrees)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 203: https://projecteuler.net/problem=203
The binomial coefficients (n k) can be arranged in triangular form, Pascal's
triangle, like this:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
.........
It can be seen that the first eight rows of Pascal's triangle contain twelve
distinct numbers: 1, 2, 3, 4, 5, 6, 7, 10, 15, 20, 21 and 35.
A positive integer n is called squarefree if no square of a prime divides n.
Of the twelve distinct numbers in the first eight rows of Pascal's triangle,
all except 4 and 20 are squarefree. The sum of the distinct squarefree numbers
in the first eight rows is 105.
Find the sum of the distinct squarefree numbers in the first 51 rows of
Pascal's triangle.
References:
- https://en.wikipedia.org/wiki/Pascal%27s_triangle
"""
import math
from typing import List, Set
def get_pascal_triangle_unique_coefficients(depth: int) -> Set[int]:
"""
Returns the unique coefficients of a Pascal's triangle of depth "depth".
The coefficients of this triangle are symmetric. A further improvement to this
method could be to calculate the coefficients once per level. Nonetheless,
the current implementation is fast enough for the original problem.
>>> get_pascal_triangle_unique_coefficients(1)
{1}
>>> get_pascal_triangle_unique_coefficients(2)
{1}
>>> get_pascal_triangle_unique_coefficients(3)
{1, 2}
>>> get_pascal_triangle_unique_coefficients(8)
{1, 2, 3, 4, 5, 6, 7, 35, 10, 15, 20, 21}
"""
coefficients = {1}
previous_coefficients = [1]
for step in range(2, depth + 1):
coefficients_begins_one = previous_coefficients + [0]
coefficients_ends_one = [0] + previous_coefficients
previous_coefficients = []
for x, y in zip(coefficients_begins_one, coefficients_ends_one):
coefficients.add(x + y)
previous_coefficients.append(x + y)
return coefficients
def get_primes_squared(max_number: int) -> List[int]:
"""
Calculates all primes between 2 and round(sqrt(max_number)) and returns
them squared up.
>>> get_primes_squared(2)
[]
>>> get_primes_squared(4)
[4]
>>> get_primes_squared(10)
[4, 9]
>>> get_primes_squared(100)
[4, 9, 25, 49]
"""
max_prime = round(math.sqrt(max_number))
non_primes = set()
primes = []
for num in range(2, max_prime + 1):
if num in non_primes:
continue
counter = 2
while num * counter <= max_prime:
non_primes.add(num * counter)
counter += 1
primes.append(num ** 2)
return primes
def get_squared_primes_to_use(
num_to_look: int, squared_primes: List[int], previous_index: int
) -> int:
"""
Returns an int indicating the last index on which squares of primes
in primes are lower than num_to_look.
This method supposes that squared_primes is sorted in ascending order and that
each num_to_look is provided in ascending order as well. Under these
assumptions, it needs a previous_index parameter that tells what was
the index returned by the method for the previous num_to_look.
If all the elements in squared_primes are greater than num_to_look, then the
method returns -1.
>>> get_squared_primes_to_use(1, [4, 9, 16, 25], 0)
-1
>>> get_squared_primes_to_use(4, [4, 9, 16, 25], 0)
1
>>> get_squared_primes_to_use(16, [4, 9, 16, 25], 1)
3
"""
idx = max(previous_index, 0)
while idx < len(squared_primes) and squared_primes[idx] <= num_to_look:
idx += 1
if idx == 0 and squared_primes[idx] > num_to_look:
return -1
if idx == len(squared_primes) and squared_primes[-1] > num_to_look:
return -1
return idx
def get_squarefree(
unique_coefficients: Set[int], squared_primes: List[int]
) -> Set[int]:
"""
Calculates the squarefree numbers inside unique_coefficients given a
list of squares of primes.
Based on the definition of a non-squarefree number, then any non-squarefree
n can be decomposed as n = p*p*r, where p is a positive prime number and r
is a positive integer.
Under the previous formula, any coefficient that is lower than p*p is
squarefree as r cannot be negative. On the contrary, if any r exists such
that n = p*p*r, then the number is non-squarefree.
>>> get_squarefree({1}, [])
set()
>>> get_squarefree({1, 2}, [])
set()
>>> get_squarefree({1, 2, 3, 4, 5, 6, 7, 35, 10, 15, 20, 21}, [4, 9, 25])
{1, 2, 3, 5, 6, 7, 35, 10, 15, 21}
"""
if len(squared_primes) == 0:
return set()
non_squarefrees = set()
prime_squared_idx = 0
for num in sorted(unique_coefficients):
prime_squared_idx = get_squared_primes_to_use(
num, squared_primes, prime_squared_idx
)
if prime_squared_idx == -1:
continue
if any(num % prime == 0 for prime in squared_primes[:prime_squared_idx]):
non_squarefrees.add(num)
return unique_coefficients.difference(non_squarefrees)
def solution(n: int = 51) -> int:
"""
Returns the sum of squarefrees for a given Pascal's Triangle of depth n.
>>> solution(1)
0
>>> solution(8)
105
>>> solution(9)
175
"""
unique_coefficients = get_pascal_triangle_unique_coefficients(n)
primes = get_primes_squared(max(unique_coefficients))
squarefrees = get_squarefree(unique_coefficients, primes)
return sum(squarefrees)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
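Because the solution in the record above filters the coefficients with precomputed prime squares and a moving index, it can help to cross-check the result with a direct, slower squarefree test. The sketch below is my own independent check, not the method used in the record; it assumes math.comb and math.isqrt are available (Python 3.8+):
from math import comb, isqrt
def is_squarefree(n: int) -> bool:
    # n is squarefree iff no k*k with k >= 2 divides it (prime or not, the
    # smallest offending k would include a prime square factor).
    return all(n % (k * k) for k in range(2, isqrt(n) + 1))
def squarefree_sum(depth: int) -> int:
    # Distinct binomial coefficients in the first `depth` rows of Pascal's triangle.
    coefficients = {comb(row, k) for row in range(depth) for k in range(row + 1)}
    return sum(c for c in coefficients if is_squarefree(c))
if __name__ == "__main__":
    assert squarefree_sum(8) == 105
    assert squarefree_sum(9) == 175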
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Find the area of various geometric shapes
"""
from math import pi, sqrt
def surface_area_cube(side_length: float) -> float:
"""
Calculate the Surface Area of a Cube.
>>> surface_area_cube(1)
6
>>> surface_area_cube(3)
54
>>> surface_area_cube(-1)
Traceback (most recent call last):
...
ValueError: surface_area_cube() only accepts non-negative values
"""
if side_length < 0:
raise ValueError("surface_area_cube() only accepts non-negative values")
return 6 * side_length ** 2
def surface_area_sphere(radius: float) -> float:
"""
Calculate the Surface Area of a Sphere.
Wikipedia reference: https://en.wikipedia.org/wiki/Sphere
Formula: 4 * pi * r^2
>>> surface_area_sphere(5)
314.1592653589793
>>> surface_area_sphere(1)
12.566370614359172
>>> surface_area_sphere(-1)
Traceback (most recent call last):
...
ValueError: surface_area_sphere() only accepts non-negative values
"""
if radius < 0:
raise ValueError("surface_area_sphere() only accepts non-negative values")
return 4 * pi * radius ** 2
def area_rectangle(length: float, width: float) -> float:
"""
Calculate the area of a rectangle.
>>> area_rectangle(10, 20)
200
>>> area_rectangle(-1, -2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
>>> area_rectangle(1, -2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
>>> area_rectangle(-1, 2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
"""
if length < 0 or width < 0:
raise ValueError("area_rectangle() only accepts non-negative values")
return length * width
def area_square(side_length: float) -> float:
"""
Calculate the area of a square.
>>> area_square(10)
100
>>> area_square(-1)
Traceback (most recent call last):
...
ValueError: area_square() only accepts non-negative values
"""
if side_length < 0:
raise ValueError("area_square() only accepts non-negative values")
return side_length ** 2
def area_triangle(base: float, height: float) -> float:
"""
Calculate the area of a triangle given the base and height.
>>> area_triangle(10, 10)
50.0
>>> area_triangle(-1, -2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
>>> area_triangle(1, -2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
>>> area_triangle(-1, 2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
"""
if base < 0 or height < 0:
raise ValueError("area_triangle() only accepts non-negative values")
return (base * height) / 2
def area_triangle_three_sides(side1: float, side2: float, side3: float) -> float:
"""
Calculate the area of a triangle when the lengths of its 3 sides are known.
This function uses Heron's formula: https://en.wikipedia.org/wiki/Heron%27s_formula
>>> area_triangle_three_sides(5, 12, 13)
30.0
>>> area_triangle_three_sides(10, 11, 12)
51.521233486786784
>>> area_triangle_three_sides(-1, -2, -1)
Traceback (most recent call last):
...
ValueError: area_triangle_three_sides() only accepts non-negative values
>>> area_triangle_three_sides(1, -2, 1)
Traceback (most recent call last):
...
ValueError: area_triangle_three_sides() only accepts non-negative values
"""
if side1 < 0 or side2 < 0 or side3 < 0:
raise ValueError("area_triangle_three_sides() only accepts non-negative values")
elif side1 + side2 < side3 or side1 + side3 < side2 or side2 + side3 < side1:
raise ValueError("Given three sides do not form a triangle")
semi_perimeter = (side1 + side2 + side3) / 2
area = sqrt(
semi_perimeter
* (semi_perimeter - side1)
* (semi_perimeter - side2)
* (semi_perimeter - side3)
)
return area
def area_parallelogram(base: float, height: float) -> float:
"""
Calculate the area of a parallelogram.
>>> area_parallelogram(10, 20)
200
>>> area_parallelogram(-1, -2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
>>> area_parallelogram(1, -2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
>>> area_parallelogram(-1, 2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
"""
if base < 0 or height < 0:
raise ValueError("area_parallelogram() only accepts non-negative values")
return base * height
def area_trapezium(base1: float, base2: float, height: float) -> float:
"""
Calculate the area of a trapezium.
>>> area_trapezium(10, 20, 30)
450.0
>>> area_trapezium(-1, -2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, 2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, -2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, 2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, -2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, -2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, 2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
"""
if base1 < 0 or base2 < 0 or height < 0:
raise ValueError("area_trapezium() only accepts non-negative values")
return 1 / 2 * (base1 + base2) * height
def area_circle(radius: float) -> float:
"""
Calculate the area of a circle.
>>> area_circle(20)
1256.6370614359173
>>> area_circle(-1)
Traceback (most recent call last):
...
ValueError: area_circle() only accepts non-negative values
"""
if radius < 0:
raise ValueError("area_circle() only accepts non-negative values")
return pi * radius ** 2
def area_ellipse(radius_x: float, radius_y: float) -> float:
"""
Calculate the area of an ellipse.
>>> area_ellipse(10, 10)
314.1592653589793
>>> area_ellipse(10, 20)
628.3185307179587
>>> area_ellipse(-10, 20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
>>> area_ellipse(10, -20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
>>> area_ellipse(-10, -20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
"""
if radius_x < 0 or radius_y < 0:
raise ValueError("area_ellipse() only accepts non-negative values")
return pi * radius_x * radius_y
def area_rhombus(diagonal_1: float, diagonal_2: float) -> float:
"""
Calculate the area of a rhombus.
>>> area_rhombus(10, 20)
100.0
>>> area_rhombus(-1, -2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
>>> area_rhombus(1, -2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
>>> area_rhombus(-1, 2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
"""
if diagonal_1 < 0 or diagonal_2 < 0:
raise ValueError("area_rhombus() only accepts non-negative values")
return 1 / 2 * diagonal_1 * diagonal_2
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True) # verbose so we can see methods missing tests
print("[DEMO] Areas of various geometric shapes: \n")
print(f"Rectangle: {area_rectangle(10, 20) = }")
print(f"Square: {area_square(10) = }")
print(f"Triangle: {area_triangle(10, 10) = }")
print(f"Triangle: {area_triangle_three_sides(5, 12, 13) = }")
print(f"Parallelogram: {area_parallelogram(10, 20) = }")
print(f"Trapezium: {area_trapezium(10, 20, 30) = }")
print(f"Circle: {area_circle(20) = }")
print("\nSurface Areas of various geometric shapes: \n")
print(f"Cube: {surface_area_cube(20) = }")
print(f"Sphere: {surface_area_sphere(20) = }")
print(f"Rhombus: {area_rhombus(10, 20) = }")
| """
Find the area of various geometric shapes
"""
from math import pi, sqrt
def surface_area_cube(side_length: float) -> float:
"""
Calculate the Surface Area of a Cube.
>>> surface_area_cube(1)
6
>>> surface_area_cube(3)
54
>>> surface_area_cube(-1)
Traceback (most recent call last):
...
ValueError: surface_area_cube() only accepts non-negative values
"""
if side_length < 0:
raise ValueError("surface_area_cube() only accepts non-negative values")
return 6 * side_length ** 2
def surface_area_sphere(radius: float) -> float:
"""
Calculate the Surface Area of a Sphere.
Wikipedia reference: https://en.wikipedia.org/wiki/Sphere
Formula: 4 * pi * r^2
>>> surface_area_sphere(5)
314.1592653589793
>>> surface_area_sphere(1)
12.566370614359172
>>> surface_area_sphere(-1)
Traceback (most recent call last):
...
ValueError: surface_area_sphere() only accepts non-negative values
"""
if radius < 0:
raise ValueError("surface_area_sphere() only accepts non-negative values")
return 4 * pi * radius ** 2
def area_rectangle(length: float, width: float) -> float:
"""
Calculate the area of a rectangle.
>>> area_rectangle(10, 20)
200
>>> area_rectangle(-1, -2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
>>> area_rectangle(1, -2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
>>> area_rectangle(-1, 2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
"""
if length < 0 or width < 0:
raise ValueError("area_rectangle() only accepts non-negative values")
return length * width
def area_square(side_length: float) -> float:
"""
Calculate the area of a square.
>>> area_square(10)
100
>>> area_square(-1)
Traceback (most recent call last):
...
ValueError: area_square() only accepts non-negative values
"""
if side_length < 0:
raise ValueError("area_square() only accepts non-negative values")
return side_length ** 2
def area_triangle(base: float, height: float) -> float:
"""
Calculate the area of a triangle given the base and height.
>>> area_triangle(10, 10)
50.0
>>> area_triangle(-1, -2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
>>> area_triangle(1, -2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
>>> area_triangle(-1, 2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
"""
if base < 0 or height < 0:
raise ValueError("area_triangle() only accepts non-negative values")
return (base * height) / 2
def area_triangle_three_sides(side1: float, side2: float, side3: float) -> float:
"""
Calculate the area of a triangle when the lengths of its 3 sides are known.
This function uses Heron's formula: https://en.wikipedia.org/wiki/Heron%27s_formula
>>> area_triangle_three_sides(5, 12, 13)
30.0
>>> area_triangle_three_sides(10, 11, 12)
51.521233486786784
>>> area_triangle_three_sides(-1, -2, -1)
Traceback (most recent call last):
...
ValueError: area_triangle_three_sides() only accepts non-negative values
>>> area_triangle_three_sides(1, -2, 1)
Traceback (most recent call last):
...
ValueError: area_triangle_three_sides() only accepts non-negative values
"""
if side1 < 0 or side2 < 0 or side3 < 0:
raise ValueError("area_triangle_three_sides() only accepts non-negative values")
elif side1 + side2 < side3 or side1 + side3 < side2 or side2 + side3 < side1:
raise ValueError("Given three sides do not form a triangle")
semi_perimeter = (side1 + side2 + side3) / 2
area = sqrt(
semi_perimeter
* (semi_perimeter - side1)
* (semi_perimeter - side2)
* (semi_perimeter - side3)
)
return area
def area_parallelogram(base: float, height: float) -> float:
"""
Calculate the area of a parallelogram.
>>> area_parallelogram(10, 20)
200
>>> area_parallelogram(-1, -2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
>>> area_parallelogram(1, -2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
>>> area_parallelogram(-1, 2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
"""
if base < 0 or height < 0:
raise ValueError("area_parallelogram() only accepts non-negative values")
return base * height
def area_trapezium(base1: float, base2: float, height: float) -> float:
"""
Calculate the area of a trapezium.
>>> area_trapezium(10, 20, 30)
450.0
>>> area_trapezium(-1, -2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, 2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, -2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, 2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, -2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, -2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, 2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
"""
if base1 < 0 or base2 < 0 or height < 0:
raise ValueError("area_trapezium() only accepts non-negative values")
return 1 / 2 * (base1 + base2) * height
def area_circle(radius: float) -> float:
"""
Calculate the area of a circle.
>>> area_circle(20)
1256.6370614359173
>>> area_circle(-1)
Traceback (most recent call last):
...
ValueError: area_circle() only accepts non-negative values
"""
if radius < 0:
raise ValueError("area_circle() only accepts non-negative values")
return pi * radius ** 2
def area_ellipse(radius_x: float, radius_y: float) -> float:
"""
Calculate the area of an ellipse.
>>> area_ellipse(10, 10)
314.1592653589793
>>> area_ellipse(10, 20)
628.3185307179587
>>> area_ellipse(-10, 20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
>>> area_ellipse(10, -20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
>>> area_ellipse(-10, -20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
"""
if radius_x < 0 or radius_y < 0:
raise ValueError("area_ellipse() only accepts non-negative values")
return pi * radius_x * radius_y
def area_rhombus(diagonal_1: float, diagonal_2: float) -> float:
"""
Calculate the area of a rhombus.
>>> area_rhombus(10, 20)
100.0
>>> area_rhombus(-1, -2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
>>> area_rhombus(1, -2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
>>> area_rhombus(-1, 2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
"""
if diagonal_1 < 0 or diagonal_2 < 0:
raise ValueError("area_rhombus() only accepts non-negative values")
return 1 / 2 * diagonal_1 * diagonal_2
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True) # verbose so we can see methods missing tests
print("[DEMO] Areas of various geometric shapes: \n")
print(f"Rectangle: {area_rectangle(10, 20) = }")
print(f"Square: {area_square(10) = }")
print(f"Triangle: {area_triangle(10, 10) = }")
print(f"Triangle: {area_triangle_three_sides(5, 12, 13) = }")
print(f"Parallelogram: {area_parallelogram(10, 20) = }")
print(f"Trapezium: {area_trapezium(10, 20, 30) = }")
print(f"Circle: {area_circle(20) = }")
print("\nSurface Areas of various geometric shapes: \n")
print(f"Cube: {surface_area_cube(20) = }")
print(f"Sphere: {surface_area_sphere(20) = }")
print(f"Rhombus: {area_rhombus(10, 20) = }")
| -1 |
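A quick worked instance of Heron's formula from area_triangle_three_sides above, for the 5-12-13 right triangle used in its doctest: the semi-perimeter is s = (5 + 12 + 13) / 2 = 15, so the area is sqrt(15 * 10 * 3 * 2) = sqrt(900) = 30, agreeing with base * height / 2 = 5 * 12 / 2 for a right triangle.
from math import sqrt
s = (5 + 12 + 13) / 2  # semi-perimeter = 15.0
heron_area = sqrt(s * (s - 5) * (s - 12) * (s - 13))  # sqrt(900.0) = 30.0
assert heron_area == 30.0 == 5 * 12 / 2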
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Min heap data structure
# with decrease key functionality - in O(log(n)) time
class Node:
def __init__(self, name, val):
self.name = name
self.val = val
def __str__(self):
return f"{self.__class__.__name__}({self.name}, {self.val})"
def __lt__(self, other):
return self.val < other.val
class MinHeap:
"""
>>> r = Node("R", -1)
>>> b = Node("B", 6)
>>> a = Node("A", 3)
>>> x = Node("X", 1)
>>> e = Node("E", 4)
>>> print(b)
Node(B, 6)
>>> myMinHeap = MinHeap([r, b, a, x, e])
>>> myMinHeap.decrease_key(b, -17)
>>> print(b)
Node(B, -17)
>>> print(myMinHeap["B"])
-17
"""
def __init__(self, array):
self.idx_of_element = {}
self.heap_dict = {}
self.heap = self.build_heap(array)
def __getitem__(self, key):
return self.get_value(key)
def get_parent_idx(self, idx):
return (idx - 1) // 2
def get_left_child_idx(self, idx):
return idx * 2 + 1
def get_right_child_idx(self, idx):
return idx * 2 + 2
def get_value(self, key):
return self.heap_dict[key]
def build_heap(self, array):
lastIdx = len(array) - 1
startFrom = self.get_parent_idx(lastIdx)
for idx, i in enumerate(array):
self.idx_of_element[i] = idx
self.heap_dict[i.name] = i.val
for i in range(startFrom, -1, -1):
self.sift_down(i, array)
return array
# this is min-heapify method
def sift_down(self, idx, array):
while True:
l = self.get_left_child_idx(idx) # noqa: E741
r = self.get_right_child_idx(idx)
smallest = idx
if l < len(array) and array[l] < array[idx]:
smallest = l
if r < len(array) and array[r] < array[smallest]:
smallest = r
if smallest != idx:
array[idx], array[smallest] = array[smallest], array[idx]
(
self.idx_of_element[array[idx]],
self.idx_of_element[array[smallest]],
) = (
self.idx_of_element[array[smallest]],
self.idx_of_element[array[idx]],
)
idx = smallest
else:
break
def sift_up(self, idx):
p = self.get_parent_idx(idx)
while p >= 0 and self.heap[p] > self.heap[idx]:
self.heap[p], self.heap[idx] = self.heap[idx], self.heap[p]
self.idx_of_element[self.heap[p]], self.idx_of_element[self.heap[idx]] = (
self.idx_of_element[self.heap[idx]],
self.idx_of_element[self.heap[p]],
)
idx = p
p = self.get_parent_idx(idx)
def peek(self):
return self.heap[0]
def remove(self):
self.heap[0], self.heap[-1] = self.heap[-1], self.heap[0]
self.idx_of_element[self.heap[0]], self.idx_of_element[self.heap[-1]] = (
self.idx_of_element[self.heap[-1]],
self.idx_of_element[self.heap[0]],
)
x = self.heap.pop()
del self.idx_of_element[x]
self.sift_down(0, self.heap)
return x
def insert(self, node):
self.heap.append(node)
self.idx_of_element[node] = len(self.heap) - 1
self.heap_dict[node.name] = node.val
self.sift_up(len(self.heap) - 1)
def is_empty(self):
return len(self.heap) == 0
def decrease_key(self, node, newValue):
assert (
self.heap[self.idx_of_element[node]].val > newValue
), "newValue must be less that current value"
node.val = newValue
self.heap_dict[node.name] = newValue
self.sift_up(self.idx_of_element[node])
# USAGE
r = Node("R", -1)
b = Node("B", 6)
a = Node("A", 3)
x = Node("X", 1)
e = Node("E", 4)
# Use one of these two ways to generate Min-Heap
# Generating Min-Heap from array
myMinHeap = MinHeap([r, b, a, x, e])
# Generating Min-Heap by Insert method
# myMinHeap.insert(a)
# myMinHeap.insert(b)
# myMinHeap.insert(x)
# myMinHeap.insert(r)
# myMinHeap.insert(e)
# Before
print("Min Heap - before decrease key")
for i in myMinHeap.heap:
print(i)
print("Min Heap - After decrease key of node [B -> -17]")
myMinHeap.decrease_key(b, -17)
# After
for i in myMinHeap.heap:
print(i)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Min heap data structure
# with decrease key functionality - in O(log(n)) time
class Node:
def __init__(self, name, val):
self.name = name
self.val = val
def __str__(self):
return f"{self.__class__.__name__}({self.name}, {self.val})"
def __lt__(self, other):
return self.val < other.val
class MinHeap:
"""
>>> r = Node("R", -1)
>>> b = Node("B", 6)
>>> a = Node("A", 3)
>>> x = Node("X", 1)
>>> e = Node("E", 4)
>>> print(b)
Node(B, 6)
>>> myMinHeap = MinHeap([r, b, a, x, e])
>>> myMinHeap.decrease_key(b, -17)
>>> print(b)
Node(B, -17)
>>> print(myMinHeap["B"])
-17
"""
def __init__(self, array):
self.idx_of_element = {}
self.heap_dict = {}
self.heap = self.build_heap(array)
def __getitem__(self, key):
return self.get_value(key)
def get_parent_idx(self, idx):
return (idx - 1) // 2
def get_left_child_idx(self, idx):
return idx * 2 + 1
def get_right_child_idx(self, idx):
return idx * 2 + 2
def get_value(self, key):
return self.heap_dict[key]
def build_heap(self, array):
lastIdx = len(array) - 1
startFrom = self.get_parent_idx(lastIdx)
for idx, i in enumerate(array):
self.idx_of_element[i] = idx
self.heap_dict[i.name] = i.val
for i in range(startFrom, -1, -1):
self.sift_down(i, array)
return array
# this is min-heapify method
def sift_down(self, idx, array):
while True:
l = self.get_left_child_idx(idx) # noqa: E741
r = self.get_right_child_idx(idx)
smallest = idx
if l < len(array) and array[l] < array[idx]:
smallest = l
if r < len(array) and array[r] < array[smallest]:
smallest = r
if smallest != idx:
array[idx], array[smallest] = array[smallest], array[idx]
(
self.idx_of_element[array[idx]],
self.idx_of_element[array[smallest]],
) = (
self.idx_of_element[array[smallest]],
self.idx_of_element[array[idx]],
)
idx = smallest
else:
break
def sift_up(self, idx):
p = self.get_parent_idx(idx)
while p >= 0 and self.heap[p] > self.heap[idx]:
self.heap[p], self.heap[idx] = self.heap[idx], self.heap[p]
self.idx_of_element[self.heap[p]], self.idx_of_element[self.heap[idx]] = (
self.idx_of_element[self.heap[idx]],
self.idx_of_element[self.heap[p]],
)
idx = p
p = self.get_parent_idx(idx)
def peek(self):
return self.heap[0]
def remove(self):
self.heap[0], self.heap[-1] = self.heap[-1], self.heap[0]
self.idx_of_element[self.heap[0]], self.idx_of_element[self.heap[-1]] = (
self.idx_of_element[self.heap[-1]],
self.idx_of_element[self.heap[0]],
)
x = self.heap.pop()
del self.idx_of_element[x]
self.sift_down(0, self.heap)
return x
def insert(self, node):
self.heap.append(node)
self.idx_of_element[node] = len(self.heap) - 1
self.heap_dict[node.name] = node.val
self.sift_up(len(self.heap) - 1)
def is_empty(self):
return len(self.heap) == 0
def decrease_key(self, node, newValue):
assert (
self.heap[self.idx_of_element[node]].val > newValue
), "newValue must be less that current value"
node.val = newValue
self.heap_dict[node.name] = newValue
self.sift_up(self.idx_of_element[node])
# USAGE
r = Node("R", -1)
b = Node("B", 6)
a = Node("A", 3)
x = Node("X", 1)
e = Node("E", 4)
# Use one of these two ways to generate Min-Heap
# Generating Min-Heap from array
myMinHeap = MinHeap([r, b, a, x, e])
# Generating Min-Heap by Insert method
# myMinHeap.insert(a)
# myMinHeap.insert(b)
# myMinHeap.insert(x)
# myMinHeap.insert(r)
# myMinHeap.insert(e)
# Before
print("Min Heap - before decrease key")
for i in myMinHeap.heap:
print(i)
print("Min Heap - After decrease key of node [B -> -17]")
myMinHeap.decrease_key(b, -17)
# After
for i in myMinHeap.heap:
print(i)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
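A short usage sketch for the heap implementation above, assuming the Node and MinHeap classes defined in that file are in scope; it only exercises methods that already exist there (peek, decrease_key, remove, is_empty).
nodes = [Node("A", 3), Node("B", 6), Node("X", 1)]
heap = MinHeap(nodes)
print(heap.peek())  # Node(X, 1) - the smallest value sits at the root
heap.decrease_key(nodes[1], -5)  # lower B's key from 6 to -5 in O(log n)
print(heap.peek())  # Node(B, -5)
print(heap.remove())  # pops Node(B, -5) and re-heapifies
print(heap.is_empty())  # False - A and X are still stored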
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Implementing Newton Raphson method in Python
# Author: Syed Haseeb Shah (github.com/QuantumNovice)
# The Newton-Raphson method (also known as Newton's method) is a way to
# quickly find a good approximation for the root of a real-valued function
from decimal import Decimal
from math import * # noqa: F401, F403
from typing import Union
from sympy import diff
def newton_raphson(
func: str, a: Union[float, Decimal], precision: float = 10 ** -10
) -> float:
"""Finds root from the point 'a' onwards by Newton-Raphson method
>>> newton_raphson("sin(x)", 2)
3.1415926536808043
>>> newton_raphson("x**2 - 5*x +2", 0.4)
0.4384471871911695
>>> newton_raphson("x**2 - 5", 0.1)
2.23606797749979
>>> newton_raphson("log(x)- 1", 2)
2.718281828458938
"""
x = a
while True:
x = Decimal(x) - (Decimal(eval(func)) / Decimal(eval(str(diff(func)))))
# This number dictates the accuracy of the answer
if abs(eval(func)) < precision:
return float(x)
# Let's Execute
if __name__ == "__main__":
# Find root of trigonometric function
# Find value of pi
print(f"The root of sin(x) = 0 is {newton_raphson('sin(x)', 2)}")
# Find root of polynomial
print(f"The root of x**2 - 5*x + 2 = 0 is {newton_raphson('x**2 - 5*x + 2', 0.4)}")
# Find the value of e (the root of log(x) - 1 = 0)
print(f"The root of log(x) - 1 = 0 is {newton_raphson('log(x) - 1', 2)}")
# Exponential Roots
print(f"The root of exp(x) - 1 = 0 is {newton_raphson('exp(x) - 1', 0)}")
| # Implementing Newton Raphson method in Python
# Author: Syed Haseeb Shah (github.com/QuantumNovice)
# The Newton-Raphson method (also known as Newton's method) is a way to
# quickly find a good approximation for the root of a real-valued function
from decimal import Decimal
from math import * # noqa: F401, F403
from typing import Union
from sympy import diff
def newton_raphson(
func: str, a: Union[float, Decimal], precision: float = 10 ** -10
) -> float:
"""Finds root from the point 'a' onwards by Newton-Raphson method
>>> newton_raphson("sin(x)", 2)
3.1415926536808043
>>> newton_raphson("x**2 - 5*x +2", 0.4)
0.4384471871911695
>>> newton_raphson("x**2 - 5", 0.1)
2.23606797749979
>>> newton_raphson("log(x)- 1", 2)
2.718281828458938
"""
x = a
while True:
x = Decimal(x) - (Decimal(eval(func)) / Decimal(eval(str(diff(func)))))
# This number dictates the accuracy of the answer
if abs(eval(func)) < precision:
return float(x)
# Let's Execute
if __name__ == "__main__":
# Find root of trigonometric function
# Find value of pi
print(f"The root of sin(x) = 0 is {newton_raphson('sin(x)', 2)}")
# Find root of polynomial
print(f"The root of x**2 - 5*x + 2 = 0 is {newton_raphson('x**2 - 5*x + 2', 0.4)}")
# Find the value of e (the root of log(x) - 1 = 0)
print(f"The root of log(x) - 1 = 0 is {newton_raphson('log(x) - 1', 2)}")
# Exponential Roots
print(f"The root of exp(x) - 1 = 0 is {newton_raphson('exp(x) - 1', 0)}")
| -1 |
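The implementation above differentiates a string expression with sympy and evaluates it with eval(). Below is a hedged sketch of the same x_{n+1} = x_n - f(x_n)/f'(x_n) iteration taking plain callables for f and f'; the function name and signature are illustrative and are not defined in the file above.
from typing import Callable

def newton_raphson_callable(
    f: Callable[[float], float],
    f_prime: Callable[[float], float],
    x0: float,
    precision: float = 1e-10,
) -> float:
    # Iterate x <- x - f(x) / f'(x) until |f(x)| drops below the tolerance.
    # Assumes f_prime(x) stays non-zero along the iteration.
    x = x0
    while abs(f(x)) >= precision:
        x = x - f(x) / f_prime(x)
    return x

# sqrt(5) as the positive root of x**2 - 5, starting from x0 = 2
print(newton_raphson_callable(lambda x: x * x - 5, lambda x: 2 * x, 2.0))  # ~2.23606797749979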
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import string
from math import log10
"""
tf-idf Wikipedia: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf-idf and other word frequency algorithms are often used
as a weighting factor in information retrieval and text
mining. 83% of text-based recommender systems use
tf-idf for term weighting. In Layman's terms, tf-idf
is a statistic intended to reflect how important a word
is to a document in a corpus (a collection of documents)
Here I've implemented several word frequency algorithms
that are commonly used in information retrieval: Term Frequency,
Document Frequency, and TF-IDF (Term-Frequency*Inverse-Document-Frequency)
are included.
Term Frequency is a statistical function that
returns a number representing how frequently
an expression occurs in a document. This
indicates how significant a particular term is in
a given document.
Document Frequency is a statistical function that returns
an integer representing the number of documents in a
corpus that a term occurs in (where the max number returned
would be the number of documents in the corpus).
Inverse Document Frequency is mathematically written as
log10(N/df), where N is the number of documents in your
corpus and df is the Document Frequency. If df is 0, a
ZeroDivisionError will be thrown.
Term-Frequency*Inverse-Document-Frequency is a measure
of the originality of a term. It is mathematically written
as tf*log10(N/df). It compares the number of times
a term appears in a document with the number of documents
the term appears in. If df is 0, a ZeroDivisionError will be thrown.
"""
def term_frequency(term: str, document: str) -> int:
"""
Return the number of times a term occurs within
a given document.
@params: term, the term to search a document for, and document,
the document to search within
@returns: an integer representing the number of times a term is
found within the document
@examples:
>>> term_frequency("to", "To be, or not to be")
2
"""
# strip all punctuation and newlines and replace it with ''
document_without_punctuation = document.translate(
str.maketrans("", "", string.punctuation)
).replace("\n", "")
tokenize_document = document_without_punctuation.split(" ") # word tokenization
return len([word for word in tokenize_document if word.lower() == term.lower()])
def document_frequency(term: str, corpus: str) -> tuple[int, int]:
"""
Calculate the number of documents in a corpus that contain a
given term
@params : term, the term to search each document for, and corpus, a collection of
documents. Each document should be separated by a newline.
@returns : the number of documents in the corpus that contain the term you are
searching for and the number of documents in the corpus
@examples :
>>> document_frequency("first", "This is the first document in the corpus.\\nThIs\
is the second document in the corpus.\\nTHIS is \
the third document in the corpus.")
(1, 3)
"""
corpus_without_punctuation = corpus.lower().translate(
str.maketrans("", "", string.punctuation)
) # strip all punctuation and replace it with ''
docs = corpus_without_punctuation.split("\n")
term = term.lower()
return (len([doc for doc in docs if term in doc]), len(docs))
def inverse_document_frequency(df: int, N: int, smoothing: bool = False) -> float:
"""
Return a float denoting the importance
of a word. This measure of importance is
calculated by log10(N/df), where N is the
number of documents and df is
the Document Frequency.
@params : df, the Document Frequency, N,
the number of documents in the corpus and
smoothing, if True return the idf-smooth
@returns : log10(N/df) or 1+log10(N/1+df)
@examples :
>>> inverse_document_frequency(3, 0)
Traceback (most recent call last):
...
ValueError: log10(0) is undefined.
>>> inverse_document_frequency(1, 3)
0.477
>>> inverse_document_frequency(0, 3)
Traceback (most recent call last):
...
ZeroDivisionError: df must be > 0
>>> inverse_document_frequency(0, 3,True)
1.477
"""
if smoothing:
if N == 0:
raise ValueError("log10(0) is undefined.")
return round(1 + log10(N / (1 + df)), 3)
if df == 0:
raise ZeroDivisionError("df must be > 0")
elif N == 0:
raise ValueError("log10(0) is undefined.")
return round(log10(N / df), 3)
def tf_idf(tf: int, idf: float) -> float:
"""
Combine the term frequency
and inverse document frequency functions to
calculate the originality of a term. This
'originality' is calculated by multiplying
the term frequency and the inverse document
frequency : tf-idf = TF * IDF
@params : tf, the term frequency, and idf, the inverse document
frequency
@examples :
>>> tf_idf(2, 0.477)
0.954
"""
return round(tf * idf, 3)
| import string
from math import log10
"""
tf-idf Wikipedia: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf-idf and other word frequency algorithms are often used
as a weighting factor in information retrieval and text
mining. 83% of text-based recommender systems use
tf-idf for term weighting. In Layman's terms, tf-idf
is a statistic intended to reflect how important a word
is to a document in a corpus (a collection of documents)
Here I've implemented several word frequency algorithms
that are commonly used in information retrieval: Term Frequency,
Document Frequency, and TF-IDF (Term-Frequency*Inverse-Document-Frequency)
are included.
Term Frequency is a statistical function that
returns a number representing how frequently
an expression occurs in a document. This
indicates how significant a particular term is in
a given document.
Document Frequency is a statistical function that returns
an integer representing the number of documents in a
corpus that a term occurs in (where the max number returned
would be the number of documents in the corpus).
Inverse Document Frequency is mathematically written as
log10(N/df), where N is the number of documents in your
corpus and df is the Document Frequency. If df is 0, a
ZeroDivisionError will be thrown.
Term-Frequency*Inverse-Document-Frequency is a measure
of the originality of a term. It is mathematically written
as tf*log10(N/df). It compares the number of times
a term appears in a document with the number of documents
the term appears in. If df is 0, a ZeroDivisionError will be thrown.
"""
def term_frequency(term: str, document: str) -> int:
"""
Return the number of times a term occurs within
a given document.
@params: term, the term to search a document for, and document,
the document to search within
@returns: an integer representing the number of times a term is
found within the document
@examples:
>>> term_frequency("to", "To be, or not to be")
2
"""
# strip all punctuation and newlines and replace it with ''
document_without_punctuation = document.translate(
str.maketrans("", "", string.punctuation)
).replace("\n", "")
tokenize_document = document_without_punctuation.split(" ") # word tokenization
return len([word for word in tokenize_document if word.lower() == term.lower()])
def document_frequency(term: str, corpus: str) -> tuple[int, int]:
"""
Calculate the number of documents in a corpus that contain a
given term
@params : term, the term to search each document for, and corpus, a collection of
documents. Each document should be separated by a newline.
@returns : the number of documents in the corpus that contain the term you are
searching for and the number of documents in the corpus
@examples :
>>> document_frequency("first", "This is the first document in the corpus.\\nThIs\
is the second document in the corpus.\\nTHIS is \
the third document in the corpus.")
(1, 3)
"""
corpus_without_punctuation = corpus.lower().translate(
str.maketrans("", "", string.punctuation)
) # strip all punctuation and replace it with ''
docs = corpus_without_punctuation.split("\n")
term = term.lower()
return (len([doc for doc in docs if term in doc]), len(docs))
def inverse_document_frequency(df: int, N: int, smoothing: bool = False) -> float:
"""
Return a float denoting the importance
of a word. This measure of importance is
calculated by log10(N/df), where N is the
number of documents and df is
the Document Frequency.
@params : df, the Document Frequency, N,
the number of documents in the corpus and
smoothing, if True return the idf-smooth
@returns : log10(N/df) or 1+log10(N/1+df)
@examples :
>>> inverse_document_frequency(3, 0)
Traceback (most recent call last):
...
ValueError: log10(0) is undefined.
>>> inverse_document_frequency(1, 3)
0.477
>>> inverse_document_frequency(0, 3)
Traceback (most recent call last):
...
ZeroDivisionError: df must be > 0
>>> inverse_document_frequency(0, 3,True)
1.477
"""
if smoothing:
if N == 0:
raise ValueError("log10(0) is undefined.")
return round(1 + log10(N / (1 + df)), 3)
if df == 0:
raise ZeroDivisionError("df must be > 0")
elif N == 0:
raise ValueError("log10(0) is undefined.")
return round(log10(N / df), 3)
def tf_idf(tf: int, idf: float) -> float:
"""
Combine the term frequency
and inverse document frequency functions to
calculate the originality of a term. This
'originality' is calculated by multiplying
the term frequency and the inverse document
frequency : tf-idf = TF * IDF
@params : tf, the term frequency, and idf, the inverse document
frequency
@examples :
>>> tf_idf(2, 0.477)
0.954
"""
return round(tf * idf, 3)
| -1 |
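A worked example chaining the three functions above on the corpus used in the document_frequency doctest; it assumes the functions defined in that file are in scope, and the intermediate values match the doctests.
corpus = (
    "This is the first document in the corpus.\n"
    "ThIs is the second document in the corpus.\n"
    "THIS is the third document in the corpus."
)
tf = term_frequency("first", "This is the first document in the corpus.")  # 1
df, n_docs = document_frequency("first", corpus)  # (1, 3)
idf = inverse_document_frequency(df, n_docs)  # round(log10(3 / 1), 3) == 0.477
print(tf_idf(tf, idf))  # 0.477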
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
A pure Python implementation of the quick sort algorithm
For doctests run following command:
python3 -m doctest -v quick_sort.py
For manual testing run:
python3 quick_sort.py
"""
from typing import List
def quick_sort(collection: list) -> list:
"""A pure Python implementation of quick sort algorithm
:param collection: a mutable collection of comparable items
:return: the same collection sorted in ascending order
Examples:
>>> quick_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> quick_sort([])
[]
>>> quick_sort([-2, 5, 0, -45])
[-45, -2, 0, 5]
"""
if len(collection) < 2:
return collection
pivot = collection.pop() # Use the last element as the first pivot
greater: List[int] = [] # All elements greater than pivot
lesser: List[int] = [] # All elements less than or equal to pivot
for element in collection:
(greater if element > pivot else lesser).append(element)
return quick_sort(lesser) + [pivot] + quick_sort(greater)
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(quick_sort(unsorted))
| """
A pure Python implementation of the quick sort algorithm
For doctests run following command:
python3 -m doctest -v quick_sort.py
For manual testing run:
python3 quick_sort.py
"""
from typing import List
def quick_sort(collection: list) -> list:
"""A pure Python implementation of quick sort algorithm
:param collection: a mutable collection of comparable items
:return: the same collection sorted in ascending order
Examples:
>>> quick_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> quick_sort([])
[]
>>> quick_sort([-2, 5, 0, -45])
[-45, -2, 0, 5]
"""
if len(collection) < 2:
return collection
pivot = collection.pop() # Use the last element as the first pivot
greater: List[int] = [] # All elements greater than pivot
lesser: List[int] = [] # All elements less than or equal to pivot
for element in collection:
(greater if element > pivot else lesser).append(element)
return quick_sort(lesser) + [pivot] + quick_sort(greater)
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(quick_sort(unsorted))
| -1 |
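The version above allocates new lesser/greater lists on every call. Below is a hedged sketch of the classic in-place alternative using Lomuto partitioning; the helper name is illustrative and is not part of the file above.
from typing import Optional

def quick_sort_in_place(items: list, low: int = 0, high: Optional[int] = None) -> None:
    # Sort items[low..high] in place, partitioning around the last element (Lomuto scheme).
    if high is None:
        high = len(items) - 1
    if low >= high:
        return
    pivot = items[high]
    i = low
    for j in range(low, high):
        if items[j] <= pivot:
            items[i], items[j] = items[j], items[i]
            i += 1
    items[i], items[high] = items[high], items[i]
    quick_sort_in_place(items, low, i - 1)
    quick_sort_in_place(items, i + 1, high)

data = [0, 5, 3, 2, 2]
quick_sort_in_place(data)
print(data)  # [0, 2, 2, 3, 5]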
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
An RSA prime factor algorithm.
The program can efficiently factor RSA prime number given the private key d and
public key e.
Source: on page 3 of https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf
More readable source: https://www.di-mgt.com.au/rsa_factorize_n.html
Large numbers can take minutes to factor, therefore they are not included in the doctests.
"""
from __future__ import annotations
import math
import random
def rsafactor(d: int, e: int, N: int) -> list[int]:
"""
This function returns the factors of N, where p*q=N
Return: [p, q]
We call N the RSA modulus, e the encryption exponent, and d the decryption exponent.
The pair (N, e) is the public key. As its name suggests, it is public and is used to
encrypt messages.
The pair (N, d) is the secret key or private key and is known only to the recipient
of encrypted messages.
>>> rsafactor(3, 16971, 25777)
[149, 173]
>>> rsafactor(7331, 11, 27233)
[113, 241]
>>> rsafactor(4021, 13, 17711)
[89, 199]
"""
k = d * e - 1
p = 0
q = 0
while p == 0:
g = random.randint(2, N - 1)
t = k
while True:
if t % 2 == 0:
t = t // 2
x = (g ** t) % N
y = math.gcd(x - 1, N)
if x > 1 and y > 1:
p = y
q = N // y
break # find the correct factors
else:
break # t is not divisible by 2, break and choose another g
return sorted([p, q])
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
An RSA prime factor algorithm.
The program can efficiently factor RSA prime number given the private key d and
public key e.
Source: on page 3 of https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf
More readable source: https://www.di-mgt.com.au/rsa_factorize_n.html
Large numbers can take minutes to factor, therefore they are not included in the doctests.
"""
from __future__ import annotations
import math
import random
def rsafactor(d: int, e: int, N: int) -> list[int]:
"""
This function returns the factors of N, where p*q=N
Return: [p, q]
We call N the RSA modulus, e the encryption exponent, and d the decryption exponent.
The pair (N, e) is the public key. As its name suggests, it is public and is used to
encrypt messages.
The pair (N, d) is the secret key or private key and is known only to the recipient
of encrypted messages.
>>> rsafactor(3, 16971, 25777)
[149, 173]
>>> rsafactor(7331, 11, 27233)
[113, 241]
>>> rsafactor(4021, 13, 17711)
[89, 199]
"""
k = d * e - 1
p = 0
q = 0
while p == 0:
g = random.randint(2, N - 1)
t = k
while True:
if t % 2 == 0:
t = t // 2
x = (g ** t) % N
y = math.gcd(x - 1, N)
if x > 1 and y > 1:
p = y
q = N // y
break # find the correct factors
else:
break # t is not divisible by 2, break and choose another g
return sorted([p, q])
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
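A quick arithmetic check of the first doctest result above, rsafactor(3, 16971, 25777) == [149, 173]; plain integer arithmetic, no imports needed.
p, q = 149, 173
N, d, e = 25777, 3, 16971
assert p * q == N  # the recovered factors multiply back to the modulus
phi = (p - 1) * (q - 1)  # 148 * 172 == 25456
assert (d * e) % phi == 1  # d and e are modular inverses modulo phi(N)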
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
"""Provide the functionality to manipulate a single bit."""
def set_bit(number: int, position: int) -> int:
"""
Set the bit at position to 1.
Details: perform a bitwise OR of the given number and X,
where X is a number whose bits are all zero except the bit at the
given position, which is one.
>>> set_bit(0b1101, 1) # 0b1111
15
>>> set_bit(0b0, 5) # 0b100000
32
>>> set_bit(0b1111, 1) # 0b1111
15
"""
return number | (1 << position)
def clear_bit(number: int, position: int) -> int:
"""
Set the bit at position to 0.
Details: perform a bitwise AND of the given number and X,
where X is a number whose bits are all one except the bit at the
given position, which is zero.
>>> clear_bit(0b10010, 1) # 0b10000
16
>>> clear_bit(0b0, 5) # 0b0
0
"""
return number & ~(1 << position)
def flip_bit(number: int, position: int) -> int:
"""
Flip the bit at position.
Details: perform a bitwise XOR of the given number and X,
where X is a number whose bits are all zero except the bit at the
given position, which is one.
>>> flip_bit(0b101, 1) # 0b111
7
>>> flip_bit(0b101, 0) # 0b100
4
"""
return number ^ (1 << position)
def is_bit_set(number: int, position: int) -> bool:
"""
Is the bit at position set?
Details: Shift the bit at position to be the first (smallest) bit.
Then check if the first bit is set by anding the shifted number with 1.
>>> is_bit_set(0b1010, 0)
False
>>> is_bit_set(0b1010, 1)
True
>>> is_bit_set(0b1010, 2)
False
>>> is_bit_set(0b1010, 3)
True
>>> is_bit_set(0b0, 17)
False
"""
return ((number >> position) & 1) == 1
if __name__ == "__main__":
import doctest
doctest.testmod()
| #!/usr/bin/env python3
"""Provide the functionality to manipulate a single bit."""
def set_bit(number: int, position: int) -> int:
"""
Set the bit at position to 1.
Details: perform a bitwise OR of the given number and X,
where X is a number whose bits are all zero except the bit at the
given position, which is one.
>>> set_bit(0b1101, 1) # 0b1111
15
>>> set_bit(0b0, 5) # 0b100000
32
>>> set_bit(0b1111, 1) # 0b1111
15
"""
return number | (1 << position)
def clear_bit(number: int, position: int) -> int:
"""
Set the bit at position to 0.
Details: perform a bitwise AND of the given number and X,
where X is a number whose bits are all one except the bit at the
given position, which is zero.
>>> clear_bit(0b10010, 1) # 0b10000
16
>>> clear_bit(0b0, 5) # 0b0
0
"""
return number & ~(1 << position)
def flip_bit(number: int, position: int) -> int:
"""
Flip the bit at position.
Details: perform a bitwise XOR of the given number and X,
where X is a number whose bits are all zero except the bit at the
given position, which is one.
>>> flip_bit(0b101, 1) # 0b111
7
>>> flip_bit(0b101, 0) # 0b100
4
"""
return number ^ (1 << position)
def is_bit_set(number: int, position: int) -> bool:
"""
Is the bit at position set?
Details: Shift the bit at position to be the first (smallest) bit.
Then check if the first bit is set by anding the shifted number with 1.
>>> is_bit_set(0b1010, 0)
False
>>> is_bit_set(0b1010, 1)
True
>>> is_bit_set(0b1010, 2)
False
>>> is_bit_set(0b1010, 3)
True
>>> is_bit_set(0b0, 17)
False
"""
return ((number >> position) & 1) == 1
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
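Composing the helpers above in one small round trip; this assumes the four functions defined in that file are in scope.
value = 0b1010
value = set_bit(value, 0)  # 0b1011
value = clear_bit(value, 3)  # 0b0011
value = flip_bit(value, 2)  # 0b0111
print(bin(value), is_bit_set(value, 2))  # 0b111 True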
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from typing import Callable
def bisection(function: Callable[[float], float], a: float, b: float) -> float:
"""
finds where the function becomes 0 in [a, b] using Bolzano's theorem (intermediate value theorem)
>>> bisection(lambda x: x ** 3 - 1, -5, 5)
1.0000000149011612
>>> bisection(lambda x: x ** 3 - 1, 2, 1000)
Traceback (most recent call last):
...
ValueError: could not find root in given interval.
>>> bisection(lambda x: x ** 2 - 4 * x + 3, 0, 2)
1.0
>>> bisection(lambda x: x ** 2 - 4 * x + 3, 2, 4)
3.0
>>> bisection(lambda x: x ** 2 - 4 * x + 3, 4, 1000)
Traceback (most recent call last):
...
ValueError: could not find root in given interval.
"""
start: float = a
end: float = b
if function(a) == 0: # one of the a or b is a root for the function
return a
elif function(b) == 0:
return b
elif (
function(a) * function(b) > 0
): # if neither a nor b is a root and f(a) and f(b) have the same sign,
# then this algorithm can't find the root
raise ValueError("could not find root in given interval.")
else:
mid: float = start + (end - start) / 2.0
while abs(start - mid) > 10 ** -7: # loop until the interval is narrower than 10^-7
if function(mid) == 0:
return mid
elif function(mid) * function(start) < 0:
end = mid
else:
start = mid
mid = start + (end - start) / 2.0
return mid
def f(x: float) -> float:
return x ** 3 - 2 * x - 5
if __name__ == "__main__":
print(bisection(f, 1, 1000))
import doctest
doctest.testmod()
| from typing import Callable
def bisection(function: Callable[[float], float], a: float, b: float) -> float:
"""
finds where the function becomes 0 in [a, b] using Bolzano's theorem (intermediate value theorem)
>>> bisection(lambda x: x ** 3 - 1, -5, 5)
1.0000000149011612
>>> bisection(lambda x: x ** 3 - 1, 2, 1000)
Traceback (most recent call last):
...
ValueError: could not find root in given interval.
>>> bisection(lambda x: x ** 2 - 4 * x + 3, 0, 2)
1.0
>>> bisection(lambda x: x ** 2 - 4 * x + 3, 2, 4)
3.0
>>> bisection(lambda x: x ** 2 - 4 * x + 3, 4, 1000)
Traceback (most recent call last):
...
ValueError: could not find root in given interval.
"""
start: float = a
end: float = b
if function(a) == 0: # one of the a or b is a root for the function
return a
elif function(b) == 0:
return b
elif (
function(a) * function(b) > 0
): # if neither a nor b is a root and f(a) and f(b) have the same sign,
# then this algorithm can't find the root
raise ValueError("could not find root in given interval.")
else:
mid: float = start + (end - start) / 2.0
while abs(start - mid) > 10 ** -7: # loop until the interval is narrower than 10^-7
if function(mid) == 0:
return mid
elif function(mid) * function(start) < 0:
end = mid
else:
start = mid
mid = start + (end - start) / 2.0
return mid
def f(x: float) -> float:
return x ** 3 - 2 * x - 5
if __name__ == "__main__":
print(bisection(f, 1, 1000))
import doctest
doctest.testmod()
| -1 |
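A small usage sketch of the bisection() defined above (assumed in scope): sqrt(2) is the root of x**2 - 2 on [0, 2], and f(0) < 0 < f(2), so the sign condition is satisfied.
root = bisection(lambda x: x ** 2 - 2, 0, 2)
print(root)  # ~1.4142135, accurate to the 10**-7 stopping tolerance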
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Factorial of a number using memoization
from functools import lru_cache
@lru_cache
def factorial(num: int) -> int:
"""
>>> factorial(7)
5040
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: Number should not be negative.
>>> [factorial(i) for i in range(10)]
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
"""
if num < 0:
raise ValueError("Number should not be negative.")
return 1 if num in (0, 1) else num * factorial(num - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Factorial of a number using memoization
from functools import lru_cache
@lru_cache
def factorial(num: int) -> int:
"""
>>> factorial(7)
5040
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: Number should not be negative.
>>> [factorial(i) for i in range(10)]
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
"""
if num < 0:
raise ValueError("Number should not be negative.")
return 1 if num in (0, 1) else num * factorial(num - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
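Because the function above is wrapped in @lru_cache, the standard functools cache_info() helper is attached to it; this short check (assuming that factorial is in scope) confirms a repeated call is served from the cache.
factorial(10)  # fills the cache for 1..10
hits_before = factorial.cache_info().hits
factorial(10)  # the repeated top-level call is a cache hit
assert factorial.cache_info().hits > hits_before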
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://www.hackerrank.com/challenges/abbr/problem
You can perform the following operations on the string a:
1. Capitalize zero or more of a's lowercase letters at some index i
(i.e., make them uppercase).
2. Delete all of the remaining lowercase letters in a.
Example:
a=daBcd and b="ABC"
daBcd -> capitalize a and c(dABCd) -> remove d (ABC)
"""
def abbr(a: str, b: str) -> bool:
"""
>>> abbr("daBcd", "ABC")
True
>>> abbr("dBcd", "ABC")
False
"""
n = len(a)
m = len(b)
dp = [[False for _ in range(m + 1)] for _ in range(n + 1)]
dp[0][0] = True
for i in range(n):
for j in range(m + 1):
if dp[i][j]:
if j < m and a[i].upper() == b[j]:
dp[i + 1][j + 1] = True
if a[i].islower():
dp[i + 1][j] = True
return dp[n][m]
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
https://www.hackerrank.com/challenges/abbr/problem
You can perform the following operations on the string a:
1. Capitalize zero or more of a's lowercase letters at some index i
(i.e., make them uppercase).
2. Delete all of the remaining lowercase letters in a.
Example:
a=daBcd and b="ABC"
daBcd -> capitalize a and c(dABCd) -> remove d (ABC)
"""
def abbr(a: str, b: str) -> bool:
"""
>>> abbr("daBcd", "ABC")
True
>>> abbr("dBcd", "ABC")
False
"""
n = len(a)
m = len(b)
dp = [[False for _ in range(m + 1)] for _ in range(n + 1)]
dp[0][0] = True
for i in range(n):
for j in range(m + 1):
if dp[i][j]:
if j < m and a[i].upper() == b[j]:
dp[i + 1][j + 1] = True
if a[i].islower():
dp[i + 1][j] = True
return dp[n][m]
if __name__ == "__main__":
import doctest
doctest.testmod()
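# Illustrative sketch: dp[i][j] above means the first i characters of a can be
# turned into the first j characters of b. The same check can be phrased with a
# set of reachable indices of b (the name abbr_reachable is hypothetical, used
# only for this sketch):
def abbr_reachable(a: str, b: str) -> bool:
    reachable = {0}  # indices of b matched so far
    for ch in a:
        nxt = set()
        for j in reachable:
            if j < len(b) and ch.upper() == b[j]:
                nxt.add(j + 1)  # capitalize ch to match b[j]
            if ch.islower():
                nxt.add(j)  # delete the lowercase ch
        reachable = nxt
    return len(b) in reachable


assert abbr_reachable("daBcd", "ABC") is True
assert abbr_reachable("dBcd", "ABC") is False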
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # floyd_warshall.py
"""
The problem is to find the shortest distance between all pairs of vertices in a
weighted directed graph that can have negative edge weights.
"""
def _print_dist(dist, v):
print("\nThe shortest path matrix using Floyd Warshall algorithm\n")
for i in range(v):
for j in range(v):
if dist[i][j] != float("inf"):
print(int(dist[i][j]), end="\t")
else:
print("INF", end="\t")
print()
def floyd_warshall(graph, v):
"""
:param graph: 2D array calculated from weight[edge[i, j]]
:type graph: List[List[float]]
:param v: number of vertices
:type v: int
:return: shortest distance between all vertex pairs
distance[u][v] will contain the shortest distance from vertex u to v.
1. For each edge (i, j), distance[i][j] = weight(edge(i, j)).
2. For each vertex i, distance[i][i] = 0.
3. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] +
distance[k][j]) for each possible pair i, j of vertices.
4. The above is repeated for each vertex k in the graph.
5. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is
updated to the next vertex[i][k].
"""
dist = [[float("inf") for _ in range(v)] for _ in range(v)]
for i in range(v):
for j in range(v):
dist[i][j] = graph[i][j]
# check vertex k against all other vertices (i, j)
for k in range(v):
# looping through rows of graph array
for i in range(v):
# looping through columns of graph array
for j in range(v):
if (
dist[i][k] != float("inf")
and dist[k][j] != float("inf")
and dist[i][k] + dist[k][j] < dist[i][j]
):
dist[i][j] = dist[i][k] + dist[k][j]
_print_dist(dist, v)
return dist, v
if __name__ == "__main__":
v = int(input("Enter number of vertices: "))
e = int(input("Enter number of edges: "))
graph = [[float("inf") for i in range(v)] for j in range(v)]
for i in range(v):
graph[i][i] = 0.0
# src and dst are indices that must be within the array size graph[e][v]
# failure to follow this will result in an error
for i in range(e):
print("\nEdge ", i + 1)
src = int(input("Enter source:"))
dst = int(input("Enter destination:"))
weight = float(input("Enter weight:"))
graph[src][dst] = weight
floyd_warshall(graph, v)
# Example Input
# Enter number of vertices: 3
# Enter number of edges: 2
# # generated graph from vertex and edge inputs
# [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]]
# [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]]
# specify source, destination and weight for edge #1
# Edge 1
# Enter source:1
# Enter destination:2
# Enter weight:2
# specify source, destination and weight for edge #2
# Edge 2
# Enter source:2
# Enter destination:1
# Enter weight:1
# # Expected Output from the vertex, edge and src, dst, weight inputs!!
# 0 INF INF
# INF 0 2
# INF 1 0
| # floyd_warshall.py
"""
The problem is to find the shortest distance between all pairs of vertices in a
weighted directed graph that can have negative edge weights.
"""
def _print_dist(dist, v):
print("\nThe shortest path matrix using Floyd Warshall algorithm\n")
for i in range(v):
for j in range(v):
if dist[i][j] != float("inf"):
print(int(dist[i][j]), end="\t")
else:
print("INF", end="\t")
print()
def floyd_warshall(graph, v):
"""
:param graph: 2D array calculated from weight[edge[i, j]]
:type graph: List[List[float]]
:param v: number of vertices
:type v: int
:return: shortest distance between all vertex pairs
distance[u][v] will contain the shortest distance from vertex u to v.
1. For each edge (i, j), distance[i][j] = weight(edge(i, j)).
2. For each vertex i, distance[i][i] = 0.
3. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] +
distance[k][j]) for each possible pair i, j of vertices.
4. The above is repeated for each vertex k in the graph.
5. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is
updated to the next vertex[i][k].
"""
dist = [[float("inf") for _ in range(v)] for _ in range(v)]
for i in range(v):
for j in range(v):
dist[i][j] = graph[i][j]
# check vertex k against all other vertices (i, j)
for k in range(v):
# looping through rows of graph array
for i in range(v):
# looping through columns of graph array
for j in range(v):
if (
dist[i][k] != float("inf")
and dist[k][j] != float("inf")
and dist[i][k] + dist[k][j] < dist[i][j]
):
dist[i][j] = dist[i][k] + dist[k][j]
_print_dist(dist, v)
return dist, v
if __name__ == "__main__":
v = int(input("Enter number of vertices: "))
e = int(input("Enter number of edges: "))
graph = [[float("inf") for i in range(v)] for j in range(v)]
for i in range(v):
graph[i][i] = 0.0
# src and dst are indices that must be within the array size graph[e][v]
# failure to follow this will result in an error
for i in range(e):
print("\nEdge ", i + 1)
src = int(input("Enter source:"))
dst = int(input("Enter destination:"))
weight = float(input("Enter weight:"))
graph[src][dst] = weight
floyd_warshall(graph, v)
# Example Input
# Enter number of vertices: 3
# Enter number of edges: 2
# # generated graph from vertex and edge inputs
# [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]]
# [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]]
# specify source, destination and weight for edge #1
# Edge 1
# Enter source:1
# Enter destination:2
# Enter weight:2
# specify source, destination and weight for edge #2
# Edge 2
# Enter source:2
# Enter destination:1
# Enter weight:1
# # Expected Output from the vertex, edge and src, dst, weight inputs!!
# 0 INF INF
# INF 0 2
# INF 1 0
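# Illustrative sketch: the same triple-loop relaxation run on the hard-coded
# 3-vertex example from the comment above, instead of reading interactive input
# (the names example_graph and shortest are local to this sketch).
INF = float("inf")
example_graph = [
    [0.0, INF, INF],
    [INF, 0.0, 2.0],  # edge 1 -> 2 with weight 2
    [INF, 1.0, 0.0],  # edge 2 -> 1 with weight 1
]
shortest = [row[:] for row in example_graph]
for k in range(3):
    for i in range(3):
        for j in range(3):
            if shortest[i][k] + shortest[k][j] < shortest[i][j]:
                shortest[i][j] = shortest[i][k] + shortest[k][j]
# shortest is now [[0.0, inf, inf], [inf, 0.0, 2.0], [inf, 1.0, 0.0]],
# matching the 0/INF matrix shown in the expected output above.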
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 13: https://projecteuler.net/problem=13
Problem Statement:
Work out the first ten digits of the sum of the following one-hundred 50-digit
numbers.
"""
import os
def solution():
"""
Returns the first ten digits of the sum of the array elements
from the file num.txt
>>> solution()
'5537376230'
"""
file_path = os.path.join(os.path.dirname(__file__), "num.txt")
with open(file_path) as file_hand:
return str(sum([int(line) for line in file_hand]))[:10]
if __name__ == "__main__":
print(solution())
| """
Problem 13: https://projecteuler.net/problem=13
Problem Statement:
Work out the first ten digits of the sum of the following one-hundred 50-digit
numbers.
"""
import os
def solution():
"""
Returns the first ten digits of the sum of the array elements
from the file num.txt
>>> solution()
'5537376230'
"""
file_path = os.path.join(os.path.dirname(__file__), "num.txt")
with open(file_path) as file_hand:
return str(sum([int(line) for line in file_hand]))[:10]
if __name__ == "__main__":
print(solution())
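# Illustrative sketch: the same "first ten digits of a large sum" idea on a small
# made-up list instead of the num.txt data file (the values below are arbitrary).
example_numbers = [
    12345678901234567890,
    98765432109876543210,
    11111111111111111111,
]
example_total = sum(example_numbers)  # Python ints do not overflow
assert str(example_total)[:10] == "1222222221"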
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 20: https://projecteuler.net/problem=20
n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
"""
from math import factorial
def solution(num: int = 100) -> int:
"""Returns the sum of the digits in the factorial of num
>>> solution(100)
648
>>> solution(50)
216
>>> solution(10)
27
>>> solution(5)
3
>>> solution(3)
6
>>> solution(2)
2
>>> solution(1)
1
"""
return sum([int(x) for x in str(factorial(num))])
if __name__ == "__main__":
print(solution(int(input("Enter the Number: ").strip())))
| """
Problem 20: https://projecteuler.net/problem=20
n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
"""
from math import factorial
def solution(num: int = 100) -> int:
"""Returns the sum of the digits in the factorial of num
>>> solution(100)
648
>>> solution(50)
216
>>> solution(10)
27
>>> solution(5)
3
>>> solution(3)
6
>>> solution(2)
2
>>> solution(1)
1
"""
return sum([int(x) for x in str(factorial(num))])
if __name__ == "__main__":
print(solution(int(input("Enter the Number: ").strip())))
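# Illustrative sketch: the same digit-sum computation with the factorial built by
# a plain loop instead of math.factorial (the name digit_sum_of_factorial is
# hypothetical, used only for this sketch).
def digit_sum_of_factorial(num: int) -> int:
    product = 1
    for i in range(2, num + 1):
        product *= i  # arbitrary-precision ints keep every digit exact
    return sum(int(digit) for digit in str(product))


assert digit_sum_of_factorial(10) == 27  # 10! = 3628800 -> 3+6+2+8+8+0+0
assert digit_sum_of_factorial(100) == 648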
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000
digits?
"""
def fibonacci(n: int) -> int:
"""
Computes the Fibonacci number for input n by iterating through n numbers
and creating an array of ints using the Fibonacci formula.
Returns the nth element of the array.
>>> fibonacci(2)
1
>>> fibonacci(3)
2
>>> fibonacci(5)
5
>>> fibonacci(10)
55
>>> fibonacci(12)
144
"""
if n == 1 or type(n) is not int:
return 0
elif n == 2:
return 1
else:
sequence = [0, 1]
for i in range(2, n + 1):
sequence.append(sequence[i - 1] + sequence[i - 2])
return sequence[n]
def fibonacci_digits_index(n: int) -> int:
"""
Computes Fibonacci numbers for increasing indices, starting from index 3,
until the resulting Fibonacci number has n digits. Returns the index of
the Fibonacci sequence at which this first occurs.
>>> fibonacci_digits_index(1000)
4782
>>> fibonacci_digits_index(100)
476
>>> fibonacci_digits_index(50)
237
>>> fibonacci_digits_index(3)
12
"""
digits = 0
index = 2
while digits < n:
index += 1
digits = len(str(fibonacci(index)))
return index
def solution(n: int = 1000) -> int:
"""
Returns the index of the first term in the Fibonacci sequence to contain
n digits.
>>> solution(1000)
4782
>>> solution(100)
476
>>> solution(50)
237
>>> solution(3)
12
"""
return fibonacci_digits_index(n)
if __name__ == "__main__":
print(solution(int(str(input()).strip())))
| """
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn−1 + Fn−2, where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000
digits?
"""
def fibonacci(n: int) -> int:
"""
Computes the Fibonacci number for input n by iterating through n numbers
and creating an array of ints using the Fibonacci formula.
Returns the nth element of the array.
>>> fibonacci(2)
1
>>> fibonacci(3)
2
>>> fibonacci(5)
5
>>> fibonacci(10)
55
>>> fibonacci(12)
144
"""
if n == 1 or type(n) is not int:
return 0
elif n == 2:
return 1
else:
sequence = [0, 1]
for i in range(2, n + 1):
sequence.append(sequence[i - 1] + sequence[i - 2])
return sequence[n]
def fibonacci_digits_index(n: int) -> int:
"""
Computes Fibonacci numbers for increasing indices, starting from index 3,
until the resulting Fibonacci number has n digits. Returns the index of
the Fibonacci sequence at which this first occurs.
>>> fibonacci_digits_index(1000)
4782
>>> fibonacci_digits_index(100)
476
>>> fibonacci_digits_index(50)
237
>>> fibonacci_digits_index(3)
12
"""
digits = 0
index = 2
while digits < n:
index += 1
digits = len(str(fibonacci(index)))
return index
def solution(n: int = 1000) -> int:
"""
Returns the index of the first term in the Fibonacci sequence to contain
n digits.
>>> solution(1000)
4782
>>> solution(100)
476
>>> solution(50)
237
>>> solution(3)
12
"""
return fibonacci_digits_index(n)
if __name__ == "__main__":
print(solution(int(str(input()).strip())))
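# Illustrative sketch: the same search done iteratively, keeping only the last two
# Fibonacci numbers instead of rebuilding a list on every call (the name
# first_fib_index_with_digits is hypothetical, used only for this sketch).
def first_fib_index_with_digits(digits: int) -> int:
    previous, current = 1, 1  # F1 = 1, F2 = 1
    index = 2
    while len(str(current)) < digits:
        previous, current = current, previous + current
        index += 1
    return index


assert first_fib_index_with_digits(3) == 12  # F12 = 144
assert first_fib_index_with_digits(1000) == 4782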
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| 8C TS KC 9H 4S 7D 2S 5D 3S AC
5C AD 5D AC 9C 7C 5H 8D TD KS
3H 7H 6S KC JS QH TD JC 2D 8S
TH 8H 5C QS TC 9H 4D JC KS JS
7C 5H KC QH JD AS KH 4C AD 4S
5H KS 9C 7D 9H 8D 3S 5D 5C AH
6H 4H 5C 3H 2H 3S QH 5S 6S AS
TD 8C 4H 7C TC KC 4C 3H 7S KS
7C 9C 6D KD 3H 4C QS QC AC KH
JC 6S 5H 2H 2D KD 9D 7C AS JS
AD QH TH 9D 8H TS 6D 3S AS AC
2H 4S 5C 5S TC KC JD 6C TS 3C
QD AS 6H JS 2C 3D 9H KC 4H 8S
KD 8S 9S 7C 2S 3S 6D 6S 4H KC
3C 8C 2D 7D 4D 9S 4S QH 4H JD
8C KC 7S TC 2D TS 8H QD AC 5C
3D KH QD 6C 6S AD AS 8H 2H QS
6S 8D 4C 8S 6C QH TC 6D 7D 9D
2S 8D 8C 4C TS 9S 9D 9C AC 3D
3C QS 2S 4H JH 3D 2D TD 8S 9H
5H QS 8S 6D 3C 8C JD AS 7H 7D
6H TD 9D AS JH 6C QC 9S KD JC
AH 8S QS 4D TH AC TS 3C 3D 5C
5S 4D JS 3D 8H 6C TS 3S AD 8C
6D 7C 5D 5H 3S 5C JC 2H 5S 3D
5H 6H 2S KS 3D 5D JD 7H JS 8H
KH 4H AS JS QS QC TC 6D 7C KS
3D QS TS 2H JS 4D AS 9S JC KD
QD 5H 4D 5D KH 7H 3D JS KD 4H
2C 9H 6H 5C 9D 6C JC 2D TH 9S
7D 6D AS QD JH 4D JS 7C QS 5C
3H KH QD AD 8C 8H 3S TH 9D 5S
AH 9S 4D 9D 8S 4H JS 3C TC 8D
2C KS 5H QD 3S TS 9H AH AD 8S
5C 7H 5D KD 9H 4D 3D 2D KS AD
KS KC 9S 6D 2C QH 9D 9H TS TC
9C 6H 5D QH 4D AD 6D QC JS KH
9S 3H 9D JD 5C 4D 9H AS TC QH
2C 6D JC 9C 3C AD 9S KH 9D 7D
KC 9C 7C JC JS KD 3H AS 3C 7D
QD KH QS 2C 3S 8S 8H 9H 9C JC
QH 8D 3C KC 4C 4H 6D AD 9H 9D
3S KS QS 7H KH 7D 5H 5D JD AD
2H 2C 6H TH TC 7D 8D 4H 8C AS
4S 2H AC QC 3S 6D TH 4D 4C KH
4D TC KS AS 7C 3C 6D 2D 9H 6C
8C TD 5D QS 2C 7H 4C 9C 3H 9H
5H JH TS 7S TD 6H AD QD 8H 8S
5S AD 9C 8C 7C 8D 5H 9D 8S 2S
4H KH KS 9S 2S KC 5S AD 4S 7D
QS 9C QD 6H JS 5D AC 8D 2S AS
KH AC JC 3S 9D 9S 3C 9C 5S JS
AD 3C 3D KS 3S 5C 9C 8C TS 4S
JH 8D 5D 6H KD QS QD 3D 6C KC
8S JD 6C 3S 8C TC QC 3C QH JS
KC JC 8H 2S 9H 9C JH 8S 8C 9S
8S 2H QH 4D QC 9D KC AS TH 3C
8S 6H TH 7C 2H 6S 3C 3H AS 7S
QH 5S JS 4H 5H TS 8H AH AC JC
9D 8H 2S 4S TC JC 3C 7H 3H 5C
3D AD 3C 3S 4C QC AS 5D TH 8C
6S 9D 4C JS KH AH TS JD 8H AD
4C 6S 9D 7S AC 4D 3D 3S TC JD
AD 7H 6H 4H JH KC TD TS 7D 6S
8H JH TC 3S 8D 8C 9S 2C 5C 4D
2C 9D KC QH TH QS JC 9C 4H TS
QS 3C QD 8H KH 4H 8D TD 8S AC
7C 3C TH 5S 8H 8C 9C JD TC KD
QC TC JD TS 8C 3H 6H KD 7C TD
JH QS KS 9C 6D 6S AS 9H KH 6H
2H 4D AH 2D JH 6H TD 5D 4H JD
KD 8C 9S JH QD JS 2C QS 5C 7C
4S TC 7H 8D 2S 6H 7S 9C 7C KC
8C 5D 7H 4S TD QC 8S JS 4H KS
AD 8S JH 6D TD KD 7C 6C 2D 7D
JC 6H 6S JS 4H QH 9H AH 4C 3C
6H 5H AS 7C 7S 3D KH KC 5D 5C
JC 3D TD AS 4D 6D 6S QH JD KS
8C 7S 8S QH 2S JD 5C 7H AH QD
8S 3C 6H 6C 2C 8D TD 7D 4C 4D
5D QH KH 7C 2S 7H JS 6D QC QD
AD 6C 6S 7D TH 6H 2H 8H KH 4H
KS JS KD 5D 2D KH 7D 9C 8C 3D
9C 6D QD 3C KS 3S 7S AH JD 2D
AH QH AS JC 8S 8H 4C KC TH 7D
JC 5H TD 7C 5D KD 4C AD 8H JS
KC 2H AC AH 7D JH KH 5D 7S 6D
9S 5S 9C 6H 8S TD JD 9H 6C AC
7D 8S 6D TS KD 7H AC 5S 7C 5D
AH QC JC 4C TC 8C 2H TS 2C 7D
KD KC 6S 3D 7D 2S 8S 3H 5S 5C
8S 5D 8H 4C 6H KC 3H 7C 5S KD
JH 8C 3D 3C 6C KC TD 7H 7C 4C
JC KC 6H TS QS TD KS 8H 8C 9S
6C 5S 9C QH 7D AH KS KC 9S 2C
4D 4S 8H TD 9C 3S 7D 9D AS TH
6S 7D 3C 6H 5D KD 2C 5C 9D 9C
2H KC 3D AD 3H QD QS 8D JC 4S
8C 3H 9C 7C AD 5D JC 9D JS AS
5D 9H 5C 7H 6S 6C QC JC QD 9S
JC QS JH 2C 6S 9C QC 3D 4S TC
4H 5S 8D 3D 4D 2S KC 2H JS 2C
TD 3S TH KD 4D 7H JH JS KS AC
7S 8C 9S 2D 8S 7D 5C AD 9D AS
8C 7H 2S 6C TH 3H 4C 3S 8H AC
KD 5H JC 8H JD 2D 4H TD JH 5C
3D AS QH KS 7H JD 8S 5S 6D 5H
9S 6S TC QS JC 5C 5D 9C TH 8C
5H 3S JH 9H 2S 2C 6S 7S AS KS
8C QD JC QS TC QC 4H AC KH 6C
TC 5H 7D JH 4H 2H 8D JC KS 4D
5S 9C KH KD 9H 5C TS 3D 7D 2D
5H AS TC 4D 8C 2C TS 9D 3H 8D
6H 8D 2D 9H JD 6C 4S 5H 5S 6D
AD 9C JC 7D 6H 9S 6D JS 9H 3C
AD JH TC QS 4C 5D 9S 7C 9C AH
KD 6H 2H TH 8S QD KS 9D 9H AS
4H 8H 8D 5H 6C AH 5S AS AD 8S
QS 5D 4S 2H TD KS 5H AC 3H JC
9C 7D QD KD AC 6D 5H QH 6H 5S
KC AH QH 2H 7D QS 3H KS 7S JD
6C 8S 3H 6D KS QD 5D 5C 8H TC
9H 4D 4S 6S 9D KH QC 4H 6C JD
TD 2D QH 4S 6H JH KD 3C QD 8C
4S 6H 7C QD 9D AS AH 6S AD 3C
2C KC TH 6H 8D AH 5C 6D 8S 5D
TD TS 7C AD JC QD 9H 3C KC 7H
5D 4D 5S 8H 4H 7D 3H JD KD 2D
JH TD 6H QS 4S KD 5C 8S 7D 8H
AC 3D AS 8C TD 7H KH 5D 6C JD
9D KS 7C 6D QH TC JD KD AS KC
JH 8S 5S 7S 7D AS 2D 3D AD 2H
2H 5D AS 3C QD KC 6H 9H 9S 2C
9D 5D TH 4C JH 3H 8D TC 8H 9H
6H KD 2C TD 2H 6C 9D 2D JS 8C
KD 7S 3C 7C AS QH TS AD 8C 2S
QS 8H 6C JS 4C 9S QC AD TD TS
2H 7C TS TC 8C 3C 9H 2D 6D JC
TC 2H 8D JH KS 6D 3H TD TH 8H
9D TD 9H QC 5D 6C 8H 8C KC TS
2H 8C 3D AH 4D TH TC 7D 8H KC
TS 5C 2D 8C 6S KH AH 5H 6H KC
5S 5D AH TC 4C JD 8D 6H 8C 6C
KC QD 3D 8H 2D JC 9H 4H AD 2S
TD 6S 7D JS KD 4H QS 2S 3S 8C
4C 9H JH TS 3S 4H QC 5S 9S 9C
2C KD 9H JS 9S 3H JC TS 5D AC
AS 2H 5D AD 5H JC 7S TD JS 4C
2D 4S 8H 3D 7D 2C AD KD 9C TS
7H QD JH 5H JS AC 3D TH 4C 8H
6D KH KC QD 5C AD 7C 2D 4H AC
3D 9D TC 8S QD 2C JC 4H JD AH
6C TD 5S TC 8S AH 2C 5D AS AC
TH 7S 3D AS 6C 4C 7H 7D 4H AH
5C 2H KS 6H 7S 4H 5H 3D 3C 7H
3C 9S AC 7S QH 2H 3D 6S 3S 3H
2D 3H AS 2C 6H TC JS 6S 9C 6C
QH KD QD 6D AC 6H KH 2C TS 8C
8H 7D 3S 9H 5D 3H 4S QC 9S 5H
2D 9D 7H 6H 3C 8S 5H 4D 3S 4S
KD 9S 4S TC 7S QC 3S 8S 2H 7H
TC 3D 8C 3H 6C 2H 6H KS KD 4D
KC 3D 9S 3H JS 4S 8H 2D 6C 8S
6H QS 6C TC QD 9H 7D 7C 5H 4D
TD 9D 8D 6S 6C TC 5D TS JS 8H
4H KC JD 9H TC 2C 6S 5H 8H AS
JS 9C 5C 6S 9D JD 8H KC 4C 6D
4D 8D 8S 6C 7C 6H 7H 8H 5C KC
TC 3D JC 6D KS 9S 6H 7S 9C 2C
6C 3S KD 5H TS 7D 9H 9S 6H KH
3D QD 4C 6H TS AC 3S 5C 2H KD
4C AS JS 9S 7C TS 7H 9H JC KS
4H 8C JD 3H 6H AD 9S 4S 5S KS
4C 2C 7D 3D AS 9C 2S QS KC 6C
8S 5H 3D 2S AC 9D 6S 3S 4D TD
QD TH 7S TS 3D AC 7H 6C 5D QC
TC QD AD 9C QS 5C 8D KD 3D 3C
9D 8H AS 3S 7C 8S JD 2D 8D KC
4C TH AC QH JS 8D 7D 7S 9C KH
9D 8D 4C JH 2C 2S QD KD TS 4H
4D 6D 5D 2D JH 3S 8S 3H TC KH
AD 4D 2C QS 8C KD JH JD AH 5C
5C 6C 5H 2H JH 4H KS 7C TC 3H
3C 4C QC 5D JH 9C QD KH 8D TC
3H 9C JS 7H QH AS 7C 9H 5H JC
2D 5S QD 4S 3C KC 6S 6C 5C 4C
5D KH 2D TS 8S 9C AS 9S 7C 4C
7C AH 8C 8D 5S KD QH QS JH 2C
8C 9D AH 2H AC QC 5S 8H 7H 2C
QD 9H 5S QS QC 9C 5H JC TH 4H
6C 6S 3H 5H 3S 6H KS 8D AC 7S
AC QH 7H 8C 4S KC 6C 3D 3S TC
9D 3D JS TH AC 5H 3H 8S 3S TC
QD KH JS KS 9S QC 8D AH 3C AC
5H 6C KH 3S 9S JH 2D QD AS 8C
6C 4D 7S 7H 5S JC 6S 9H 4H JH
AH 5S 6H 9S AD 3S TH 2H 9D 8C
4C 8D 9H 7C QC AD 4S 9C KC 5S
9D 6H 4D TC 4C JH 2S 5D 3S AS
2H 6C 7C KH 5C AD QS TH JD 8S
3S 4S 7S AH AS KC JS 2S AD TH
JS KC 2S 7D 8C 5C 9C TS 5H 9D
7S 9S 4D TD JH JS KH 6H 5D 2C
JD JS JC TH 2D 3D QD 8C AC 5H
7S KH 5S 9D 5D TD 4S 6H 3C 2D
4S 5D AC 8D 4D 7C AD AS AH 9C
6S TH TS KS 2C QC AH AS 3C 4S
2H 8C 3S JC 5C 7C 3H 3C KH JH
7S 3H JC 5S 6H 4C 2S 4D KC 7H
4D 7C 4H 9S 8S 6S AD TC 6C JC
KH QS 3S TC 4C 8H 8S AC 3C TS
QD QS TH 3C TS 7H 7D AH TD JC
TD JD QC 4D 9S 7S TS AD 7D AC
AH 7H 4S 6D 7C 2H 9D KS JC TD
7C AH JD 4H 6D QS TS 2H 2C 5C
TC KC 8C 9S 4C JS 3C JC 6S AH
AS 7D QC 3D 5S JC JD 9D TD KH
TH 3C 2S 6H AH AC 5H 5C 7S 8H
QC 2D AC QD 2S 3S JD QS 6S 8H
KC 4H 3C 9D JS 6H 3S 8S AS 8C
7H KC 7D JD 2H JC QH 5S 3H QS
9H TD 3S 8H 7S AC 5C 6C AH 7C
8D 9H AH JD TD QS 7D 3S 9C 8S
AH QH 3C JD KC 4S 5S 5D TD KS
9H 7H 6S JH TH 4C 7C AD 5C 2D
7C KD 5S TC 9D 6S 6C 5D 2S TH
KC 9H 8D 5H 7H 4H QC 3D 7C AS
6S 8S QC TD 4S 5C TH QS QD 2S
8S 5H TH QC 9H 6S KC 7D 7C 5C
7H KD AH 4D KH 5C 4S 2D KC QH
6S 2C TD JC AS 4D 6C 8C 4H 5S
JC TC JD 5S 6S 8D AS 9D AD 3S
6D 6H 5D 5S TC 3D 7D QS 9D QD
4S 6C 8S 3S 7S AD KS 2D 7D 7C
KC QH JC AC QD 5D 8D QS 7H 7D
JS AH 8S 5H 3D TD 3H 4S 6C JH
4S QS 7D AS 9H JS KS 6D TC 5C
2D 5C 6H TC 4D QH 3D 9H 8S 6C
6D 7H TC TH 5S JD 5C 9C KS KD
8D TD QH 6S 4S 6C 8S KC 5C TC
5S 3D KS AC 4S 7D QD 4C TH 2S
TS 8H 9S 6S 7S QH 3C AH 7H 8C
4C 8C TS JS QC 3D 7D 5D 7S JH
8S 7S 9D QC AC 7C 6D 2H JH KC
JS KD 3C 6S 4S 7C AH QC KS 5H
KS 6S 4H JD QS TC 8H KC 6H AS
KH 7C TC 6S TD JC 5C 7D AH 3S
3H 4C 4H TC TH 6S 7H 6D 9C QH
7D 5H 4S 8C JS 4D 3D 8S QH KC
3H 6S AD 7H 3S QC 8S 4S 7S JS
3S JD KH TH 6H QS 9C 6C 2D QD
4S QH 4D 5H KC 7D 6D 8D TH 5S
TD AD 6S 7H KD KH 9H 5S KC JC
3H QC AS TS 4S QD KS 9C 7S KC
TS 6S QC 6C TH TC 9D 5C 5D KD
JS 3S 4H KD 4C QD 6D 9S JC 9D
8S JS 6D 4H JH 6H 6S 6C KS KH
AC 7D 5D TC 9S KH 6S QD 6H AS
AS 7H 6D QH 8D TH 2S KH 5C 5H
4C 7C 3D QC TC 4S KH 8C 2D JS
6H 5D 7S 5H 9C 9H JH 8S TH 7H
AS JS 2S QD KH 8H 4S AC 8D 8S
3H 4C TD KD 8C JC 5C QS 2D JD
TS 7D 5D 6C 2C QS 2H 3C AH KS
4S 7C 9C 7D JH 6C 5C 8H 9D QD
2S TD 7S 6D 9C 9S QS KH QH 5C
JC 6S 9C QH JH 8D 7S JS KH 2H
8D 5H TH KC 4D 4S 3S 6S 3D QS
2D JD 4C TD 7C 6D TH 7S JC AH
QS 7S 4C TH 9D TS AD 4D 3H 6H
2D 3H 7D JD 3D AS 2S 9C QC 8S
4H 9H 9C 2C 7S JH KD 5C 5D 6H
TC 9H 8H JC 3C 9S 8D KS AD KC
TS 5H JD QS QH QC 8D 5D KH AH
5D AS 8S 6S 4C AH QC QD TH 7H
3H 4H 7D 6S 4S 9H AS 8H JS 9D
JD 8C 2C 9D 7D 5H 5S 9S JC KD
KD 9C 4S QD AH 7C AD 9D AC TD
6S 4H 4S 9C 8D KS TC 9D JH 7C
5S JC 5H 4S QH AC 2C JS 2S 9S
8C 5H AS QD AD 5C 7D 8S QC TD
JC 4C 8D 5C KH QS 4D 6H 2H 2C
TH 4S 2D KC 3H QD AC 7H AD 9D
KH QD AS 8H TH KC 8D 7S QH 8C
JC 6C 7D 8C KH AD QS 2H 6S 2D
JC KH 2D 7D JS QC 5H 4C 5D AD
TS 3S AD 4S TD 2D TH 6S 9H JH
9H 2D QS 2C 4S 3D KH AS AC 9D
KH 6S 8H 4S KD 7D 9D TS QD QC
JH 5H AH KS AS AD JC QC 5S KH
5D 7D 6D KS KD 3D 7C 4D JD 3S
AC JS 8D 5H 9C 3H 4H 4D TS 2C
6H KS KH 9D 7C 2S 6S 8S 2H 3D
6H AC JS 7S 3S TD 8H 3H 4H TH
9H TC QC KC 5C KS 6H 4H AC 8S
TC 7D QH 4S JC TS 6D 6C AC KH
QH 7D 7C JH QS QD TH 3H 5D KS
3D 5S 8D JS 4C 2C KS 7H 9C 4H
5H 8S 4H TD 2C 3S QD QC 3H KC
QC JS KD 9C AD 5S 9D 7D 7H TS
8C JC KH 7C 7S 6C TS 2C QD TH
5S 9D TH 3C 7S QH 8S 9C 2H 5H
5D 9H 6H 2S JS KH 3H 7C 2H 5S
JD 5D 5S 2C TC 2S 6S 6C 3C 8S
4D KH 8H 4H 2D KS 3H 5C 2S 9H
3S 2D TD 7H 8S 6H JD KC 9C 8D
6S QD JH 7C 9H 5H 8S 8H TH TD
QS 7S TD 7D TS JC KD 7C 3C 2C
3C JD 8S 4H 2D 2S TD AS 4D AC
AH KS 6C 4C 4S 7D 8C 9H 6H AS
5S 3C 9S 2C QS KD 4D 4S AC 5D
2D TS 2C JS KH QH 5D 8C AS KC
KD 3H 6C TH 8S 7S KH 6H 9S AC
6H 7S 6C QS AH 2S 2H 4H 5D 5H
5H JC QD 2C 2S JD AS QC 6S 7D
6C TC AS KD 8H 9D 2C 7D JH 9S
2H 4C 6C AH 8S TD 3H TH 7C TS
KD 4S TS 6C QH 8D 9D 9C AH 7D
6D JS 5C QD QC 9C 5D 8C 2H KD
3C QH JH AD 6S AH KC 8S 6D 6H
3D 7C 4C 7S 5S 3S 6S 5H JC 3C
QH 7C 5H 3C 3S 8C TS 4C KD 9C
QD 3S 7S 5H 7H QH JC 7C 8C KD
3C KD KH 2S 4C TS AC 6S 2C 7C
2C KH 3C 4C 6H 4D 5H 5S 7S QD
4D 7C 8S QD TS 9D KS 6H KD 3C
QS 4D TS 7S 4C 3H QD 8D 9S TC
TS QH AC 6S 3C 9H 9D QS 8S 6H
3S 7S 5D 4S JS 2D 6C QH 6S TH
4C 4H AS JS 5D 3D TS 9C AC 8S
6S 9C 7C 3S 5C QS AD AS 6H 3C
9S 8C 7H 3H 6S 7C AS 9H JD KH
3D 3H 7S 4D 6C 7C AC 2H 9C TH
4H 5S 3H AC TC TH 9C 9H 9S 8D
8D 9H 5H 4D 6C 2H QD 6S 5D 3S
4C 5C JD QS 4D 3H TH AC QH 8C
QC 5S 3C 7H AD 4C KS 4H JD 6D
QS AH 3H KS 9H 2S JS JH 5H 2H
2H 5S TH 6S TS 3S KS 3C 5H JS
2D 9S 7H 3D KC JH 6D 7D JS TD
AC JS 8H 2C 8C JH JC 2D TH 7S
5D 9S 8H 2H 3D TC AH JC KD 9C
9D QD JC 2H 6D KH TS 9S QH TH
2C 8D 4S JD 5H 3H TH TC 9C KC
AS 3D 9H 7D 4D TH KH 2H 7S 3H
4H 7S KS 2S JS TS 8S 2H QD 8D
5S 6H JH KS 8H 2S QC AC 6S 3S
JC AS AD QS 8H 6C KH 4C 4D QD
2S 3D TS TD 9S KS 6S QS 5C 8D
3C 6D 4S QC KC JH QD TH KH AD
9H AH 4D KS 2S 8D JH JC 7C QS
2D 6C TH 3C 8H QD QH 2S 3S KS
6H 5D 9S 4C TS TD JS QD 9D JD
5H 8H KH 8S KS 7C TD AD 4S KD
2C 7C JC 5S AS 6C 7D 8S 5H 9C
6S QD 9S TS KH QS 5S QH 3C KC
7D 3H 3C KD 5C AS JH 7H 6H JD
9D 5C 9H KC 8H KS 4S AD 4D 2S
3S JD QD 8D 2S 7C 5S 6S 5H TS
6D 9S KC TD 3S 6H QD JD 5C 8D
5H 9D TS KD 8D 6H TD QC 4C 7D
6D 4S JD 9D AH 9S AS TD 9H QD
2D 5S 2H 9C 6H 9S TD QC 7D TC
3S 2H KS TS 2C 9C 8S JS 9D 7D
3C KC 6D 5D 6C 6H 8S AS 7S QS
JH 9S 2H 8D 4C 8H 9H AD TH KH
QC AS 2S JS 5C 6H KD 3H 7H 2C
QD 8H 2S 8D 3S 6D AH 2C TC 5C
JD JS TS 8S 3H 5D TD KC JC 6H
6S QS TC 3H 5D AH JC 7C 7D 4H
7C 5D 8H 9C 2H 9H JH KH 5S 2C
9C 7H 6S TH 3S QC QD 4C AC JD
2H 5D 9S 7D KC 3S QS 2D AS KH
2S 4S 2H 7D 5C TD TH QH 9S 4D
6D 3S TS 6H 4H KS 9D 8H 5S 2D
9H KS 4H 3S 5C 5D KH 6H 6S JS
KC AS 8C 4C JC KH QC TH QD AH
6S KH 9S 2C 5H TC 3C 7H JC 4D
JD 4S 6S 5S 8D 7H 7S 4D 4C 2H
7H 9H 5D KH 9C 7C TS TC 7S 5H
4C 8D QC TS 4S 9H 3D AD JS 7C
8C QS 5C 5D 3H JS AH KC 4S 9D
TS JD 8S QS TH JH KH 2D QD JS
JD QC 5D 6S 9H 3S 2C 8H 9S TS
2S 4C AD 7H JC 5C 2D 6D 4H 3D
7S JS 2C 4H 8C AD QD 9C 3S TD
JD TS 4C 6H 9H 7D QD 6D 3C AS
AS 7C 4C 6S 5D 5S 5C JS QC 4S
KD 6S 9S 7C 3C 5S 7D JH QD JS
4S 7S JH 2C 8S 5D 7H 3D QH AD
TD 6H 2H 8D 4H 2D 7C AD KH 5D
TS 3S 5H 2C QD AH 2S 5C KH TD
KC 4D 8C 5D AS 6C 2H 2S 9H 7C
KD JS QC TS QS KH JH 2C 5D AD
3S 5H KC 6C 9H 3H 2H AD 7D 7S
7S JS JH KD 8S 7D 2S 9H 7C 2H
9H 2D 8D QC 6S AD AS 8H 5H 6C
2S 7H 6C 6D 7D 8C 5D 9D JC 3C
7C 9C 7H JD 2H KD 3S KH AD 4S
QH AS 9H 4D JD KS KD TS KH 5H
4C 8H 5S 3S 3D 7D TD AD 7S KC
JS 8S 5S JC 8H TH 9C 4D 5D KC
7C 5S 9C QD 2C QH JS 5H 8D KH
TD 2S KS 3D AD KC 7S TC 3C 5D
4C 2S AD QS 6C 9S QD TH QH 5C
8C AD QS 2D 2S KC JD KS 6C JC
8D 4D JS 2H 5D QD 7S 7D QH TS
6S 7H 3S 8C 8S 9D QS 8H 6C 9S
4S TC 2S 5C QD 4D QS 6D TH 6S
3S 5C 9D 6H 8D 4C 7D TC 7C TD
AH 6S AS 7H 5S KD 3H 5H AC 4C
8D 8S AH KS QS 2C AD 6H 7D 5D
6H 9H 9S 2H QS 8S 9C 5D 2D KD
TS QC 5S JH 7D 7S TH 9S 9H AC
7H 3H 6S KC 4D 6D 5C 4S QD TS
TD 2S 7C QD 3H JH 9D 4H 7S 7H
KS 3D 4H 5H TC 2S AS 2D 6D 7D
8H 3C 7H TD 3H AD KC TH 9C KH
TC 4C 2C 9S 9D 9C 5C 2H JD 3C
3H AC TS 5D AD 8D 6H QC 6S 8C
2S TS 3S JD 7H 8S QH 4C 5S 8D
AC 4S 6C 3C KH 3D 7C 2D 8S 2H
4H 6C 8S TH 2H 4S 8H 9S 3H 7S
7C 4C 9C 2C 5C AS 5D KD 4D QH
9H 4H TS AS 7D 8D 5D 9S 8C 2H
QC KD AC AD 2H 7S AS 3S 2D 9S
2H QC 8H TC 6D QD QS 5D KH 3C
TH JD QS 4C 2S 5S AD 7H 3S AS
7H JS 3D 6C 3S 6D AS 9S AC QS
9C TS AS 8C TC 8S 6H 9D 8D 6C
4D JD 9C KC 7C 6D KS 3S 8C AS
3H 6S TC 8D TS 3S KC 9S 7C AS
8C QC 4H 4S 8S 6C 3S TC AH AC
4D 7D 5C AS 2H 6S TS QC AD TC
QD QC 8S 4S TH 3D AH TS JH 4H
5C 2D 9S 2C 3H 3C 9D QD QH 7D
KC 9H 6C KD 7S 3C 4D AS TC 2D
3D JS 4D 9D KS 7D TH QC 3H 3C
8D 5S 2H 9D 3H 8C 4C 4H 3C TH
JC TH 4S 6S JD 2D 4D 6C 3D 4C
TS 3S 2D 4H AC 2C 6S 2H JH 6H
TD 8S AD TC AH AC JH 9S 6S 7S
6C KC 4S JD 8D 9H 5S 7H QH AH
KD 8D TS JH 5C 5H 3H AD AS JS
2D 4H 3D 6C 8C 7S AD 5D 5C 8S
TD 5D 7S 9C 4S 5H 6C 8C 4C 8S
JS QH 9C AS 5C QS JC 3D QC 7C
JC 9C KH JH QS QC 2C TS 3D AD
5D JH AC 5C 9S TS 4C JD 8C KS
KC AS 2D KH 9H 2C 5S 4D 3D 6H
TH AH 2D 8S JC 3D 8C QH 7S 3S
8H QD 4H JC AS KH KS 3C 9S 6D
9S QH 7D 9C 4S AC 7H KH 4D KD
AH AD TH 6D 9C 9S KD KS QH 4H
QD 6H 9C 7C QS 6D 6S 9D 5S JH
AH 8D 5H QD 2H JC KS 4H KH 5S
5C 2S JS 8D 9C 8C 3D AS KC AH
JD 9S 2H QS 8H 5S 8C TH 5C 4C
QC QS 8C 2S 2C 3S 9C 4C KS KH
2D 5D 8S AH AD TD 2C JS KS 8C
TC 5S 5H 8H QC 9H 6H JD 4H 9S
3C JH 4H 9H AH 4S 2H 4C 8D AC
8S TH 4D 7D 6D QD QS 7S TC 7C
KH 6D 2D JD 5H JS QD JH 4H 4S
9C 7S JH 4S 3S TS QC 8C TC 4H
QH 9D 4D JH QS 3S 2C 7C 6C 2D
4H 9S JD 5C 5H AH 9D TS 2D 4C
KS JH TS 5D 2D AH JS 7H AS 8D
JS AH 8C AD KS 5S 8H 2C 6C TH
2H 5D AD AC KS 3D 8H TS 6H QC
6D 4H TS 9C 5H JS JH 6S JD 4C
JH QH 4H 2C 6D 3C 5D 4C QS KC
6H 4H 6C 7H 6S 2S 8S KH QC 8C
3H 3D 5D KS 4H TD AD 3S 4D TS
5S 7C 8S 7D 2C KS 7S 6C 8C JS
5D 2H 3S 7C 5C QD 5H 6D 9C 9H
JS 2S KD 9S 8D TD TS AC 8C 9D
5H QD 2S AC 8C 9H KS 7C 4S 3C
KH AS 3H 8S 9C JS QS 4S AD 4D
AS 2S TD AD 4D 9H JC 4C 5H QS
5D 7C 4H TC 2D 6C JS 4S KC 3S
4C 2C 5D AC 9H 3D JD 8S QS QH
2C 8S 6H 3C QH 6D TC KD AC AH
QC 6C 3S QS 4S AC 8D 5C AD KH
5S 4C AC KH AS QC 2C 5C 8D 9C
8H JD 3C KH 8D 5C 9C QD QH 9D
7H TS 2C 8C 4S TD JC 9C 5H QH
JS 4S 2C 7C TH 6C AS KS 7S JD
JH 7C 9H 7H TC 5H 3D 6D 5D 4D
2C QD JH 2H 9D 5S 3D TD AD KS
JD QH 3S 4D TH 7D 6S QS KS 4H
TC KS 5S 8D 8H AD 2S 2D 4C JH
5S JH TC 3S 2D QS 9D 4C KD 9S
AC KH 3H AS 9D KC 9H QD 6C 6S
9H 7S 3D 5C 7D KC TD 8H 4H 6S
3C 7H 8H TC QD 4D 7S 6S QH 6C
6D AD 4C QD 6C 5D 7D 9D KS TS
JH 2H JD 9S 7S TS KH 8D 5D 8H
2D 9S 4C 7D 9D 5H QD 6D AC 6S
7S 6D JC QD JH 4C 6S QS 2H 7D
8C TD JH KD 2H 5C QS 2C JS 7S
TC 5H 4H JH QD 3S 5S 5D 8S KH
KS KH 7C 2C 5D JH 6S 9C 6D JC
5H AH JD 9C JS KC 2H 6H 4D 5S
AS 3C TH QC 6H 9C 8S 8C TD 7C
KC 2C QD 9C KH 4D 7S 3C TS 9H
9C QC 2S TS 8C TD 9S QD 3S 3C
4D 9D TH JH AH 6S 2S JD QH JS
QD 9H 6C KD 7D 7H 5D 6S 8H AH
8H 3C 4S 2H 5H QS QH 7S 4H AC
QS 3C 7S 9S 4H 3S AH KS 9D 7C
AD 5S 6S 2H 2D 5H TC 4S 3C 8C
QH TS 6S 4D JS KS JH AS 8S 6D
2C 8S 2S TD 5H AS TC TS 6C KC
KC TS 8H 2H 3H 7C 4C 5S TH TD
KD AD KH 7H 7S 5D 5H 5S 2D 9C
AD 9S 3D 7S 8C QC 7C 9C KD KS
3C QC 9S 8C 4D 5C AS QD 6C 2C
2H KC 8S JD 7S AC 8D 5C 2S 4D
9D QH 3D 2S TC 3S KS 3C 9H TD
KD 6S AC 2C 7H 5H 3S 6C 6H 8C
QH TC 8S 6S KH TH 4H 5D TS 4D
8C JS 4H 6H 2C 2H 7D AC QD 3D
QS KC 6S 2D 5S 4H TD 3H JH 4C
7S 5H 7H 8H KH 6H QS TH KD 7D
5H AD KD 7C KH 5S TD 6D 3C 6C
8C 9C 5H JD 7C KC KH 7H 2H 3S
7S 4H AD 4D 8S QS TH 3D 7H 5S
8D TC KS KD 9S 6D AD JD 5C 2S
7H 8H 6C QD 2H 6H 9D TC 9S 7C
8D 6D 4C 7C 6C 3C TH KH JS JH
5S 3S 8S JS 9H AS AD 8H 7S KD
JH 7C 2C KC 5H AS AD 9C 9S JS
AD AC 2C 6S QD 7C 3H TH KS KD
9D JD 4H 8H 4C KH 7S TS 8C KC
3S 5S 2H 7S 6H 7D KS 5C 6D AD
5S 8C 9H QS 7H 7S 2H 6C 7D TD
QS 5S TD AC 9D KC 3D TC 2D 4D
TD 2H 7D JD QD 4C 7H 5D KC 3D
4C 3H 8S KD QH 5S QC 9H TC 5H
9C QD TH 5H TS 5C 9H AH QH 2C
4D 6S 3C AC 6C 3D 2C 2H TD TH
AC 9C 5D QC 4D AD 8D 6D 8C KC
AD 3C 4H AC 8D 8H 7S 9S TD JC
4H 9H QH JS 2D TH TD TC KD KS
5S 6S 9S 8D TH AS KH 5H 5C 8S
JD 2S 9S 6S 5S 8S 5D 7S 7H 9D
5D 8C 4C 9D AD TS 2C 7D KD TC
8S QS 4D KC 5C 8D 4S KH JD KD
AS 5C AD QH 7D 2H 9S 7H 7C TC
2S 8S JD KH 7S 6C 6D AD 5D QC
9H 6H 3S 8C 8H AH TC 4H JS TD
2C TS 4D 7H 2D QC 9C 5D TH 7C
6C 8H QC 5D TS JH 5C 5H 9H 4S
2D QC 7H AS JS 8S 2H 4C 4H 8D
JS 6S AC KD 3D 3C 4S 7H TH KC
QH KH 6S QS 5S 4H 3C QD 3S 3H
7H AS KH 8C 4H 9C 5S 3D 6S TS
9C 7C 3H 5S QD 2C 3D AD AC 5H
JH TD 2D 4C TS 3H KH AD 3S 7S
AS 4C 5H 4D 6S KD JC 3C 6H 2D
3H 6S 8C 2D TH 4S AH QH AD 5H
7C 2S 9H 7H KC 5C 6D 5S 3H JC
3C TC 9C 4H QD TD JH 6D 9H 5S
7C 6S 5C 5D 6C 4S 7H 9H 6H AH
AD 2H 7D KC 2C 4C 2S 9S 7H 3S
TH 4C 8S 6S 3S AD KS AS JH TD
5C TD 4S 4D AD 6S 5D TC 9C 7D
8H 3S 4D 4S 5S 6H 5C AC 3H 3D
9H 3C AC 4S QS 8S 9D QH 5H 4D
JC 6C 5H TS AC 9C JD 8C 7C QD
8S 8H 9C JD 2D QC QH 6H 3C 8D
KS JS 2H 6H 5H QH QS 3H 7C 6D
TC 3H 4S 7H QC 2H 3S 8C JS KH
AH 8H 5S 4C 9H JD 3H 7S JC AC
3C 2D 4C 5S 6C 4S QS 3S JD 3D
5H 2D TC AH KS 6D 7H AD 8C 6H
6C 7S 3C JD 7C 8H KS KH AH 6D
AH 7D 3H 8H 8S 7H QS 5H 9D 2D
JD AC 4H 7S 8S 9S KS AS 9D QH
7S 2C 8S 5S JH QS JC AH KD 4C
AH 2S 9H 4H 8D TS TD 6H QH JD
4H JC 3H QS 6D 7S 9C 8S 9D 8D
5H TD 4S 9S 4C 8C 8D 7H 3H 3D
QS KH 3S 2C 2S 3C 7S TD 4S QD
7C TD 4D 5S KH AC AS 7H 4C 6C
2S 5H 6D JD 9H QS 8S 2C 2H TD
2S TS 6H 9H 7S 4H JC 4C 5D 5S
2C 5H 7D 4H 3S QH JC JS 6D 8H
4C QH 7C QD 3S AD TH 8S 5S TS
9H TC 2S TD JC 7D 3S 3D TH QH
7D 4C 8S 5C JH 8H 6S 3S KC 3H
JC 3H KH TC QH TH 6H 2C AC 5H
QS 2H 9D 2C AS 6S 6C 2S 8C 8S
9H 7D QC TH 4H KD QS AC 7S 3C
4D JH 6S 5S 8H KS 9S QC 3S AS
JD 2D 6S 7S TC 9H KC 3H 7D KD
2H KH 7C 4D 4S 3H JS QD 7D KC
4C JC AS 9D 3C JS 6C 8H QD 4D
AH JS 3S 6C 4C 3D JH 6D 9C 9H
9H 2D 8C 7H 5S KS 6H 9C 2S TC
6C 8C AD 7H 6H 3D KH AS 5D TH
KS 8C 3S TS 8S 4D 5S 9S 6C 4H
9H 4S 4H 5C 7D KC 2D 2H 9D JH
5C JS TC 9D 9H 5H 7S KH JC 6S
7C 9H 8H 4D JC KH JD 2H TD TC
8H 6C 2H 2C KH 6H 9D QS QH 5H
AC 7D 2S 3D QD JC 2D 8D JD JH
2H JC 2D 7H 2C 3C 8D KD TD 4H
3S 4H 6D 8D TS 3H TD 3D 6H TH
JH JC 3S AC QH 9H 7H 8S QC 2C
7H TD QS 4S 8S 9C 2S 5D 4D 2H
3D TS 3H 2S QC 8H 6H KC JC KS
5D JD 7D TC 8C 6C 9S 3D 8D AC
8H 6H JH 6C 5D 8D 8S 4H AD 2C
9D 4H 2D 2C 3S TS AS TC 3C 5D
4D TH 5H KS QS 6C 4S 2H 3D AD
5C KC 6H 2C 5S 3C 4D 2D 9H 9S
JD 4C 3H TH QH 9H 5S AH 8S AC
7D 9S 6S 2H TD 9C 4H 8H QS 4C
3C 6H 5D 4H 8C 9C KC 6S QD QS
3S 9H KD TC 2D JS 8C 6S 4H 4S
2S 4C 8S QS 6H KH 3H TH 8C 5D
2C KH 5S 3S 7S 7H 6C 9D QD 8D
8H KS AC 2D KH TS 6C JS KC 7H
9C KS 5C TD QC AH 6C 5H 9S 7C
5D 4D 3H 4H 6S 7C 7S AH QD TD
2H 7D QC 6S TC TS AH 7S 9D 3H
TH 5H QD 9S KS 7S 7C 6H 8C TD
TH 2D 4D QC 5C 7D JD AH 9C 4H
4H 3H AH 8D 6H QC QH 9H 2H 2C
2D AD 4C TS 6H 7S TH 4H QS TD
3C KD 2H 3H QS JD TC QC 5D 8H
KS JC QD TH 9S KD 8D 8C 2D 9C
3C QD KD 6D 4D 8D AH AD QC 8S
8H 3S 9D 2S 3H KS 6H 4C 7C KC
TH 9S 5C 3D 7D 6H AC 7S 4D 2C
5C 3D JD 4D 2D 6D 5H 9H 4C KH
AS 7H TD 6C 2H 3D QD KS 4C 4S
JC 3C AC 7C JD JS 8H 9S QC 5D
JD 6S 5S 2H AS 8C 7D 5H JH 3D
8D TC 5S 9S 8S 3H JC 5H 7S AS
5C TD 3D 7D 4H 8D 7H 4D 5D JS
QS 9C KS TD 2S 8S 5C 2H 4H AS
TH 7S 4H 7D 3H JD KD 5D 2S KC
JD 7H 4S 8H 4C JS 6H QH 5S 4H
2C QS 8C 5S 3H QC 2S 6C QD AD
8C 3D JD TC 4H 2H AD 5S AC 2S
5D 2C JS 2D AD 9D 3D 4C 4S JH
8D 5H 5D 6H 7S 4D KS 9D TD JD
3D 6D 9C 2S AS 7D 5S 5C 8H JD
7C 8S 3S 6S 5H JD TC AD 7H 7S
2S 9D TS 4D AC 8D 6C QD JD 3H
9S KH 2C 3C AC 3D 5H 6H 8D 5D
KS 3D 2D 6S AS 4C 2S 7C 7H KH
AC 2H 3S JC 5C QH 4D 2D 5H 7S
TS AS JD 8C 6H JC 8S 5S 2C 5D
7S QH 7H 6C QC 8H 2D 7C JD 2S
2C QD 2S 2H JC 9C 5D 2D JD JH
7C 5C 9C 8S 7D 6D 8D 6C 9S JH
2C AD 6S 5H 3S KS 7S 9D KH 4C
7H 6C 2C 5C TH 9D 8D 3S QC AH
5S KC 6H TC 5H 8S TH 6D 3C AH
9C KD 4H AD TD 9S 4S 7D 6H 5D
7H 5C 5H 6D AS 4C KD KH 4H 9D
3C 2S 5C 6C JD QS 2H 9D 7D 3H
AC 2S 6S 7S JS QD 5C QS 6H AD
5H TH QC 7H TC 3S 7C 6D KC 3D
4H 3D QC 9S 8H 2C 3S JC KS 5C
4S 6S 2C 6H 8S 3S 3D 9H 3H JS
4S 8C 4D 2D 8H 9H 7D 9D AH TS
9S 2C 9H 4C 8D AS 7D 3D 6D 5S
6S 4C 7H 8C 3H 5H JC AH 9D 9C
2S 7C 5S JD 8C 3S 3D 4D 7D 6S
3C KC 4S 5D 7D 3D JD 7H 3H 4H
9C 9H 4H 4D TH 6D QD 8S 9S 7S
2H AC 8S 4S AD 8C 2C AH 7D TC
TS 9H 3C AD KS TC 3D 8C 8H JD
QC 8D 2C 3C 7D 7C JD 9H 9C 6C
AH 6S JS JH 5D AS QC 2C JD TD
9H KD 2H 5D 2D 3S 7D TC AH TS
TD 8H AS 5D AH QC AC 6S TC 5H
KS 4S 7H 4D 8D 9C TC 2H 6H 3H
3H KD 4S QD QH 3D 8H 8C TD 7S
8S JD TC AH JS QS 2D KH KS 4D
3C AD JC KD JS KH 4S TH 9H 2C
QC 5S JS 9S KS AS 7C QD 2S JD
KC 5S QS 3S 2D AC 5D 9H 8H KS
6H 9C TC AD 2C 6D 5S JD 6C 7C
QS KH TD QD 2C 3H 8S 2S QC AH
9D 9H JH TC QH 3C 2S JS 5C 7H
6C 3S 3D 2S 4S QD 2D TH 5D 2C
2D 6H 6D 2S JC QH AS 7H 4H KH
5H 6S KS AD TC TS 7C AC 4S 4H
AD 3C 4H QS 8C 9D KS 2H 2D 4D
4S 9D 6C 6D 9C AC 8D 3H 7H KD
JC AH 6C TS JD 6D AD 3S 5D QD
JC JH JD 3S 7S 8S JS QC 3H 4S
JD TH 5C 2C AD JS 7H 9S 2H 7S
8D 3S JH 4D QC AS JD 2C KC 6H
2C AC 5H KD 5S 7H QD JH AH 2D
JC QH 8D 8S TC 5H 5C AH 8C 6C
3H JS 8S QD JH 3C 4H 6D 5C 3S
6D 4S 4C AH 5H 5S 3H JD 7C 8D
8H AH 2H 3H JS 3C 7D QC 4H KD
6S 2H KD 5H 8H 2D 3C 8S 7S QD
2S 7S KC QC AH TC QS 6D 4C 8D
5S 9H 2C 3S QD 7S 6C 2H 7C 9D
3C 6C 5C 5S JD JC KS 3S 5D TS
7C KS 6S 5S 2S 2D TC 2H 5H QS
AS 7H 6S TS 5H 9S 9D 3C KD 2H
4S JS QS 3S 4H 7C 2S AC 6S 9D
8C JH 2H 5H 7C 5D QH QS KH QC
3S TD 3H 7C KC 8D 5H 8S KH 8C
4H KH JD TS 3C 7H AS QC JS 5S
AH 9D 2C 8D 4D 2D 6H 6C KC 6S
2S 6H 9D 3S 7H 4D KH 8H KD 3D
9C TC AC JH KH 4D JD 5H TD 3S
7S 4H 9D AS 4C 7D QS 9S 2S KH
3S 8D 8S KS 8C JC 5C KH 2H 5D
8S QH 2C 4D KC JS QC 9D AC 6H
8S 8C 7C JS JD 6S 4C 9C AC 4S
QH 5D 2C 7D JC 8S 2D JS JH 4C
JS 4C 7S TS JH KC KH 5H QD 4S
QD 8C 8D 2D 6S TD 9D AC QH 5S
QH QC JS 3D 3C 5C 4H KH 8S 7H
7C 2C 5S JC 8S 3H QC 5D 2H KC
5S 8D KD 6H 4H QD QH 6D AH 3D
7S KS 6C 2S 4D AC QS 5H TS JD
7C 2D TC 5D QS AC JS QC 6C KC
2C KS 4D 3H TS 8S AD 4H 7S 9S
QD 9H QH 5H 4H 4D KH 3S JC AD
4D AC KC 8D 6D 4C 2D KH 2C JD
2C 9H 2D AH 3H 6D 9C 7D TC KS
8C 3H KD 7C 5C 2S 4S 5H AS AH
TH JD 4H KD 3H TC 5C 3S AC KH
6D 7H AH 7S QC 6H 2D TD JD AS
JH 5D 7H TC 9S 7D JC AS 5S KH
2H 8C AD TH 6H QD KD 9H 6S 6C
QH KC 9D 4D 3S JS JH 4H 2C 9H
TC 7H KH 4H JC 7D 9S 3H QS 7S
AD 7D JH 6C 7H 4H 3S 3H 4D QH
JD 2H 5C AS 6C QC 4D 3C TC JH
AC JD 3H 6H 4C JC AD 7D 7H 9H
4H TC TS 2C 8C 6S KS 2H JD 9S
4C 3H QS QC 9S 9H 6D KC 9D 9C
5C AD 8C 2C QH TH QD JC 8D 8H
QC 2C 2S QD 9C 4D 3S 8D JH QS
9D 3S 2C 7S 7C JC TD 3C TC 9H
3C TS 8H 5C 4C 2C 6S 8D 7C 4H
KS 7H 2H TC 4H 2C 3S AS AH QS
8C 2D 2H 2C 4S 4C 6S 7D 5S 3S
TH QC 5D TD 3C QS KD KC KS AS
4D AH KD 9H KS 5C 4C 6H JC 7S
KC 4H 5C QS TC 2H JC 9S AH QH
4S 9H 3H 5H 3C QD 2H QC JH 8H
5D AS 7H 2C 3D JH 6H 4C 6S 7D
9C JD 9H AH JS 8S QH 3H KS 8H
3S AC QC TS 4D AD 3D AH 8S 9H
7H 3H QS 9C 9S 5H JH JS AH AC
8D 3C JD 2H AC 9C 7H 5S 4D 8H
7C JH 9H 6C JS 9S 7H 8C 9D 4H
2D AS 9S 6H 4D JS JH 9H AD QD
6H 7S JH KH AH 7H TD 5S 6S 2C
8H JH 6S 5H 5S 9D TC 4C QC 9S
7D 2C KD 3H 5H AS QD 7H JS 4D
TS QH 6C 8H TH 5H 3C 3H 9C 9D
AD KH JS 5D 3H AS AC 9S 5C KC
2C KH 8C JC QS 6D AH 2D KC TC
9D 3H 2S 7C 4D 6D KH KS 8D 7D
9H 2S TC JH AC QC 3H 5S 3S 8H
3S AS KD 8H 4C 3H 7C JH QH TS
7S 6D 7H 9D JH 4C 3D 3S 6C AS
4S 2H 2C 4C 8S 5H KC 8C QC QD
3H 3S 6C QS QC 2D 6S 5D 2C 9D
2H 8D JH 2S 3H 2D 6C 5C 7S AD
9H JS 5D QH 8S TS 2H 7S 6S AD
6D QC 9S 7H 5H 5C 7D KC JD 4H
QC 5S 9H 9C 4D 6S KS 2S 4C 7C
9H 7C 4H 8D 3S 6H 5C 8H JS 7S
2D 6H JS TD 4H 4D JC TH 5H KC
AC 7C 8D TH 3H 9S 2D 4C KC 4D
KD QS 9C 7S 3D KS AD TS 4C 4H
QH 9C 8H 2S 7D KS 7H 5D KD 4C
9C 2S 2H JC 6S 6C TC QC JH 5C
7S AC 8H KC 8S 6H QS JC 3D 6S
JS 2D JH 8C 4S 6H 8H 6D 5D AD
6H 7D 2S 4H 9H 7C AS AC 8H 5S
3C JS 4S 6D 5H 2S QH 6S 9C 2C
3D 5S 6S 9S 4C QS 8D QD 8S TC
9C 3D AH 9H 5S 2C 7D AD JC 3S
7H TC AS 3C 6S 6D 7S KH KC 9H
3S TC 8H 6S 5H JH 8C 7D AC 2S
QD 9D 9C 3S JC 8C KS 8H 5D 4D
JS AH JD 6D 9D 8C 9H 9S 8H 3H
2D 6S 4C 4D 8S AD 4S TC AH 9H
TS AC QC TH KC 6D 4H 7S 8C 2H
3C QD JS 9D 5S JC AH 2H TS 9H
3H 4D QH 5D 9C 5H 7D 4S JC 3S
8S TH 3H 7C 2H JD JS TS AC 8D
9C 2H TD KC JD 2S 8C 5S AD 2C
3D KD 7C 5H 4D QH QD TC 6H 7D
7H 2C KC 5S KD 6H AH QC 7S QH
6H 5C AC 5H 2C 9C 2D 7C TD 2S
4D 9D AH 3D 7C JD 4H 8C 4C KS
TH 3C JS QH 8H 4C AS 3D QS QC
4D 7S 5H JH 6D 7D 6H JS KH 3C
QD 8S 7D 2H 2C 7C JC 2S 5H 8C
QH 8S 9D TC 2H AD 7C 8D QD 6S
3S 7C AD 9H 2H 9S JD TS 4C 2D
3S AS 4H QC 2C 8H 8S 7S TD TC
JH TH TD 3S 4D 4H 5S 5D QS 2C
8C QD QH TC 6D 4S 9S 9D 4H QC
8C JS 9D 6H JD 3H AD 6S TD QC
KC 8S 3D 7C TD 7D 8D 9H 4S 3S
6C 4S 3D 9D KD TC KC KS AC 5S
7C 6S QH 3D JS KD 6H 6D 2D 8C
JD 2S 5S 4H 8S AC 2D 6S TS 5C
5H 8C 5S 3C 4S 3D 7C 8D AS 3H
AS TS 7C 3H AD 7D JC QS 6C 6H
3S 9S 4C AC QH 5H 5D 9H TS 4H
6C 5C 7H 7S TD AD JD 5S 2H 2S
7D 6C KC 3S JD 8D 8S TS QS KH
8S QS 8D 6C TH AC AH 2C 8H 9S
7H TD KH QH 8S 3D 4D AH JD AS
TS 3D 2H JC 2S JH KH 6C QC JS
KC TH 2D 6H 7S 2S TC 8C 9D QS
3C 9D 6S KH 8H 6D 5D TH 2C 2H
6H TC 7D AD 4D 8S TS 9H TD 7S
JS 6D JD JC 2H AC 6C 3D KH 8D
KH JD 9S 5D 4H 4C 3H 7S QS 5C
4H JD 5D 3S 3C 4D KH QH QS 7S
JD TS 8S QD AH 4C 6H 3S 5S 2C
QS 3D JD AS 8D TH 7C 6S QC KS
7S 2H 8C QC 7H AC 6D 2D TH KH
5S 6C 7H KH 7D AH 8C 5C 7S 3D
3C KD AD 7D 6C 4D KS 2D 8C 4S
7C 8D 5S 2D 2S AH AD 2C 9D TD
3C AD 4S KS JH 7C 5C 8C 9C TH
AS TD 4D 7C JD 8C QH 3C 5H 9S
3H 9C 8S 9S 6S QD KS AH 5H JH
QC 9C 5S 4H 2H TD 7D AS 8C 9D
8C 2C 9D KD TC 7S 3D KH QC 3C
4D AS 4C QS 5S 9D 6S JD QH KS
6D AH 6C 4C 5H TS 9H 7D 3D 5S
QS JD 7C 8D 9C AC 3S 6S 6C KH
8H JH 5D 9S 6D AS 6S 3S QC 7H
QD AD 5C JH 2H AH 4H AS KC 2C
JH 9C 2C 6H 2D JS 5D 9H KC 6D
7D 9D KD TH 3H AS 6S QC 6H AD
JD 4H 7D KC 3H JS 3C TH 3D QS
4C 3H 8C QD 5H 6H AS 8H AD JD
TH 8S KD 5D QC 7D JS 5S 5H TS
7D KC 9D QS 3H 3C 6D TS 7S AH
7C 4H 7H AH QC AC 4D 5D 6D TH
3C 4H 2S KD 8H 5H JH TC 6C JD
4S 8C 3D 4H JS TD 7S JH QS KD
7C QC KD 4D 7H 6S AD TD TC KH
5H 9H KC 3H 4D 3D AD 6S QD 6H
TH 7C 6H TS QH 5S 2C KC TD 6S
7C 4D 5S JD JH 7D AC KD KH 4H
7D 6C 8D 8H 5C JH 8S QD TH JD
8D 7D 6C 7C 9D KD AS 5C QH JH
9S 2C 8C 3C 4C KS JH 2D 8D 4H
7S 6C JH KH 8H 3H 9D 2D AH 6D
4D TC 9C 8D 7H TD KS TH KD 3C
JD 9H 8D QD AS KD 9D 2C 2S 9C
8D 3H 5C 7H KS 5H QH 2D 8C 9H
2D TH 6D QD 6C KC 3H 3S AD 4C
4H 3H JS 9D 3C TC 5H QH QC JC
3D 5C 6H 3S 3C JC 5S 7S 2S QH
AC 5C 8C 4D 5D 4H 2S QD 3C 3H
2C TD AH 9C KD JS 6S QD 4C QC
QS 8C 3S 4H TC JS 3H 7C JC AD
5H 4D 9C KS JC TD 9S TS 8S 9H
QD TS 7D AS AC 2C TD 6H 8H AH
6S AD 8C 4S 9H 8D 9D KH 8S 3C
QS 4D 2D 7S KH JS JC AD 4C 3C
QS 9S 7H KC TD TH 5H JS AC JH
6D AC 2S QS 7C AS KS 6S KH 5S
6D 8H KH 3C QS 2H 5C 9C 9D 6C
JS 2C 4C 6H 7D JC AC QD TD 3H
4H QC 8H JD 4C KD KS 5C KC 7S
6D 2D 3H 2S QD 5S 7H AS TH 6S
AS 6D 8D 2C 8S TD 8H QD JC AH
9C 9H 2D TD QH 2H 5C TC 3D 8H
KC 8S 3D KH 2S TS TC 6S 4D JH
9H 9D QS AC KC 6H 5D 4D 8D AH
9S 5C QS 4H 7C 7D 2H 8S AD JS
3D AC 9S AS 2C 2D 2H 3H JC KH
7H QH KH JD TC KS 5S 8H 4C 8D
2H 7H 3S 2S 5H QS 3C AS 9H KD
AD 3D JD 6H 5S 9C 6D AC 9S 3S
3D 5D 9C 2D AC 4S 2S AD 6C 6S
QC 4C 2D 3H 6S KC QH QD 2H JH
QC 3C 8S 4D 9S 2H 5C 8H QS QD
6D KD 6S 7H 3S KH 2H 5C JC 6C
3S 9S TC 6S 8H 2D AD 7S 8S TS
3C 6H 9C 3H 5C JC 8H QH TD QD
3C JS QD 5D TD 2C KH 9H TH AS
9S TC JD 3D 5C 5H AD QH 9H KC
TC 7H 4H 8H 3H TD 6S AC 7C 2S
QS 9D 5D 3C JC KS 4D 6C JH 2S
9S 6S 3C 7H TS 4C KD 6D 3D 9C
2D 9H AH AC 7H 2S JH 3S 7C QC
QD 9H 3C 2H AC AS 8S KD 8C KH
2D 7S TD TH 6D JD 8D 4D 2H 5S
8S QH KD JD QS JH 4D KC 5H 3S
3C KH QC 6D 8H 3S AH 7D TD 2D
5S 9H QH 4S 6S 6C 6D TS TH 7S
6C 4C 6D QS JS 9C TS 3H 8D 8S
JS 5C 7S AS 2C AH 2H AD 5S TC
KD 6C 9C 9D TS 2S JC 4H 2C QD
QS 9H TC 3H KC KS 4H 3C AD TH
KH 9C 2H KD 9D TC 7S KC JH 2D
7C 3S KC AS 8C 5D 9C 9S QH 3H
2D 8C TD 4C 2H QC 5D TC 2C 7D
KS 4D 6C QH TD KH 5D 7C AD 8D
2S 9S 8S 4C 8C 3D 6H QD 7C 7H
6C 8S QH 5H TS 5C 3C 4S 2S 2H
8S 6S 2H JC 3S 3H 9D 8C 2S 7H
QC 2C 8H 9C AC JD 4C 4H 6S 3S
3H 3S 7D 4C 9S 5H 8H JC 3D TC
QH 2S 2D 9S KD QD 9H AD 6D 9C
8D 2D KS 9S JC 4C JD KC 4S TH
KH TS 6D 4D 5C KD 5H AS 9H AD
QD JS 7C 6D 5D 5C TH 5H QH QS
9D QH KH 5H JH 4C 4D TC TH 6C
KH AS TS 9D KD 9C 7S 4D 8H 5S
KH AS 2S 7D 9D 4C TS TH AH 7C
KS 4D AC 8S 9S 8D TH QH 9D 5C
5D 5C 8C QS TC 4C 3D 3S 2C 8D
9D KS 2D 3C KC 4S 8C KH 6C JC
8H AH 6H 7D 7S QD 3C 4C 6C KC
3H 2C QH 8H AS 7D 4C 8C 4H KC
QD 5S 4H 2C TD AH JH QH 4C 8S
3H QS 5S JS 8H 2S 9H 9C 3S 2C
6H TS 7S JC QD AC TD KC 5S 3H
QH AS QS 7D JC KC 2C 4C 5C 5S
QH 3D AS JS 4H 8D 7H JC 2S 9C
5D 4D 2S 4S 9D 9C 2D QS 8H 7H
6D 7H 3H JS TS AC 2D JH 7C 8S
JH 5H KC 3C TC 5S 9H 4C 8H 9D
8S KC 5H 9H AD KS 9D KH 8D AH
JC 2H 9H KS 6S 3H QC 5H AH 9C
5C KH 5S AD 6C JC 9H QC 9C TD
5S 5D JC QH 2D KS 8H QS 2H TS
JH 5H 5S AH 7H 3C 8S AS TD KH
6H 3D JD 2C 4C KC 7S AH 6C JH
4C KS 9D AD 7S KC 7D 8H 3S 9C
7H 5C 5H 3C 8H QC 3D KH 6D JC
2D 4H 5D 7D QC AD AH 9H QH 8H
KD 8C JS 9D 3S 3C 2H 5D 6D 2S
8S 6S TS 3C 6H 8D 5S 3H TD 6C
KS 3D JH 9C 7C 9S QS 5S 4H 6H
7S 6S TH 4S KC KD 3S JC JH KS
7C 3C 2S 6D QH 2C 7S 5H 8H AH
KC 8D QD 6D KH 5C 7H 9D 3D 9C
6H 2D 8S JS 9S 2S 6D KC 7C TC
KD 9C JH 7H KC 8S 2S 7S 3D 6H
4H 9H 2D 4C 8H 7H 5S 8S 2H 8D
AD 7C 3C 7S 5S 4D 9H 3D JC KH
5D AS 7D 6D 9C JC 4C QH QS KH
KD JD 7D 3D QS QC 8S 6D JS QD
6S 8C 5S QH TH 9H AS AC 2C JD
QC KS QH 7S 3C 4C 5C KC 5D AH
6C 4H 9D AH 2C 3H KD 3D TS 5C
TD 8S QS AS JS 3H KD AC 4H KS
7D 5D TS 9H 4H 4C 9C 2H 8C QC
2C 7D 9H 4D KS 4C QH AD KD JS
QD AD AH KH 9D JS 9H JC KD JD
8S 3C 4S TS 7S 4D 5C 2S 6H 7C
JS 7S 5C KD 6D QH 8S TD 2H 6S
QH 6C TC 6H TD 4C 9D 2H QC 8H
3D TS 4D 2H 6H 6S 2C 7H 8S 6C
9H 9D JD JH 3S AH 2C 6S 3H 8S
2C QS 8C 5S 3H 2S 7D 3C AD 4S
5C QC QH AS TS 4S 6S 4C 5H JS
JH 5C TD 4C 6H JS KD KH QS 4H
TC KH JC 4D 9H 9D 8D KC 3C 8H
2H TC 8S AD 9S 4H TS 7H 2C 5C
4H 2S 6C 5S KS AH 9C 7C 8H KD
TS QH TD QS 3C JH AH 2C 8D 7D
5D KC 3H 5S AC 4S 7H QS 4C 2H
3D 7D QC KH JH 6D 6C TD TH KD
5S 8D TH 6C 9D 7D KH 8C 9S 6D
JD QS 7S QC 2S QH JC 4S KS 8D
7S 5S 9S JD KD 9C JC AD 2D 7C
4S 5H AH JH 9C 5D TD 7C 2D 6S
KC 6C 7H 6S 9C QD 5S 4H KS TD
6S 8D KS 2D TH TD 9H JD TS 3S
KH JS 4H 5D 9D TC TD QC JD TS
QS QD AC AD 4C 6S 2D AS 3H KC
4C 7C 3C TD QS 9C KC AS 8D AD
KC 7H QC 6D 8H 6S 5S AH 7S 8C
3S AD 9H JC 6D JD AS KH 6S JH
AD 3D TS KS 7H JH 2D JS QD AC
9C JD 7C 6D TC 6H 6C JC 3D 3S
QC KC 3S JC KD 2C 8D AH QS TS
AS KD 3D JD 8H 7C 8C 5C QD 6C
| 8C TS KC 9H 4S 7D 2S 5D 3S AC
5C AD 5D AC 9C 7C 5H 8D TD KS
3H 7H 6S KC JS QH TD JC 2D 8S
TH 8H 5C QS TC 9H 4D JC KS JS
7C 5H KC QH JD AS KH 4C AD 4S
5H KS 9C 7D 9H 8D 3S 5D 5C AH
6H 4H 5C 3H 2H 3S QH 5S 6S AS
TD 8C 4H 7C TC KC 4C 3H 7S KS
7C 9C 6D KD 3H 4C QS QC AC KH
JC 6S 5H 2H 2D KD 9D 7C AS JS
AD QH TH 9D 8H TS 6D 3S AS AC
2H 4S 5C 5S TC KC JD 6C TS 3C
QD AS 6H JS 2C 3D 9H KC 4H 8S
KD 8S 9S 7C 2S 3S 6D 6S 4H KC
3C 8C 2D 7D 4D 9S 4S QH 4H JD
8C KC 7S TC 2D TS 8H QD AC 5C
3D KH QD 6C 6S AD AS 8H 2H QS
6S 8D 4C 8S 6C QH TC 6D 7D 9D
2S 8D 8C 4C TS 9S 9D 9C AC 3D
3C QS 2S 4H JH 3D 2D TD 8S 9H
5H QS 8S 6D 3C 8C JD AS 7H 7D
6H TD 9D AS JH 6C QC 9S KD JC
AH 8S QS 4D TH AC TS 3C 3D 5C
5S 4D JS 3D 8H 6C TS 3S AD 8C
6D 7C 5D 5H 3S 5C JC 2H 5S 3D
5H 6H 2S KS 3D 5D JD 7H JS 8H
KH 4H AS JS QS QC TC 6D 7C KS
3D QS TS 2H JS 4D AS 9S JC KD
QD 5H 4D 5D KH 7H 3D JS KD 4H
2C 9H 6H 5C 9D 6C JC 2D TH 9S
7D 6D AS QD JH 4D JS 7C QS 5C
3H KH QD AD 8C 8H 3S TH 9D 5S
AH 9S 4D 9D 8S 4H JS 3C TC 8D
2C KS 5H QD 3S TS 9H AH AD 8S
5C 7H 5D KD 9H 4D 3D 2D KS AD
KS KC 9S 6D 2C QH 9D 9H TS TC
9C 6H 5D QH 4D AD 6D QC JS KH
9S 3H 9D JD 5C 4D 9H AS TC QH
2C 6D JC 9C 3C AD 9S KH 9D 7D
KC 9C 7C JC JS KD 3H AS 3C 7D
QD KH QS 2C 3S 8S 8H 9H 9C JC
QH 8D 3C KC 4C 4H 6D AD 9H 9D
3S KS QS 7H KH 7D 5H 5D JD AD
2H 2C 6H TH TC 7D 8D 4H 8C AS
4S 2H AC QC 3S 6D TH 4D 4C KH
4D TC KS AS 7C 3C 6D 2D 9H 6C
8C TD 5D QS 2C 7H 4C 9C 3H 9H
5H JH TS 7S TD 6H AD QD 8H 8S
5S AD 9C 8C 7C 8D 5H 9D 8S 2S
4H KH KS 9S 2S KC 5S AD 4S 7D
QS 9C QD 6H JS 5D AC 8D 2S AS
KH AC JC 3S 9D 9S 3C 9C 5S JS
AD 3C 3D KS 3S 5C 9C 8C TS 4S
JH 8D 5D 6H KD QS QD 3D 6C KC
8S JD 6C 3S 8C TC QC 3C QH JS
KC JC 8H 2S 9H 9C JH 8S 8C 9S
8S 2H QH 4D QC 9D KC AS TH 3C
8S 6H TH 7C 2H 6S 3C 3H AS 7S
QH 5S JS 4H 5H TS 8H AH AC JC
9D 8H 2S 4S TC JC 3C 7H 3H 5C
3D AD 3C 3S 4C QC AS 5D TH 8C
6S 9D 4C JS KH AH TS JD 8H AD
4C 6S 9D 7S AC 4D 3D 3S TC JD
AD 7H 6H 4H JH KC TD TS 7D 6S
8H JH TC 3S 8D 8C 9S 2C 5C 4D
2C 9D KC QH TH QS JC 9C 4H TS
QS 3C QD 8H KH 4H 8D TD 8S AC
7C 3C TH 5S 8H 8C 9C JD TC KD
QC TC JD TS 8C 3H 6H KD 7C TD
JH QS KS 9C 6D 6S AS 9H KH 6H
2H 4D AH 2D JH 6H TD 5D 4H JD
KD 8C 9S JH QD JS 2C QS 5C 7C
4S TC 7H 8D 2S 6H 7S 9C 7C KC
8C 5D 7H 4S TD QC 8S JS 4H KS
AD 8S JH 6D TD KD 7C 6C 2D 7D
JC 6H 6S JS 4H QH 9H AH 4C 3C
6H 5H AS 7C 7S 3D KH KC 5D 5C
JC 3D TD AS 4D 6D 6S QH JD KS
8C 7S 8S QH 2S JD 5C 7H AH QD
8S 3C 6H 6C 2C 8D TD 7D 4C 4D
5D QH KH 7C 2S 7H JS 6D QC QD
AD 6C 6S 7D TH 6H 2H 8H KH 4H
KS JS KD 5D 2D KH 7D 9C 8C 3D
9C 6D QD 3C KS 3S 7S AH JD 2D
AH QH AS JC 8S 8H 4C KC TH 7D
JC 5H TD 7C 5D KD 4C AD 8H JS
KC 2H AC AH 7D JH KH 5D 7S 6D
9S 5S 9C 6H 8S TD JD 9H 6C AC
7D 8S 6D TS KD 7H AC 5S 7C 5D
AH QC JC 4C TC 8C 2H TS 2C 7D
KD KC 6S 3D 7D 2S 8S 3H 5S 5C
8S 5D 8H 4C 6H KC 3H 7C 5S KD
JH 8C 3D 3C 6C KC TD 7H 7C 4C
JC KC 6H TS QS TD KS 8H 8C 9S
6C 5S 9C QH 7D AH KS KC 9S 2C
4D 4S 8H TD 9C 3S 7D 9D AS TH
6S 7D 3C 6H 5D KD 2C 5C 9D 9C
2H KC 3D AD 3H QD QS 8D JC 4S
8C 3H 9C 7C AD 5D JC 9D JS AS
5D 9H 5C 7H 6S 6C QC JC QD 9S
JC QS JH 2C 6S 9C QC 3D 4S TC
4H 5S 8D 3D 4D 2S KC 2H JS 2C
TD 3S TH KD 4D 7H JH JS KS AC
7S 8C 9S 2D 8S 7D 5C AD 9D AS
8C 7H 2S 6C TH 3H 4C 3S 8H AC
KD 5H JC 8H JD 2D 4H TD JH 5C
3D AS QH KS 7H JD 8S 5S 6D 5H
9S 6S TC QS JC 5C 5D 9C TH 8C
5H 3S JH 9H 2S 2C 6S 7S AS KS
8C QD JC QS TC QC 4H AC KH 6C
TC 5H 7D JH 4H 2H 8D JC KS 4D
5S 9C KH KD 9H 5C TS 3D 7D 2D
5H AS TC 4D 8C 2C TS 9D 3H 8D
6H 8D 2D 9H JD 6C 4S 5H 5S 6D
AD 9C JC 7D 6H 9S 6D JS 9H 3C
AD JH TC QS 4C 5D 9S 7C 9C AH
KD 6H 2H TH 8S QD KS 9D 9H AS
4H 8H 8D 5H 6C AH 5S AS AD 8S
QS 5D 4S 2H TD KS 5H AC 3H JC
9C 7D QD KD AC 6D 5H QH 6H 5S
KC AH QH 2H 7D QS 3H KS 7S JD
6C 8S 3H 6D KS QD 5D 5C 8H TC
9H 4D 4S 6S 9D KH QC 4H 6C JD
TD 2D QH 4S 6H JH KD 3C QD 8C
4S 6H 7C QD 9D AS AH 6S AD 3C
2C KC TH 6H 8D AH 5C 6D 8S 5D
TD TS 7C AD JC QD 9H 3C KC 7H
5D 4D 5S 8H 4H 7D 3H JD KD 2D
JH TD 6H QS 4S KD 5C 8S 7D 8H
AC 3D AS 8C TD 7H KH 5D 6C JD
9D KS 7C 6D QH TC JD KD AS KC
JH 8S 5S 7S 7D AS 2D 3D AD 2H
2H 5D AS 3C QD KC 6H 9H 9S 2C
9D 5D TH 4C JH 3H 8D TC 8H 9H
6H KD 2C TD 2H 6C 9D 2D JS 8C
KD 7S 3C 7C AS QH TS AD 8C 2S
QS 8H 6C JS 4C 9S QC AD TD TS
2H 7C TS TC 8C 3C 9H 2D 6D JC
TC 2H 8D JH KS 6D 3H TD TH 8H
9D TD 9H QC 5D 6C 8H 8C KC TS
2H 8C 3D AH 4D TH TC 7D 8H KC
TS 5C 2D 8C 6S KH AH 5H 6H KC
5S 5D AH TC 4C JD 8D 6H 8C 6C
KC QD 3D 8H 2D JC 9H 4H AD 2S
TD 6S 7D JS KD 4H QS 2S 3S 8C
4C 9H JH TS 3S 4H QC 5S 9S 9C
2C KD 9H JS 9S 3H JC TS 5D AC
AS 2H 5D AD 5H JC 7S TD JS 4C
2D 4S 8H 3D 7D 2C AD KD 9C TS
7H QD JH 5H JS AC 3D TH 4C 8H
6D KH KC QD 5C AD 7C 2D 4H AC
3D 9D TC 8S QD 2C JC 4H JD AH
6C TD 5S TC 8S AH 2C 5D AS AC
TH 7S 3D AS 6C 4C 7H 7D 4H AH
5C 2H KS 6H 7S 4H 5H 3D 3C 7H
3C 9S AC 7S QH 2H 3D 6S 3S 3H
2D 3H AS 2C 6H TC JS 6S 9C 6C
QH KD QD 6D AC 6H KH 2C TS 8C
8H 7D 3S 9H 5D 3H 4S QC 9S 5H
2D 9D 7H 6H 3C 8S 5H 4D 3S 4S
KD 9S 4S TC 7S QC 3S 8S 2H 7H
TC 3D 8C 3H 6C 2H 6H KS KD 4D
KC 3D 9S 3H JS 4S 8H 2D 6C 8S
6H QS 6C TC QD 9H 7D 7C 5H 4D
TD 9D 8D 6S 6C TC 5D TS JS 8H
4H KC JD 9H TC 2C 6S 5H 8H AS
JS 9C 5C 6S 9D JD 8H KC 4C 6D
4D 8D 8S 6C 7C 6H 7H 8H 5C KC
TC 3D JC 6D KS 9S 6H 7S 9C 2C
6C 3S KD 5H TS 7D 9H 9S 6H KH
3D QD 4C 6H TS AC 3S 5C 2H KD
4C AS JS 9S 7C TS 7H 9H JC KS
4H 8C JD 3H 6H AD 9S 4S 5S KS
4C 2C 7D 3D AS 9C 2S QS KC 6C
8S 5H 3D 2S AC 9D 6S 3S 4D TD
QD TH 7S TS 3D AC 7H 6C 5D QC
TC QD AD 9C QS 5C 8D KD 3D 3C
9D 8H AS 3S 7C 8S JD 2D 8D KC
4C TH AC QH JS 8D 7D 7S 9C KH
9D 8D 4C JH 2C 2S QD KD TS 4H
4D 6D 5D 2D JH 3S 8S 3H TC KH
AD 4D 2C QS 8C KD JH JD AH 5C
5C 6C 5H 2H JH 4H KS 7C TC 3H
3C 4C QC 5D JH 9C QD KH 8D TC
3H 9C JS 7H QH AS 7C 9H 5H JC
2D 5S QD 4S 3C KC 6S 6C 5C 4C
5D KH 2D TS 8S 9C AS 9S 7C 4C
7C AH 8C 8D 5S KD QH QS JH 2C
8C 9D AH 2H AC QC 5S 8H 7H 2C
QD 9H 5S QS QC 9C 5H JC TH 4H
6C 6S 3H 5H 3S 6H KS 8D AC 7S
AC QH 7H 8C 4S KC 6C 3D 3S TC
9D 3D JS TH AC 5H 3H 8S 3S TC
QD KH JS KS 9S QC 8D AH 3C AC
5H 6C KH 3S 9S JH 2D QD AS 8C
6C 4D 7S 7H 5S JC 6S 9H 4H JH
AH 5S 6H 9S AD 3S TH 2H 9D 8C
4C 8D 9H 7C QC AD 4S 9C KC 5S
9D 6H 4D TC 4C JH 2S 5D 3S AS
2H 6C 7C KH 5C AD QS TH JD 8S
3S 4S 7S AH AS KC JS 2S AD TH
JS KC 2S 7D 8C 5C 9C TS 5H 9D
7S 9S 4D TD JH JS KH 6H 5D 2C
JD JS JC TH 2D 3D QD 8C AC 5H
7S KH 5S 9D 5D TD 4S 6H 3C 2D
4S 5D AC 8D 4D 7C AD AS AH 9C
6S TH TS KS 2C QC AH AS 3C 4S
2H 8C 3S JC 5C 7C 3H 3C KH JH
7S 3H JC 5S 6H 4C 2S 4D KC 7H
4D 7C 4H 9S 8S 6S AD TC 6C JC
KH QS 3S TC 4C 8H 8S AC 3C TS
QD QS TH 3C TS 7H 7D AH TD JC
TD JD QC 4D 9S 7S TS AD 7D AC
AH 7H 4S 6D 7C 2H 9D KS JC TD
7C AH JD 4H 6D QS TS 2H 2C 5C
TC KC 8C 9S 4C JS 3C JC 6S AH
AS 7D QC 3D 5S JC JD 9D TD KH
TH 3C 2S 6H AH AC 5H 5C 7S 8H
QC 2D AC QD 2S 3S JD QS 6S 8H
KC 4H 3C 9D JS 6H 3S 8S AS 8C
7H KC 7D JD 2H JC QH 5S 3H QS
9H TD 3S 8H 7S AC 5C 6C AH 7C
8D 9H AH JD TD QS 7D 3S 9C 8S
AH QH 3C JD KC 4S 5S 5D TD KS
9H 7H 6S JH TH 4C 7C AD 5C 2D
7C KD 5S TC 9D 6S 6C 5D 2S TH
KC 9H 8D 5H 7H 4H QC 3D 7C AS
6S 8S QC TD 4S 5C TH QS QD 2S
8S 5H TH QC 9H 6S KC 7D 7C 5C
7H KD AH 4D KH 5C 4S 2D KC QH
6S 2C TD JC AS 4D 6C 8C 4H 5S
JC TC JD 5S 6S 8D AS 9D AD 3S
6D 6H 5D 5S TC 3D 7D QS 9D QD
4S 6C 8S 3S 7S AD KS 2D 7D 7C
KC QH JC AC QD 5D 8D QS 7H 7D
JS AH 8S 5H 3D TD 3H 4S 6C JH
4S QS 7D AS 9H JS KS 6D TC 5C
2D 5C 6H TC 4D QH 3D 9H 8S 6C
6D 7H TC TH 5S JD 5C 9C KS KD
8D TD QH 6S 4S 6C 8S KC 5C TC
5S 3D KS AC 4S 7D QD 4C TH 2S
TS 8H 9S 6S 7S QH 3C AH 7H 8C
4C 8C TS JS QC 3D 7D 5D 7S JH
8S 7S 9D QC AC 7C 6D 2H JH KC
JS KD 3C 6S 4S 7C AH QC KS 5H
KS 6S 4H JD QS TC 8H KC 6H AS
KH 7C TC 6S TD JC 5C 7D AH 3S
3H 4C 4H TC TH 6S 7H 6D 9C QH
7D 5H 4S 8C JS 4D 3D 8S QH KC
3H 6S AD 7H 3S QC 8S 4S 7S JS
3S JD KH TH 6H QS 9C 6C 2D QD
4S QH 4D 5H KC 7D 6D 8D TH 5S
TD AD 6S 7H KD KH 9H 5S KC JC
3H QC AS TS 4S QD KS 9C 7S KC
TS 6S QC 6C TH TC 9D 5C 5D KD
JS 3S 4H KD 4C QD 6D 9S JC 9D
8S JS 6D 4H JH 6H 6S 6C KS KH
AC 7D 5D TC 9S KH 6S QD 6H AS
AS 7H 6D QH 8D TH 2S KH 5C 5H
4C 7C 3D QC TC 4S KH 8C 2D JS
6H 5D 7S 5H 9C 9H JH 8S TH 7H
AS JS 2S QD KH 8H 4S AC 8D 8S
3H 4C TD KD 8C JC 5C QS 2D JD
TS 7D 5D 6C 2C QS 2H 3C AH KS
4S 7C 9C 7D JH 6C 5C 8H 9D QD
2S TD 7S 6D 9C 9S QS KH QH 5C
JC 6S 9C QH JH 8D 7S JS KH 2H
8D 5H TH KC 4D 4S 3S 6S 3D QS
2D JD 4C TD 7C 6D TH 7S JC AH
QS 7S 4C TH 9D TS AD 4D 3H 6H
2D 3H 7D JD 3D AS 2S 9C QC 8S
4H 9H 9C 2C 7S JH KD 5C 5D 6H
TC 9H 8H JC 3C 9S 8D KS AD KC
TS 5H JD QS QH QC 8D 5D KH AH
5D AS 8S 6S 4C AH QC QD TH 7H
3H 4H 7D 6S 4S 9H AS 8H JS 9D
JD 8C 2C 9D 7D 5H 5S 9S JC KD
KD 9C 4S QD AH 7C AD 9D AC TD
6S 4H 4S 9C 8D KS TC 9D JH 7C
5S JC 5H 4S QH AC 2C JS 2S 9S
8C 5H AS QD AD 5C 7D 8S QC TD
JC 4C 8D 5C KH QS 4D 6H 2H 2C
TH 4S 2D KC 3H QD AC 7H AD 9D
KH QD AS 8H TH KC 8D 7S QH 8C
JC 6C 7D 8C KH AD QS 2H 6S 2D
JC KH 2D 7D JS QC 5H 4C 5D AD
TS 3S AD 4S TD 2D TH 6S 9H JH
9H 2D QS 2C 4S 3D KH AS AC 9D
KH 6S 8H 4S KD 7D 9D TS QD QC
JH 5H AH KS AS AD JC QC 5S KH
5D 7D 6D KS KD 3D 7C 4D JD 3S
AC JS 8D 5H 9C 3H 4H 4D TS 2C
6H KS KH 9D 7C 2S 6S 8S 2H 3D
6H AC JS 7S 3S TD 8H 3H 4H TH
9H TC QC KC 5C KS 6H 4H AC 8S
TC 7D QH 4S JC TS 6D 6C AC KH
QH 7D 7C JH QS QD TH 3H 5D KS
3D 5S 8D JS 4C 2C KS 7H 9C 4H
5H 8S 4H TD 2C 3S QD QC 3H KC
QC JS KD 9C AD 5S 9D 7D 7H TS
8C JC KH 7C 7S 6C TS 2C QD TH
5S 9D TH 3C 7S QH 8S 9C 2H 5H
5D 9H 6H 2S JS KH 3H 7C 2H 5S
JD 5D 5S 2C TC 2S 6S 6C 3C 8S
4D KH 8H 4H 2D KS 3H 5C 2S 9H
3S 2D TD 7H 8S 6H JD KC 9C 8D
6S QD JH 7C 9H 5H 8S 8H TH TD
QS 7S TD 7D TS JC KD 7C 3C 2C
3C JD 8S 4H 2D 2S TD AS 4D AC
AH KS 6C 4C 4S 7D 8C 9H 6H AS
5S 3C 9S 2C QS KD 4D 4S AC 5D
2D TS 2C JS KH QH 5D 8C AS KC
KD 3H 6C TH 8S 7S KH 6H 9S AC
6H 7S 6C QS AH 2S 2H 4H 5D 5H
5H JC QD 2C 2S JD AS QC 6S 7D
6C TC AS KD 8H 9D 2C 7D JH 9S
2H 4C 6C AH 8S TD 3H TH 7C TS
KD 4S TS 6C QH 8D 9D 9C AH 7D
6D JS 5C QD QC 9C 5D 8C 2H KD
3C QH JH AD 6S AH KC 8S 6D 6H
3D 7C 4C 7S 5S 3S 6S 5H JC 3C
QH 7C 5H 3C 3S 8C TS 4C KD 9C
QD 3S 7S 5H 7H QH JC 7C 8C KD
3C KD KH 2S 4C TS AC 6S 2C 7C
2C KH 3C 4C 6H 4D 5H 5S 7S QD
4D 7C 8S QD TS 9D KS 6H KD 3C
QS 4D TS 7S 4C 3H QD 8D 9S TC
TS QH AC 6S 3C 9H 9D QS 8S 6H
3S 7S 5D 4S JS 2D 6C QH 6S TH
4C 4H AS JS 5D 3D TS 9C AC 8S
6S 9C 7C 3S 5C QS AD AS 6H 3C
9S 8C 7H 3H 6S 7C AS 9H JD KH
3D 3H 7S 4D 6C 7C AC 2H 9C TH
4H 5S 3H AC TC TH 9C 9H 9S 8D
8D 9H 5H 4D 6C 2H QD 6S 5D 3S
4C 5C JD QS 4D 3H TH AC QH 8C
QC 5S 3C 7H AD 4C KS 4H JD 6D
QS AH 3H KS 9H 2S JS JH 5H 2H
2H 5S TH 6S TS 3S KS 3C 5H JS
2D 9S 7H 3D KC JH 6D 7D JS TD
AC JS 8H 2C 8C JH JC 2D TH 7S
5D 9S 8H 2H 3D TC AH JC KD 9C
9D QD JC 2H 6D KH TS 9S QH TH
2C 8D 4S JD 5H 3H TH TC 9C KC
AS 3D 9H 7D 4D TH KH 2H 7S 3H
4H 7S KS 2S JS TS 8S 2H QD 8D
5S 6H JH KS 8H 2S QC AC 6S 3S
JC AS AD QS 8H 6C KH 4C 4D QD
2S 3D TS TD 9S KS 6S QS 5C 8D
3C 6D 4S QC KC JH QD TH KH AD
9H AH 4D KS 2S 8D JH JC 7C QS
2D 6C TH 3C 8H QD QH 2S 3S KS
6H 5D 9S 4C TS TD JS QD 9D JD
5H 8H KH 8S KS 7C TD AD 4S KD
2C 7C JC 5S AS 6C 7D 8S 5H 9C
6S QD 9S TS KH QS 5S QH 3C KC
7D 3H 3C KD 5C AS JH 7H 6H JD
9D 5C 9H KC 8H KS 4S AD 4D 2S
3S JD QD 8D 2S 7C 5S 6S 5H TS
6D 9S KC TD 3S 6H QD JD 5C 8D
5H 9D TS KD 8D 6H TD QC 4C 7D
6D 4S JD 9D AH 9S AS TD 9H QD
2D 5S 2H 9C 6H 9S TD QC 7D TC
3S 2H KS TS 2C 9C 8S JS 9D 7D
3C KC 6D 5D 6C 6H 8S AS 7S QS
JH 9S 2H 8D 4C 8H 9H AD TH KH
QC AS 2S JS 5C 6H KD 3H 7H 2C
QD 8H 2S 8D 3S 6D AH 2C TC 5C
JD JS TS 8S 3H 5D TD KC JC 6H
6S QS TC 3H 5D AH JC 7C 7D 4H
7C 5D 8H 9C 2H 9H JH KH 5S 2C
9C 7H 6S TH 3S QC QD 4C AC JD
2H 5D 9S 7D KC 3S QS 2D AS KH
2S 4S 2H 7D 5C TD TH QH 9S 4D
6D 3S TS 6H 4H KS 9D 8H 5S 2D
9H KS 4H 3S 5C 5D KH 6H 6S JS
KC AS 8C 4C JC KH QC TH QD AH
6S KH 9S 2C 5H TC 3C 7H JC 4D
JD 4S 6S 5S 8D 7H 7S 4D 4C 2H
7H 9H 5D KH 9C 7C TS TC 7S 5H
4C 8D QC TS 4S 9H 3D AD JS 7C
8C QS 5C 5D 3H JS AH KC 4S 9D
TS JD 8S QS TH JH KH 2D QD JS
JD QC 5D 6S 9H 3S 2C 8H 9S TS
2S 4C AD 7H JC 5C 2D 6D 4H 3D
7S JS 2C 4H 8C AD QD 9C 3S TD
JD TS 4C 6H 9H 7D QD 6D 3C AS
AS 7C 4C 6S 5D 5S 5C JS QC 4S
KD 6S 9S 7C 3C 5S 7D JH QD JS
4S 7S JH 2C 8S 5D 7H 3D QH AD
TD 6H 2H 8D 4H 2D 7C AD KH 5D
TS 3S 5H 2C QD AH 2S 5C KH TD
KC 4D 8C 5D AS 6C 2H 2S 9H 7C
KD JS QC TS QS KH JH 2C 5D AD
3S 5H KC 6C 9H 3H 2H AD 7D 7S
7S JS JH KD 8S 7D 2S 9H 7C 2H
9H 2D 8D QC 6S AD AS 8H 5H 6C
2S 7H 6C 6D 7D 8C 5D 9D JC 3C
7C 9C 7H JD 2H KD 3S KH AD 4S
QH AS 9H 4D JD KS KD TS KH 5H
4C 8H 5S 3S 3D 7D TD AD 7S KC
JS 8S 5S JC 8H TH 9C 4D 5D KC
7C 5S 9C QD 2C QH JS 5H 8D KH
TD 2S KS 3D AD KC 7S TC 3C 5D
4C 2S AD QS 6C 9S QD TH QH 5C
8C AD QS 2D 2S KC JD KS 6C JC
8D 4D JS 2H 5D QD 7S 7D QH TS
6S 7H 3S 8C 8S 9D QS 8H 6C 9S
4S TC 2S 5C QD 4D QS 6D TH 6S
3S 5C 9D 6H 8D 4C 7D TC 7C TD
AH 6S AS 7H 5S KD 3H 5H AC 4C
8D 8S AH KS QS 2C AD 6H 7D 5D
6H 9H 9S 2H QS 8S 9C 5D 2D KD
TS QC 5S JH 7D 7S TH 9S 9H AC
7H 3H 6S KC 4D 6D 5C 4S QD TS
TD 2S 7C QD 3H JH 9D 4H 7S 7H
KS 3D 4H 5H TC 2S AS 2D 6D 7D
8H 3C 7H TD 3H AD KC TH 9C KH
TC 4C 2C 9S 9D 9C 5C 2H JD 3C
3H AC TS 5D AD 8D 6H QC 6S 8C
2S TS 3S JD 7H 8S QH 4C 5S 8D
AC 4S 6C 3C KH 3D 7C 2D 8S 2H
4H 6C 8S TH 2H 4S 8H 9S 3H 7S
7C 4C 9C 2C 5C AS 5D KD 4D QH
9H 4H TS AS 7D 8D 5D 9S 8C 2H
QC KD AC AD 2H 7S AS 3S 2D 9S
2H QC 8H TC 6D QD QS 5D KH 3C
TH JD QS 4C 2S 5S AD 7H 3S AS
7H JS 3D 6C 3S 6D AS 9S AC QS
9C TS AS 8C TC 8S 6H 9D 8D 6C
4D JD 9C KC 7C 6D KS 3S 8C AS
3H 6S TC 8D TS 3S KC 9S 7C AS
8C QC 4H 4S 8S 6C 3S TC AH AC
4D 7D 5C AS 2H 6S TS QC AD TC
QD QC 8S 4S TH 3D AH TS JH 4H
5C 2D 9S 2C 3H 3C 9D QD QH 7D
KC 9H 6C KD 7S 3C 4D AS TC 2D
3D JS 4D 9D KS 7D TH QC 3H 3C
8D 5S 2H 9D 3H 8C 4C 4H 3C TH
JC TH 4S 6S JD 2D 4D 6C 3D 4C
TS 3S 2D 4H AC 2C 6S 2H JH 6H
TD 8S AD TC AH AC JH 9S 6S 7S
6C KC 4S JD 8D 9H 5S 7H QH AH
KD 8D TS JH 5C 5H 3H AD AS JS
2D 4H 3D 6C 8C 7S AD 5D 5C 8S
TD 5D 7S 9C 4S 5H 6C 8C 4C 8S
JS QH 9C AS 5C QS JC 3D QC 7C
JC 9C KH JH QS QC 2C TS 3D AD
5D JH AC 5C 9S TS 4C JD 8C KS
KC AS 2D KH 9H 2C 5S 4D 3D 6H
TH AH 2D 8S JC 3D 8C QH 7S 3S
8H QD 4H JC AS KH KS 3C 9S 6D
9S QH 7D 9C 4S AC 7H KH 4D KD
AH AD TH 6D 9C 9S KD KS QH 4H
QD 6H 9C 7C QS 6D 6S 9D 5S JH
AH 8D 5H QD 2H JC KS 4H KH 5S
5C 2S JS 8D 9C 8C 3D AS KC AH
JD 9S 2H QS 8H 5S 8C TH 5C 4C
QC QS 8C 2S 2C 3S 9C 4C KS KH
2D 5D 8S AH AD TD 2C JS KS 8C
TC 5S 5H 8H QC 9H 6H JD 4H 9S
3C JH 4H 9H AH 4S 2H 4C 8D AC
8S TH 4D 7D 6D QD QS 7S TC 7C
KH 6D 2D JD 5H JS QD JH 4H 4S
9C 7S JH 4S 3S TS QC 8C TC 4H
QH 9D 4D JH QS 3S 2C 7C 6C 2D
4H 9S JD 5C 5H AH 9D TS 2D 4C
KS JH TS 5D 2D AH JS 7H AS 8D
JS AH 8C AD KS 5S 8H 2C 6C TH
2H 5D AD AC KS 3D 8H TS 6H QC
6D 4H TS 9C 5H JS JH 6S JD 4C
JH QH 4H 2C 6D 3C 5D 4C QS KC
6H 4H 6C 7H 6S 2S 8S KH QC 8C
3H 3D 5D KS 4H TD AD 3S 4D TS
5S 7C 8S 7D 2C KS 7S 6C 8C JS
5D 2H 3S 7C 5C QD 5H 6D 9C 9H
JS 2S KD 9S 8D TD TS AC 8C 9D
5H QD 2S AC 8C 9H KS 7C 4S 3C
KH AS 3H 8S 9C JS QS 4S AD 4D
AS 2S TD AD 4D 9H JC 4C 5H QS
5D 7C 4H TC 2D 6C JS 4S KC 3S
4C 2C 5D AC 9H 3D JD 8S QS QH
2C 8S 6H 3C QH 6D TC KD AC AH
QC 6C 3S QS 4S AC 8D 5C AD KH
5S 4C AC KH AS QC 2C 5C 8D 9C
8H JD 3C KH 8D 5C 9C QD QH 9D
7H TS 2C 8C 4S TD JC 9C 5H QH
JS 4S 2C 7C TH 6C AS KS 7S JD
JH 7C 9H 7H TC 5H 3D 6D 5D 4D
2C QD JH 2H 9D 5S 3D TD AD KS
JD QH 3S 4D TH 7D 6S QS KS 4H
TC KS 5S 8D 8H AD 2S 2D 4C JH
5S JH TC 3S 2D QS 9D 4C KD 9S
AC KH 3H AS 9D KC 9H QD 6C 6S
9H 7S 3D 5C 7D KC TD 8H 4H 6S
3C 7H 8H TC QD 4D 7S 6S QH 6C
6D AD 4C QD 6C 5D 7D 9D KS TS
JH 2H JD 9S 7S TS KH 8D 5D 8H
2D 9S 4C 7D 9D 5H QD 6D AC 6S
7S 6D JC QD JH 4C 6S QS 2H 7D
8C TD JH KD 2H 5C QS 2C JS 7S
TC 5H 4H JH QD 3S 5S 5D 8S KH
KS KH 7C 2C 5D JH 6S 9C 6D JC
5H AH JD 9C JS KC 2H 6H 4D 5S
AS 3C TH QC 6H 9C 8S 8C TD 7C
KC 2C QD 9C KH 4D 7S 3C TS 9H
9C QC 2S TS 8C TD 9S QD 3S 3C
4D 9D TH JH AH 6S 2S JD QH JS
QD 9H 6C KD 7D 7H 5D 6S 8H AH
8H 3C 4S 2H 5H QS QH 7S 4H AC
QS 3C 7S 9S 4H 3S AH KS 9D 7C
AD 5S 6S 2H 2D 5H TC 4S 3C 8C
QH TS 6S 4D JS KS JH AS 8S 6D
2C 8S 2S TD 5H AS TC TS 6C KC
KC TS 8H 2H 3H 7C 4C 5S TH TD
KD AD KH 7H 7S 5D 5H 5S 2D 9C
AD 9S 3D 7S 8C QC 7C 9C KD KS
3C QC 9S 8C 4D 5C AS QD 6C 2C
2H KC 8S JD 7S AC 8D 5C 2S 4D
9D QH 3D 2S TC 3S KS 3C 9H TD
KD 6S AC 2C 7H 5H 3S 6C 6H 8C
QH TC 8S 6S KH TH 4H 5D TS 4D
8C JS 4H 6H 2C 2H 7D AC QD 3D
QS KC 6S 2D 5S 4H TD 3H JH 4C
7S 5H 7H 8H KH 6H QS TH KD 7D
5H AD KD 7C KH 5S TD 6D 3C 6C
8C 9C 5H JD 7C KC KH 7H 2H 3S
7S 4H AD 4D 8S QS TH 3D 7H 5S
8D TC KS KD 9S 6D AD JD 5C 2S
7H 8H 6C QD 2H 6H 9D TC 9S 7C
8D 6D 4C 7C 6C 3C TH KH JS JH
5S 3S 8S JS 9H AS AD 8H 7S KD
JH 7C 2C KC 5H AS AD 9C 9S JS
AD AC 2C 6S QD 7C 3H TH KS KD
9D JD 4H 8H 4C KH 7S TS 8C KC
3S 5S 2H 7S 6H 7D KS 5C 6D AD
5S 8C 9H QS 7H 7S 2H 6C 7D TD
QS 5S TD AC 9D KC 3D TC 2D 4D
TD 2H 7D JD QD 4C 7H 5D KC 3D
4C 3H 8S KD QH 5S QC 9H TC 5H
9C QD TH 5H TS 5C 9H AH QH 2C
4D 6S 3C AC 6C 3D 2C 2H TD TH
AC 9C 5D QC 4D AD 8D 6D 8C KC
AD 3C 4H AC 8D 8H 7S 9S TD JC
4H 9H QH JS 2D TH TD TC KD KS
5S 6S 9S 8D TH AS KH 5H 5C 8S
JD 2S 9S 6S 5S 8S 5D 7S 7H 9D
5D 8C 4C 9D AD TS 2C 7D KD TC
8S QS 4D KC 5C 8D 4S KH JD KD
AS 5C AD QH 7D 2H 9S 7H 7C TC
2S 8S JD KH 7S 6C 6D AD 5D QC
9H 6H 3S 8C 8H AH TC 4H JS TD
2C TS 4D 7H 2D QC 9C 5D TH 7C
6C 8H QC 5D TS JH 5C 5H 9H 4S
2D QC 7H AS JS 8S 2H 4C 4H 8D
JS 6S AC KD 3D 3C 4S 7H TH KC
QH KH 6S QS 5S 4H 3C QD 3S 3H
7H AS KH 8C 4H 9C 5S 3D 6S TS
9C 7C 3H 5S QD 2C 3D AD AC 5H
JH TD 2D 4C TS 3H KH AD 3S 7S
AS 4C 5H 4D 6S KD JC 3C 6H 2D
3H 6S 8C 2D TH 4S AH QH AD 5H
7C 2S 9H 7H KC 5C 6D 5S 3H JC
3C TC 9C 4H QD TD JH 6D 9H 5S
7C 6S 5C 5D 6C 4S 7H 9H 6H AH
AD 2H 7D KC 2C 4C 2S 9S 7H 3S
TH 4C 8S 6S 3S AD KS AS JH TD
5C TD 4S 4D AD 6S 5D TC 9C 7D
8H 3S 4D 4S 5S 6H 5C AC 3H 3D
9H 3C AC 4S QS 8S 9D QH 5H 4D
JC 6C 5H TS AC 9C JD 8C 7C QD
8S 8H 9C JD 2D QC QH 6H 3C 8D
KS JS 2H 6H 5H QH QS 3H 7C 6D
TC 3H 4S 7H QC 2H 3S 8C JS KH
AH 8H 5S 4C 9H JD 3H 7S JC AC
3C 2D 4C 5S 6C 4S QS 3S JD 3D
5H 2D TC AH KS 6D 7H AD 8C 6H
6C 7S 3C JD 7C 8H KS KH AH 6D
AH 7D 3H 8H 8S 7H QS 5H 9D 2D
JD AC 4H 7S 8S 9S KS AS 9D QH
7S 2C 8S 5S JH QS JC AH KD 4C
AH 2S 9H 4H 8D TS TD 6H QH JD
4H JC 3H QS 6D 7S 9C 8S 9D 8D
5H TD 4S 9S 4C 8C 8D 7H 3H 3D
QS KH 3S 2C 2S 3C 7S TD 4S QD
7C TD 4D 5S KH AC AS 7H 4C 6C
2S 5H 6D JD 9H QS 8S 2C 2H TD
2S TS 6H 9H 7S 4H JC 4C 5D 5S
2C 5H 7D 4H 3S QH JC JS 6D 8H
4C QH 7C QD 3S AD TH 8S 5S TS
9H TC 2S TD JC 7D 3S 3D TH QH
7D 4C 8S 5C JH 8H 6S 3S KC 3H
JC 3H KH TC QH TH 6H 2C AC 5H
QS 2H 9D 2C AS 6S 6C 2S 8C 8S
9H 7D QC TH 4H KD QS AC 7S 3C
4D JH 6S 5S 8H KS 9S QC 3S AS
JD 2D 6S 7S TC 9H KC 3H 7D KD
2H KH 7C 4D 4S 3H JS QD 7D KC
4C JC AS 9D 3C JS 6C 8H QD 4D
AH JS 3S 6C 4C 3D JH 6D 9C 9H
9H 2D 8C 7H 5S KS 6H 9C 2S TC
6C 8C AD 7H 6H 3D KH AS 5D TH
KS 8C 3S TS 8S 4D 5S 9S 6C 4H
9H 4S 4H 5C 7D KC 2D 2H 9D JH
5C JS TC 9D 9H 5H 7S KH JC 6S
7C 9H 8H 4D JC KH JD 2H TD TC
8H 6C 2H 2C KH 6H 9D QS QH 5H
AC 7D 2S 3D QD JC 2D 8D JD JH
2H JC 2D 7H 2C 3C 8D KD TD 4H
3S 4H 6D 8D TS 3H TD 3D 6H TH
JH JC 3S AC QH 9H 7H 8S QC 2C
7H TD QS 4S 8S 9C 2S 5D 4D 2H
3D TS 3H 2S QC 8H 6H KC JC KS
5D JD 7D TC 8C 6C 9S 3D 8D AC
8H 6H JH 6C 5D 8D 8S 4H AD 2C
9D 4H 2D 2C 3S TS AS TC 3C 5D
4D TH 5H KS QS 6C 4S 2H 3D AD
5C KC 6H 2C 5S 3C 4D 2D 9H 9S
JD 4C 3H TH QH 9H 5S AH 8S AC
7D 9S 6S 2H TD 9C 4H 8H QS 4C
3C 6H 5D 4H 8C 9C KC 6S QD QS
3S 9H KD TC 2D JS 8C 6S 4H 4S
2S 4C 8S QS 6H KH 3H TH 8C 5D
2C KH 5S 3S 7S 7H 6C 9D QD 8D
8H KS AC 2D KH TS 6C JS KC 7H
9C KS 5C TD QC AH 6C 5H 9S 7C
5D 4D 3H 4H 6S 7C 7S AH QD TD
2H 7D QC 6S TC TS AH 7S 9D 3H
TH 5H QD 9S KS 7S 7C 6H 8C TD
TH 2D 4D QC 5C 7D JD AH 9C 4H
4H 3H AH 8D 6H QC QH 9H 2H 2C
2D AD 4C TS 6H 7S TH 4H QS TD
3C KD 2H 3H QS JD TC QC 5D 8H
KS JC QD TH 9S KD 8D 8C 2D 9C
3C QD KD 6D 4D 8D AH AD QC 8S
8H 3S 9D 2S 3H KS 6H 4C 7C KC
TH 9S 5C 3D 7D 6H AC 7S 4D 2C
5C 3D JD 4D 2D 6D 5H 9H 4C KH
AS 7H TD 6C 2H 3D QD KS 4C 4S
JC 3C AC 7C JD JS 8H 9S QC 5D
JD 6S 5S 2H AS 8C 7D 5H JH 3D
8D TC 5S 9S 8S 3H JC 5H 7S AS
5C TD 3D 7D 4H 8D 7H 4D 5D JS
QS 9C KS TD 2S 8S 5C 2H 4H AS
TH 7S 4H 7D 3H JD KD 5D 2S KC
JD 7H 4S 8H 4C JS 6H QH 5S 4H
2C QS 8C 5S 3H QC 2S 6C QD AD
8C 3D JD TC 4H 2H AD 5S AC 2S
5D 2C JS 2D AD 9D 3D 4C 4S JH
8D 5H 5D 6H 7S 4D KS 9D TD JD
3D 6D 9C 2S AS 7D 5S 5C 8H JD
7C 8S 3S 6S 5H JD TC AD 7H 7S
2S 9D TS 4D AC 8D 6C QD JD 3H
9S KH 2C 3C AC 3D 5H 6H 8D 5D
KS 3D 2D 6S AS 4C 2S 7C 7H KH
AC 2H 3S JC 5C QH 4D 2D 5H 7S
TS AS JD 8C 6H JC 8S 5S 2C 5D
7S QH 7H 6C QC 8H 2D 7C JD 2S
2C QD 2S 2H JC 9C 5D 2D JD JH
7C 5C 9C 8S 7D 6D 8D 6C 9S JH
2C AD 6S 5H 3S KS 7S 9D KH 4C
7H 6C 2C 5C TH 9D 8D 3S QC AH
5S KC 6H TC 5H 8S TH 6D 3C AH
9C KD 4H AD TD 9S 4S 7D 6H 5D
7H 5C 5H 6D AS 4C KD KH 4H 9D
3C 2S 5C 6C JD QS 2H 9D 7D 3H
AC 2S 6S 7S JS QD 5C QS 6H AD
5H TH QC 7H TC 3S 7C 6D KC 3D
4H 3D QC 9S 8H 2C 3S JC KS 5C
4S 6S 2C 6H 8S 3S 3D 9H 3H JS
4S 8C 4D 2D 8H 9H 7D 9D AH TS
9S 2C 9H 4C 8D AS 7D 3D 6D 5S
6S 4C 7H 8C 3H 5H JC AH 9D 9C
2S 7C 5S JD 8C 3S 3D 4D 7D 6S
3C KC 4S 5D 7D 3D JD 7H 3H 4H
9C 9H 4H 4D TH 6D QD 8S 9S 7S
2H AC 8S 4S AD 8C 2C AH 7D TC
TS 9H 3C AD KS TC 3D 8C 8H JD
QC 8D 2C 3C 7D 7C JD 9H 9C 6C
AH 6S JS JH 5D AS QC 2C JD TD
9H KD 2H 5D 2D 3S 7D TC AH TS
TD 8H AS 5D AH QC AC 6S TC 5H
KS 4S 7H 4D 8D 9C TC 2H 6H 3H
3H KD 4S QD QH 3D 8H 8C TD 7S
8S JD TC AH JS QS 2D KH KS 4D
3C AD JC KD JS KH 4S TH 9H 2C
QC 5S JS 9S KS AS 7C QD 2S JD
KC 5S QS 3S 2D AC 5D 9H 8H KS
6H 9C TC AD 2C 6D 5S JD 6C 7C
QS KH TD QD 2C 3H 8S 2S QC AH
9D 9H JH TC QH 3C 2S JS 5C 7H
6C 3S 3D 2S 4S QD 2D TH 5D 2C
2D 6H 6D 2S JC QH AS 7H 4H KH
5H 6S KS AD TC TS 7C AC 4S 4H
AD 3C 4H QS 8C 9D KS 2H 2D 4D
4S 9D 6C 6D 9C AC 8D 3H 7H KD
JC AH 6C TS JD 6D AD 3S 5D QD
JC JH JD 3S 7S 8S JS QC 3H 4S
JD TH 5C 2C AD JS 7H 9S 2H 7S
8D 3S JH 4D QC AS JD 2C KC 6H
2C AC 5H KD 5S 7H QD JH AH 2D
JC QH 8D 8S TC 5H 5C AH 8C 6C
3H JS 8S QD JH 3C 4H 6D 5C 3S
6D 4S 4C AH 5H 5S 3H JD 7C 8D
8H AH 2H 3H JS 3C 7D QC 4H KD
6S 2H KD 5H 8H 2D 3C 8S 7S QD
2S 7S KC QC AH TC QS 6D 4C 8D
5S 9H 2C 3S QD 7S 6C 2H 7C 9D
3C 6C 5C 5S JD JC KS 3S 5D TS
7C KS 6S 5S 2S 2D TC 2H 5H QS
AS 7H 6S TS 5H 9S 9D 3C KD 2H
4S JS QS 3S 4H 7C 2S AC 6S 9D
8C JH 2H 5H 7C 5D QH QS KH QC
3S TD 3H 7C KC 8D 5H 8S KH 8C
4H KH JD TS 3C 7H AS QC JS 5S
AH 9D 2C 8D 4D 2D 6H 6C KC 6S
2S 6H 9D 3S 7H 4D KH 8H KD 3D
9C TC AC JH KH 4D JD 5H TD 3S
7S 4H 9D AS 4C 7D QS 9S 2S KH
3S 8D 8S KS 8C JC 5C KH 2H 5D
8S QH 2C 4D KC JS QC 9D AC 6H
8S 8C 7C JS JD 6S 4C 9C AC 4S
QH 5D 2C 7D JC 8S 2D JS JH 4C
JS 4C 7S TS JH KC KH 5H QD 4S
QD 8C 8D 2D 6S TD 9D AC QH 5S
QH QC JS 3D 3C 5C 4H KH 8S 7H
7C 2C 5S JC 8S 3H QC 5D 2H KC
5S 8D KD 6H 4H QD QH 6D AH 3D
7S KS 6C 2S 4D AC QS 5H TS JD
7C 2D TC 5D QS AC JS QC 6C KC
2C KS 4D 3H TS 8S AD 4H 7S 9S
QD 9H QH 5H 4H 4D KH 3S JC AD
4D AC KC 8D 6D 4C 2D KH 2C JD
2C 9H 2D AH 3H 6D 9C 7D TC KS
8C 3H KD 7C 5C 2S 4S 5H AS AH
TH JD 4H KD 3H TC 5C 3S AC KH
6D 7H AH 7S QC 6H 2D TD JD AS
JH 5D 7H TC 9S 7D JC AS 5S KH
2H 8C AD TH 6H QD KD 9H 6S 6C
QH KC 9D 4D 3S JS JH 4H 2C 9H
TC 7H KH 4H JC 7D 9S 3H QS 7S
AD 7D JH 6C 7H 4H 3S 3H 4D QH
JD 2H 5C AS 6C QC 4D 3C TC JH
AC JD 3H 6H 4C JC AD 7D 7H 9H
4H TC TS 2C 8C 6S KS 2H JD 9S
4C 3H QS QC 9S 9H 6D KC 9D 9C
5C AD 8C 2C QH TH QD JC 8D 8H
QC 2C 2S QD 9C 4D 3S 8D JH QS
9D 3S 2C 7S 7C JC TD 3C TC 9H
3C TS 8H 5C 4C 2C 6S 8D 7C 4H
KS 7H 2H TC 4H 2C 3S AS AH QS
8C 2D 2H 2C 4S 4C 6S 7D 5S 3S
TH QC 5D TD 3C QS KD KC KS AS
4D AH KD 9H KS 5C 4C 6H JC 7S
KC 4H 5C QS TC 2H JC 9S AH QH
4S 9H 3H 5H 3C QD 2H QC JH 8H
5D AS 7H 2C 3D JH 6H 4C 6S 7D
9C JD 9H AH JS 8S QH 3H KS 8H
3S AC QC TS 4D AD 3D AH 8S 9H
7H 3H QS 9C 9S 5H JH JS AH AC
8D 3C JD 2H AC 9C 7H 5S 4D 8H
7C JH 9H 6C JS 9S 7H 8C 9D 4H
2D AS 9S 6H 4D JS JH 9H AD QD
6H 7S JH KH AH 7H TD 5S 6S 2C
8H JH 6S 5H 5S 9D TC 4C QC 9S
7D 2C KD 3H 5H AS QD 7H JS 4D
TS QH 6C 8H TH 5H 3C 3H 9C 9D
AD KH JS 5D 3H AS AC 9S 5C KC
2C KH 8C JC QS 6D AH 2D KC TC
9D 3H 2S 7C 4D 6D KH KS 8D 7D
9H 2S TC JH AC QC 3H 5S 3S 8H
3S AS KD 8H 4C 3H 7C JH QH TS
7S 6D 7H 9D JH 4C 3D 3S 6C AS
4S 2H 2C 4C 8S 5H KC 8C QC QD
3H 3S 6C QS QC 2D 6S 5D 2C 9D
2H 8D JH 2S 3H 2D 6C 5C 7S AD
9H JS 5D QH 8S TS 2H 7S 6S AD
6D QC 9S 7H 5H 5C 7D KC JD 4H
QC 5S 9H 9C 4D 6S KS 2S 4C 7C
9H 7C 4H 8D 3S 6H 5C 8H JS 7S
2D 6H JS TD 4H 4D JC TH 5H KC
AC 7C 8D TH 3H 9S 2D 4C KC 4D
KD QS 9C 7S 3D KS AD TS 4C 4H
QH 9C 8H 2S 7D KS 7H 5D KD 4C
9C 2S 2H JC 6S 6C TC QC JH 5C
7S AC 8H KC 8S 6H QS JC 3D 6S
JS 2D JH 8C 4S 6H 8H 6D 5D AD
6H 7D 2S 4H 9H 7C AS AC 8H 5S
3C JS 4S 6D 5H 2S QH 6S 9C 2C
3D 5S 6S 9S 4C QS 8D QD 8S TC
9C 3D AH 9H 5S 2C 7D AD JC 3S
7H TC AS 3C 6S 6D 7S KH KC 9H
3S TC 8H 6S 5H JH 8C 7D AC 2S
QD 9D 9C 3S JC 8C KS 8H 5D 4D
JS AH JD 6D 9D 8C 9H 9S 8H 3H
2D 6S 4C 4D 8S AD 4S TC AH 9H
TS AC QC TH KC 6D 4H 7S 8C 2H
3C QD JS 9D 5S JC AH 2H TS 9H
3H 4D QH 5D 9C 5H 7D 4S JC 3S
8S TH 3H 7C 2H JD JS TS AC 8D
9C 2H TD KC JD 2S 8C 5S AD 2C
3D KD 7C 5H 4D QH QD TC 6H 7D
7H 2C KC 5S KD 6H AH QC 7S QH
6H 5C AC 5H 2C 9C 2D 7C TD 2S
4D 9D AH 3D 7C JD 4H 8C 4C KS
TH 3C JS QH 8H 4C AS 3D QS QC
4D 7S 5H JH 6D 7D 6H JS KH 3C
QD 8S 7D 2H 2C 7C JC 2S 5H 8C
QH 8S 9D TC 2H AD 7C 8D QD 6S
3S 7C AD 9H 2H 9S JD TS 4C 2D
3S AS 4H QC 2C 8H 8S 7S TD TC
JH TH TD 3S 4D 4H 5S 5D QS 2C
8C QD QH TC 6D 4S 9S 9D 4H QC
8C JS 9D 6H JD 3H AD 6S TD QC
KC 8S 3D 7C TD 7D 8D 9H 4S 3S
6C 4S 3D 9D KD TC KC KS AC 5S
7C 6S QH 3D JS KD 6H 6D 2D 8C
JD 2S 5S 4H 8S AC 2D 6S TS 5C
5H 8C 5S 3C 4S 3D 7C 8D AS 3H
AS TS 7C 3H AD 7D JC QS 6C 6H
3S 9S 4C AC QH 5H 5D 9H TS 4H
6C 5C 7H 7S TD AD JD 5S 2H 2S
7D 6C KC 3S JD 8D 8S TS QS KH
8S QS 8D 6C TH AC AH 2C 8H 9S
7H TD KH QH 8S 3D 4D AH JD AS
TS 3D 2H JC 2S JH KH 6C QC JS
KC TH 2D 6H 7S 2S TC 8C 9D QS
3C 9D 6S KH 8H 6D 5D TH 2C 2H
6H TC 7D AD 4D 8S TS 9H TD 7S
JS 6D JD JC 2H AC 6C 3D KH 8D
KH JD 9S 5D 4H 4C 3H 7S QS 5C
4H JD 5D 3S 3C 4D KH QH QS 7S
JD TS 8S QD AH 4C 6H 3S 5S 2C
QS 3D JD AS 8D TH 7C 6S QC KS
7S 2H 8C QC 7H AC 6D 2D TH KH
5S 6C 7H KH 7D AH 8C 5C 7S 3D
3C KD AD 7D 6C 4D KS 2D 8C 4S
7C 8D 5S 2D 2S AH AD 2C 9D TD
3C AD 4S KS JH 7C 5C 8C 9C TH
AS TD 4D 7C JD 8C QH 3C 5H 9S
3H 9C 8S 9S 6S QD KS AH 5H JH
QC 9C 5S 4H 2H TD 7D AS 8C 9D
8C 2C 9D KD TC 7S 3D KH QC 3C
4D AS 4C QS 5S 9D 6S JD QH KS
6D AH 6C 4C 5H TS 9H 7D 3D 5S
QS JD 7C 8D 9C AC 3S 6S 6C KH
8H JH 5D 9S 6D AS 6S 3S QC 7H
QD AD 5C JH 2H AH 4H AS KC 2C
JH 9C 2C 6H 2D JS 5D 9H KC 6D
7D 9D KD TH 3H AS 6S QC 6H AD
JD 4H 7D KC 3H JS 3C TH 3D QS
4C 3H 8C QD 5H 6H AS 8H AD JD
TH 8S KD 5D QC 7D JS 5S 5H TS
7D KC 9D QS 3H 3C 6D TS 7S AH
7C 4H 7H AH QC AC 4D 5D 6D TH
3C 4H 2S KD 8H 5H JH TC 6C JD
4S 8C 3D 4H JS TD 7S JH QS KD
7C QC KD 4D 7H 6S AD TD TC KH
5H 9H KC 3H 4D 3D AD 6S QD 6H
TH 7C 6H TS QH 5S 2C KC TD 6S
7C 4D 5S JD JH 7D AC KD KH 4H
7D 6C 8D 8H 5C JH 8S QD TH JD
8D 7D 6C 7C 9D KD AS 5C QH JH
9S 2C 8C 3C 4C KS JH 2D 8D 4H
7S 6C JH KH 8H 3H 9D 2D AH 6D
4D TC 9C 8D 7H TD KS TH KD 3C
JD 9H 8D QD AS KD 9D 2C 2S 9C
8D 3H 5C 7H KS 5H QH 2D 8C 9H
2D TH 6D QD 6C KC 3H 3S AD 4C
4H 3H JS 9D 3C TC 5H QH QC JC
3D 5C 6H 3S 3C JC 5S 7S 2S QH
AC 5C 8C 4D 5D 4H 2S QD 3C 3H
2C TD AH 9C KD JS 6S QD 4C QC
QS 8C 3S 4H TC JS 3H 7C JC AD
5H 4D 9C KS JC TD 9S TS 8S 9H
QD TS 7D AS AC 2C TD 6H 8H AH
6S AD 8C 4S 9H 8D 9D KH 8S 3C
QS 4D 2D 7S KH JS JC AD 4C 3C
QS 9S 7H KC TD TH 5H JS AC JH
6D AC 2S QS 7C AS KS 6S KH 5S
6D 8H KH 3C QS 2H 5C 9C 9D 6C
JS 2C 4C 6H 7D JC AC QD TD 3H
4H QC 8H JD 4C KD KS 5C KC 7S
6D 2D 3H 2S QD 5S 7H AS TH 6S
AS 6D 8D 2C 8S TD 8H QD JC AH
9C 9H 2D TD QH 2H 5C TC 3D 8H
KC 8S 3D KH 2S TS TC 6S 4D JH
9H 9D QS AC KC 6H 5D 4D 8D AH
9S 5C QS 4H 7C 7D 2H 8S AD JS
3D AC 9S AS 2C 2D 2H 3H JC KH
7H QH KH JD TC KS 5S 8H 4C 8D
2H 7H 3S 2S 5H QS 3C AS 9H KD
AD 3D JD 6H 5S 9C 6D AC 9S 3S
3D 5D 9C 2D AC 4S 2S AD 6C 6S
QC 4C 2D 3H 6S KC QH QD 2H JH
QC 3C 8S 4D 9S 2H 5C 8H QS QD
6D KD 6S 7H 3S KH 2H 5C JC 6C
3S 9S TC 6S 8H 2D AD 7S 8S TS
3C 6H 9C 3H 5C JC 8H QH TD QD
3C JS QD 5D TD 2C KH 9H TH AS
9S TC JD 3D 5C 5H AD QH 9H KC
TC 7H 4H 8H 3H TD 6S AC 7C 2S
QS 9D 5D 3C JC KS 4D 6C JH 2S
9S 6S 3C 7H TS 4C KD 6D 3D 9C
2D 9H AH AC 7H 2S JH 3S 7C QC
QD 9H 3C 2H AC AS 8S KD 8C KH
2D 7S TD TH 6D JD 8D 4D 2H 5S
8S QH KD JD QS JH 4D KC 5H 3S
3C KH QC 6D 8H 3S AH 7D TD 2D
5S 9H QH 4S 6S 6C 6D TS TH 7S
6C 4C 6D QS JS 9C TS 3H 8D 8S
JS 5C 7S AS 2C AH 2H AD 5S TC
KD 6C 9C 9D TS 2S JC 4H 2C QD
QS 9H TC 3H KC KS 4H 3C AD TH
KH 9C 2H KD 9D TC 7S KC JH 2D
7C 3S KC AS 8C 5D 9C 9S QH 3H
2D 8C TD 4C 2H QC 5D TC 2C 7D
KS 4D 6C QH TD KH 5D 7C AD 8D
2S 9S 8S 4C 8C 3D 6H QD 7C 7H
6C 8S QH 5H TS 5C 3C 4S 2S 2H
8S 6S 2H JC 3S 3H 9D 8C 2S 7H
QC 2C 8H 9C AC JD 4C 4H 6S 3S
3H 3S 7D 4C 9S 5H 8H JC 3D TC
QH 2S 2D 9S KD QD 9H AD 6D 9C
8D 2D KS 9S JC 4C JD KC 4S TH
KH TS 6D 4D 5C KD 5H AS 9H AD
QD JS 7C 6D 5D 5C TH 5H QH QS
9D QH KH 5H JH 4C 4D TC TH 6C
KH AS TS 9D KD 9C 7S 4D 8H 5S
KH AS 2S 7D 9D 4C TS TH AH 7C
KS 4D AC 8S 9S 8D TH QH 9D 5C
5D 5C 8C QS TC 4C 3D 3S 2C 8D
9D KS 2D 3C KC 4S 8C KH 6C JC
8H AH 6H 7D 7S QD 3C 4C 6C KC
3H 2C QH 8H AS 7D 4C 8C 4H KC
QD 5S 4H 2C TD AH JH QH 4C 8S
3H QS 5S JS 8H 2S 9H 9C 3S 2C
6H TS 7S JC QD AC TD KC 5S 3H
QH AS QS 7D JC KC 2C 4C 5C 5S
QH 3D AS JS 4H 8D 7H JC 2S 9C
5D 4D 2S 4S 9D 9C 2D QS 8H 7H
6D 7H 3H JS TS AC 2D JH 7C 8S
JH 5H KC 3C TC 5S 9H 4C 8H 9D
8S KC 5H 9H AD KS 9D KH 8D AH
JC 2H 9H KS 6S 3H QC 5H AH 9C
5C KH 5S AD 6C JC 9H QC 9C TD
5S 5D JC QH 2D KS 8H QS 2H TS
JH 5H 5S AH 7H 3C 8S AS TD KH
6H 3D JD 2C 4C KC 7S AH 6C JH
4C KS 9D AD 7S KC 7D 8H 3S 9C
7H 5C 5H 3C 8H QC 3D KH 6D JC
2D 4H 5D 7D QC AD AH 9H QH 8H
KD 8C JS 9D 3S 3C 2H 5D 6D 2S
8S 6S TS 3C 6H 8D 5S 3H TD 6C
KS 3D JH 9C 7C 9S QS 5S 4H 6H
7S 6S TH 4S KC KD 3S JC JH KS
7C 3C 2S 6D QH 2C 7S 5H 8H AH
KC 8D QD 6D KH 5C 7H 9D 3D 9C
6H 2D 8S JS 9S 2S 6D KC 7C TC
KD 9C JH 7H KC 8S 2S 7S 3D 6H
4H 9H 2D 4C 8H 7H 5S 8S 2H 8D
AD 7C 3C 7S 5S 4D 9H 3D JC KH
5D AS 7D 6D 9C JC 4C QH QS KH
KD JD 7D 3D QS QC 8S 6D JS QD
6S 8C 5S QH TH 9H AS AC 2C JD
QC KS QH 7S 3C 4C 5C KC 5D AH
6C 4H 9D AH 2C 3H KD 3D TS 5C
TD 8S QS AS JS 3H KD AC 4H KS
7D 5D TS 9H 4H 4C 9C 2H 8C QC
2C 7D 9H 4D KS 4C QH AD KD JS
QD AD AH KH 9D JS 9H JC KD JD
8S 3C 4S TS 7S 4D 5C 2S 6H 7C
JS 7S 5C KD 6D QH 8S TD 2H 6S
QH 6C TC 6H TD 4C 9D 2H QC 8H
3D TS 4D 2H 6H 6S 2C 7H 8S 6C
9H 9D JD JH 3S AH 2C 6S 3H 8S
2C QS 8C 5S 3H 2S 7D 3C AD 4S
5C QC QH AS TS 4S 6S 4C 5H JS
JH 5C TD 4C 6H JS KD KH QS 4H
TC KH JC 4D 9H 9D 8D KC 3C 8H
2H TC 8S AD 9S 4H TS 7H 2C 5C
4H 2S 6C 5S KS AH 9C 7C 8H KD
TS QH TD QS 3C JH AH 2C 8D 7D
5D KC 3H 5S AC 4S 7H QS 4C 2H
3D 7D QC KH JH 6D 6C TD TH KD
5S 8D TH 6C 9D 7D KH 8C 9S 6D
JD QS 7S QC 2S QH JC 4S KS 8D
7S 5S 9S JD KD 9C JC AD 2D 7C
4S 5H AH JH 9C 5D TD 7C 2D 6S
KC 6C 7H 6S 9C QD 5S 4H KS TD
6S 8D KS 2D TH TD 9H JD TS 3S
KH JS 4H 5D 9D TC TD QC JD TS
QS QD AC AD 4C 6S 2D AS 3H KC
4C 7C 3C TD QS 9C KC AS 8D AD
KC 7H QC 6D 8H 6S 5S AH 7S 8C
3S AD 9H JC 6D JD AS KH 6S JH
AD 3D TS KS 7H JH 2D JS QD AC
9C JD 7C 6D TC 6H 6C JC 3D 3S
QC KC 3S JC KD 2C 8D AH QS TS
AS KD 3D JD 8H 7C 8C 5C QD 6C
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # A complete working Python program to demonstrate all
# stack operations using a doubly linked list
class Node:
def __init__(self, data):
self.data = data # Assign data
self.next = None # Initialize next as null
self.prev = None # Initialize prev as null
class Stack:
"""
>>> stack = Stack()
>>> stack.is_empty()
True
>>> stack.print_stack()
stack elements are:
>>> for i in range(4):
... stack.push(i)
...
>>> stack.is_empty()
False
>>> stack.print_stack()
stack elements are:
3->2->1->0->
>>> stack.top()
3
>>> len(stack)
4
>>> stack.pop()
3
>>> stack.print_stack()
stack elements are:
2->1->0->
"""
def __init__(self):
self.head = None
def push(self, data):
"""add a Node to the stack"""
if self.head is None:
self.head = Node(data)
else:
new_node = Node(data)
self.head.prev = new_node
new_node.next = self.head
new_node.prev = None
self.head = new_node
def pop(self):
"""pop the top element off the stack"""
if self.head is None:
return None
else:
temp = self.head.data
self.head = self.head.next
            if self.head is not None:  # guard: popping the last node leaves an empty stack
                self.head.prev = None
return temp
def top(self):
"""return the top element of the stack"""
return self.head.data
def __len__(self):
temp = self.head
count = 0
while temp is not None:
count += 1
temp = temp.next
return count
def is_empty(self):
return self.head is None
def print_stack(self):
print("stack elements are:")
temp = self.head
while temp is not None:
print(temp.data, end="->")
temp = temp.next
# Code execution starts here
if __name__ == "__main__":
# Start with the empty stack
stack = Stack()
# Insert 4 at the beginning. So stack becomes 4->None
print("Stack operations using Doubly LinkedList")
stack.push(4)
# Insert 5 at the beginning. So stack becomes 4->5->None
stack.push(5)
# Insert 6 at the beginning. So stack becomes 4->5->6->None
stack.push(6)
# Insert 7 at the beginning. So stack becomes 4->5->6->7->None
stack.push(7)
# Print the stack
stack.print_stack()
# Print the top element
print("\nTop element is ", stack.top())
# Print the stack size
print("Size of the stack is ", len(stack))
# pop the top element
stack.pop()
# pop the top element
stack.pop()
# two elements have now been popped off
stack.print_stack()
# Print True if the stack is empty else False
print("\nstack is empty:", stack.is_empty())
| # A complete working Python program to demonstrate all
# stack operations using a doubly linked list
class Node:
def __init__(self, data):
self.data = data # Assign data
self.next = None # Initialize next as null
self.prev = None # Initialize prev as null
class Stack:
"""
>>> stack = Stack()
>>> stack.is_empty()
True
>>> stack.print_stack()
stack elements are:
>>> for i in range(4):
... stack.push(i)
...
>>> stack.is_empty()
False
>>> stack.print_stack()
stack elements are:
3->2->1->0->
>>> stack.top()
3
>>> len(stack)
4
>>> stack.pop()
3
>>> stack.print_stack()
stack elements are:
2->1->0->
"""
def __init__(self):
self.head = None
def push(self, data):
"""add a Node to the stack"""
if self.head is None:
self.head = Node(data)
else:
new_node = Node(data)
self.head.prev = new_node
new_node.next = self.head
new_node.prev = None
self.head = new_node
def pop(self):
"""pop the top element off the stack"""
if self.head is None:
return None
else:
temp = self.head.data
self.head = self.head.next
            if self.head is not None:  # guard: popping the last node leaves an empty stack
                self.head.prev = None
return temp
def top(self):
"""return the top element of the stack"""
return self.head.data
def __len__(self):
temp = self.head
count = 0
while temp is not None:
count += 1
temp = temp.next
return count
def is_empty(self):
return self.head is None
def print_stack(self):
print("stack elements are:")
temp = self.head
while temp is not None:
print(temp.data, end="->")
temp = temp.next
# Code execution starts here
if __name__ == "__main__":
# Start with the empty stack
stack = Stack()
# Insert 4 at the beginning. So stack becomes 4->None
print("Stack operations using Doubly LinkedList")
stack.push(4)
# Insert 5 at the beginning. So stack becomes 4->5->None
stack.push(5)
# Insert 6 at the beginning. So stack becomes 4->5->6->None
stack.push(6)
# Insert 7 at the beginning. So stack becomes 4->5->6->7->None
stack.push(7)
# Print the stack
stack.print_stack()
# Print the top element
print("\nTop element is ", stack.top())
# Print the stack size
print("Size of the stack is ", len(stack))
# pop the top element
stack.pop()
# pop the top element
stack.pop()
# two elements have now been popped off
stack.print_stack()
# Print True if the stack is empty else False
print("\nstack is empty:", stack.is_empty())
| -1 |
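The stack.py listing in the row above is reproduced unchanged in both content columns (label -1), while PR 4,409 is described as fixing the mypy type annotations in linked_stack.py, evaluate_postfix_notations.py and stack.py. Purely as a hedged illustration of what mypy-friendly annotations for such a doubly linked list stack can look like (the names mirror the listing, but this is an assumed sketch, not the diff actually merged in the PR):

```python
from __future__ import annotations

from typing import Generic, TypeVar

T = TypeVar("T")


class Node(Generic[T]):
    def __init__(self, data: T) -> None:
        self.data = data
        self.next: Node[T] | None = None  # node below this one on the stack
        self.prev: Node[T] | None = None  # node above this one on the stack


class Stack(Generic[T]):
    def __init__(self) -> None:
        self.head: Node[T] | None = None  # top of the stack, or None when empty

    def push(self, data: T) -> None:
        """Place a new node holding ``data`` on top of the stack."""
        new_node = Node(data)
        if self.head is not None:
            self.head.prev = new_node
            new_node.next = self.head
        self.head = new_node

    def pop(self) -> T | None:
        """Remove and return the top element, or None if the stack is empty."""
        if self.head is None:
            return None
        data = self.head.data
        self.head = self.head.next
        if self.head is not None:
            self.head.prev = None
        return data

    def is_empty(self) -> bool:
        return self.head is None
```

A sketch along these lines should pass `mypy --strict`; the annotations the PR actually settled on may well differ.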
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def dfs(u):
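    # First pass: depth-first search on the original graph; u is pushed onto the
    # stack only after everything reachable from it has been visited (finish-time order).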
global g, r, scc, component, visit, stack
if visit[u]:
return
visit[u] = True
for v in g[u]:
dfs(v)
stack.append(u)
def dfs2(u):
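    # Second pass: depth-first search on the reversed graph; every vertex reached
    # here belongs to the strongly connected component currently being collected.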
global g, r, scc, component, visit, stack
if visit[u]:
return
visit[u] = True
component.append(u)
for v in r[u]:
dfs2(v)
def kosaraju():
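    # Order the vertices by finish time, then sweep that order in reverse on the
    # reversed graph, starting one new component at each still-unvisited vertex.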
global g, r, scc, component, visit, stack
for i in range(n):
dfs(i)
visit = [False] * n
for i in stack[::-1]:
if visit[i]:
continue
component = []
dfs2(i)
scc.append(component)
return scc
if __name__ == "__main__":
# n - no of nodes, m - no of edges
n, m = list(map(int, input().strip().split()))
g = [[] for i in range(n)] # graph
r = [[] for i in range(n)] # reversed graph
# input graph data (edges)
for i in range(m):
u, v = list(map(int, input().strip().split()))
g[u].append(v)
r[v].append(u)
stack = []
visit = [False] * n
scc = []
component = []
print(kosaraju())
| def dfs(u):
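    # First pass: depth-first search on the original graph; u is pushed onto the
    # stack only after everything reachable from it has been visited (finish-time order).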
global g, r, scc, component, visit, stack
if visit[u]:
return
visit[u] = True
for v in g[u]:
dfs(v)
stack.append(u)
def dfs2(u):
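    # Second pass: depth-first search on the reversed graph; every vertex reached
    # here belongs to the strongly connected component currently being collected.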
global g, r, scc, component, visit, stack
if visit[u]:
return
visit[u] = True
component.append(u)
for v in r[u]:
dfs2(v)
def kosaraju():
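    # Order the vertices by finish time, then sweep that order in reverse on the
    # reversed graph, starting one new component at each still-unvisited vertex.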
global g, r, scc, component, visit, stack
for i in range(n):
dfs(i)
visit = [False] * n
for i in stack[::-1]:
if visit[i]:
continue
component = []
dfs2(i)
scc.append(component)
return scc
if __name__ == "__main__":
# n - no of nodes, m - no of edges
n, m = list(map(int, input().strip().split()))
g = [[] for i in range(n)] # graph
r = [[] for i in range(n)] # reversed graph
# input graph data (edges)
for i in range(m):
u, v = list(map(int, input().strip().split()))
g[u].append(v)
r[v].append(u)
stack = []
visit = [False] * n
scc = []
component = []
print(kosaraju())
| -1 |
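The Kosaraju listing above, also identical in both content columns (label -1), keeps its state in module-level globals. The sketch below restates the same two-pass idea with type hints and without globals; it is an illustrative assumption, not code taken from PR 4,409 or from the repository:

```python
from __future__ import annotations


def kosaraju_scc(graph: list[list[int]]) -> list[list[int]]:
    """Return the strongly connected components of a directed graph
    given as an adjacency list over the vertices 0..n-1."""
    n = len(graph)
    reversed_graph: list[list[int]] = [[] for _ in range(n)]
    for u in range(n):
        for v in graph[u]:
            reversed_graph[v].append(u)

    visited = [False] * n
    finish_order: list[int] = []

    def dfs(u: int) -> None:
        # First pass: record u only after everything reachable from it is finished.
        visited[u] = True
        for v in graph[u]:
            if not visited[v]:
                dfs(v)
        finish_order.append(u)

    for u in range(n):
        if not visited[u]:
            dfs(u)

    visited = [False] * n
    components: list[list[int]] = []

    def dfs_reversed(u: int, component: list[int]) -> None:
        # Second pass: everything reachable in the reversed graph joins this component.
        visited[u] = True
        component.append(u)
        for v in reversed_graph[u]:
            if not visited[v]:
                dfs_reversed(v, component)

    # Sweep the vertices in reverse finish order; each unvisited one seeds a new SCC.
    for u in reversed(finish_order):
        if not visited[u]:
            component: list[int] = []
            dfs_reversed(u, component)
            components.append(component)
    return components


if __name__ == "__main__":
    # 0 and 1 reach each other, so they form one SCC; vertex 2 is on its own.
    print(kosaraju_scc([[1], [0, 2], []]))  # [[0, 1], [2]]
```

Keeping the state local makes the two passes easier to annotate and to test in isolation than the global-based version above.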
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Breath First Search (BFS) can be used when finding the shortest path
from a given source node to a target node in an unweighted graph.
"""
from __future__ import annotations
graph = {
"A": ["B", "C", "E"],
"B": ["A", "D", "E"],
"C": ["A", "F", "G"],
"D": ["B"],
"E": ["A", "B", "D"],
"F": ["C"],
"G": ["C"],
}
class Graph:
def __init__(self, graph: dict[str, str], source_vertex: str) -> None:
"""Graph is implemented as dictionary of adjacency lists. Also,
Source vertex have to be defined upon initialization.
"""
self.graph = graph
# mapping node to its parent in resulting breadth first tree
self.parent = {}
self.source_vertex = source_vertex
def breath_first_search(self) -> None:
"""This function is a helper for running breath first search on this graph.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
>>> g.parent
{'G': None, 'C': 'G', 'A': 'C', 'F': 'C', 'B': 'A', 'E': 'A', 'D': 'B'}
"""
visited = {self.source_vertex}
self.parent[self.source_vertex] = None
queue = [self.source_vertex] # first in first out queue
while queue:
vertex = queue.pop(0)
for adjacent_vertex in self.graph[vertex]:
if adjacent_vertex not in visited:
visited.add(adjacent_vertex)
self.parent[adjacent_vertex] = vertex
queue.append(adjacent_vertex)
def shortest_path(self, target_vertex: str) -> str:
"""This shortest path function returns a string, describing the result:
1.) No path is found. The string is a human readable message to indicate this.
2.) The shortest path is found. The string is in the form
`v1(->v2->v3->...->vn)`, where v1 is the source vertex and vn is the target
vertex, if it exists separately.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
Case 1 - No path is found.
>>> g.shortest_path("Foo")
'No path from vertex:G to vertex:Foo'
Case 2 - The path is found.
>>> g.shortest_path("D")
'G->C->A->B->D'
>>> g.shortest_path("G")
'G'
"""
if target_vertex == self.source_vertex:
return f"{self.source_vertex}"
elif not self.parent.get(target_vertex):
return f"No path from vertex:{self.source_vertex} to vertex:{target_vertex}"
else:
return self.shortest_path(self.parent[target_vertex]) + f"->{target_vertex}"
if __name__ == "__main__":
import doctest
doctest.testmod()
g = Graph(graph, "G")
g.breath_first_search()
print(g.shortest_path("D"))
print(g.shortest_path("G"))
print(g.shortest_path("Foo"))
| """Breath First Search (BFS) can be used when finding the shortest path
from a given source node to a target node in an unweighted graph.
"""
from __future__ import annotations
graph = {
"A": ["B", "C", "E"],
"B": ["A", "D", "E"],
"C": ["A", "F", "G"],
"D": ["B"],
"E": ["A", "B", "D"],
"F": ["C"],
"G": ["C"],
}
class Graph:
def __init__(self, graph: dict[str, str], source_vertex: str) -> None:
"""Graph is implemented as dictionary of adjacency lists. Also,
Source vertex have to be defined upon initialization.
"""
self.graph = graph
# mapping node to its parent in resulting breadth first tree
self.parent = {}
self.source_vertex = source_vertex
def breath_first_search(self) -> None:
"""This function is a helper for running breath first search on this graph.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
>>> g.parent
{'G': None, 'C': 'G', 'A': 'C', 'F': 'C', 'B': 'A', 'E': 'A', 'D': 'B'}
"""
visited = {self.source_vertex}
self.parent[self.source_vertex] = None
queue = [self.source_vertex] # first in first out queue
while queue:
vertex = queue.pop(0)
for adjacent_vertex in self.graph[vertex]:
if adjacent_vertex not in visited:
visited.add(adjacent_vertex)
self.parent[adjacent_vertex] = vertex
queue.append(adjacent_vertex)
def shortest_path(self, target_vertex: str) -> str:
"""This shortest path function returns a string, describing the result:
1.) No path is found. The string is a human readable message to indicate this.
2.) The shortest path is found. The string is in the form
`v1(->v2->v3->...->vn)`, where v1 is the source vertex and vn is the target
vertex, if it exists separately.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
Case 1 - No path is found.
>>> g.shortest_path("Foo")
'No path from vertex:G to vertex:Foo'
Case 2 - The path is found.
>>> g.shortest_path("D")
'G->C->A->B->D'
>>> g.shortest_path("G")
'G'
"""
if target_vertex == self.source_vertex:
return f"{self.source_vertex}"
elif not self.parent.get(target_vertex):
return f"No path from vertex:{self.source_vertex} to vertex:{target_vertex}"
else:
return self.shortest_path(self.parent[target_vertex]) + f"->{target_vertex}"
if __name__ == "__main__":
import doctest
doctest.testmod()
g = Graph(graph, "G")
g.breath_first_search()
print(g.shortest_path("D"))
print(g.shortest_path("G"))
print(g.shortest_path("Foo"))
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/bin/sh
#
# An example hook script to prepare a packed repository for use over
# dumb transports.
#
# To enable this hook, rename this file to "post-update".
exec git update-server-info
| #!/bin/sh
#
# An example hook script to prepare a packed repository for use over
# dumb transports.
#
# To enable this hook, rename this file to "post-update".
exec git update-server-info
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| alphabet = {
"A": ("ABCDEFGHIJKLM", "NOPQRSTUVWXYZ"),
"B": ("ABCDEFGHIJKLM", "NOPQRSTUVWXYZ"),
"C": ("ABCDEFGHIJKLM", "ZNOPQRSTUVWXY"),
"D": ("ABCDEFGHIJKLM", "ZNOPQRSTUVWXY"),
"E": ("ABCDEFGHIJKLM", "YZNOPQRSTUVWX"),
"F": ("ABCDEFGHIJKLM", "YZNOPQRSTUVWX"),
"G": ("ABCDEFGHIJKLM", "XYZNOPQRSTUVW"),
"H": ("ABCDEFGHIJKLM", "XYZNOPQRSTUVW"),
"I": ("ABCDEFGHIJKLM", "WXYZNOPQRSTUV"),
"J": ("ABCDEFGHIJKLM", "WXYZNOPQRSTUV"),
"K": ("ABCDEFGHIJKLM", "VWXYZNOPQRSTU"),
"L": ("ABCDEFGHIJKLM", "VWXYZNOPQRSTU"),
"M": ("ABCDEFGHIJKLM", "UVWXYZNOPQRST"),
"N": ("ABCDEFGHIJKLM", "UVWXYZNOPQRST"),
"O": ("ABCDEFGHIJKLM", "TUVWXYZNOPQRS"),
"P": ("ABCDEFGHIJKLM", "TUVWXYZNOPQRS"),
"Q": ("ABCDEFGHIJKLM", "STUVWXYZNOPQR"),
"R": ("ABCDEFGHIJKLM", "STUVWXYZNOPQR"),
"S": ("ABCDEFGHIJKLM", "RSTUVWXYZNOPQ"),
"T": ("ABCDEFGHIJKLM", "RSTUVWXYZNOPQ"),
"U": ("ABCDEFGHIJKLM", "QRSTUVWXYZNOP"),
"V": ("ABCDEFGHIJKLM", "QRSTUVWXYZNOP"),
"W": ("ABCDEFGHIJKLM", "PQRSTUVWXYZNO"),
"X": ("ABCDEFGHIJKLM", "PQRSTUVWXYZNO"),
"Y": ("ABCDEFGHIJKLM", "OPQRSTUVWXYZN"),
"Z": ("ABCDEFGHIJKLM", "OPQRSTUVWXYZN"),
}
def generate_table(key: str) -> list[tuple[str, str]]:
"""
>>> generate_table('marvin') # doctest: +NORMALIZE_WHITESPACE
[('ABCDEFGHIJKLM', 'UVWXYZNOPQRST'), ('ABCDEFGHIJKLM', 'NOPQRSTUVWXYZ'),
('ABCDEFGHIJKLM', 'STUVWXYZNOPQR'), ('ABCDEFGHIJKLM', 'QRSTUVWXYZNOP'),
('ABCDEFGHIJKLM', 'WXYZNOPQRSTUV'), ('ABCDEFGHIJKLM', 'UVWXYZNOPQRST')]
"""
return [alphabet[char] for char in key.upper()]
def encrypt(key: str, words: str) -> str:
"""
>>> encrypt('marvin', 'jessica')
'QRACRWU'
"""
cipher = ""
count = 0
table = generate_table(key)
for char in words.upper():
cipher += get_opponent(table[count], char)
count = (count + 1) % len(table)
return cipher
def decrypt(key: str, words: str) -> str:
"""
>>> decrypt('marvin', 'QRACRWU')
'JESSICA'
"""
return encrypt(key, words)
def get_position(table: tuple[str, str], char: str) -> tuple[int, int]:
"""
>>> get_position(generate_table('marvin')[0], 'M')
(0, 12)
"""
# `char` is either in the 0th row or the 1st row
row = 0 if char in table[0] else 1
col = table[row].index(char)
return row, col
def get_opponent(table: tuple[str, str], char: str) -> str:
"""
>>> get_opponent(generate_table('marvin')[0], 'M')
'T'
"""
row, col = get_position(table, char.upper())
if row == 1:
return table[0][col]
else:
return table[1][col] if row == 0 else char
if __name__ == "__main__":
import doctest
doctest.testmod() # Fist ensure that all our tests are passing...
"""
Demo:
Enter key: marvin
Enter text to encrypt: jessica
Encrypted: QRACRWU
Decrypted with key: JESSICA
"""
key = input("Enter key: ").strip()
text = input("Enter text to encrypt: ").strip()
cipher_text = encrypt(key, text)
print(f"Encrypted: {cipher_text}")
print(f"Decrypted with key: {decrypt(key, cipher_text)}")
| alphabet = {
"A": ("ABCDEFGHIJKLM", "NOPQRSTUVWXYZ"),
"B": ("ABCDEFGHIJKLM", "NOPQRSTUVWXYZ"),
"C": ("ABCDEFGHIJKLM", "ZNOPQRSTUVWXY"),
"D": ("ABCDEFGHIJKLM", "ZNOPQRSTUVWXY"),
"E": ("ABCDEFGHIJKLM", "YZNOPQRSTUVWX"),
"F": ("ABCDEFGHIJKLM", "YZNOPQRSTUVWX"),
"G": ("ABCDEFGHIJKLM", "XYZNOPQRSTUVW"),
"H": ("ABCDEFGHIJKLM", "XYZNOPQRSTUVW"),
"I": ("ABCDEFGHIJKLM", "WXYZNOPQRSTUV"),
"J": ("ABCDEFGHIJKLM", "WXYZNOPQRSTUV"),
"K": ("ABCDEFGHIJKLM", "VWXYZNOPQRSTU"),
"L": ("ABCDEFGHIJKLM", "VWXYZNOPQRSTU"),
"M": ("ABCDEFGHIJKLM", "UVWXYZNOPQRST"),
"N": ("ABCDEFGHIJKLM", "UVWXYZNOPQRST"),
"O": ("ABCDEFGHIJKLM", "TUVWXYZNOPQRS"),
"P": ("ABCDEFGHIJKLM", "TUVWXYZNOPQRS"),
"Q": ("ABCDEFGHIJKLM", "STUVWXYZNOPQR"),
"R": ("ABCDEFGHIJKLM", "STUVWXYZNOPQR"),
"S": ("ABCDEFGHIJKLM", "RSTUVWXYZNOPQ"),
"T": ("ABCDEFGHIJKLM", "RSTUVWXYZNOPQ"),
"U": ("ABCDEFGHIJKLM", "QRSTUVWXYZNOP"),
"V": ("ABCDEFGHIJKLM", "QRSTUVWXYZNOP"),
"W": ("ABCDEFGHIJKLM", "PQRSTUVWXYZNO"),
"X": ("ABCDEFGHIJKLM", "PQRSTUVWXYZNO"),
"Y": ("ABCDEFGHIJKLM", "OPQRSTUVWXYZN"),
"Z": ("ABCDEFGHIJKLM", "OPQRSTUVWXYZN"),
}
def generate_table(key: str) -> list[tuple[str, str]]:
"""
>>> generate_table('marvin') # doctest: +NORMALIZE_WHITESPACE
[('ABCDEFGHIJKLM', 'UVWXYZNOPQRST'), ('ABCDEFGHIJKLM', 'NOPQRSTUVWXYZ'),
('ABCDEFGHIJKLM', 'STUVWXYZNOPQR'), ('ABCDEFGHIJKLM', 'QRSTUVWXYZNOP'),
('ABCDEFGHIJKLM', 'WXYZNOPQRSTUV'), ('ABCDEFGHIJKLM', 'UVWXYZNOPQRST')]
"""
return [alphabet[char] for char in key.upper()]
def encrypt(key: str, words: str) -> str:
"""
>>> encrypt('marvin', 'jessica')
'QRACRWU'
"""
cipher = ""
count = 0
table = generate_table(key)
for char in words.upper():
cipher += get_opponent(table[count], char)
count = (count + 1) % len(table)
return cipher
def decrypt(key: str, words: str) -> str:
"""
>>> decrypt('marvin', 'QRACRWU')
'JESSICA'
"""
return encrypt(key, words)
def get_position(table: tuple[str, str], char: str) -> tuple[int, int]:
"""
>>> get_position(generate_table('marvin')[0], 'M')
(0, 12)
"""
# `char` is either in the 0th row or the 1st row
row = 0 if char in table[0] else 1
col = table[row].index(char)
return row, col
def get_opponent(table: tuple[str, str], char: str) -> str:
"""
>>> get_opponent(generate_table('marvin')[0], 'M')
'T'
"""
row, col = get_position(table, char.upper())
if row == 1:
return table[0][col]
else:
return table[1][col] if row == 0 else char
if __name__ == "__main__":
import doctest
doctest.testmod() # Fist ensure that all our tests are passing...
"""
Demo:
Enter key: marvin
Enter text to encrypt: jessica
Encrypted: QRACRWU
Decrypted with key: JESSICA
"""
key = input("Enter key: ").strip()
text = input("Enter text to encrypt: ").strip()
cipher_text = encrypt(key, text)
print(f"Encrypted: {cipher_text}")
print(f"Decrypted with key: {decrypt(key, cipher_text)}")
| -1 |
TheAlgorithms/Python | 4,409 | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures | I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| AhmedHaj | "2021-05-11T03:30:29Z" | "2021-05-12T06:22:42Z" | 727341e3db4a28dae3f1bbf166522844f3de8f6d | deb71167e7d5aabe24ae4a5c33e8e73dd3af8ece | [mypy] Fix type annotations for linked_stack.py, evaluate_postfix_notations.py, stack.py in data structures. I have kept it to only 3 files as per guidelines.
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 9: https://projecteuler.net/problem=9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product a*b*c.
References:
- https://en.wikipedia.org/wiki/Pythagorean_triple
"""
def solution() -> int:
"""
Returns the product of a,b,c which are Pythagorean Triplet that satisfies
the following:
1. a < b < c
2. a**2 + b**2 = c**2
3. a + b + c = 1000
# The code below has been commented due to slow execution affecting Travis.
# >>> solution()
# 31875000
"""
for a in range(300):
for b in range(400):
for c in range(500):
if a < b < c:
if (a ** 2) + (b ** 2) == (c ** 2):
if (a + b + c) == 1000:
return a * b * c
def solution_fast() -> int:
"""
Returns the product of a,b,c which are Pythagorean Triplet that satisfies
the following:
1. a < b < c
2. a**2 + b**2 = c**2
3. a + b + c = 1000
# The code below has been commented due to slow execution affecting Travis.
# >>> solution_fast()
# 31875000
"""
for a in range(300):
for b in range(400):
c = 1000 - a - b
if a < b < c and (a ** 2) + (b ** 2) == (c ** 2):
return a * b * c
def benchmark() -> None:
"""
Benchmark code comparing two different version function.
"""
import timeit
print(
timeit.timeit("solution()", setup="from __main__ import solution", number=1000)
)
print(
timeit.timeit(
"solution_fast()", setup="from __main__ import solution_fast", number=1000
)
)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 9: https://projecteuler.net/problem=9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product a*b*c.
References:
- https://en.wikipedia.org/wiki/Pythagorean_triple
"""
def solution() -> int:
"""
Returns the product of a,b,c which are Pythagorean Triplet that satisfies
the following:
1. a < b < c
2. a**2 + b**2 = c**2
3. a + b + c = 1000
# The code below has been commented due to slow execution affecting Travis.
# >>> solution()
# 31875000
"""
for a in range(300):
for b in range(400):
for c in range(500):
if a < b < c:
if (a ** 2) + (b ** 2) == (c ** 2):
if (a + b + c) == 1000:
return a * b * c
def solution_fast() -> int:
"""
Returns the product of a,b,c which are Pythagorean Triplet that satisfies
the following:
1. a < b < c
2. a**2 + b**2 = c**2
3. a + b + c = 1000
# The code below has been commented due to slow execution affecting Travis.
# >>> solution_fast()
# 31875000
"""
for a in range(300):
for b in range(400):
c = 1000 - a - b
if a < b < c and (a ** 2) + (b ** 2) == (c ** 2):
return a * b * c
def benchmark() -> None:
"""
Benchmark code comparing two different version function.
"""
import timeit
print(
timeit.timeit("solution()", setup="from __main__ import solution", number=1000)
)
print(
timeit.timeit(
"solution_fast()", setup="from __main__ import solution_fast", number=1000
)
)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 4,359 | fix(ci): Update pre-commit hooks and apply new black | Ref:
- https://github.com/psf/black/pull/1740 (New formatting)
- https://github.com/psf/black/releases/tag/21.4b0
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| dhruvmanila | "2021-04-26T04:41:57Z" | "2021-04-26T05:46:50Z" | 69457357e8c6a3530034aca9707e22ce769da067 | 6f21f76696ff6657bff6fc2239315a1650924190 | fix(ci): Update pre-commit hooks and apply new black. Ref:
- https://github.com/psf/black/pull/1740 (New formatting)
- https://github.com/psf/black/releases/tag/21.4b0
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| name: pre-commit
on: [push, pull_request]
jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/cache@v2
with:
path: |
~/.cache/pre-commit
~/.cache/pip
key: ${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- uses: actions/setup-python@v2
- uses: psf/[email protected]
- name: Install pre-commit
run: |
python -m pip install --upgrade pip
python -m pip install --upgrade pre-commit
- run: pre-commit run --verbose --all-files --show-diff-on-failure
| name: pre-commit
on: [push, pull_request]
jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/cache@v2
with:
path: |
~/.cache/pre-commit
~/.cache/pip
key: ${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- uses: actions/setup-python@v2
- uses: psf/[email protected]
- name: Install pre-commit
run: |
python -m pip install --upgrade pip
python -m pip install --upgrade pre-commit
- run: pre-commit run --verbose --all-files --show-diff-on-failure
| 1 |
TheAlgorithms/Python | 4,359 | fix(ci): Update pre-commit hooks and apply new black | Ref:
- https://github.com/psf/black/pull/1740 (New formatting)
- https://github.com/psf/black/releases/tag/21.4b0
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| dhruvmanila | "2021-04-26T04:41:57Z" | "2021-04-26T05:46:50Z" | 69457357e8c6a3530034aca9707e22ce769da067 | 6f21f76696ff6657bff6fc2239315a1650924190 | fix(ci): Update pre-commit hooks and apply new black. Ref:
- https://github.com/psf/black/pull/1740 (New formatting)
- https://github.com/psf/black/releases/tag/21.4b0
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.4.0
hooks:
- id: check-executables-have-shebangs
- id: check-yaml
- id: end-of-file-fixer
types: [python]
- id: trailing-whitespace
exclude: |
(?x)^(
data_structures/heap/binomial_heap.py
)$
- id: requirements-txt-fixer
- repo: https://github.com/psf/black
rev: 20.8b1
hooks:
- id: black
- repo: https://github.com/PyCQA/isort
rev: 5.7.0
hooks:
- id: isort
args:
- --profile=black
- repo: https://gitlab.com/pycqa/flake8
rev: 3.9.0
hooks:
- id: flake8
args:
- --ignore=E203,W503
- --max-complexity=25
- --max-line-length=88
# FIXME: fix mypy errors and then uncomment this
# - repo: https://github.com/pre-commit/mirrors-mypy
# rev: v0.782
# hooks:
# - id: mypy
# args:
# - --ignore-missing-imports
- repo: https://github.com/codespell-project/codespell
rev: v2.0.0
hooks:
- id: codespell
args:
- --ignore-words-list=ans,crate,fo,followings,hist,iff,mater,secant,som,tim
- --skip="./.*,./strings/dictionary.txt,./strings/words.txt,./project_euler/problem_022/p022_names.txt"
- --quiet-level=2
exclude: |
(?x)^(
strings/dictionary.txt |
strings/words.txt |
project_euler/problem_022/p022_names.txt
)$
- repo: local
hooks:
- id: validate-filenames
name: Validate filenames
entry: ./scripts/validate_filenames.py
language: script
pass_filenames: false
| repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.4.0
hooks:
- id: check-executables-have-shebangs
- id: check-yaml
- id: end-of-file-fixer
types: [python]
- id: trailing-whitespace
exclude: |
(?x)^(
data_structures/heap/binomial_heap.py
)$
- id: requirements-txt-fixer
- repo: https://github.com/psf/black
rev: 21.4b0
hooks:
- id: black
- repo: https://github.com/PyCQA/isort
rev: 5.8.0
hooks:
- id: isort
args:
- --profile=black
- repo: https://gitlab.com/pycqa/flake8
rev: 3.9.1
hooks:
- id: flake8
args:
- --ignore=E203,W503
- --max-complexity=25
- --max-line-length=88
# FIXME: fix mypy errors and then uncomment this
# - repo: https://github.com/pre-commit/mirrors-mypy
# rev: v0.782
# hooks:
# - id: mypy
# args:
# - --ignore-missing-imports
- repo: https://github.com/codespell-project/codespell
rev: v2.0.0
hooks:
- id: codespell
args:
- --ignore-words-list=ans,crate,fo,followings,hist,iff,mater,secant,som,tim
- --skip="./.*,./strings/dictionary.txt,./strings/words.txt,./project_euler/problem_022/p022_names.txt"
- --quiet-level=2
exclude: |
(?x)^(
strings/dictionary.txt |
strings/words.txt |
project_euler/problem_022/p022_names.txt
)$
- repo: local
hooks:
- id: validate-filenames
name: Validate filenames
entry: ./scripts/validate_filenames.py
language: script
pass_filenames: false
| 1 |
TheAlgorithms/Python | 4,359 | fix(ci): Update pre-commit hooks and apply new black | Ref:
- https://github.com/psf/black/pull/1740 (New formatting)
- https://github.com/psf/black/releases/tag/21.4b0
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| dhruvmanila | "2021-04-26T04:41:57Z" | "2021-04-26T05:46:50Z" | 69457357e8c6a3530034aca9707e22ce769da067 | 6f21f76696ff6657bff6fc2239315a1650924190 | fix(ci): Update pre-commit hooks and apply new black. Ref:
- https://github.com/psf/black/pull/1740 (New formatting)
- https://github.com/psf/black/releases/tag/21.4b0
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
A binary search Tree
"""
class Node:
def __init__(self, value, parent):
self.value = value
self.parent = parent # Added in order to delete a node easier
self.left = None
self.right = None
def __repr__(self):
from pprint import pformat
if self.left is None and self.right is None:
return str(self.value)
return pformat({"%s" % (self.value): (self.left, self.right)}, indent=1)
class BinarySearchTree:
def __init__(self, root=None):
self.root = root
def __str__(self):
"""
Return a string of all the Nodes using in order traversal
"""
return str(self.root)
def __reassign_nodes(self, node, new_children):
if new_children is not None: # reset its kids
new_children.parent = node.parent
if node.parent is not None: # reset its parent
if self.is_right(node): # If it is the right children
node.parent.right = new_children
else:
node.parent.left = new_children
else:
self.root = new_children
def is_right(self, node):
return node == node.parent.right
def empty(self):
return self.root is None
def __insert(self, value):
"""
Insert a new node in Binary Search Tree with value label
"""
new_node = Node(value, None) # create a new Node
if self.empty(): # if Tree is empty
self.root = new_node # set its root
else: # Tree is not empty
parent_node = self.root # from root
while True: # While we don't get to a leaf
if value < parent_node.value: # We go left
if parent_node.left is None:
parent_node.left = new_node # We insert the new node in a leaf
break
else:
parent_node = parent_node.left
else:
if parent_node.right is None:
parent_node.right = new_node
break
else:
parent_node = parent_node.right
new_node.parent = parent_node
def insert(self, *values):
for value in values:
self.__insert(value)
return self
def search(self, value):
if self.empty():
raise IndexError("Warning: Tree is empty! please use another.")
else:
node = self.root
# use lazy evaluation here to avoid NoneType Attribute error
while node is not None and node.value is not value:
node = node.left if value < node.value else node.right
return node
def get_max(self, node=None):
"""
We go deep on the right branch
"""
if node is None:
node = self.root
if not self.empty():
while node.right is not None:
node = node.right
return node
def get_min(self, node=None):
"""
We go deep on the left branch
"""
if node is None:
node = self.root
if not self.empty():
node = self.root
while node.left is not None:
node = node.left
return node
def remove(self, value):
node = self.search(value) # Look for the node with that label
if node is not None:
if node.left is None and node.right is None: # If it has no children
self.__reassign_nodes(node, None)
elif node.left is None: # Has only right children
self.__reassign_nodes(node, node.right)
elif node.right is None: # Has only left children
self.__reassign_nodes(node, node.left)
else:
tmp_node = self.get_max(
node.left
) # Gets the max value of the left branch
self.remove(tmp_node.value)
node.value = (
tmp_node.value
) # Assigns the value to the node to delete and keep tree structure
def preorder_traverse(self, node):
if node is not None:
yield node # Preorder Traversal
yield from self.preorder_traverse(node.left)
yield from self.preorder_traverse(node.right)
def traversal_tree(self, traversal_function=None):
"""
This function traversal the tree.
You can pass a function to traversal the tree as needed by client code
"""
if traversal_function is None:
return self.preorder_traverse(self.root)
else:
return traversal_function(self.root)
def inorder(self, arr: list, node: Node):
"""Perform an inorder traversal and append values of the nodes to
a list named arr"""
if node:
self.inorder(arr, node.left)
arr.append(node.value)
self.inorder(arr, node.right)
def find_kth_smallest(self, k: int, node: Node) -> int:
"""Return the kth smallest element in a binary search tree """
arr = []
self.inorder(arr, node) # append all values to list using inorder traversal
return arr[k - 1]
def postorder(curr_node):
"""
postOrder (left, right, self)
"""
node_list = list()
if curr_node is not None:
node_list = postorder(curr_node.left) + postorder(curr_node.right) + [curr_node]
return node_list
def binary_search_tree():
r"""
Example
8
/ \
3 10
/ \ \
1 6 14
/ \ /
4 7 13
>>> t = BinarySearchTree().insert(8, 3, 6, 1, 10, 14, 13, 4, 7)
>>> print(" ".join(repr(i.value) for i in t.traversal_tree()))
8 3 1 6 4 7 10 14 13
>>> print(" ".join(repr(i.value) for i in t.traversal_tree(postorder)))
1 4 7 6 3 13 14 10 8
>>> BinarySearchTree().search(6)
Traceback (most recent call last):
...
IndexError: Warning: Tree is empty! please use another.
"""
testlist = (8, 3, 6, 1, 10, 14, 13, 4, 7)
t = BinarySearchTree()
for i in testlist:
t.insert(i)
# Prints all the elements of the list in order traversal
print(t)
if t.search(6) is not None:
print("The value 6 exists")
else:
print("The value 6 doesn't exist")
if t.search(-1) is not None:
print("The value -1 exists")
else:
print("The value -1 doesn't exist")
if not t.empty():
print("Max Value: ", t.get_max().value)
print("Min Value: ", t.get_min().value)
for i in testlist:
t.remove(i)
print(t)
if __name__ == "__main__":
import doctest
doctest.testmod()
# binary_search_tree()
| """
A binary search Tree
"""
class Node:
def __init__(self, value, parent):
self.value = value
self.parent = parent # Added in order to delete a node easier
self.left = None
self.right = None
def __repr__(self):
from pprint import pformat
if self.left is None and self.right is None:
return str(self.value)
return pformat({"%s" % (self.value): (self.left, self.right)}, indent=1)
class BinarySearchTree:
def __init__(self, root=None):
self.root = root
def __str__(self):
"""
Return a string of all the Nodes using in order traversal
"""
return str(self.root)
def __reassign_nodes(self, node, new_children):
if new_children is not None: # reset its kids
new_children.parent = node.parent
if node.parent is not None: # reset its parent
if self.is_right(node): # If it is the right children
node.parent.right = new_children
else:
node.parent.left = new_children
else:
self.root = new_children
def is_right(self, node):
return node == node.parent.right
def empty(self):
return self.root is None
def __insert(self, value):
"""
Insert a new node in Binary Search Tree with value label
"""
new_node = Node(value, None) # create a new Node
if self.empty(): # if Tree is empty
self.root = new_node # set its root
else: # Tree is not empty
parent_node = self.root # from root
while True: # While we don't get to a leaf
if value < parent_node.value: # We go left
if parent_node.left is None:
parent_node.left = new_node # We insert the new node in a leaf
break
else:
parent_node = parent_node.left
else:
if parent_node.right is None:
parent_node.right = new_node
break
else:
parent_node = parent_node.right
new_node.parent = parent_node
def insert(self, *values):
for value in values:
self.__insert(value)
return self
def search(self, value):
if self.empty():
raise IndexError("Warning: Tree is empty! please use another.")
else:
node = self.root
# use lazy evaluation here to avoid NoneType Attribute error
while node is not None and node.value is not value:
node = node.left if value < node.value else node.right
return node
def get_max(self, node=None):
"""
We go deep on the right branch
"""
if node is None:
node = self.root
if not self.empty():
while node.right is not None:
node = node.right
return node
def get_min(self, node=None):
"""
We go deep on the left branch
"""
if node is None:
node = self.root
if not self.empty():
node = self.root
while node.left is not None:
node = node.left
return node
def remove(self, value):
node = self.search(value) # Look for the node with that label
if node is not None:
if node.left is None and node.right is None: # If it has no children
self.__reassign_nodes(node, None)
elif node.left is None: # Has only right children
self.__reassign_nodes(node, node.right)
elif node.right is None: # Has only left children
self.__reassign_nodes(node, node.left)
else:
tmp_node = self.get_max(
node.left
) # Gets the max value of the left branch
self.remove(tmp_node.value)
node.value = (
tmp_node.value
) # Assigns the value to the node to delete and keep tree structure
def preorder_traverse(self, node):
if node is not None:
yield node # Preorder Traversal
yield from self.preorder_traverse(node.left)
yield from self.preorder_traverse(node.right)
def traversal_tree(self, traversal_function=None):
"""
This function traversal the tree.
You can pass a function to traversal the tree as needed by client code
"""
if traversal_function is None:
return self.preorder_traverse(self.root)
else:
return traversal_function(self.root)
def inorder(self, arr: list, node: Node):
"""Perform an inorder traversal and append values of the nodes to
a list named arr"""
if node:
self.inorder(arr, node.left)
arr.append(node.value)
self.inorder(arr, node.right)
def find_kth_smallest(self, k: int, node: Node) -> int:
"""Return the kth smallest element in a binary search tree"""
arr = []
self.inorder(arr, node) # append all values to list using inorder traversal
return arr[k - 1]
def postorder(curr_node):
"""
postOrder (left, right, self)
"""
node_list = list()
if curr_node is not None:
node_list = postorder(curr_node.left) + postorder(curr_node.right) + [curr_node]
return node_list
def binary_search_tree():
r"""
Example
8
/ \
3 10
/ \ \
1 6 14
/ \ /
4 7 13
>>> t = BinarySearchTree().insert(8, 3, 6, 1, 10, 14, 13, 4, 7)
>>> print(" ".join(repr(i.value) for i in t.traversal_tree()))
8 3 1 6 4 7 10 14 13
>>> print(" ".join(repr(i.value) for i in t.traversal_tree(postorder)))
1 4 7 6 3 13 14 10 8
>>> BinarySearchTree().search(6)
Traceback (most recent call last):
...
IndexError: Warning: Tree is empty! please use another.
"""
testlist = (8, 3, 6, 1, 10, 14, 13, 4, 7)
t = BinarySearchTree()
for i in testlist:
t.insert(i)
# Prints all the elements of the list in order traversal
print(t)
if t.search(6) is not None:
print("The value 6 exists")
else:
print("The value 6 doesn't exist")
if t.search(-1) is not None:
print("The value -1 exists")
else:
print("The value -1 doesn't exist")
if not t.empty():
print("Max Value: ", t.get_max().value)
print("Min Value: ", t.get_min().value)
for i in testlist:
t.remove(i)
print(t)
if __name__ == "__main__":
import doctest
doctest.testmod()
# binary_search_tree()
| 1 |
TheAlgorithms/Python | 4,359 | fix(ci): Update pre-commit hooks and apply new black | Ref:
- https://github.com/psf/black/pull/1740 (New formatting)
- https://github.com/psf/black/releases/tag/21.4b0
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| dhruvmanila | "2021-04-26T04:41:57Z" | "2021-04-26T05:46:50Z" | 69457357e8c6a3530034aca9707e22ce769da067 | 6f21f76696ff6657bff6fc2239315a1650924190 | fix(ci): Update pre-commit hooks and apply new black. Ref:
- https://github.com/psf/black/pull/1740 (New formatting)
- https://github.com/psf/black/releases/tag/21.4b0
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from typing import Iterable, List, Optional
class Heap:
"""A Max Heap Implementation
>>> unsorted = [103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5]
>>> h = Heap()
>>> h.build_max_heap(unsorted)
>>> print(h)
[209, 201, 25, 103, 107, 15, 1, 9, 7, 11, 5]
>>>
>>> h.extract_max()
209
>>> print(h)
[201, 107, 25, 103, 11, 15, 1, 9, 7, 5]
>>>
>>> h.insert(100)
>>> print(h)
[201, 107, 25, 103, 100, 15, 1, 9, 7, 5, 11]
>>>
>>> h.heap_sort()
>>> print(h)
[1, 5, 7, 9, 11, 15, 25, 100, 103, 107, 201]
"""
def __init__(self) -> None:
self.h: List[float] = []
self.heap_size: int = 0
def __repr__(self) -> str:
return str(self.h)
def parent_index(self, child_idx: int) -> Optional[int]:
""" return the parent index of given child """
if child_idx > 0:
return (child_idx - 1) // 2
return None
def left_child_idx(self, parent_idx: int) -> Optional[int]:
"""
return the left child index if the left child exists.
if not, return None.
"""
left_child_index = 2 * parent_idx + 1
if left_child_index < self.heap_size:
return left_child_index
return None
def right_child_idx(self, parent_idx: int) -> Optional[int]:
"""
return the right child index if the right child exists.
if not, return None.
"""
right_child_index = 2 * parent_idx + 2
if right_child_index < self.heap_size:
return right_child_index
return None
def max_heapify(self, index: int) -> None:
"""
correct a single violation of the heap property in a subtree's root.
"""
if index < self.heap_size:
violation: int = index
left_child = self.left_child_idx(index)
right_child = self.right_child_idx(index)
# check which child is larger than its parent
if left_child is not None and self.h[left_child] > self.h[violation]:
violation = left_child
if right_child is not None and self.h[right_child] > self.h[violation]:
violation = right_child
# if violation indeed exists
if violation != index:
# swap to fix the violation
self.h[violation], self.h[index] = self.h[index], self.h[violation]
# fix the subsequent violation recursively if any
self.max_heapify(violation)
def build_max_heap(self, collection: Iterable[float]) -> None:
""" build max heap from an unsorted array"""
self.h = list(collection)
self.heap_size = len(self.h)
if self.heap_size > 1:
# max_heapify from right to left but exclude leaves (last level)
for i in range(self.heap_size // 2 - 1, -1, -1):
self.max_heapify(i)
def max(self) -> float:
""" return the max in the heap """
if self.heap_size >= 1:
return self.h[0]
else:
raise Exception("Empty heap")
def extract_max(self) -> float:
""" get and remove max from heap """
if self.heap_size >= 2:
me = self.h[0]
self.h[0] = self.h.pop(-1)
self.heap_size -= 1
self.max_heapify(0)
return me
elif self.heap_size == 1:
self.heap_size -= 1
return self.h.pop(-1)
else:
raise Exception("Empty heap")
def insert(self, value: float) -> None:
""" insert a new value into the max heap """
self.h.append(value)
idx = (self.heap_size - 1) // 2
self.heap_size += 1
while idx >= 0:
self.max_heapify(idx)
idx = (idx - 1) // 2
def heap_sort(self) -> None:
size = self.heap_size
for j in range(size - 1, 0, -1):
self.h[0], self.h[j] = self.h[j], self.h[0]
self.heap_size -= 1
self.max_heapify(0)
self.heap_size = size
if __name__ == "__main__":
import doctest
# run doc test
doctest.testmod()
# demo
for unsorted in [
[0],
[2],
[3, 5],
[5, 3],
[5, 5],
[0, 0, 0, 0],
[1, 1, 1, 1],
[2, 2, 3, 5],
[0, 2, 2, 3, 5],
[2, 5, 3, 0, 2, 3, 0, 3],
[6, 1, 2, 7, 9, 3, 4, 5, 10, 8],
[103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5],
[-45, -2, -5],
]:
print(f"unsorted array: {unsorted}")
heap = Heap()
heap.build_max_heap(unsorted)
print(f"after build heap: {heap}")
print(f"max value: {heap.extract_max()}")
print(f"after max value removed: {heap}")
heap.insert(100)
print(f"after new value 100 inserted: {heap}")
heap.heap_sort()
print(f"heap-sorted array: {heap}\n")
| from typing import Iterable, List, Optional
class Heap:
"""A Max Heap Implementation
>>> unsorted = [103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5]
>>> h = Heap()
>>> h.build_max_heap(unsorted)
>>> print(h)
[209, 201, 25, 103, 107, 15, 1, 9, 7, 11, 5]
>>>
>>> h.extract_max()
209
>>> print(h)
[201, 107, 25, 103, 11, 15, 1, 9, 7, 5]
>>>
>>> h.insert(100)
>>> print(h)
[201, 107, 25, 103, 100, 15, 1, 9, 7, 5, 11]
>>>
>>> h.heap_sort()
>>> print(h)
[1, 5, 7, 9, 11, 15, 25, 100, 103, 107, 201]
"""
def __init__(self) -> None:
self.h: List[float] = []
self.heap_size: int = 0
def __repr__(self) -> str:
return str(self.h)
def parent_index(self, child_idx: int) -> Optional[int]:
"""return the parent index of given child"""
if child_idx > 0:
return (child_idx - 1) // 2
return None
def left_child_idx(self, parent_idx: int) -> Optional[int]:
"""
return the left child index if the left child exists.
if not, return None.
"""
left_child_index = 2 * parent_idx + 1
if left_child_index < self.heap_size:
return left_child_index
return None
def right_child_idx(self, parent_idx: int) -> Optional[int]:
"""
return the right child index if the right child exists.
if not, return None.
"""
right_child_index = 2 * parent_idx + 2
if right_child_index < self.heap_size:
return right_child_index
return None
def max_heapify(self, index: int) -> None:
"""
correct a single violation of the heap property in a subtree's root.
"""
if index < self.heap_size:
violation: int = index
left_child = self.left_child_idx(index)
right_child = self.right_child_idx(index)
# check which child is larger than its parent
if left_child is not None and self.h[left_child] > self.h[violation]:
violation = left_child
if right_child is not None and self.h[right_child] > self.h[violation]:
violation = right_child
# if violation indeed exists
if violation != index:
# swap to fix the violation
self.h[violation], self.h[index] = self.h[index], self.h[violation]
# fix the subsequent violation recursively if any
self.max_heapify(violation)
def build_max_heap(self, collection: Iterable[float]) -> None:
"""build max heap from an unsorted array"""
self.h = list(collection)
self.heap_size = len(self.h)
if self.heap_size > 1:
# max_heapify from right to left but exclude leaves (last level)
for i in range(self.heap_size // 2 - 1, -1, -1):
self.max_heapify(i)
def max(self) -> float:
"""return the max in the heap"""
if self.heap_size >= 1:
return self.h[0]
else:
raise Exception("Empty heap")
def extract_max(self) -> float:
"""get and remove max from heap"""
if self.heap_size >= 2:
me = self.h[0]
self.h[0] = self.h.pop(-1)
self.heap_size -= 1
self.max_heapify(0)
return me
elif self.heap_size == 1:
self.heap_size -= 1
return self.h.pop(-1)
else:
raise Exception("Empty heap")
def insert(self, value: float) -> None:
"""insert a new value into the max heap"""
self.h.append(value)
idx = (self.heap_size - 1) // 2
self.heap_size += 1
while idx >= 0:
self.max_heapify(idx)
idx = (idx - 1) // 2
def heap_sort(self) -> None:
size = self.heap_size
for j in range(size - 1, 0, -1):
self.h[0], self.h[j] = self.h[j], self.h[0]
self.heap_size -= 1
self.max_heapify(0)
self.heap_size = size
if __name__ == "__main__":
import doctest
# run doc test
doctest.testmod()
# demo
for unsorted in [
[0],
[2],
[3, 5],
[5, 3],
[5, 5],
[0, 0, 0, 0],
[1, 1, 1, 1],
[2, 2, 3, 5],
[0, 2, 2, 3, 5],
[2, 5, 3, 0, 2, 3, 0, 3],
[6, 1, 2, 7, 9, 3, 4, 5, 10, 8],
[103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5],
[-45, -2, -5],
]:
print(f"unsorted array: {unsorted}")
heap = Heap()
heap.build_max_heap(unsorted)
print(f"after build heap: {heap}")
print(f"max value: {heap.extract_max()}")
print(f"after max value removed: {heap}")
heap.insert(100)
print(f"after new value 100 inserted: {heap}")
heap.heap_sort()
print(f"heap-sorted array: {heap}\n")
| 1 |