repo (string) | pull_number (int64) | instance_id (string) | issue_numbers (sequence) | base_commit (string) | patch (string) | test_patch (string) | problem_statement (string) | hints_text (string) | created_at (timestamp[s])
---|---|---|---|---|---|---|---|---|---|
oobabooga/text-generation-webui | 2,912 | oobabooga__text-generation-webui-2912 | [
"2840"
] | 5d2a8b31be17a2992308d58b674274375a3f360c | diff --git a/modules/exllama.py b/modules/exllama.py
--- a/modules/exllama.py
+++ b/modules/exllama.py
@@ -1,6 +1,8 @@
import sys
from pathlib import Path
+from torch import version as torch_version
+
from modules import shared
from modules.logging_colors import logger
@@ -51,6 +53,12 @@ def from_pretrained(self, path_to_model):
if shared.args.gpu_split:
config.set_auto_map(shared.args.gpu_split)
config.gpu_peer_fix = True
+ if torch_version.hip:
+ config.rmsnorm_no_half2 = True
+ config.rope_no_half2 = True
+ config.matmul_no_half2 = True
+ config.silu_no_half2 = True
+
model = ExLlama(config)
tokenizer = ExLlamaTokenizer(str(tokenizer_model_path))
diff --git a/modules/exllama_hf.py b/modules/exllama_hf.py
--- a/modules/exllama_hf.py
+++ b/modules/exllama_hf.py
@@ -97,6 +97,11 @@ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.P
if shared.args.gpu_split:
config.set_auto_map(shared.args.gpu_split)
config.gpu_peer_fix = True
+ if torch.version.hip:
+ config.rmsnorm_no_half2 = True
+ config.rope_no_half2 = True
+ config.matmul_no_half2 = True
+ config.silu_no_half2 = True
# This slowes down a bit but align better with autogptq generation.
# TODO: Should give user choice to tune the exllama config
| exllama gibberish output
### Describe the bug
I'm running into issues using exllama with the textgen UI. I have exllama running well using its webui, but when I load the same model into textgen UI, I only get gibberish as output. Llama.cpp works fine as a backend. I looked at the output in the console with --verbose and the input prompt looks fine. I tried multiple different sampler presets with no change. I'm using an AMD Vega 56 and I talked with another person on Discord with the same problem on an AMD card so it is probably an AMD-specific issue. I followed the oobabooga ROCm rentry guide in the README to set up textgen. One thing I noticed is the bitsandbytes package in the rocm fork is a little outdated, so could that be the issue?
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Set up textgen webUI using the instructions in the README and the ROCm installation guide on Arch Linux on a system with an AMD card. Clone exllama into the repositories folder and install the requirements. Test that exllama works with its own WebGUI. Load a model using exllama in textgen webUI, then generate output from any prompt.
### Screenshot
_No response_
### Logs
```shell
No log available
```
### System Info
```shell
Arch Linux
AMD 5600X
AMD Vega 56
```
| I also have this issue, using RX 6700 XT on EndeavourOS (Arch).
I have the exact same issue too. Running on Arch with a 5600X as well and an RX 6800.
Tried out a variety of models to no avail, however the provided exllama web UI works wonderfully and so does the CLI example chatbot, no matter which model I throw at it. Leads me to think oobabooga is the point of failure. The most I saw this discussed was the fellow in #750 having something similar and resolving it with a driver update. However, that was on an Nvidia card, so I'm not sure it applies to us.
If it helps, here's an example of said gibberish output I receive:
```Common sense questions and answers
Question: Hello, what does one plus one equal to?
Factual answer:FDrnrn Beaubourg Burgrnлия Sud TournConstraint StockrutMatrixrou tématu Wallsko stick Matrixagentлияobre Starлия Reserve hus BrunoAAStarAngle Bourrou titles Burgiler respect lo sortsDIR Bourbourg Außoin stick Sud Beaubourg Beauroubourg Fourier Sudrou Beau Beaurournobreлия frame stick stick Blaignerutmatrix stick consp Beaurn Burg stickobre Sud Sud Bourлияobreoin Stock Burg ace stickrut Beau Sud Burgbourg Beau Stockлия substitute StockrnFD Stockbourg Assume Beau Beau stickrou Beau lyrrourouShiftMDb SudrutSortobrern stick SudrouEventListeneroin prayerrn Bourbourg Sud Bruno Beau Bruno BrunoFDrnMDb Bruno Bour autonom stickFDFDrutobreobreFD Stock Burgoin FinлияFD Burgoinoinobrehing stickrou stickrou Beaurou Bour Sudrou Beau Burg Sudobre BurgoinFDFDbourgiatbourg stickbourgobre Beau Burg Beau Burg substitutionrnbourgrouлияFD BeaulaimFD BurgMDbrutFDлия Burg Reserve Reserve stickbourg Brunoobre```
Running a 6800xt and have the same problem. I tried update to ROCm 5.5.1 and PyTorch nightly preview version. Didn't help. Exllama's own webui and CLI example chatbot work fine. But oobabooga webui throws gibberish at me :( Please help us! | 2023-06-28T13:47:35 |
|
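For context, a minimal sketch (not webui code) of the ROCm check the patch above relies on: `torch.version.hip` is `None` on CUDA and CPU builds of PyTorch and a version string on ROCm builds, so it can be used to gate AMD-specific workarounds. The `config` object here is a placeholder standing in for the ExLlama config.

```python
import torch

def apply_amd_workarounds(config):
    # torch.version.hip is None unless PyTorch was built against ROCm/HIP
    if torch.version.hip:
        # Disable the half2 kernels that misbehave on AMD GPUs,
        # mirroring the flags set by the patch above
        config.rmsnorm_no_half2 = True
        config.rope_no_half2 = True
        config.matmul_no_half2 = True
        config.silu_no_half2 = True

    return config
```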
oobabooga/text-generation-webui | 3,014 | oobabooga__text-generation-webui-3014 | [
"3009"
] | e0a50fb77a9985205433317a82a132c7f4f12631 | diff --git a/download-model.py b/download-model.py
--- a/download-model.py
+++ b/download-model.py
@@ -23,7 +23,7 @@
class ModelDownloader:
- def __init__(self, max_retries):
+ def __init__(self, max_retries = 5):
self.s = requests.Session()
if max_retries:
self.s.mount('https://cdn-lfs.huggingface.co', HTTPAdapter(max_retries=max_retries))
| Error when downloading model from UI
### Describe the bug
I just downloaded the latest version of text-generation-webui on Ubuntu and started the UI but it is not longer allowing me to download a model from the UI. I tried to downloading 'anon8231489123/vicuna-13b-GPTQ-4bit-128g' but got the following error:
Traceback (most recent call last):
  File "/home/squirol/ben2/oobabooga_linux/text-generation-webui/server.py", line 134, in download_model_wrapper
    downloader = downloader_module.ModelDownloader()
TypeError: ModelDownloader.__init__() missing 1 required positional argument: 'max_retries'
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
1. Launch web UI using ./start_linux.sh
2. Open browser to http://127.0.0.1:7860/
3. Enter 'anon8231489123/vicuna-13b-GPTQ-4bit-128g' and select Download in UI
4. View exception under Download button
### Screenshot

### Logs
```shell
N/A
```
### System Info
```shell
Ubuntu
NVIDIA
```
| Can confirm, also was able to quickly fix for myself by modifying line 134 of server.py to read `downloader = downloader_module.ModelDownloader(max_retries=1)` | 2023-07-05T10:54:21 |
|
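A minimal standalone sketch (not the real class) of why the patch above fixes the error: once `max_retries` has a default value, the no-argument call made from the UI no longer raises a TypeError, and the manual workaround from the comment keeps working too.

```python
class ModelDownloader:
    def __init__(self, max_retries=5):
        # Default value mirrors the patch; both call styles below are now valid
        self.max_retries = max_retries

ModelDownloader()                # what the UI code calls
ModelDownloader(max_retries=1)   # the workaround mentioned in the comment above
```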
oobabooga/text-generation-webui | 3,120 | oobabooga__text-generation-webui-3120 | [
"3096"
] | 63688004dcf9093479d082950713f390f338f98e | diff --git a/modules/LoRA.py b/modules/LoRA.py
--- a/modules/LoRA.py
+++ b/modules/LoRA.py
@@ -8,6 +8,14 @@
from modules.models import reload_model
+def merge_loras():
+ if len(list({shared.model.peft_config[adapter].r for adapter in shared.model.peft_config.keys()})) > 1:
+ logger.warning("The loaded LoRAs cannot be merged, as they have dissimilar ranks. Only the first one will be active.")
+ return
+
+ shared.model.add_weighted_adapter(shared.lora_names, [1] * len(shared.lora_names), "__merged")
+ shared.model.set_adapter("__merged")
+
def add_lora_to_model(lora_names):
if 'GPTQForCausalLM' in shared.model.__class__.__name__ or shared.args.loader == 'AutoGPTQ':
add_lora_autogptq(lora_names)
@@ -136,11 +144,14 @@ def add_lora_transformers(lora_names):
return
# Add a LoRA when another LoRA is already present
- if len(removed_set) == 0 and len(prior_set) > 0:
+ if len(removed_set) == 0 and len(prior_set) > 0 and "__merged" not in shared.model.peft_config.keys():
logger.info(f"Adding the LoRA(s) named {added_set} to the model...")
for lora in added_set:
shared.model.load_adapter(get_lora_path(lora), lora)
+ if len(lora_names) > 1:
+ merge_loras()
+
return
# If any LoRA needs to be removed, start over
@@ -165,6 +176,9 @@ def add_lora_transformers(lora_names):
for lora in lora_names[1:]:
shared.model.load_adapter(get_lora_path(lora), lora)
+ if len(lora_names) > 1:
+ merge_loras()
+
shared.lora_names = lora_names
if not shared.args.load_in_8bit and not shared.args.cpu:
| With multiple LoRAs loaded, only the first one in the list affects the generated text output
### Describe the bug
We're experimenting with training "stacked" LoRAs, as in LoRAs trained on top of other LoRAs. According to the warning message triggered by this, such an arrangement is supposed to work, but "may have unexpected effects". Training succeeds without any issue, but when I load the finished LoRAs, only the first one seems to have any effect.
So far, we have encountered and successfully worked around an issue where the 2nd LoRA's weights would never get loaded due to a PEFT bug/weirdness (the 2nd LoRA's weights get serialized with module names starting with "base_model.model.base_model.model.model", but on deserialization, PEFT expects "base_model.model.model"). Patching this made the 2nd LoRA load and affect the generated output successfully when it's the only LoRA being loaded. However, when we load it on top of the 1st LoRA, it still has no effect. With the loading order reversed compared to the training order, only the 2nd LoRA has any effect on the output - as if the 2nd and subsequent LoRAs were basically ignored.
To test this, we tried loading two known good, independently trained LoRAs (tloan/alpaca-lora-7b and 22h/cabrita-lora-v0-1) on a common base model (yahma/llama-7b-hf). Each LoRA works fine when loaded on its own, and is seen to affect the output.
When the "LoRA(s)" box is set to list alpaca first, then cabrita, both are shown as successfully loaded, but the generated text is identical to what I get with only alpaca loaded. If I put cabrita first, then alpaca, the output is indistinguishable from only cabrita being selected. This seems to confirm that only the first LoRA listed in the box is properly loaded into the model, the rest are silently ignored.
BnB quantization doesn't appear to affect this behavior - it's the same in 4-bit, 8-bit and native mode.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Download yahma/llama-7b-hf, tloen/alpaca-lora-7b and 22h/cabrita-lora-v0-1 from Hugging Face.
Start the UI in chat mode.
On the Parameters tab, set the temperature to 0.3.
On the Chat settings tab, select the "Alpaca" instruction template.
On the Model tab, load the yahma/llama-7b-hf base model.
Without loading any LoRA, switch to the Text generation tab, and run the following 3 queries (clear history after each one):
- "Tell me about alpacas."
- "Invente uma desculpa criativa pra dizer que não preciso ir à festa."
- "How many helicopters can a human eat in one sitting?"
Note the responses. (The Portuguese query will prompt a response in English, while the helicopter one will be nonsense.)
On the Model tab, reload the model, then load the alpaca LoRA. Run the above 3 questions again. Note how the answers change. (The Portuguese prompt is now answered in Portuguese, but not very creatively, and the helicopter answer will be correct, i.e. none.)
Unload the alpaca LoRA, reload the model, and load the cabrita LoRA. Run the 3 prompts, and note the new answers. (Portuguese answer is different, more creative, while the helicopter one is back to nonsense.)
Unload the LoRA, reload the model. In the LoRA(s) box, select cabrita first, then alpaca. Run the 3 prompts. (Answers are as if only cabrita were loaded!)
Unload the LoRAs, reload the model. In the LoRA(s) box, select alpaca first, then cabrita. Run the 3 prompts. (Answers indistinguishable from when only alpaca was loaded!)
### Screenshot
_No response_
### Logs
```shell
no exception generated
```
### System Info
```shell
Ryzen 7950X with a Zotac GeForce RTX 4080 GPU, running Ubuntu 22.04 LTS with the open-source NVIDIA kernel module (with nvidia.NVreg_OpenRmEnableUnsupportedGpus=1 specified on the kernel command line to skip the "datacenter GPUs only" check). The issue is reproducible in both CUDA and CPU mode.
```
| It's not a bug, it's a feature.
There is no such thing as "stacked LoRA". You have it confused with Stable Diffusion (which does dynamically merge multiple LoRAs on forward during inference). But PEFT doesn't. PEFT uses one LoRA at a time.
If you add multiple LoRAs to a PEFT model, you can:
1. quickly switch between them (regardless of Rank)
2. merge them into another LoRA (must have the same Rank)
My playground extension does all of those things and more.
https://github.com/FartyPants/Playground
Training LoRA on top of PEFT model+LoRA - absolutely NO.
You need to merge model with LoRA if you want to train on top of that for some reason (but it may not be what you imagine it to be). | 2023-07-13T00:18:33 |
|
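For context, a minimal sketch of the PEFT calls the patch above builds on: `add_weighted_adapter` combines several loaded adapters into a new one, which `set_adapter` then activates. The model and adapter names are taken from the issue for illustration only; as in the patch, the adapters must share the same rank for the merge.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("yahma/llama-7b-hf")

# Load two independently trained LoRAs onto the same base model
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b", adapter_name="alpaca")
model.load_adapter("22h/cabrita-lora-v0-1", adapter_name="cabrita")

# Merge them with equal weights into a combined adapter and activate it
model.add_weighted_adapter(["alpaca", "cabrita"], [1.0, 1.0], "__merged")
model.set_adapter("__merged")
```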
oobabooga/text-generation-webui | 3,309 | oobabooga__text-generation-webui-3309 | [
"3296",
"3296"
] | d6314fd5394bbfe2edd9030d953892dcfc4de105 | diff --git a/extensions/openai/completions.py b/extensions/openai/completions.py
--- a/extensions/openai/completions.py
+++ b/extensions/openai/completions.py
@@ -48,7 +48,7 @@ def __call__(self, input_ids: torch.LongTensor, logits: torch.FloatTensor) -> to
top_tokens = [ decode(tok) for tok in top_indices[0] ]
top_probs = [ float(x) for x in top_values[0] ]
self.token_alternatives = dict(zip(top_tokens, top_probs))
- debug_msg(f"{self.__class__.__name__}(logprobs+1={self.logprobs+1}, token_alternatives={self.token_alternatives})")
+ debug_msg(repr(self))
return logits
def __repr__(self):
@@ -63,7 +63,8 @@ def convert_logprobs_to_tiktoken(model, logprobs):
# return dict([(encoder.decode([encoder.encode(token)[0]]), prob) for token, prob in logprobs.items()])
# except KeyError:
# # assume native tokens if we can't find the tokenizer
- return logprobs
+# return logprobs
+ return logprobs
def marshal_common_params(body):
@@ -271,16 +272,16 @@ def chat_completions(body: dict, is_legacy: bool = False) -> dict:
req_params['max_new_tokens'] = req_params['truncation_length']
# format the prompt from messages
- prompt, token_count = messages_to_prompt(body, req_params, max_tokens)
+ prompt, token_count = messages_to_prompt(body, req_params, max_tokens) # updates req_params['stopping_strings']
# set real max, avoid deeper errors
if req_params['max_new_tokens'] + token_count >= req_params['truncation_length']:
req_params['max_new_tokens'] = req_params['truncation_length'] - token_count
+ stopping_strings = req_params.pop('stopping_strings', [])
+
# generate reply #######################################
debug_msg({'prompt': prompt, 'req_params': req_params})
- stopping_strings = req_params.pop('stopping_strings', [])
- logprob_proc = req_params.pop('logprob_proc', None)
generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False)
answer = ''
@@ -347,7 +348,7 @@ def stream_chat_completions(body: dict, is_legacy: bool = False):
req_params['max_new_tokens'] = req_params['truncation_length']
# format the prompt from messages
- prompt, token_count = messages_to_prompt(body, req_params, max_tokens)
+ prompt, token_count = messages_to_prompt(body, req_params, max_tokens) # updates req_params['stopping_strings']
# set real max, avoid deeper errors
if req_params['max_new_tokens'] + token_count >= req_params['truncation_length']:
@@ -441,16 +442,9 @@ def completions(body: dict, is_legacy: bool = False):
if not prompt_str in body:
raise InvalidRequestError("Missing required input", param=prompt_str)
- prompt = body[prompt_str]
- if isinstance(prompt, list):
- if prompt and isinstance(prompt[0], int):
- try:
- encoder = tiktoken.encoding_for_model(requested_model)
- prompt = encoder.decode(prompt)
- except KeyError:
- prompt = decode(prompt)[0]
- else:
- raise InvalidRequestError(message="API Batched generation not yet supported.", param=prompt_str)
+ prompt_arg = body[prompt_str]
+ if isinstance(prompt_arg, str) or (isinstance(prompt_arg, list) and isinstance(prompt_arg[0], int)):
+ prompt_arg = [prompt_arg]
# common params
req_params = marshal_common_params(body)
@@ -460,59 +454,75 @@ def completions(body: dict, is_legacy: bool = False):
req_params['max_new_tokens'] = max_tokens
requested_model = req_params.pop('requested_model')
logprob_proc = req_params.pop('logprob_proc', None)
+ stopping_strings = req_params.pop('stopping_strings', [])
+ #req_params['suffix'] = default(body, 'suffix', req_params['suffix'])
+ req_params['echo'] = default(body, 'echo', req_params['echo'])
+ req_params['top_k'] = default(body, 'best_of', req_params['top_k'])
- token_count = len(encode(prompt)[0])
+ resp_list_data = []
+ total_completion_token_count = 0
+ total_prompt_token_count = 0
- if token_count + max_tokens > req_params['truncation_length']:
- err_msg = f"The token count of your prompt ({token_count}) plus max_tokens ({max_tokens}) cannot exceed the model's context length ({req_params['truncation_length']})."
- # print(f"Warning: ${err_msg}")
- raise InvalidRequestError(message=err_msg, param=max_tokens_str)
+ for idx, prompt in enumerate(prompt_arg, start=0):
+ if isinstance(prompt[0], int):
+ # token lists
+ if requested_model == shared.model_name:
+ prompt = decode(prompt)[0]
+ else:
+ try:
+ encoder = tiktoken.encoding_for_model(requested_model)
+ prompt = encoder.decode(prompt)
+ except KeyError:
+ prompt = decode(prompt)[0]
- req_params['echo'] = default(body, 'echo', req_params['echo'])
- req_params['top_k'] = default(body, 'best_of', req_params['top_k'])
+ token_count = len(encode(prompt)[0])
+ total_prompt_token_count += token_count
- # generate reply #######################################
- debug_msg({'prompt': prompt, 'req_params': req_params})
- stopping_strings = req_params.pop('stopping_strings', [])
- generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False)
+ if token_count + max_tokens > req_params['truncation_length']:
+ err_msg = f"The token count of your prompt ({token_count}) plus max_tokens ({max_tokens}) cannot exceed the model's context length ({req_params['truncation_length']})."
+ # print(f"Warning: ${err_msg}")
+ raise InvalidRequestError(message=err_msg, param=max_tokens_str)
- answer = ''
+ # generate reply #######################################
+ debug_msg({'prompt': prompt, 'req_params': req_params})
+ generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False)
+ answer = ''
- for a in generator:
- answer = a
+ for a in generator:
+ answer = a
- # strip extra leading space off new generated content
- if answer and answer[0] == ' ':
- answer = answer[1:]
+ # strip extra leading space off new generated content
+ if answer and answer[0] == ' ':
+ answer = answer[1:]
- completion_token_count = len(encode(answer)[0])
- stop_reason = "stop"
- if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= max_tokens:
- stop_reason = "length"
+ completion_token_count = len(encode(answer)[0])
+ total_completion_token_count += completion_token_count
+ stop_reason = "stop"
+ if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= max_tokens:
+ stop_reason = "length"
+
+ respi = {
+ "index": idx,
+ "finish_reason": stop_reason,
+ "text": answer,
+ "logprobs": {'top_logprobs': [logprob_proc.token_alternatives]} if logprob_proc else None,
+ }
+
+ resp_list_data.extend([respi])
resp = {
"id": cmpl_id,
"object": object_type,
"created": created_time,
"model": shared.model_name, # TODO: add Lora info?
- resp_list: [{
- "index": 0,
- "finish_reason": stop_reason,
- "text": answer,
- }],
+ resp_list: resp_list_data,
"usage": {
- "prompt_tokens": token_count,
- "completion_tokens": completion_token_count,
- "total_tokens": token_count + completion_token_count
+ "prompt_tokens": total_prompt_token_count,
+ "completion_tokens": total_completion_token_count,
+ "total_tokens": total_prompt_token_count + total_completion_token_count
}
}
- if logprob_proc and logprob_proc.token_alternatives:
- top_logprobs = convert_logprobs_to_tiktoken(model=requested_model, logprobs=logprob_proc.token_alternatives)
- resp[resp_list][0]["logprobs"] = {'top_logprobs': [top_logprobs]}
- else:
- resp[resp_list][0]["logprobs"] = None
-
return resp
@@ -550,6 +560,10 @@ def stream_completions(body: dict, is_legacy: bool = False):
req_params['max_new_tokens'] = max_tokens
requested_model = req_params.pop('requested_model')
logprob_proc = req_params.pop('logprob_proc', None)
+ stopping_strings = req_params.pop('stopping_strings', [])
+ #req_params['suffix'] = default(body, 'suffix', req_params['suffix'])
+ req_params['echo'] = default(body, 'echo', req_params['echo'])
+ req_params['top_k'] = default(body, 'best_of', req_params['top_k'])
token_count = len(encode(prompt)[0])
@@ -558,9 +572,6 @@ def stream_completions(body: dict, is_legacy: bool = False):
# print(f"Warning: ${err_msg}")
raise InvalidRequestError(message=err_msg, param=max_tokens_str)
- req_params['echo'] = default(body, 'echo', req_params['echo'])
- req_params['top_k'] = default(body, 'best_of', req_params['top_k'])
-
def text_streaming_chunk(content):
# begin streaming
chunk = {
@@ -572,13 +583,9 @@ def text_streaming_chunk(content):
"index": 0,
"finish_reason": None,
"text": content,
+ "logprobs": {'top_logprobs': [logprob_proc.token_alternatives]} if logprob_proc else None,
}],
}
- if logprob_proc:
- top_logprobs = convert_logprobs_to_tiktoken(model=requested_model, logprobs=logprob_proc.token_alternatives)
- chunk[resp_list][0]["logprobs"] = {'top_logprobs': [top_logprobs]}
- else:
- chunk[resp_list][0]["logprobs"] = None
return chunk
@@ -586,8 +593,6 @@ def text_streaming_chunk(content):
# generate reply #######################################
debug_msg({'prompt': prompt, 'req_params': req_params})
- stopping_strings = req_params.pop('stopping_strings', [])
- logprob_proc = req_params.pop('logprob_proc', None)
generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False)
answer = ''
diff --git a/extensions/openai/script.py b/extensions/openai/script.py
--- a/extensions/openai/script.py
+++ b/extensions/openai/script.py
@@ -120,7 +120,7 @@ def do_GET(self):
resp = OAImodels.list_models(is_legacy)
else:
model_name = self.path[len('/v1/models/'):]
- resp = OAImodels.model_info()
+ resp = OAImodels.model_info(model_name)
self.return_json(resp)
| Openai Batched generation
**Description**
Batched API openai extension

| On my radar, I'll bump it up the priority list.
> On my radar, I'll bump it up the priority list.
Thanks a ton! | 2023-07-25T21:34:50 |
|
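A hedged sketch of what the batched request enabled by this patch could look like against the local OpenAI-compatible endpoint. The host, port, prompts and response layout are assumptions (non-legacy format); the point is that `prompt` may now be a list, with one choice returned per entry.

```python
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/completions",
    json={
        "prompt": [
            "Tell me about alpacas.",
            "How many helicopters can a human eat in one sitting?",
        ],
        "max_tokens": 64,
    },
)

# Each prompt gets its own entry in "choices", identified by "index"
for choice in resp.json()["choices"]:
    print(choice["index"], choice["text"])
```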
oobabooga/text-generation-webui | 3,732 | oobabooga__text-generation-webui-3732 | [
"3717"
] | 57e9ded00cb023d5ea846eee53711650054147cf | diff --git a/download-model.py b/download-model.py
--- a/download-model.py
+++ b/download-model.py
@@ -47,7 +47,7 @@ def sanitize_model_and_branch_names(self, model, branch):
return model, branch
- def get_download_links_from_huggingface(self, model, branch, text_only=False):
+ def get_download_links_from_huggingface(self, model, branch, text_only=False, specific_file=None):
base = "https://huggingface.co"
page = f"/api/models/{model}/tree/{branch}"
cursor = b""
@@ -73,6 +73,9 @@ def get_download_links_from_huggingface(self, model, branch, text_only=False):
for i in range(len(dict)):
fname = dict[i]['path']
+ if specific_file is not None and fname != specific_file:
+ continue
+
if not is_lora and fname.endswith(('adapter_config.json', 'adapter_model.bin')):
is_lora = True
@@ -126,12 +129,16 @@ def get_download_links_from_huggingface(self, model, branch, text_only=False):
if classifications[i] == 'ggml':
links.pop(i)
- return links, sha256, is_lora
+ return links, sha256, is_lora, ((has_ggml or has_gguf) and specific_file is not None)
- def get_output_folder(self, model, branch, is_lora, base_folder=None):
+ def get_output_folder(self, model, branch, is_lora, is_llamacpp=False, base_folder=None):
if base_folder is None:
base_folder = 'models' if not is_lora else 'loras'
+ # If the model is of type GGUF or GGML, save directly in the base_folder
+ if is_llamacpp:
+ return Path(base_folder)
+
output_folder = f"{'_'.join(model.split('/')[-2:])}"
if branch != 'main':
output_folder += f'_{branch}'
@@ -173,7 +180,7 @@ def get_single_file(self, url, output_folder, start_from_scratch=False):
def start_download_threads(self, file_list, output_folder, start_from_scratch=False, threads=1):
thread_map(lambda url: self.get_single_file(url, output_folder, start_from_scratch=start_from_scratch), file_list, max_workers=threads, disable=True)
- def download_model_files(self, model, branch, links, sha256, output_folder, progress_bar=None, start_from_scratch=False, threads=1):
+ def download_model_files(self, model, branch, links, sha256, output_folder, progress_bar=None, start_from_scratch=False, threads=1, specific_file=None):
self.progress_bar = progress_bar
# Creating the folder and writing the metadata
@@ -189,8 +196,11 @@ def download_model_files(self, model, branch, links, sha256, output_folder, prog
metadata += '\n'
(output_folder / 'huggingface-metadata.txt').write_text(metadata)
- # Downloading the files
- print(f"Downloading the model to {output_folder}")
+ if specific_file:
+ print(f"Downloading {specific_file} to {output_folder}")
+ else:
+ print(f"Downloading the model to {output_folder}")
+
self.start_download_threads(links, output_folder, start_from_scratch=start_from_scratch, threads=threads)
def check_model_files(self, model, branch, links, sha256, output_folder):
@@ -226,6 +236,7 @@ def check_model_files(self, model, branch, links, sha256, output_folder):
parser.add_argument('--branch', type=str, default='main', help='Name of the Git branch to download from.')
parser.add_argument('--threads', type=int, default=1, help='Number of files to download simultaneously.')
parser.add_argument('--text-only', action='store_true', help='Only download text files (txt/json).')
+ parser.add_argument('--specific-file', type=str, default=None, help='Name of the specific file to download (if not provided, downloads all).')
parser.add_argument('--output', type=str, default=None, help='The folder where the model should be saved.')
parser.add_argument('--clean', action='store_true', help='Does not resume the previous download.')
parser.add_argument('--check', action='store_true', help='Validates the checksums of model files.')
@@ -234,28 +245,29 @@ def check_model_files(self, model, branch, links, sha256, output_folder):
branch = args.branch
model = args.MODEL
+ specific_file = args.specific_file
if model is None:
print("Error: Please specify the model you'd like to download (e.g. 'python download-model.py facebook/opt-1.3b').")
sys.exit()
downloader = ModelDownloader(max_retries=args.max_retries)
- # Cleaning up the model/branch names
+ # Clean up the model/branch names
try:
model, branch = downloader.sanitize_model_and_branch_names(model, branch)
except ValueError as err_branch:
print(f"Error: {err_branch}")
sys.exit()
- # Getting the download links from Hugging Face
- links, sha256, is_lora = downloader.get_download_links_from_huggingface(model, branch, text_only=args.text_only)
+ # Get the download links from Hugging Face
+ links, sha256, is_lora, is_llamacpp = downloader.get_download_links_from_huggingface(model, branch, text_only=args.text_only, specific_file=specific_file)
- # Getting the output folder
- output_folder = downloader.get_output_folder(model, branch, is_lora, base_folder=args.output)
+ # Get the output folder
+ output_folder = downloader.get_output_folder(model, branch, is_lora, is_llamacpp=is_llamacpp, base_folder=args.output)
if args.check:
# Check previously downloaded files
downloader.check_model_files(model, branch, links, sha256, output_folder)
else:
# Download files
- downloader.download_model_files(model, branch, links, sha256, output_folder, threads=args.threads)
+ downloader.download_model_files(model, branch, links, sha256, output_folder, specific_file=specific_file, threads=args.threads)
| Download only one model when multiple are available in the Hugging Face repo
I'm encountering an issue while attempting to utilize this file: https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q5_K_M.gguf
When I input "TheBloke/WizardCoder-Python-13B-V1.0-GGUF" into the model download section of the webui, it triggers the download of all available model versions, rather than just the specific one I intend to use. This leads to unnecessary downloads and wasted space.
Is there a way to prevent this and only download the required model version to save on download size?
Hi there, I had the same struggle and it's easy to solve! It's in the model card of each huggingface TheBloke GPTQ model:
How to download from branches
In text-generation-webui, you can add :branch to the end of the download name, eg TheBloke/WizardCoder-Python-34B-V1.0-GPTQ:gptq-4bit-32g-actorder_True
With Git, you can clone a branch with:
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/WizardCoder-Python-34B-V1.0-GPTQ
It doesn't work for GGUF or GGML. You get everything. | 2023-08-28T19:46:45 |
|
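For reference, a hypothetical invocation of the `--specific-file` flag that the patch above adds to download-model.py, using the repository and file named in the issue (adjust both to your model):

```shell
python download-model.py TheBloke/WizardCoder-Python-13B-V1.0-GGUF --specific-file wizardcoder-python-13b-v1.0.Q5_K_M.gguf
```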
oobabooga/text-generation-webui | 3,878 | oobabooga__text-generation-webui-3878 | [
"3871"
] | df123a20fccda29439d70c5590de2e33937acbed | diff --git a/extensions/silero_tts/script.py b/extensions/silero_tts/script.py
--- a/extensions/silero_tts/script.py
+++ b/extensions/silero_tts/script.py
@@ -28,7 +28,25 @@
}
current_params = params.copy()
-voices_by_gender = ['en_99', 'en_45', 'en_18', 'en_117', 'en_49', 'en_51', 'en_68', 'en_0', 'en_26', 'en_56', 'en_74', 'en_5', 'en_38', 'en_53', 'en_21', 'en_37', 'en_107', 'en_10', 'en_82', 'en_16', 'en_41', 'en_12', 'en_67', 'en_61', 'en_14', 'en_11', 'en_39', 'en_52', 'en_24', 'en_97', 'en_28', 'en_72', 'en_94', 'en_36', 'en_4', 'en_43', 'en_88', 'en_25', 'en_65', 'en_6', 'en_44', 'en_75', 'en_91', 'en_60', 'en_109', 'en_85', 'en_101', 'en_108', 'en_50', 'en_96', 'en_64', 'en_92', 'en_76', 'en_33', 'en_116', 'en_48', 'en_98', 'en_86', 'en_62', 'en_54', 'en_95', 'en_55', 'en_111', 'en_3', 'en_83', 'en_8', 'en_47', 'en_59', 'en_1', 'en_2', 'en_7', 'en_9', 'en_13', 'en_15', 'en_17', 'en_19', 'en_20', 'en_22', 'en_23', 'en_27', 'en_29', 'en_30', 'en_31', 'en_32', 'en_34', 'en_35', 'en_40', 'en_42', 'en_46', 'en_57', 'en_58', 'en_63', 'en_66', 'en_69', 'en_70', 'en_71', 'en_73', 'en_77', 'en_78', 'en_79', 'en_80', 'en_81', 'en_84', 'en_87', 'en_89', 'en_90', 'en_93', 'en_100', 'en_102', 'en_103', 'en_104', 'en_105', 'en_106', 'en_110', 'en_112', 'en_113', 'en_114', 'en_115']
+
+voices_en = ['en_99', 'en_45', 'en_18', 'en_117', 'en_49', 'en_51', 'en_68', 'en_0', 'en_26', 'en_56', 'en_74', 'en_5', 'en_38', 'en_53', 'en_21', 'en_37', 'en_107', 'en_10', 'en_82', 'en_16', 'en_41', 'en_12', 'en_67', 'en_61', 'en_14', 'en_11', 'en_39', 'en_52', 'en_24', 'en_97', 'en_28', 'en_72', 'en_94', 'en_36', 'en_4', 'en_43', 'en_88', 'en_25', 'en_65', 'en_6', 'en_44', 'en_75', 'en_91', 'en_60', 'en_109', 'en_85', 'en_101', 'en_108', 'en_50', 'en_96', 'en_64', 'en_92', 'en_76', 'en_33', 'en_116', 'en_48', 'en_98', 'en_86', 'en_62', 'en_54', 'en_95', 'en_55', 'en_111', 'en_3', 'en_83', 'en_8', 'en_47', 'en_59', 'en_1', 'en_2', 'en_7', 'en_9', 'en_13', 'en_15', 'en_17', 'en_19', 'en_20', 'en_22', 'en_23', 'en_27', 'en_29', 'en_30', 'en_31', 'en_32', 'en_34', 'en_35', 'en_40', 'en_42', 'en_46', 'en_57', 'en_58', 'en_63', 'en_66', 'en_69', 'en_70', 'en_71', 'en_73', 'en_77', 'en_78', 'en_79', 'en_80', 'en_81', 'en_84', 'en_87', 'en_89', 'en_90', 'en_93', 'en_100', 'en_102', 'en_103', 'en_104', 'en_105', 'en_106', 'en_110', 'en_112', 'en_113', 'en_114', 'en_115']
+voices_es = ["es_0", "es_1", "es_2"]
+voices_fr = ["fr_0", "fr_1", "fr_2", "fr_3", "fr_4", "fr_5"]
+voices_de = ["bernd_ungerer", "eva_k", "friedrich", "hokuspokus", "karlsson"]
+voices_ru = ["aidar", "baya", "kseniya", "xenia"]
+voices_ua = ["mykyta"]
+voices_uz = ["dilnavoz"]
+
+languages = {
+ "en": {"label": "English", "voices": voices_en, "default_voice": "en_56", "model_id": "v3_en"},
+ "es": {"label": "Español", "voices": voices_es, "default_voice": "es_0", "model_id": "v3_es"},
+ "fr": {"label": "Français", "voices": voices_fr, "default_voice": "fr_0", "model_id": "v3_fr"},
+ "de": {"label": "Deutsch", "voices": voices_de, "default_voice": "eva_k", "model_id": "v3_de"},
+ "ru": {"label": "русский", "voices": voices_ru, "default_voice": "aidar", "model_id": "ru_v3"},
+ "ua": {"label": "українська", "voices": voices_ua, "default_voice": "mykyta", "model_id": "v3_ua"},
+ "uz": {"label": "Oʻzbekcha", "voices": voices_uz, "default_voice": "dilnavoz", "model_id": "v3_uz"},
+}
+
voice_pitches = ['x-low', 'low', 'medium', 'high', 'x-high']
voice_speeds = ['x-slow', 'slow', 'medium', 'fast', 'x-fast']
@@ -167,6 +185,13 @@ def voice_preview(preview_text):
return f'<audio src="file/{output_file.as_posix()}?{int(time.time())}" controls autoplay></audio>'
+def language_change(lang):
+ global params
+ lang_code = list(languages.keys())[lang]
+ params.update({"language": lang_code, "speaker": languages[lang_code]["default_voice"], "model_id": languages[lang_code]["model_id"]})
+ return gr.update(choices=languages[lang_code]["voices"], value=languages[lang_code]["default_voice"])
+
+
def custom_css():
path_to_css = Path(__file__).parent.resolve() / 'style.css'
return open(path_to_css, 'r').read()
@@ -180,7 +205,10 @@ def ui():
autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically')
show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player')
- voice = gr.Dropdown(value=params['speaker'], choices=voices_by_gender, label='TTS voice')
+
+ with gr.Row():
+ language = gr.Dropdown(value=languages[params['language']]["label"], choices=[v["label"] for _, v in languages.items()], label='Language', type="index")
+ voice = gr.Dropdown(value=params['speaker'], choices=voices_en, label='TTS voice')
with gr.Row():
v_pitch = gr.Dropdown(value=params['voice_pitch'], choices=voice_pitches, label='Voice pitch')
v_speed = gr.Dropdown(value=params['voice_speed'], choices=voice_speeds, label='Voice speed')
@@ -216,6 +244,7 @@ def ui():
# Event functions to update the parameters in the backend
activate.change(lambda x: params.update({"activate": x}), activate, None)
autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None)
+ language.change(language_change, language, voice, show_progress=False)
voice.change(lambda x: params.update({"speaker": x}), voice, None)
v_pitch.change(lambda x: params.update({"voice_pitch": x}), v_pitch, None)
v_speed.change(lambda x: params.update({"voice_speed": x}), v_speed, None)
| Silero TTS more languages
Hello. I want to know why only English speakers are available in the Silero extension. How can I add other languages?
| Silero supports [these languages](https://github.com/snakers4/silero-models#models-and-speakers). You can change the language, model ID, and default speaker here.
https://github.com/oobabooga/text-generation-webui/blob/df123a20fccda29439d70c5590de2e33937acbed/extensions/silero_tts/script.py#L18-L20
And the list of speakers here.
https://github.com/oobabooga/text-generation-webui/blob/df123a20fccda29439d70c5590de2e33937acbed/extensions/silero_tts/script.py#L31 | 2023-09-12T08:02:08 |
|
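A minimal sketch of what the language switch added by the patch above boils down to: the extension's `params` need a matching language code, Silero model ID and default voice, updated together. The values below are copied from the language table in the patch; the helper itself is illustrative, not webui code.

```python
languages = {
    "en": {"default_voice": "en_56", "model_id": "v3_en"},
    "es": {"default_voice": "es_0", "model_id": "v3_es"},
    "de": {"default_voice": "eva_k", "model_id": "v3_de"},
    "ru": {"default_voice": "aidar", "model_id": "ru_v3"},
}

def switch_language(params, lang_code):
    # Mirror language_change() from the patch: update model, language and voice as a unit
    entry = languages[lang_code]
    params.update({"language": lang_code, "model_id": entry["model_id"], "speaker": entry["default_voice"]})
    return params

print(switch_language({"language": "en", "model_id": "v3_en", "speaker": "en_56"}, "de"))
```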
oobabooga/text-generation-webui | 4,541 | oobabooga__text-generation-webui-4541 | [
"4521"
] | f7534b2f4b6ceca5e00d9cf6af2d25b744af3d06 | diff --git a/extensions/api/util.py b/extensions/api/util.py
--- a/extensions/api/util.py
+++ b/extensions/api/util.py
@@ -74,8 +74,8 @@ def build_parameters(body, chat=False):
if str(character) == "None":
character = "Assistant"
- name1, name2, _, greeting, context, _ = load_character_memoized(character, str(body.get('your_name', shared.settings['name1'])), '', instruct=False)
- name1_instruct, name2_instruct, _, _, context_instruct, turn_template = load_character_memoized(instruction_template, '', '', instruct=True)
+ name1, name2, _, greeting, context, _, _ = load_character_memoized(character, str(body.get('your_name', shared.settings['name1'])), '', instruct=False)
+ name1_instruct, name2_instruct, _, _, context_instruct, turn_template, _ = load_character_memoized(instruction_template, '', '', instruct=True)
generate_params.update({
'mode': str(body.get('mode', 'chat')),
'name1': str(body.get('name1', name1)),
diff --git a/extensions/openai/script.py b/extensions/openai/script.py
--- a/extensions/openai/script.py
+++ b/extensions/openai/script.py
@@ -91,6 +91,10 @@ async def generator():
async with streaming_semaphore:
response = OAIcompletions.stream_completions(to_dict(request_data), is_legacy=is_legacy)
for resp in response:
+ disconnected = await request.is_disconnected()
+ if disconnected:
+ break
+
yield {"data": json.dumps(resp)}
return EventSourceResponse(generator()) # SSE streaming
@@ -110,6 +114,10 @@ async def generator():
async with streaming_semaphore:
response = OAIcompletions.stream_chat_completions(to_dict(request_data), is_legacy=is_legacy)
for resp in response:
+ disconnected = await request.is_disconnected()
+ if disconnected:
+ break
+
yield {"data": json.dumps(resp)}
return EventSourceResponse(generator()) # SSE streaming
| OpenAI-api Streaming: closing http socket does not cancel generation
### Describe the bug
Streaming: closing http socket does not cancel generation like when the old api was around
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
enable openai-api extension, use api with streaming, close socket, continues generating
### Screenshot
_No response_
### Logs
```shell
no logs
```
### System Info
```shell
win 10
msi
3070
local
frontend: SillyTavern
```
| Afaik, SSE is uni-directional so the server doesn't know about the client disconnection. But it is possible to stop the generation by posting to `/v1/internal/stop-generation`:
```
curl -X 'POST' \
-H "Content-Type: application/json" \
-d '' \
'https://127.0.0.1:5000/v1/internal/stop-generation'
``` | 2023-11-09T17:53:19 |
|
oobabooga/text-generation-webui | 4,597 | oobabooga__text-generation-webui-4597 | [
"3763"
] | 8a9d5a0cea66d23645aa274817ff077185c54b37 | diff --git a/modules/models.py b/modules/models.py
--- a/modules/models.py
+++ b/modules/models.py
@@ -381,15 +381,13 @@ def RWKV_loader(model_name):
def get_max_memory_dict():
- max_memory = {}
+ max_cpu_memory = shared.args.cpu_memory.strip() if shared.args.cpu_memory is not None else '99GiB'
+ max_memory = {'cpu': f'{max_cpu_memory}GiB' if not re.match('.*ib$', max_cpu_memory.lower()) else max_cpu_memory}
if shared.args.gpu_memory:
memory_map = list(map(lambda x: x.strip(), shared.args.gpu_memory))
for i in range(len(memory_map)):
max_memory[i] = f'{memory_map[i]}GiB' if not re.match('.*ib$', memory_map[i].lower()) else memory_map[i]
- max_cpu_memory = shared.args.cpu_memory.strip() if shared.args.cpu_memory is not None else '99GiB'
- max_memory['cpu'] = f'{max_cpu_memory}GiB' if not re.match('.*ib$', max_cpu_memory.lower()) else max_cpu_memory
-
# If --auto-devices is provided standalone, try to get a reasonable value
# for the maximum memory of device :0
elif shared.args.auto_devices:
@@ -403,7 +401,7 @@ def get_max_memory_dict():
suggestion = int(round(suggestion / 1000))
logger.warning(f"Auto-assiging --gpu-memory {suggestion} for your GPU to try to prevent out-of-memory errors. You can manually set other values.")
- max_memory = {0: f'{suggestion}GiB', 'cpu': f'{shared.args.cpu_memory or 99}GiB'}
+ max_memory[0] = f'{suggestion}GiB'
return max_memory if len(max_memory) > 0 else None
| Problem when setting CPU memory limit
### Describe the bug
When trying to set a CPU memory limit, I get this error:
line 85, in convert_file_size_to_int
raise ValueError(err_msg)
ValueError: size 64000MiBGiB is not in a valid format. Use an integer for bytes, or a string with an unit (like ‘5.0GB’).
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Change the CPU memory and try to load a model
### Screenshot
<img width="958" alt="image" src="https://github.com/oobabooga/text-generation-webui/assets/128736804/689f1d6e-fa77-4598-a1f6-4c55eb47bdac">
### Logs
```shell
Traceback (most recent call last):
File “/home/chungus/Downloads/LocalGPT/installer_files/env/lib/python3.10/site-packages/accelerate/utils/modeling.py”, line 70, in convert_file_size_to_int
mem_size = int(float(size[:-3]) * (2**30))
ValueError: could not convert string to float: ‘64000MiB’
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “/home/chungus/Downloads/LocalGPT/text-generation-webui/modules/ui_model_menu.py”, line 195, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File “/home/chungus/Downloads/LocalGPT/text-generation-webui/modules/models.py”, line 79, in load_model
output = load_func_map[loader](model_name)
File “/home/chungus/Downloads/LocalGPT/text-generation-webui/modules/models.py”, line 320, in AutoGPTQ_loader
return modules.AutoGPTQ_loader.load_quantized(model_name)
File “/home/chungus/Downloads/LocalGPT/text-generation-webui/modules/AutoGPTQ_loader.py”, line 57, in load_quantized
model = AutoGPTQForCausalLM.from_quantized(path_to_model, **params)
File “/home/chungus/Downloads/LocalGPT/installer_files/env/lib/python3.10/site-packages/auto_gptq/modeling/auto.py”, line 108, in from_quantized
return quant_func(
File “/home/chungus/Downloads/LocalGPT/installer_files/env/lib/python3.10/site-packages/auto_gptq/modeling/_base.py”, line 859, in from_quantized
max_memory = accelerate.utils.get_balanced_memory(
File “/home/chungus/Downloads/LocalGPT/installer_files/env/lib/python3.10/site-packages/accelerate/utils/modeling.py”, line 767, in get_balanced_memory
max_memory = get_max_memory(max_memory)
File “/home/chungus/Downloads/LocalGPT/installer_files/env/lib/python3.10/site-packages/accelerate/utils/modeling.py”, line 654, in get_max_memory
max_memory[key] = convert_file_size_to_int(max_memory[key])
File “/home/chungus/Downloads/LocalGPT/installer_files/env/lib/python3.10/site-packages/accelerate/utils/modeling.py”, line 85, in convert_file_size_to_int
raise ValueError(err_msg)
ValueError: size 64000MiBGiB is not in a valid format. Use an integer for bytes, or a string with an unit (like ‘5.0GB’).
```
### System Info
```shell
Most recent Ubuntu, RTX 4070 (gpu0) and RTX 3070 (gpu1)
```
| *I'm in the same boat with the same problem.
I just avoided using CPU regarding sliders
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment. | 2023-11-15T00:36:22 |
|
oobabooga/text-generation-webui | 4,827 | oobabooga__text-generation-webui-4827 | [
"4603"
] | 8956f3ebe21c3297faecc786cc3dd4859a4314e3 | diff --git a/extensions/openai/completions.py b/extensions/openai/completions.py
--- a/extensions/openai/completions.py
+++ b/extensions/openai/completions.py
@@ -1,10 +1,15 @@
+import base64
import copy
+import re
import time
from collections import deque
+from io import BytesIO
+import requests
import tiktoken
import torch
import torch.nn.functional as F
+from PIL import Image
from transformers import LogitsProcessor, LogitsProcessorList
from extensions.openai.errors import InvalidRequestError
@@ -140,7 +145,25 @@ def convert_history(history):
system_message = ""
for entry in history:
- content = entry["content"]
+ if "image_url" in entry:
+ image_url = entry['image_url']
+ if "base64" in image_url:
+ image_url = re.sub('^data:image/.+;base64,', '', image_url)
+ img = Image.open(BytesIO(base64.b64decode(image_url)))
+ else:
+ try:
+ my_res = requests.get(image_url)
+ img = Image.open(BytesIO(my_res.content))
+ except Exception:
+ raise 'Image cannot be loaded from the URL!'
+
+ buffered = BytesIO()
+ img.save(buffered, format="JPEG")
+ img_str = base64.b64encode(buffered.getvalue()).decode('utf-8')
+ content = f'<img src="data:image/jpeg;base64,{img_str}">'
+ else:
+ content = entry["content"]
+
role = entry["role"]
if role == "user":
@@ -182,7 +205,8 @@ def chat_completions_common(body: dict, is_legacy: bool = False, stream=False) -
raise InvalidRequestError(message="messages: missing role", param='messages')
elif m['role'] == 'function':
raise InvalidRequestError(message="role: function is not supported.", param='messages')
- if 'content' not in m:
+
+ if 'content' not in m and "image_url" not in m:
raise InvalidRequestError(message="messages: missing content", param='messages')
# Chat Completions
| Multimodal api is not longer available
### Describe the bug
I think this was forgotten in the transition to the openai API: [usage-through-api](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal#usage-through-api)
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
-
### Screenshot
_No response_
### Logs
```shell
-
```
### System Info
```shell
-
```
| FYI - The (beta) OpenAI equivalent looks to be: https://platform.openai.com/docs/guides/vision
Essentially adding image types and a URL/base64 image to the content passed into the completions APIs. Obviously none of this is implemented yet, but doesn't look too hard. | 2023-12-06T12:04:08 |
|
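A hedged sketch of a request that exercises the `image_url` handling added by this patch. Note that, per the patch, the image goes in an `image_url` field on the message itself, either a plain URL or a `data:image/...;base64,` string; the host, port, file name and response layout are assumptions.

```python
import base64
import requests

with open("photo.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "image_url": f"data:image/jpeg;base64,{b64}"},
            {"role": "user", "content": "What is in this picture?"},
        ],
        "max_tokens": 200,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```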
oobabooga/text-generation-webui | 4,851 | oobabooga__text-generation-webui-4851 | [
"4834"
] | 884871c10751b0de405d22494a75796ed761d640 | diff --git a/modules/text_generation.py b/modules/text_generation.py
--- a/modules/text_generation.py
+++ b/modules/text_generation.py
@@ -265,9 +265,8 @@ def apply_stopping_strings(reply, all_stop_strings):
def get_reply_from_output_ids(output_ids, state, starting_from=0):
reply = decode(output_ids[starting_from:], state['skip_special_tokens'])
- if type(shared.tokenizer) in [transformers.LlamaTokenizer, transformers.LlamaTokenizerFast] and len(output_ids) > starting_from:
- if shared.tokenizer.convert_ids_to_tokens(int(output_ids[starting_from])).startswith('▁'):
- reply = ' ' + reply
+ if hasattr(shared.tokenizer, 'convert_ids_to_tokens') and len(output_ids) > starting_from and shared.tokenizer.convert_ids_to_tokens(int(output_ids[starting_from])).startswith('▁'):
+ reply = ' ' + reply
return reply
| Chat output has no spaces between words
I'm just starting up and managed to download the model **codellama_CodeLlama-7b-Instruct-hf**. Whenever I write a chat message with the AI, I get a response, but it has no spaces between the words, so I'm not sure if I missed something or if there is something wrong with the model.
This is an example Input/Output that I'm getting:
**Input:**
How many class types in default c#
**Output:**
Thereareseveralclassesavailablebydefault,including:Boolean,Byte,Char,DateTime,Decimal,Double,Int16,Int32,Int64,SByte,Single,String,TimeSpan,UInt16,UInt32,andUInt64.YoumayalsofindithelpfultoknowthatC#providessupportforarrays,lists,dictionaries,sets,tuples,andothercollectiontypesaswell.
Can someone provide few hints how to fix it?
| I am also just starting to set up text-generation-webui and have been experiencing this for the last couple of days as well.
Using the snapshot-2023-12-03 branch I get spaces between words.
> Using the snapshot-2023-12-03 branch I get spaces between words.
Could you share the steps to fix this, or tell me which files to replace?
> > Using the snapshot-2023-12-03 branch I get spaces between words.
>
> Could you share the steps to fix this, or tell me which files to replace?
I went to **snapshot-2023-12-03** and from it, I replaced the folders/files in my local installation:
- server.py
- modules
- js
- extensions
Now I'm getting spaces again. | 2023-12-08T13:25:47 |
|
oobabooga/text-generation-webui | 4,865 | oobabooga__text-generation-webui-4865 | [
"4843"
] | 8c8825b777516f189c0fc3eb46c22f552fecc300 | diff --git a/modules/ui_model_menu.py b/modules/ui_model_menu.py
--- a/modules/ui_model_menu.py
+++ b/modules/ui_model_menu.py
@@ -58,7 +58,7 @@ def create_ui():
with gr.Row():
with gr.Column():
with gr.Row():
- shared.gradio['model_menu'] = gr.Dropdown(choices=utils.get_available_models(), value=shared.model_name, label='Model', elem_classes='slim-dropdown', interactive=not mu)
+ shared.gradio['model_menu'] = gr.Dropdown(choices=utils.get_available_models(), value=lambda: shared.model_name, label='Model', elem_classes='slim-dropdown', interactive=not mu)
ui.create_refresh_button(shared.gradio['model_menu'], lambda: None, lambda: {'choices': utils.get_available_models()}, 'refresh-button', interactive=not mu)
shared.gradio['load_model'] = gr.Button("Load", visible=not shared.settings['autoload_model'], elem_classes='refresh-button', interactive=not mu)
shared.gradio['unload_model'] = gr.Button("Unload", elem_classes='refresh-button', interactive=not mu)
| Refreshing browser causes selected model to reset to default
### Describe the bug
After loading a model, if I refresh the browser (just a simple F5), I see that the model tab no longer has the loaded model selected. If I passed a default model with --model, that's the one that will appear, and if not it will just show "None".
Funnily enough, the model loader settings remain, with ExLlamav2_HF and the proper max_seq_len etc.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Load the web ui
select any model that wasn't loaded, load it
refresh the page
go to model page and observe that your loaded model is no longer the one selected in the dropdown
### Screenshot

### Logs
```shell
nothing relevant
```
### System Info
```shell
Ubuntu 22.04 docker
```
| 2023-12-09T23:33:11 |
||
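The fix in this patch relies on a Gradio behaviour worth spelling out: when a component's `value` is a callable, Gradio re-evaluates it on every page load instead of freezing the value captured when the UI was built. A minimal hedged sketch, not webui code:

```python
import gradio as gr

current_model = "None"

def load_model(name):
    global current_model
    current_model = name
    return f"Loaded {name}"

with gr.Blocks() as demo:
    # value=lambda: ... is re-evaluated on each page refresh,
    # so the dropdown keeps showing the currently loaded model
    dropdown = gr.Dropdown(choices=["None", "model-a", "model-b"], value=lambda: current_model)
    status = gr.Textbox()
    dropdown.change(load_model, dropdown, status)

demo.launch()
```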
oobabooga/text-generation-webui | 4,905 | oobabooga__text-generation-webui-4905 | [
"4901"
] | c2802bc3ac88b6f0268b6fe82bd15308982083a6 | diff --git a/extensions/coqui_tts/script.py b/extensions/coqui_tts/script.py
--- a/extensions/coqui_tts/script.py
+++ b/extensions/coqui_tts/script.py
@@ -1,3 +1,4 @@
+import os
import html
import json
import random
@@ -26,6 +27,7 @@
raise
+os.environ["COQUI_TOS_AGREED"] = "1"
params = {
"activate": True,
| coqui_tts fails to load as assumes interactive sessions to accept ToS
### Describe the bug
When enabled, coqui_tts prevents textgen from starting, as it expects an interactive session for a user to accept a ToS agreement
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
- Enable coqui_tts
- Restart textgen
- Note that textgen never starts
- Check console logs
```
2023-12-12 22:13:22 INFO:Loading the extension "coqui_tts"...
[XTTS] Loading XTTS...
> You must agree to the terms of service to use this model.
| > Please see the terms of service at https://coqui.ai/cpml.txt
| > "I have read, understood and agreed to the Terms and Conditions." - [y/n]
```
- No way to accept non-interactively
### Screenshot
_No response_
### Logs
```shell
INFO: Started server process [37]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:5001 (Press CTRL+C to quit)
2023-12-12 22:13:18 DEBUG:Intercepting all calls to posthog.
2023-12-12 22:13:19 DEBUG:Creating Sentence Embedder...
2023-12-12 22:13:20 WARNING:Using embedded DuckDB without persistence: data will be transient
2023-12-12 22:13:22 DEBUG:Loading hyperparameters...
2023-12-12 22:13:22 INFO:Loading the extension "coqui_tts"...
[XTTS] Loading XTTS...
> You must agree to the terms of service to use this model.
| > Please see the terms of service at https://coqui.ai/cpml.txt
| > "I have read, understood and agreed to the Terms and Conditions." - [y/n]
```
### System Info
```shell
Latest official docker image running on server.
```
Note that a workaround for this is to remove coqui_tts and install "alltalk_tts" instead which seems to work without issue.
| 2023-12-13T03:47:35 |
||
oobabooga/text-generation-webui | 5,125 | oobabooga__text-generation-webui-5125 | [
"4947"
] | 20a2eaaf95eeb77c2e6b38de1de0f7977a98c21b | diff --git a/modules/models_settings.py b/modules/models_settings.py
--- a/modules/models_settings.py
+++ b/modules/models_settings.py
@@ -35,7 +35,7 @@ def get_model_metadata(model):
path = Path(f'{shared.args.model_dir}/{model}/config.json')
if path.exists():
- hf_metadata = json.loads(open(path, 'r').read())
+ hf_metadata = json.loads(open(path, 'r', encoding='utf-8').read())
else:
hf_metadata = None
@@ -78,7 +78,7 @@ def get_model_metadata(model):
else:
# Transformers metadata
if hf_metadata is not None:
- metadata = json.loads(open(path, 'r').read())
+ metadata = json.loads(open(path, 'r', encoding='utf-8').read())
if 'max_position_embeddings' in metadata:
model_settings['truncation_length'] = metadata['max_position_embeddings']
model_settings['max_seq_len'] = metadata['max_position_embeddings']
@@ -101,7 +101,7 @@ def get_model_metadata(model):
# Read AutoGPTQ metadata
path = Path(f'{shared.args.model_dir}/{model}/quantize_config.json')
if path.exists():
- metadata = json.loads(open(path, 'r').read())
+ metadata = json.loads(open(path, 'r', encoding='utf-8').read())
if 'bits' in metadata:
model_settings['wbits'] = metadata['bits']
if 'group_size' in metadata:
@@ -112,7 +112,7 @@ def get_model_metadata(model):
# Try to find the Jinja instruct template
path = Path(f'{shared.args.model_dir}/{model}') / 'tokenizer_config.json'
if path.exists():
- metadata = json.loads(open(path, 'r').read())
+ metadata = json.loads(open(path, 'r', encoding='utf-8').read())
if 'chat_template' in metadata:
template = metadata['chat_template']
for k in ['eos_token', 'bos_token']:
| UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 125: character maps to <undefined>
### Describe the bug
That is probably a locale-dependent thing, but some models fail with a decoding error. Here's a diff to fix it:
```python
diff --git a/modules/models_settings.py b/modules/models_settings.py
index 8a4febd..90c7b69 100644
--- a/modules/models_settings.py
+++ b/modules/models_settings.py
@@ -102,7 +102,7 @@ def get_model_metadata(model):
# Try to find the Jinja instruct template
path = Path(f'{shared.args.model_dir}/{model}') / 'tokenizer_config.json'
if path.exists():
- metadata = json.loads(open(path, 'r').read())
+ metadata = json.loads(open(path, 'r', encoding='utf-8').read())
if 'chat_template' in metadata:
template = metadata['chat_template']
for k in ['eos_token', 'bos_token']:
```
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Eg. see https://huggingface.co/LoneStriker/deepseek-coder-33b-instruct-4.65bpw-h6-exl2/discussions/1#657daf9c9a9e347d4b9fc544
### Screenshot
_No response_
### Logs
```shell
See: https://huggingface.co/LoneStriker/deepseek-coder-33b-instruct-4.65bpw-h6-exl2/discussions/1#657daf9c9a9e347d4b9fc544
```
### System Info
```shell
windows
```
| Thanks, nice hint.
A similar error occurred with the AWQ model, but it worked.
>UnicodeDecodeError: 'cp932' codec can't decode byte 0x81 in position 451: illegal multibyte sequence
I tried making the same edits, but I still can't load any of the DeepSeek models unless it's in GGML. Tried them all and I get the 0x81 byte error.
`Traceback (most recent call last):
File "C:\Users\PC\Documents\text-generation-webui\modules\ui_model_menu.py", line 214, in load_model_wrapper
shared.model, shared.tokenizer = load_model(selected_model, loader)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PC\Documents\text-generation-webui\modules\models.py", line 79, in load_model
metadata = get_model_metadata(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PC\Documents\text-generation-webui\modules\models_settings.py", line 115, in get_model_metadata
metadata = json.loads(open(path, 'r', encoding='utf-8').read())
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PC\Documents\text-generation-webui\installer_files\env\Lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 125: character maps to <undefined>
`
I have the same error for GGUF and exl2 models. If someone has found the solution, I'll be happy to hear it.
By taking inspiration from this
https://huggingface.co/LoneStriker/deepseek-coder-33b-instruct-4.65bpw-h6-exl2/discussions/1#657db96ba982e9093f675299
In the file ```text-generation-webui\modules\models_settings.py```
If you replace all the
```
metadata = json.loads(open(path, 'r').read())
```
with
```
metadata = json.loads(open(path, 'r', encoding='utf-8').read())
```
the loading works. Don't forget to set the rope scaling (alpha_value AND compress_pos_emb) to 4; that's how this model works.

@oobabooga can you take a look at the models_settings.py script so that we don't need to do this manually to make the DeepSeek Coder models work? | 2023-12-31T01:52:02 |
|
oobabooga/text-generation-webui | 5,641 | oobabooga__text-generation-webui-5641 | [
"5628"
] | 60f3d87309bd5fa8e3d77ed1fc66b25ef84db8c5 | diff --git a/modules/extensions.py b/modules/extensions.py
--- a/modules/extensions.py
+++ b/modules/extensions.py
@@ -36,7 +36,7 @@ def load_extensions():
try:
extension = importlib.import_module(f"extensions.{name}.script")
except ModuleNotFoundError:
- logger.error(f"Could not import the requirements for '{name}'. Make sure to install the requirements for the extension.\n\nLinux / Mac:\n\npip install -r extensions/{name}/requirements.txt --upgrade\n\nWindows:\n\npip install -r extensions\\{name}\\requirements.txt --upgrade\n\nIf you used the one-click installer, paste the command above in the terminal window opened after launching the cmd script for your OS.")
+ logger.error(f"Could not import the requirements for '{name}'. Make sure to install the requirements for the extension.\n\n* To install requirements for all available extensions, launch the\n update_wizard script for your OS and choose the B option.\n\n* To install the requirements for this extension alone, launch the\n cmd script for your OS and paste the following command in the\n terminal window that appears:\n\nLinux / Mac:\n\npip install -r extensions/{name}/requirements.txt --upgrade\n\nWindows:\n\npip install -r extensions\\{name}\\requirements.txt --upgrade\n")
raise
# Only run setup() and apply settings from settings.yaml once
diff --git a/one_click.py b/one_click.py
--- a/one_click.py
+++ b/one_click.py
@@ -9,14 +9,21 @@
import subprocess
import sys
-script_dir = os.getcwd()
-conda_env_path = os.path.join(script_dir, "installer_files", "env")
-
# Remove the '# ' from the following lines as needed for your AMD GPU on Linux
# os.environ["ROCM_PATH"] = '/opt/rocm'
# os.environ["HSA_OVERRIDE_GFX_VERSION"] = '10.3.0'
# os.environ["HCC_AMDGPU_TARGET"] = 'gfx1030'
+
+# Define the required PyTorch version
+TORCH_VERSION = "2.2.1"
+TORCHVISION_VERSION = "0.17.1"
+TORCHAUDIO_VERSION = "2.2.1"
+
+# Environment
+script_dir = os.getcwd()
+conda_env_path = os.path.join(script_dir, "installer_files", "env")
+
# Command-line flags
cmd_flags_path = os.path.join(script_dir, "CMD_FLAGS.txt")
if os.path.exists(cmd_flags_path):
@@ -25,7 +32,7 @@
else:
CMD_FLAGS = ''
-flags = f"{' '.join([flag for flag in sys.argv[1:] if flag != '--update'])} {CMD_FLAGS}"
+flags = f"{' '.join([flag for flag in sys.argv[1:] if flag != '--update-wizard'])} {CMD_FLAGS}"
def signal_handler(sig, frame):
@@ -86,13 +93,42 @@ def torch_version():
if site_packages_path:
torch_version_file = open(os.path.join(site_packages_path, 'torch', 'version.py')).read().splitlines()
- torver = [line for line in torch_version_file if '__version__' in line][0].split('__version__ = ')[1].strip("'")
+ torver = [line for line in torch_version_file if line.startswith('__version__')][0].split('__version__ = ')[1].strip("'")
else:
from torch import __version__ as torver
return torver
+def update_pytorch():
+ print_big_message("Checking for PyTorch updates")
+
+ torver = torch_version()
+ is_cuda = '+cu' in torver
+ is_cuda118 = '+cu118' in torver # 2.1.0+cu118
+ is_rocm = '+rocm' in torver # 2.0.1+rocm5.4.2
+ is_intel = '+cxx11' in torver # 2.0.1a0+cxx11.abi
+ is_cpu = '+cpu' in torver # 2.0.1+cpu
+
+ install_pytorch = f"python -m pip install --upgrade torch=={TORCH_VERSION} torchvision=={TORCHVISION_VERSION} torchaudio=={TORCHAUDIO_VERSION} "
+
+ if is_cuda118:
+ install_pytorch += "--index-url https://download.pytorch.org/whl/cu118"
+ elif is_cuda:
+ install_pytorch += "--index-url https://download.pytorch.org/whl/cu121"
+ elif is_rocm:
+ install_pytorch += "--index-url https://download.pytorch.org/whl/rocm5.6"
+ elif is_cpu:
+ install_pytorch += "--index-url https://download.pytorch.org/whl/cpu"
+ elif is_intel:
+ if is_linux():
+ install_pytorch = "python -m pip install --upgrade torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/"
+ else:
+ install_pytorch = "python -m pip install --upgrade torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/"
+
+ run_cmd(f"{install_pytorch}", assert_success=True, environment=True)
+
+
def is_installed():
site_packages_path = None
for sitedir in site.getsitepackages():
@@ -129,8 +165,7 @@ def print_big_message(message):
lines = message.split('\n')
print("\n\n*******************************************************************")
for line in lines:
- if line.strip() != '':
- print("*", line)
+ print("*", line)
print("*******************************************************************\n\n")
@@ -165,26 +200,51 @@ def run_cmd(cmd, assert_success=False, environment=False, capture_output=False,
return result
+def generate_alphabetic_sequence(index):
+ result = ''
+ while index >= 0:
+ index, remainder = divmod(index, 26)
+ result = chr(ord('A') + remainder) + result
+ index -= 1
+
+ return result
+
+
+def get_user_choice(question, options_dict):
+ print()
+ print(question)
+ print()
+
+ for key, value in options_dict.items():
+ print(f"{key}) {value}")
+
+ print()
+
+ choice = input("Input> ").upper()
+ while choice not in options_dict.keys():
+ print("Invalid choice. Please try again.")
+ choice = input("Input> ").upper()
+
+ return choice
+
+
def install_webui():
- # Select your GPU, or choose to run in CPU mode
+
+ # Ask the user for the GPU vendor
if "GPU_CHOICE" in os.environ:
choice = os.environ["GPU_CHOICE"].upper()
print_big_message(f"Selected GPU choice \"{choice}\" based on the GPU_CHOICE environment variable.")
else:
- print()
- print("What is your GPU?")
- print()
- print("A) NVIDIA")
- print("B) AMD (Linux/MacOS only. Requires ROCm SDK 5.6 on Linux)")
- print("C) Apple M Series")
- print("D) Intel Arc (IPEX)")
- print("N) None (I want to run models in CPU mode)")
- print()
-
- choice = input("Input> ").upper()
- while choice not in 'ABCDN':
- print("Invalid choice. Please try again.")
- choice = input("Input> ").upper()
+ choice = get_user_choice(
+ "What is your GPU?",
+ {
+ 'A': 'NVIDIA',
+ 'B': 'AMD (Linux/MacOS only. Requires ROCm SDK 5.6 on Linux)',
+ 'C': 'Apple M Series',
+ 'D': 'Intel Arc (IPEX)',
+ 'N': 'None (I want to run models in CPU mode)'
+ },
+ )
gpu_choice_to_name = {
"A": "NVIDIA",
@@ -195,24 +255,21 @@ def install_webui():
}
selected_gpu = gpu_choice_to_name[choice]
+ use_cuda118 = "N"
+ # Write a flag to CMD_FLAGS.txt for CPU mode
if selected_gpu == "NONE":
with open(cmd_flags_path, 'r+') as cmd_flags_file:
if "--cpu" not in cmd_flags_file.read():
print_big_message("Adding the --cpu flag to CMD_FLAGS.txt.")
- cmd_flags_file.write("\n--cpu")
-
- # Find the proper Pytorch installation command
- install_git = "conda install -y -k ninja git"
- install_pytorch = "python -m pip install torch==2.1.* torchvision==0.16.* torchaudio==2.1.* "
+ cmd_flags_file.write("\n--cpu\n")
- use_cuda118 = "N"
- if any((is_windows(), is_linux())) and selected_gpu == "NVIDIA":
+ # Check if the user wants CUDA 11.8
+ elif any((is_windows(), is_linux())) and selected_gpu == "NVIDIA":
if "USE_CUDA118" in os.environ:
use_cuda118 = "Y" if os.environ.get("USE_CUDA118", "").lower() in ("yes", "y", "true", "1", "t", "on") else "N"
else:
- # Ask for CUDA version if using NVIDIA
- print("\nDo you want to use CUDA 11.8 instead of 12.1? Only choose this option if your GPU is very old (Kepler or older).\nFor RTX and GTX series GPUs, say \"N\". If unsure, say \"N\".\n")
+ print("\nDo you want to use CUDA 11.8 instead of 12.1?\nOnly choose this option if your GPU is very old (Kepler or older).\n\nFor RTX and GTX series GPUs, say \"N\".\nIf unsure, say \"N\".\n")
use_cuda118 = input("Input (Y/N)> ").upper().strip('"\'').strip()
while use_cuda118 not in 'YN':
print("Invalid choice. Please try again.")
@@ -220,29 +277,35 @@ def install_webui():
if use_cuda118 == 'Y':
print("CUDA: 11.8")
- install_pytorch += "--index-url https://download.pytorch.org/whl/cu118"
else:
print("CUDA: 12.1")
- install_pytorch += "--index-url https://download.pytorch.org/whl/cu121"
- elif not is_macos() and selected_gpu == "AMD":
- if is_linux():
- install_pytorch += "--index-url https://download.pytorch.org/whl/rocm5.6"
+
+ # No PyTorch for AMD on Windows (?)
+ elif is_windows() and selected_gpu == "AMD":
+ print("PyTorch setup on Windows is not implemented yet. Exiting...")
+ sys.exit(1)
+
+ # Find the Pytorch installation command
+ install_pytorch = f"python -m pip install torch=={TORCH_VERSION} torchvision=={TORCHVISION_VERSION} torchaudio=={TORCHAUDIO_VERSION} "
+
+ if selected_gpu == "NVIDIA":
+ if use_cuda118 == 'Y':
+ install_pytorch += "--index-url https://download.pytorch.org/whl/cu118"
else:
- print("AMD GPUs are only supported on Linux. Exiting...")
- sys.exit(1)
- elif is_linux() and selected_gpu in ["APPLE", "NONE"]:
+ install_pytorch += "--index-url https://download.pytorch.org/whl/cu121"
+ elif selected_gpu == "AMD":
+ install_pytorch += "--index-url https://download.pytorch.org/whl/rocm5.6"
+ elif selected_gpu in ["APPLE", "NONE"]:
install_pytorch += "--index-url https://download.pytorch.org/whl/cpu"
elif selected_gpu == "INTEL":
- install_pytorch = "python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/"
+ if is_linux():
+ install_pytorch = "python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/"
+ else:
+ install_pytorch = "python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/"
# Install Git and then Pytorch
print_big_message("Installing PyTorch.")
- run_cmd(f"{install_git} && {install_pytorch} && python -m pip install py-cpuinfo==9.0.0", assert_success=True, environment=True)
-
- # Install CUDA libraries (this wasn't necessary for Pytorch before...)
- if selected_gpu == "NVIDIA":
- print_big_message("Installing the CUDA runtime libraries.")
- run_cmd(f"conda install -y -c \"nvidia/label/{'cuda-12.1.1' if use_cuda118 == 'N' else 'cuda-11.8.0'}\" cuda-runtime", assert_success=True, environment=True)
+ run_cmd(f"conda install -y -k ninja git && {install_pytorch} && python -m pip install py-cpuinfo==9.0.0", assert_success=True, environment=True)
if selected_gpu == "INTEL":
# Install oneAPI dependencies via conda
@@ -255,47 +318,49 @@ def install_webui():
update_requirements(initial_installation=True)
-def update_requirements(initial_installation=False):
+def get_extensions_names():
+ return [foldername for foldername in os.listdir('extensions') if os.path.isfile(os.path.join('extensions', foldername, 'requirements.txt'))]
+
+
+def install_extensions_requirements():
+ print_big_message("Installing extensions requirements.\nSome of these may fail on Windows.\nDon\'t worry if you see error messages, as they will not affect the main program.")
+ extensions = get_extensions_names()
+ for i, extension in enumerate(extensions):
+ print(f"\n\n--- [{i+1}/{len(extensions)}]: {extension}\n\n")
+ extension_req_path = os.path.join("extensions", extension, "requirements.txt")
+ run_cmd(f"python -m pip install -r {extension_req_path} --upgrade", assert_success=False, environment=True)
+
+
+def update_requirements(initial_installation=False, pull=True):
# Create .git directory if missing
if not os.path.exists(os.path.join(script_dir, ".git")):
git_creation_cmd = 'git init -b main && git remote add origin https://github.com/oobabooga/text-generation-webui && git fetch && git symbolic-ref refs/remotes/origin/HEAD refs/remotes/origin/main && git reset --hard origin/main && git branch --set-upstream-to=origin/main'
run_cmd(git_creation_cmd, environment=True, assert_success=True)
- files_to_check = [
- 'start_linux.sh', 'start_macos.sh', 'start_windows.bat', 'start_wsl.bat',
- 'update_linux.sh', 'update_macos.sh', 'update_windows.bat', 'update_wsl.bat',
- 'one_click.py'
- ]
-
- before_pull_hashes = {file_name: calculate_file_hash(file_name) for file_name in files_to_check}
- run_cmd("git pull --autostash", assert_success=True, environment=True)
- after_pull_hashes = {file_name: calculate_file_hash(file_name) for file_name in files_to_check}
-
- # Check for differences in installation file hashes
- for file_name in files_to_check:
- if before_pull_hashes[file_name] != after_pull_hashes[file_name]:
- print_big_message(f"File '{file_name}' was updated during 'git pull'. Please run the script again.")
- exit(1)
-
- # Extensions requirements are installed only during the initial install by default.
- # That can be changed with the INSTALL_EXTENSIONS environment variable.
- install = initial_installation
- if "INSTALL_EXTENSIONS" in os.environ:
- install = os.environ["INSTALL_EXTENSIONS"].lower() in ("yes", "y", "true", "1", "t", "on")
-
- if install:
- print_big_message("Installing extensions requirements.")
- skip = ['superbooga', 'superboogav2', 'coqui_tts'] # Fail to install on Windows
- extensions = [foldername for foldername in os.listdir('extensions') if os.path.isfile(os.path.join('extensions', foldername, 'requirements.txt'))]
- extensions = [x for x in extensions if x not in skip]
- for i, extension in enumerate(extensions):
- print(f"\n\n--- [{i+1}/{len(extensions)}]: {extension}\n\n")
- extension_req_path = os.path.join("extensions", extension, "requirements.txt")
- run_cmd(f"python -m pip install -r {extension_req_path} --upgrade", assert_success=False, environment=True)
- elif initial_installation:
- print_big_message("Will not install extensions due to INSTALL_EXTENSIONS environment variable.")
-
- # Detect the Python and PyTorch versions
+ if pull:
+ print_big_message("Updating the local copy of the repository with \"git pull\"")
+
+ files_to_check = [
+ 'start_linux.sh', 'start_macos.sh', 'start_windows.bat', 'start_wsl.bat',
+ 'update_linux.sh', 'update_macos.sh', 'update_windows.bat', 'update_wsl.bat',
+ 'one_click.py'
+ ]
+
+ before_pull_hashes = {file_name: calculate_file_hash(file_name) for file_name in files_to_check}
+ run_cmd("git pull --autostash", assert_success=True, environment=True)
+ after_pull_hashes = {file_name: calculate_file_hash(file_name) for file_name in files_to_check}
+
+ # Check for differences in installation file hashes
+ for file_name in files_to_check:
+ if before_pull_hashes[file_name] != after_pull_hashes[file_name]:
+ print_big_message(f"File '{file_name}' was updated during 'git pull'. Please run the script again.")
+ exit(1)
+
+ # Update PyTorch
+ if not initial_installation:
+ update_pytorch()
+
+ # Detect the PyTorch version
torver = torch_version()
is_cuda = '+cu' in torver
is_cuda118 = '+cu118' in torver # 2.1.0+cu118
@@ -335,11 +400,6 @@ def update_requirements(initial_installation=False):
run_cmd(f"python -m pip uninstall -y {package_name}", environment=True)
print(f"Uninstalled {package_name}")
- # Make sure that API requirements are installed (temporary)
- extension_req_path = os.path.join("extensions", "openai", "requirements.txt")
- if os.path.exists(extension_req_path):
- run_cmd(f"python -m pip install -r {extension_req_path} --upgrade", environment=True)
-
# Install/update the project requirements
run_cmd("python -m pip install -r temp_requirements.txt --upgrade", assert_success=True, environment=True)
os.remove('temp_requirements.txt')
@@ -364,19 +424,49 @@ def launch_webui():
check_env()
parser = argparse.ArgumentParser(add_help=False)
- parser.add_argument('--update', action='store_true', help='Update the web UI.')
+ parser.add_argument('--update-wizard', action='store_true', help='Launch a menu with update options.')
args, _ = parser.parse_known_args()
- if args.update:
- update_requirements()
+ if args.update_wizard:
+ while True:
+ choice = get_user_choice(
+ "What would you like to do?",
+ {
+ 'A': 'Update the web UI',
+ 'B': 'Install/update extensions requirements',
+ 'C': 'Revert local changes to repository files with \"git reset --hard\"',
+ 'N': 'Nothing (exit)'
+ },
+ )
+
+ if choice == 'A':
+ update_requirements()
+ elif choice == 'B':
+ choices = {'A': 'All extensions'}
+ for i, name in enumerate(get_extensions_names()):
+ key = generate_alphabetic_sequence(i + 1)
+ choices[key] = name
+
+ choice = get_user_choice("What extension?", choices)
+
+ if choice == 'A':
+ install_extensions_requirements()
+ else:
+ extension_req_path = os.path.join("extensions", choices[choice], "requirements.txt")
+ run_cmd(f"python -m pip install -r {extension_req_path} --upgrade", assert_success=False, environment=True)
+
+ update_requirements(pull=False)
+ elif choice == 'C':
+ run_cmd("git reset --hard", assert_success=True, environment=True)
+ elif choice == 'N':
+ sys.exit()
else:
- # If webui has already been installed, skip and run
if not is_installed():
install_webui()
os.chdir(script_dir)
if os.environ.get("LAUNCH_AFTER_INSTALL", "").lower() in ("no", "n", "false", "0", "f", "off"):
- print_big_message("Install finished successfully and will now exit due to LAUNCH_AFTER_INSTALL.")
+ print_big_message("Will now exit due to LAUNCH_AFTER_INSTALL.")
sys.exit()
# Check if a model has been downloaded yet
@@ -388,7 +478,7 @@ def launch_webui():
model_dir = 'models'
if len([item for item in glob.glob(f'{model_dir}/*') if not item.endswith(('.txt', '.yaml'))]) == 0:
- print_big_message("WARNING: You haven't downloaded any model yet.\nOnce the web UI launches, head over to the \"Model\" tab and download one.")
+ print_big_message("You haven't downloaded any model yet.\nOnce the web UI launches, head over to the \"Model\" tab and download one.")
# Workaround for llama-cpp-python loading paths in CUDA env vars even if they do not exist
conda_path_bin = os.path.join(conda_env_path, "bin")
| I start the start_windows.bat and it doesn't work.
### Describe the bug
Downloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Windows-x86_64.exe to C:\Users\alias\Documents\obabo\text-generation-webui\installer_files\miniconda_installer.exe
Ya existe el subdirectorio o el archivo C:\Users\alias\Documents\obabo\text-generation-webui\installer_files. [Translation: The subdirectory or file C:\Users\alias\Documents\obabo\text-generation-webui\installer_files already exists.]
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - La función de revocación no puede comprobar la revocación para el certificado. [Translation: The revocation function was unable to check revocation for the certificate.]
Miniconda failed to download.
Presione una tecla para continuar . . . [Translation: Press any key to continue . . .]
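Error 0x80092012 is Schannel's "the revocation function was unable to check revocation for the certificate", i.e. curl's TLS layer could not complete the certificate-revocation check. A quick way to tell whether the machine can reach the download URL at all is a one-off fetch from Python, which does not perform revocation checks by default. This is a diagnostic sketch only, not part of the installer:

```python
# Diagnostic sketch only: try the same download without Schannel's revocation
# checking in the way. The URL is the one from the installer log above.
import urllib.request

url = ("https://repo.anaconda.com/miniconda/"
       "Miniconda3-py310_23.3.1-0-Windows-x86_64.exe")
urllib.request.urlretrieve(url, "miniconda_installer.exe")
print("download ok")
```

If this succeeds, the problem is likely a proxy, firewall, or antivirus interfering with the revocation lookup rather than general connectivity.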
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
I followed this tutorial on YouTube: https://www.youtube.com/watch?v=C-7jGYOGvy4&t=1028s
I tried to solve it with the help of ChatGPT, but it didn't work.
### Screenshot
_No response_
### Logs
```shell
Downloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Windows-x86_64.exe to C:\Users\alias\Documents\obabo\text-generation-webui\installer_files\miniconda_installer.exe
Ya existe el subdirectorio o el archivo C:\Users\alias\Documents\obabo\text-generation-webui\installer_files.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - La función de revocación no puede comprobar la revocación para el certificado.
Miniconda failed to download.
Presione una tecla para continuar . . .
```
### System Info
```shell
windows 11
only cpu
intel core i5
8 ram
```
| 2024-03-06T15:05:59 |
||
oobabooga/text-generation-webui | 5,722 | oobabooga__text-generation-webui-5722 | [
"5717"
] | 49b111e2dd2e5c4f1e6f2a25df38c3a1b1dbf4d7 | diff --git a/extensions/openai/typing.py b/extensions/openai/typing.py
--- a/extensions/openai/typing.py
+++ b/extensions/openai/typing.py
@@ -107,7 +107,7 @@ class ChatCompletionRequestParams(BaseModel):
context: str | None = Field(default=None, description="Overwrites the value set by character field.")
greeting: str | None = Field(default=None, description="Overwrites the value set by character field.")
user_name: str | None = Field(default=None, description="Your name (the user). By default, it's \"You\".", alias="name1")
- user_bio: str | None = Field(default=None, description="The user description/personality.")
+ user_bio: str | None = Field(default='', description="The user description/personality.")
chat_template_str: str | None = Field(default=None, description="Jinja2 template for chat.")
chat_instruct_command: str | None = None
| `user_bio` is None by default, get an error when replacing the character names
### Describe the bug
Using the API `/v1/chat/completions` (non-stream mode) without `user_bio`.
----
The new parameter `user_bio` in API chat mode raises an error because its default is `None`.
https://github.com/oobabooga/text-generation-webui/blob/7cf1402bde48fd76af501d5efecb34227bf4d082/extensions/openai/typing.py#L110
----
Then `chat.py` can't replace the names correctly.
https://github.com/oobabooga/text-generation-webui/blob/7cf1402bde48fd76af501d5efecb34227bf4d082/modules/chat.py#L97
We get this error (screenshot omitted; see the traceback under Logs below).
-----
In the web UI, the default is an empty string:
https://github.com/oobabooga/text-generation-webui/blob/7cf1402bde48fd76af501d5efecb34227bf4d082/modules/shared.py#L60
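A minimal reproduction of the failure mode (the helper below is illustrative, not the actual function in `chat.py`):

```python
# The name substitution assumes a string, so a None default blows up.
def substitute_names(text, name1, name2):
    return text.replace('{{user}}', name1).replace('{{char}}', name2)

substitute_names('', 'You', 'Assistant')    # '' (the web UI default) is a harmless no-op
substitute_names(None, 'You', 'Assistant')  # AttributeError: 'NoneType' object has no attribute 'replace'
```

Defaulting the API field to `''`, as the patch above does, makes the two entry points consistent.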
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Send any request to `/v1/chat/completions`.
### Screenshot
_No response_
### Logs
```shell
text = text.replace('{{user}}', name1).replace('{{char}}', name2)
AttributeError: 'NoneType' object has no attribute 'replace'
```
### System Info
```shell
None
```
| 2024-03-18T04:39:40 |
||
oobabooga/text-generation-webui | 5,794 | oobabooga__text-generation-webui-5794 | [
"5592"
] | 9ab7365b5637f0474dec1b655680e3eda9c31c24 | diff --git a/modules/ui.py b/modules/ui.py
--- a/modules/ui.py
+++ b/modules/ui.py
@@ -233,14 +233,16 @@ def save_settings(state, preset, extensions_list, show_controls, theme_state):
# Save extension values in the UI
for extension_name in extensions_list:
- extension = getattr(extensions, extension_name).script
- if hasattr(extension, 'params'):
- params = getattr(extension, 'params')
- for param in params:
- _id = f"{extension_name}-{param}"
- # Only save if different from default value
- if param not in shared.default_settings or params[param] != shared.default_settings[param]:
- output[_id] = params[param]
+ extension = getattr(extensions, extension_name, None)
+ if extension:
+ extension = extension.script
+ if hasattr(extension, 'params'):
+ params = getattr(extension, 'params')
+ for param in params:
+ _id = f"{extension_name}-{param}"
+ # Only save if different from default value
+ if param not in shared.default_settings or params[param] != shared.default_settings[param]:
+ output[_id] = params[param]
# Do not save unchanged settings
for key in list(output.keys()):
| Save UI defaults to settings.yaml Not working
### Describe the bug
When I activate a few options and use "Save UI defaults to settings.yaml", it saves an empty settings.yaml.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
I check a few options in the UI (screenshot omitted)
and press Save UI defaults to settings.yaml
Then I close the terminal and start again, and it goes back as if I had not set those options.
On top of that, when I check settings.yaml it is a completely empty file.
### Screenshot
_No response_
### Logs
```shell
it does not show any log
```
### System Info
```shell
Windows 11
I9 13900
Nvidia 4090
128GB RAM
```
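The patch above adds a `getattr(..., None)` guard around the per-extension lookup, presumably because an extension can appear in the UI list without its module having been imported, in which case the save errors out partway through. A minimal illustration of the difference (the extension name is schematic):

```python
class extensions:   # stand-in for the real modules.extensions namespace
    pass            # the 'gallery' attribute was never set

name = "gallery"
# getattr(extensions, name)        # AttributeError -> the save routine bails out
ext = getattr(extensions, name, None)
if ext is None:
    print(f"skipping extension '{name}': not loaded")
```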
| I have the same problem.
When you clicked "Save UI defaults to settings.yaml", did it show an "ERROR", and did the second input tab show "./"?
I think that is the source of the problem. | 2024-04-02T16:33:32 |
|
oobabooga/text-generation-webui | 5,900 | oobabooga__text-generation-webui-5900 | [
"5865"
] | 0877741b0350d200be7f1e6cca2780a25ee29cd0 | diff --git a/modules/grammar/grammar_utils.py b/modules/grammar/grammar_utils.py
--- a/modules/grammar/grammar_utils.py
+++ b/modules/grammar/grammar_utils.py
@@ -60,7 +60,7 @@ def hex_to_int(c):
return int(c)
elif "a" <= c.lower() <= "f":
return ord(c.lower()) - ord("a") + 10
- return -1
+ raise RuntimeError("unknown hex char " + c)
def remove_leading_white_space(src, newline_ok):
@@ -100,6 +100,13 @@ def parse_name(src):
return src[:pos], src[pos:]
+def read_hex(s):
+ val = 0
+ for c in s:
+ val = (val << 4) + hex_to_int(c)
+ return chr(val)
+
+
def parse_char(src):
"""
parse the leading char from the input string
@@ -111,13 +118,12 @@ def parse_char(src):
if src[0] == "\\":
esc = src[1]
if esc == "x":
- first = hex_to_int(src[2])
- if first > -1:
- second = hex_to_int(src[3])
- if second > -1:
- return (first << 4) + second, src[4:]
- raise RuntimeError("expecting \\xNN at " + src)
- elif esc in ('"', "[", "]"):
+ return read_hex(src[2:4]), src[4:]
+ elif esc == "u":
+ return read_hex(src[2:6]), src[6:]
+ elif esc == "U":
+ return read_hex(src[2:10]), src[10:]
+ elif esc in ('"', "[", "]", "\\", "-"):
return esc, src[2:]
elif esc == "r":
return "\r", src[2:]
@@ -454,7 +460,8 @@ class IncrementalGrammarConstraint(GrammarConstraint):
def __init__(self, grammar_str, start_rule_name, tokenizer):
super().__init__(grammar_str, start_rule_name, tokenizer)
- def accept_char(self, byte, stacks):
+ def accept_char(self, char, stacks):
+ byte = ord(char)
new_stacks = []
for stack in stacks:
# stack is empty
@@ -471,6 +478,9 @@ def accept_char(self, byte, stacks):
if self.grammar_encoding[pos + i] <= byte and byte <= self.grammar_encoding[pos + i + 1]:
found = True
break
+ if self.grammar_encoding[pos + i] >= byte and byte >= self.grammar_encoding[pos + i + 1]:
+ found = True
+ break
if not found:
continue
@@ -483,9 +493,8 @@ def accept_char(self, byte, stacks):
return new_stacks
def accept_string(self, string: str, stacks: List[List[int]]):
- _bytes = bytes(string, "utf-8")
- for byte in _bytes:
- stacks = self.accept_char(byte, stacks)
+ for char in string:
+ stacks = self.accept_char(char, stacks)
return stacks
def accept_token_id(self, token_id: int, stacks: List[List[int]]):
@@ -537,16 +546,18 @@ def filter_vocab(self, stacks, device):
# For each sub-rule in the grammar, cache whether each byte is accepted.
@lru_cache(maxsize=None)
- def pos_char_acceptance(self, pos):
- acceptance = [False] * 256
+ def pos_char_acceptance(self, pos, char):
+ byte = ord(char)
num_chars = self.grammar_encoding[pos]
pos += 1
for i in range(0, num_chars, 2):
start = self.grammar_encoding[pos + i]
end = self.grammar_encoding[pos + i + 1]
- for j in range(start, end + 1):
- acceptance[j] = True
- return acceptance
+ if byte >= start and byte <= end:
+ return True
+ if byte <= start and byte >= end:
+ return True
+ return False
# Probably this should be configurable. If the grammar has an exceedingly
# large number of states, the correct setting is a tradeoff between GPU
@@ -580,7 +591,7 @@ def traverse_trie(trie, stacks):
pos = stk[-1]
num_chars = self.grammar_encoding[pos]
- if not self.pos_char_acceptance(pos)[byte]:
+ if not self.pos_char_acceptance(pos, byte):
continue
pos += num_chars + 1
@@ -657,14 +668,14 @@ def fmt_token(id):
token = tokenizer.convert_ids_to_tokens(id)
token = re.sub(r"<0x([0-9a-fA-F]{2})>", replace_hex, token)
token = token.replace("▁", " ")
- return bytes(token, "utf-8")
+ return token
else:
print("Warning: unrecognized tokenizer: using default token formatting")
def fmt_token(id):
token = tokenizer.convert_ids_to_tokens(id)
- return bytes(token, "utf-8")
+ return token
# note: vocab_size doesn't work here because there are also
# get_added_vocab() tokens
| Let grammar escape backslashes
## Checklist:
- [X] I have read the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines).
I was trying to develop a grammar string to respond in this format:
(path)"\\"(name).safetensors [(hash)]
Here is what I tried...
```
grammar_string: |-
root ::= path? name ".safetensors" " [" hash "]"
path ::= (folder "\\")*
folder ::= [a-zA-Z0-9_]+
name ::= [a-zA-Z0-9_]+
hash ::= HEX HEX HEX HEX HEX HEX HEX HEX HEX HEX
HEX ::= [0-9a-fA-F]
```
`error: unknown escape`
I then tried this:
` path ::= (folder "\")*`
`error: out of range`
Also this:
` path ::= (folder "\\\\")*`
`error: unknown escape`
It turns out the parser was simply incapable of handling an escaped backslash.
Simply adding this condition to the parser code resolves it:
```
elif esc == "\\":
return "\\", src[2:]
```
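A standalone sketch of the escape handling after the fix (it mirrors the patch above rather than the actual `grammar_utils` module), showing that `\\` in a grammar literal now parses to a single backslash:

```python
def parse_char(src):
    # simplified: handle only a few escapes, like the real parser's leading branch
    if src[0] == "\\":
        esc = src[1]
        if esc in ('"', "[", "]", "\\", "-"):
            return esc, src[2:]
        elif esc == "r":
            return "\r", src[2:]
        raise RuntimeError("unknown escape " + esc)
    return src[0], src[1:]

assert parse_char('\\\\rest') == ("\\", "rest")   # grammar text \\ -> literal backslash
assert parse_char('\\"rest') == ('"', "rest")     # grammar text \" -> literal quote
```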
| 2024-04-21T16:08:48 |
||
oobabooga/text-generation-webui | 6,203 | oobabooga__text-generation-webui-6203 | [
"6201"
] | 3315d0065100c1c349d510d33679370b29a45385 | diff --git a/modules/chat.py b/modules/chat.py
--- a/modules/chat.py
+++ b/modules/chat.py
@@ -577,7 +577,7 @@ def find_all_histories_with_first_prompts(state):
data = json.load(f)
first_prompt = ""
- if 'visible' in data and len(data['visible']) > 0:
+ if data and 'visible' in data and len(data['visible']) > 0:
if data['internal'][0][0] == '<|BEGIN-VISIBLE-CHAT|>':
if len(data['visible']) > 1:
first_prompt = html.unescape(data['visible'][1][0])
diff --git a/modules/llama_cpp_python_hijack.py b/modules/llama_cpp_python_hijack.py
--- a/modules/llama_cpp_python_hijack.py
+++ b/modules/llama_cpp_python_hijack.py
@@ -100,9 +100,11 @@ def eval_with_progress(self, tokens: Sequence[int]):
def monkey_patch_llama_cpp_python(lib):
+ if getattr(lib.Llama, '_is_patched', False):
+ # If the patch is already applied, do nothing
+ return
def my_generate(self, *args, **kwargs):
-
if shared.args.streaming_llm:
new_sequence = args[0]
past_sequence = self._input_ids
@@ -116,3 +118,6 @@ def my_generate(self, *args, **kwargs):
lib.Llama.eval = eval_with_progress
lib.Llama.original_generate = lib.Llama.generate
lib.Llama.generate = my_generate
+
+ # Set the flag to indicate that the patch has been applied
+ lib.Llama._is_patched = True
| monkey_patch_llama_cpp_python is not needed
### Describe the bug
A bit busy to do a pull request.
The current (v1.9) version does not work with llama.cpp; I solved it by disabling lines 54-55 in llama_cpp_python_hijack.py.
```python
#if return_lib is not None:
#monkey_patch_llama_cpp_python(return_lib)
```
I'm guessing that the latest llama-cpp-python update turned it into a generator, so we no longer need to patch one in.
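For reference, the fix that landed (see the patch above) keeps the hijack but makes it idempotent. Distilled into a generic sketch (the wrapper body is elided), re-running the patch would otherwise capture the wrapper itself as `original_generate`:

```python
def monkey_patch_llama_once(llama_cls):
    # Applying the hijack twice must be a no-op, otherwise original_generate
    # ends up pointing at the wrapper instead of the real implementation.
    if getattr(llama_cls, '_is_patched', False):
        return

    original_generate = llama_cls.generate

    def my_generate(self, *args, **kwargs):
        # ...extra bookkeeping (e.g. streaming-LLM prefix handling) goes here...
        return original_generate(self, *args, **kwargs)

    llama_cls.original_generate = original_generate
    llama_cls.generate = my_generate
    llama_cls._is_patched = True
```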
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
```python
#if return_lib is not None:
#monkey_patch_llama_cpp_python(return_lib)
```
### Screenshot
_No response_
### Logs
```shell
included
```
### System Info
```shell
Windows python==3.11
```
| 2024-07-05T10:37:12 |
||
Showndarya/Hacktoberfest | 435 | Showndarya__Hacktoberfest-435 | [
"434"
] | 34c250833bec45885d85b6a3edb9b771b1ba9316 | diff --git a/.travis.py b/.travis.py
--- a/.travis.py
+++ b/.travis.py
@@ -19,13 +19,13 @@
if re.search(r"\.json$", changed_file):
changed_files_json.append(changed_file)
-
+
# Iterate over list of changed JSON files.
for changed_file_json in changed_files_json:
print(f"Checking file {changed_file_json}...")
there_was_an_error = False
- if not changed_file_json[0].isupper():
+ if not os.path.basename(changed_file_json)[0].isupper():
there_was_an_error = True
print("🔥 File name not capitalized.")
| Travis test ignore first letter of filename for some reason
I'll try and figure out why. I thought about simply renaming every file in the Travis script, but that requires a lot of work and overhead for little gain. It is certainly doable: you have to configure git on the Travis instance, make a new commit, etc.
Might as well have a cron job or something do it recursively and periodically over the entirety of the repo and make a single commit...
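The one-line patch above boils down to this distinction (the example file name is hypothetical):

```python
import os

path = "A/axolotl.json"                       # lowercase file inside the "A" folder
print(path[0].isupper())                      # True  -> old check looked at the folder letter
print(os.path.basename(path)[0].isupper())    # False -> patched check looks at the file name
```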
| Oh yeah, easy catch: I'm checking whether the first letter of `A/Axxxxx` is a capital instead of checking the actual filename. | 2018-10-08T17:55:12 |
|
Showndarya/Hacktoberfest | 545 | Showndarya__Hacktoberfest-545 | [
"443"
] | 546d2c4bbf43427d0875939f4a1bc9c26be892fc | diff --git a/.travis.py b/.travis.py
--- a/.travis.py
+++ b/.travis.py
@@ -34,7 +34,7 @@
file_content = json.loads(data_file.read())
except json.decoder.JSONDecodeError:
there_was_an_error = True
- print("🔥 JSON could not be parsed.")
+ print(f"🔥 JSON could not be parsed. Follow this link to know more : https://jsonlint.com/?json={data_file.read()}")
if 'word' not in file_content:
there_was_an_error = True
| Make JSON linting more verbose ?
Currently we simply check :
1. json is valid
2. json contains the keys we want
3. those keys are not empty
the problem is with step 1, it's not very helpful if people have no idea what the JSON spec is.
One fix is :
1. add a link to jsonlint.com or such a service in the print of the error so that people can check themselves.
2. add something like http://deron.meranda.us/python/demjson/ with proper package caching on travis side, but it will make for slower builds still probably
| Might also want to look into http://www.fixjson.com
I tried it on https://github.com/Showndarya/Hacktoberfest/pull/442/files and it would have added the missing commas | 2018-10-09T14:48:19 |
|
feast-dev/feast | 135 | feast-dev__feast-135 | [
"134"
] | 5a6edc4f17c68d1de5781460cd18940a6bf6750c | diff --git a/sdk/python/feast/sdk/client.py b/sdk/python/feast/sdk/client.py
--- a/sdk/python/feast/sdk/client.py
+++ b/sdk/python/feast/sdk/client.py
@@ -166,7 +166,7 @@ def run(self, importer, name_override=None,
self._apply_entity(importer.entity)
if apply_features:
for feature in importer.features:
- self._apply_feature(feature)
+ self._apply_feature(importer.features[feature])
if importer.require_staging:
print("Staging file to remote path {}"
| Python SDK fails to apply feature when submitting job
## Expected Behavior
When using the Python SDK to start an import job, the `run` method should be able to `apply` features within the import.
## Current Behavior
The run method returns an error:
```
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/xxx/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/183.5429.31/helpers/pydev/pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/xxx/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/183.5429.31/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/xxx/projects/feast/sdk/python/run.py", line 98, in <module>
fs.run(importer, apply_features=True, apply_entity=True)
File "/home/xxx/projects/feast/sdk/python/feast/sdk/client.py", line 169, in run
self._apply_feature(feature)
File "/home/xxx/projects/feast/sdk/python/feast/sdk/client.py", line 392, in _apply_feature
response = self._core_service_stub.ApplyFeature(feature.spec)
AttributeError: 'str' object has no attribute 'spec'
```
## Steps to reproduce
Running the following code from the `0.0.1` Python SDK quickstart.
```
fs = Client(core_url=FEAST_CORE_URL, verbose=True)
serving_ds=Datastore(id='REDIS1')
warehouse_ds=Datastore(id='BIGQUERY1')
# Create importer
importer = Importer.from_df(df_complete,
entity='ride',
granularity=Granularity.NONE,
owner='[email protected]',
staging_location=STAGING_LOCATION,
id_column='ride',
timestamp_column='pickup_datetime',
serving_store=serving_ds,
warehouse_store=warehouse_ds)
# Update feature and entity metadata. Ideally you want to update these manually
# so that they contain adequate information for the next user
importer.entity.description = 'nyc taxi dataset'
for feature_id in importer.features:
importer.features[feature_id].description = 'nyc taxi dataset'
# Ingest the feature data into the store
fs.run(importer, apply_features=True, apply_entity=True)
```
### Specifications
- Version: 0.0.1
- Platform: Ubuntu 18.04
- Subsystem:
## Possible Solution
| 2019-02-17T04:26:36 |
||
feast-dev/feast | 143 | feast-dev__feast-143 | [
"133"
] | 294a6ecd777c1f0cdab22171f7bc06cf7ca989d7 | diff --git a/sdk/python/feast/sdk/importer.py b/sdk/python/feast/sdk/importer.py
--- a/sdk/python/feast/sdk/importer.py
+++ b/sdk/python/feast/sdk/importer.py
@@ -106,7 +106,8 @@ def from_csv(cls, path, entity, granularity, owner, staging_location=None,
Returns:
Importer: the importer for the dataset provided.
"""
- source_options = {"format": "csv"}
+ src_type = "file.csv"
+ source_options = {}
source_options["path"], require_staging = \
_get_remote_location(path, staging_location)
if is_gs_path(path):
@@ -118,9 +119,10 @@ def from_csv(cls, path, entity, granularity, owner, staging_location=None,
feature_columns, timestamp_column,
timestamp_value, serving_store,
warehouse_store, df)
- iport_spec = _create_import("file", source_options, job_options, entity, schema)
+ iport_spec = _create_import(src_type, source_options, job_options,
+ entity, schema)
- props = (_properties("csv", len(df.index), require_staging,
+ props = (_properties(src_type, len(df.index), require_staging,
source_options["path"]))
specs = _specs(iport_spec, Entity(name=entity), features)
| diff --git a/sdk/python/tests/sdk/test_importer.py b/sdk/python/tests/sdk/test_importer.py
--- a/sdk/python/tests/sdk/test_importer.py
+++ b/sdk/python/tests/sdk/test_importer.py
@@ -15,7 +15,8 @@
import pandas as pd
import pytest
import ntpath
-from feast.sdk.resources.feature import Feature, Granularity, ValueType, Datastore
+from feast.sdk.resources.feature import Feature, Granularity, ValueType, \
+ Datastore
from feast.sdk.importer import _create_feature, Importer
from feast.sdk.utils.gs_utils import is_gs_path
from feast.types.Granularity_pb2 import Granularity as Granularity_pb2
@@ -30,56 +31,60 @@ def test_from_csv(self):
staging_location = "gs://test-bucket"
id_column = "driver_id"
feature_columns = ["avg_distance_completed",
- "avg_customer_distance_completed"]
+ "avg_customer_distance_completed"]
timestamp_column = "ts"
- importer = Importer.from_csv(path = csv_path,
- entity = entity_name,
- granularity = feature_granularity,
- owner = owner,
- staging_location=staging_location,
- id_column = id_column,
- feature_columns=feature_columns,
- timestamp_column=timestamp_column)
+ importer = Importer.from_csv(path=csv_path,
+ entity=entity_name,
+ granularity=feature_granularity,
+ owner=owner,
+ staging_location=staging_location,
+ id_column=id_column,
+ feature_columns=feature_columns,
+ timestamp_column=timestamp_column)
self._validate_csv_importer(importer, csv_path, entity_name,
- feature_granularity, owner, staging_location, id_column,
- feature_columns, timestamp_column)
+ feature_granularity, owner,
+ staging_location, id_column,
+ feature_columns, timestamp_column)
def test_from_csv_id_column_not_specified(self):
with pytest.raises(ValueError,
- match="Column with name driver is not found") as e_info:
+ match="Column with name driver is not found"):
feature_columns = ["avg_distance_completed",
- "avg_customer_distance_completed"]
+ "avg_customer_distance_completed"]
csv_path = "tests/data/driver_features.csv"
- importer = Importer.from_csv(path = csv_path,
- entity = "driver",
- granularity = Granularity.DAY,
- owner = "[email protected]",
- staging_location="gs://test-bucket",
- feature_columns=feature_columns,
- timestamp_column="ts")
+ Importer.from_csv(path=csv_path,
+ entity="driver",
+ granularity=Granularity.DAY,
+ owner="[email protected]",
+ staging_location="gs://test-bucket",
+ feature_columns=feature_columns,
+ timestamp_column="ts")
def test_from_csv_timestamp_column_not_specified(self):
feature_columns = ["avg_distance_completed",
- "avg_customer_distance_completed", "avg_distance_cancelled"]
+ "avg_customer_distance_completed",
+ "avg_distance_cancelled"]
csv_path = "tests/data/driver_features.csv"
entity_name = "driver"
granularity = Granularity.DAY
owner = "[email protected]"
staging_location = "gs://test-bucket"
id_column = "driver_id"
- importer = Importer.from_csv(path = csv_path,
- entity = entity_name,
- granularity = granularity,
- owner = owner,
- staging_location=staging_location,
- id_column = id_column,
- feature_columns= feature_columns)
+ importer = Importer.from_csv(path=csv_path,
+ entity=entity_name,
+ granularity=granularity,
+ owner=owner,
+ staging_location=staging_location,
+ id_column=id_column,
+ feature_columns=feature_columns)
self._validate_csv_importer(importer, csv_path, entity_name,
- granularity, owner, staging_location = staging_location,
- id_column=id_column, feature_columns=feature_columns)
+ granularity, owner,
+ staging_location=staging_location,
+ id_column=id_column,
+ feature_columns=feature_columns)
def test_from_csv_feature_columns_not_specified(self):
csv_path = "tests/data/driver_features.csv"
@@ -89,43 +94,45 @@ def test_from_csv_feature_columns_not_specified(self):
staging_location = "gs://test-bucket"
id_column = "driver_id"
timestamp_column = "ts"
- importer = Importer.from_csv(path = csv_path,
- entity = entity_name,
- granularity = granularity,
- owner = owner,
- staging_location=staging_location,
- id_column = id_column,
- timestamp_column=timestamp_column)
+ importer = Importer.from_csv(path=csv_path,
+ entity=entity_name,
+ granularity=granularity,
+ owner=owner,
+ staging_location=staging_location,
+ id_column=id_column,
+ timestamp_column=timestamp_column)
self._validate_csv_importer(importer, csv_path, entity_name,
- granularity, owner, staging_location = staging_location,
- id_column=id_column, timestamp_column=timestamp_column)
+ granularity, owner,
+ staging_location=staging_location,
+ id_column=id_column,
+ timestamp_column=timestamp_column)
def test_from_csv_staging_location_not_specified(self):
with pytest.raises(ValueError,
- match="Specify staging_location for importing local file/dataframe") as e_info:
+ match="Specify staging_location for importing local file/dataframe"):
feature_columns = ["avg_distance_completed",
- "avg_customer_distance_completed"]
+ "avg_customer_distance_completed"]
csv_path = "tests/data/driver_features.csv"
- importer = Importer.from_csv(path = csv_path,
- entity = "driver",
- granularity = Granularity.DAY,
- owner = "[email protected]",
- feature_columns=feature_columns,
- timestamp_column="ts")
+ Importer.from_csv(path=csv_path,
+ entity="driver",
+ granularity=Granularity.DAY,
+ owner="[email protected]",
+ feature_columns=feature_columns,
+ timestamp_column="ts")
with pytest.raises(ValueError,
- match="Staging location must be in GCS") as e_info:
+ match="Staging location must be in GCS") as e_info:
feature_columns = ["avg_distance_completed",
- "avg_customer_distance_completed"]
+ "avg_customer_distance_completed"]
csv_path = "tests/data/driver_features.csv"
- importer = Importer.from_csv(path = csv_path,
- entity = "driver",
- granularity = Granularity.DAY,
- owner = "[email protected]",
- staging_location = "/home",
- feature_columns=feature_columns,
- timestamp_column="ts")
+ Importer.from_csv(path=csv_path,
+ entity="driver",
+ granularity=Granularity.DAY,
+ owner="[email protected]",
+ staging_location="/home",
+ feature_columns=feature_columns,
+ timestamp_column="ts")
def test_from_df(self):
csv_path = "tests/data/driver_features.csv"
@@ -133,59 +140,63 @@ def test_from_df(self):
staging_location = "gs://test-bucket"
entity = "driver"
- importer = Importer.from_df(df = df,
- entity = entity,
- granularity = Granularity.DAY,
- owner = "[email protected]",
- staging_location=staging_location,
- id_column = "driver_id",
- timestamp_column="ts")
-
+ importer = Importer.from_df(df=df,
+ entity=entity,
+ granularity=Granularity.DAY,
+ owner="[email protected]",
+ staging_location=staging_location,
+ id_column="driver_id",
+ timestamp_column="ts")
assert importer.require_staging == True
assert ("{}/tmp_{}".format(staging_location, entity)
- in importer.remote_path)
+ in importer.remote_path)
for feature in importer.features.values():
assert feature.name in df.columns
assert feature.id == "driver.day." + feature.name
import_spec = importer.spec
assert import_spec.type == "file"
- assert import_spec.sourceOptions == {"format" : "csv", "path" : importer.remote_path}
+ assert import_spec.sourceOptions == {"format": "csv",
+ "path": importer.remote_path}
assert import_spec.entities == ["driver"]
schema = import_spec.schema
assert schema.entityIdColumn == "driver_id"
assert schema.timestampValue is not None
feature_columns = ["completed", "avg_distance_completed",
- "avg_customer_distance_completed",
- "avg_distance_cancelled"]
+ "avg_customer_distance_completed",
+ "avg_distance_cancelled"]
for col, field in zip(df.columns.values, schema.fields):
assert col == field.name
if col in feature_columns:
assert field.featureId == "driver.day." + col
def _validate_csv_importer(self,
- importer, csv_path, entity_name, feature_granularity, owner,
- staging_location = None, id_column = None, feature_columns = None,
- timestamp_column = None, timestamp_value = None):
+ importer, csv_path, entity_name,
+ feature_granularity, owner,
+ staging_location=None, id_column=None,
+ feature_columns=None,
+ timestamp_column=None, timestamp_value=None):
df = pd.read_csv(csv_path)
assert not importer.require_staging == is_gs_path(csv_path)
if importer.require_staging:
assert importer.remote_path == "{}/{}".format(staging_location,
- ntpath.basename(csv_path))
+ ntpath.basename(
+ csv_path))
# check features created
for feature in importer.features.values():
assert feature.name in df.columns
assert feature.id == "{}.{}.{}".format(entity_name,
- Granularity_pb2.Enum.Name(feature_granularity.value).lower(),
- feature.name)
+ Granularity_pb2.Enum.Name(
+ feature_granularity.value).lower(),
+ feature.name)
import_spec = importer.spec
- assert import_spec.type == "file"
+ assert import_spec.type == "file.csv"
path = importer.remote_path if importer.require_staging else csv_path
- assert import_spec.sourceOptions == {"format" : "csv", "path" : path}
+ assert import_spec.sourceOptions == {"path": path}
assert import_spec.entities == [entity_name]
schema = import_spec.schema
@@ -204,19 +215,23 @@ def _validate_csv_importer(self,
for col, field in zip(df.columns.values, schema.fields):
assert col == field.name
if col in feature_columns:
- assert field.featureId == "{}.{}.{}".format(entity_name,
- Granularity_pb2.Enum.Name(feature_granularity.value).lower(), col)
+ assert field.featureId == \
+ "{}.{}.{}".format(entity_name,
+ Granularity_pb2.Enum.Name(
+ feature_granularity.value).lower(),
+ col)
class TestHelpers:
def test_create_feature(self):
- col = pd.Series([1]*3,dtype='int32',name="test")
+ col = pd.Series([1] * 3, dtype='int32', name="test")
expected = Feature(name="test",
- entity="test",
- granularity=Granularity.NONE,
- owner="person",
- value_type=ValueType.INT32)
- actual = _create_feature(col, "test", Granularity.NONE, "person", None, None)
+ entity="test",
+ granularity=Granularity.NONE,
+ owner="person",
+ value_type=ValueType.INT32)
+ actual = _create_feature(col, "test", Granularity.NONE, "person", None,
+ None)
assert actual.id == expected.id
assert actual.value_type == expected.value_type
assert actual.owner == expected.owner
@@ -231,7 +246,8 @@ def test_create_feature_with_stores(self):
serving_store=Datastore(id="SERVING"),
warehouse_store=Datastore(id="WAREHOUSE"))
actual = _create_feature(col, "test", Granularity.NONE, "person",
- Datastore(id="SERVING"), Datastore(id="WAREHOUSE"))
+ Datastore(id="SERVING"),
+ Datastore(id="WAREHOUSE"))
assert actual.id == expected.id
assert actual.value_type == expected.value_type
assert actual.owner == expected.owner
| Default dump format should be changed for Python SDK
## Expected Behavior
When using the Python SDK to dump an "Importer" to a YAML file, the type should be "file.csv"
## Current Behavior
Using the dump method results in the importer's type being saved as "file", which fails validation when using the YAML file for importing from the CLI.
## Steps to reproduce
Follow the current Quickstart [here](https://github.com/gojek/feast/blob/master/sdk/python/examples/quickstart/Quickstart.ipynb)
Dump the importer after it is defined.
```
importer.dump('out.yaml')
```
### Specifications
- Version: 0.0.1
- Platform: Ubuntu 18.04
- Subsystem:
## Possible Solution
This default probably needs to be changed:
https://github.com/gojek/feast/blob/master/sdk/python/feast/sdk/importer.py#L121
| Not sure about intended behavior, which is why I created the issue. | 2019-02-23T15:30:16 |
feast-dev/feast | 208 | feast-dev__feast-208 | [
"201"
] | d5000015f7313b8778d091138899cd6ddf9fd222 | diff --git a/sdk/python/feast/sdk/client.py b/sdk/python/feast/sdk/client.py
--- a/sdk/python/feast/sdk/client.py
+++ b/sdk/python/feast/sdk/client.py
@@ -266,7 +266,7 @@ def download_dataset(
str: path to the downloaded file
"""
return self._table_downloader.download_table_as_file(
- dataset_info.table_id, dest, staging_location, file_type
+ dataset_info.full_table_id, dest, staging_location, file_type
)
def download_dataset_to_df(self, dataset_info, staging_location):
@@ -282,7 +282,7 @@ def download_dataset_to_df(self, dataset_info, staging_location):
"""
return self._table_downloader.download_table_as_df(
- dataset_info.table_id, staging_location
+ dataset_info.full_table_id, staging_location
)
def close(self):
diff --git a/sdk/python/feast/sdk/resources/feature_set.py b/sdk/python/feast/sdk/resources/feature_set.py
--- a/sdk/python/feast/sdk/resources/feature_set.py
+++ b/sdk/python/feast/sdk/resources/feature_set.py
@@ -66,16 +66,16 @@ class FileType(object):
class DatasetInfo:
- def __init__(self, name, table_id):
+ def __init__(self, name, full_table_id):
"""
Create instance of DatasetInfo with a BigQuery table as its
backing store.
Args:
name: (str) dataset name
- table_id: (str) fully qualified table id
+ full_table_id: (str) fully qualified table id
"""
self._name = name
- self._table_id = table_id
+ self._full_table_id = full_table_id
@property
def name(self):
@@ -87,10 +87,10 @@ def name(self):
return self._name
@property
- def table_id(self):
+ def full_table_id(self):
"""
Returns: fully qualified table id
"""
- return self._table_id
+ return self._full_table_id
diff --git a/sdk/python/feast/sdk/utils/bq_util.py b/sdk/python/feast/sdk/utils/bq_util.py
--- a/sdk/python/feast/sdk/utils/bq_util.py
+++ b/sdk/python/feast/sdk/utils/bq_util.py
@@ -197,11 +197,11 @@ def bq(self):
self._bq = BQClient()
return self._bq
- def download_table_as_file(self, table_id, dest, staging_location, file_type):
+ def download_table_as_file(self, full_table_id, dest, staging_location, file_type):
"""
Download a bigquery table as file
Args:
- table_id (str): fully qualified BigQuery table id
+ full_table_id (str): fully qualified BigQuery table id
dest (str): destination filename
staging_location (str): url to staging_location (currently
support a folder in GCS)
@@ -218,7 +218,7 @@ def download_table_as_file(self, table_id, dest, staging_location, file_type):
job_config = ExtractJobConfig()
job_config.destination_format = file_type
- src_table = Table.from_string(table_id)
+ src_table = Table.from_string(full_table_id)
job = self.bq.extract_table(src_table, staging_file_path, job_config=job_config)
# await completion
@@ -230,11 +230,11 @@ def download_table_as_file(self, table_id, dest, staging_location, file_type):
blob.download_to_filename(dest)
return dest
- def download_table_as_df(self, table_id, staging_location):
+ def download_table_as_df(self, full_table_id, staging_location):
"""
Download a BigQuery table as Pandas Dataframe
Args:
- table_id (src) : fully qualified BigQuery table id
+ full_table_id (src) : fully qualified BigQuery table id
staging_location: url to staging_location (currently
support a folder in GCS)
@@ -250,7 +250,7 @@ def download_table_as_df(self, table_id, staging_location):
job_config = ExtractJobConfig()
job_config.destination_format = DestinationFormat.CSV
job = self.bq.extract_table(
- Table.from_string(table_id), staging_file_path, job_config=job_config
+ Table.from_string(full_table_id), staging_file_path, job_config=job_config
)
# await completion
| diff --git a/core/src/test/java/feast/core/training/BigQueryTraningDatasetCreatorTest.java b/core/src/test/java/feast/core/training/BigQueryTraningDatasetCreatorTest.java
--- a/core/src/test/java/feast/core/training/BigQueryTraningDatasetCreatorTest.java
+++ b/core/src/test/java/feast/core/training/BigQueryTraningDatasetCreatorTest.java
@@ -16,13 +16,6 @@
*/
package feast.core.training;
-import static org.hamcrest.Matchers.equalTo;
-import static org.junit.Assert.assertThat;
-import static org.mockito.ArgumentMatchers.any;
-import static org.mockito.ArgumentMatchers.anyLong;
-import static org.mockito.Mockito.verify;
-import static org.mockito.Mockito.when;
-
import com.google.cloud.bigquery.BigQuery;
import com.google.protobuf.Timestamp;
import com.google.protobuf.util.Timestamps;
@@ -30,7 +23,6 @@
import feast.core.DatasetServiceProto.FeatureSet;
import feast.core.storage.BigQueryStorageManager;
import feast.specs.StorageSpecProto.StorageSpec;
-import java.time.Clock;
import java.time.Instant;
import java.util.Arrays;
import org.junit.Before;
@@ -38,6 +30,21 @@
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
+import static org.hamcrest.Matchers.equalTo;
+import static org.junit.Assert.assertThat;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyLong;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+// TODO: Should consider testing with "actual" BigQuery vs mocking it
+// because the mocked BigQuery client is very basic and may miss important functionalities
+// such as an actual table / dataset is actually created
+// In the test method, should probably add a condition so that tests can be skipped if
+// the user running the tests do not have permission to manage BigQuery (although ideally they should have)
+// Example of adding the condition whether or not to accept the test result as valid:
+// https://stackoverflow.com/questions/1689242/conditionally-ignoring-tests-in-junit-4
+
public class BigQueryTraningDatasetCreatorTest {
public static final String projectId = "the-project";
@@ -48,11 +55,9 @@ public class BigQueryTraningDatasetCreatorTest {
private BigQueryDatasetTemplater templater;
@Mock
private BigQuery bq;
- @Mock
- private Clock clock;
@Before
- public void setUp() throws Exception {
+ public void setUp() {
MockitoAnnotations.initMocks(this);
when(templater.getStorageSpec()).thenReturn(StorageSpec.newBuilder()
.setId("BIGQUERY1")
@@ -60,14 +65,13 @@ public void setUp() throws Exception {
.putOptions("project", "project")
.putOptions("dataset", "dataset")
.build());
- creator = new BigQueryTraningDatasetCreator(templater, clock, projectId, datasetPrefix, bq);
+ creator = new BigQueryTraningDatasetCreator(templater, projectId, datasetPrefix, bq);
when(templater.createQuery(
any(FeatureSet.class), any(Timestamp.class), any(Timestamp.class), anyLong()))
.thenReturn("SELECT * FROM `project.dataset.table`");
}
-
@Test
public void shouldCreateCorrectDatasetIfPrefixNotSpecified() {
String entityName = "myentity";
@@ -85,14 +89,15 @@ public void shouldCreateCorrectDatasetIfPrefixNotSpecified() {
long limit = 999;
String namePrefix = "";
- DatasetInfo dsInfo =
- creator.createDataset(featureSet, startDate, endDate, limit, namePrefix);
- assertThat(dsInfo.getName(), equalTo("myentity_0_20180101_20190101"));
+ DatasetInfo dsInfo = creator.createDataset(featureSet, startDate, endDate, limit, namePrefix);
+ assertThat(
+ dsInfo.getName(), equalTo("feast_myentity_b0009f0f7df634ddc130571319e0deb9742eb1da"));
assertThat(
dsInfo.getTableUrl(),
equalTo(
String.format(
- "%s.%s_%s.%s", projectId, datasetPrefix, entityName, "0_20180101_20190101")));
+ "%s.dataset.%s_%s_%s",
+ projectId, datasetPrefix, entityName, "b0009f0f7df634ddc130571319e0deb9742eb1da")));
}
@Test
@@ -112,15 +117,20 @@ public void shouldCreateCorrectDatasetIfPrefixIsSpecified() {
long limit = 999;
String namePrefix = "mydataset";
- DatasetInfo dsInfo =
- creator.createDataset(featureSet, startDate, endDate, limit, namePrefix);
+ DatasetInfo dsInfo = creator.createDataset(featureSet, startDate, endDate, limit, namePrefix);
assertThat(
dsInfo.getTableUrl(),
equalTo(
String.format(
- "%s.%s_%s.%s", projectId, datasetPrefix, entityName,
- "mydataset_0_20180101_20190101")));
- assertThat(dsInfo.getName(), equalTo("mydataset_0_20180101_20190101"));
+ "%s.dataset.%s_%s_%s_%s",
+ projectId,
+ datasetPrefix,
+ entityName,
+ namePrefix,
+ "b0009f0f7df634ddc130571319e0deb9742eb1da")));
+ assertThat(
+ dsInfo.getName(),
+ equalTo("feast_myentity_mydataset_b0009f0f7df634ddc130571319e0deb9742eb1da"));
}
@Test
diff --git a/sdk/python/tests/data/austin_bikeshare.bikeshare_stations.avro b/sdk/python/tests/data/austin_bikeshare.bikeshare_stations.avro
Binary files a/sdk/python/tests/data/austin_bikeshare.bikeshare_stations.avro and b/sdk/python/tests/data/austin_bikeshare.bikeshare_stations.avro differ
diff --git a/sdk/python/tests/sdk/resources/test_feature_set.py b/sdk/python/tests/sdk/resources/test_feature_set.py
--- a/sdk/python/tests/sdk/resources/test_feature_set.py
+++ b/sdk/python/tests/sdk/resources/test_feature_set.py
@@ -38,7 +38,7 @@ def test_different_entity(self):
class TestDatasetInfo(object):
def test_creation(self):
name = "dataset_name"
- table_id = "gcp-project.dataset.table_name"
- dataset = DatasetInfo(name, table_id)
+ full_table_id = "gcp-project.dataset.table_name"
+ dataset = DatasetInfo(name, full_table_id)
assert dataset.name == name
- assert dataset.table_id == table_id
+ assert dataset.full_table_id == full_table_id
diff --git a/sdk/python/tests/sdk/test_client.py b/sdk/python/tests/sdk/test_client.py
--- a/sdk/python/tests/sdk/test_client.py
+++ b/sdk/python/tests/sdk/test_client.py
@@ -198,7 +198,7 @@ def test_create_dataset(self, client, mocker):
ds = client.create_dataset(fs, start_date, end_date)
assert "dataset_name" == ds.name
- assert "project.dataset.table" == ds.table_id
+ assert "project.dataset.table" == ds.full_table_id
mock_trn_stub.CreateDataset.assert_called_once_with(
DatasetServiceTypes.CreateDatasetRequest(
featureSet=fs.proto,
@@ -229,7 +229,7 @@ def test_create_dataset_with_limit(self, client, mocker):
ds = client.create_dataset(fs, start_date, end_date, limit=limit)
assert "dataset_name" == ds.name
- assert "project.dataset.table" == ds.table_id
+ assert "project.dataset.table" == ds.full_table_id
mock_trn_stub.CreateDataset.assert_called_once_with(
DatasetServiceTypes.CreateDatasetRequest(
featureSet=fs.proto,
@@ -263,7 +263,7 @@ def test_create_dataset_with_name_prefix(self, client, mocker):
fs, start_date, end_date, limit=limit, name_prefix=name_prefix)
assert "dataset_name" == ds.name
- assert "project.dataset.table" == ds.table_id
+ assert "project.dataset.table" == ds.full_table_id
mock_dssvc_stub.CreateDataset.assert_called_once_with(
DatasetServiceTypes.CreateDatasetRequest(
featureSet=fs.proto,
@@ -427,9 +427,9 @@ def test_download_dataset_as_file(self, client, mocker):
table_dlder, "download_table_as_file", return_value=destination)
client._table_downloader = table_dlder
- table_id = "project.dataset.table"
+ full_table_id = "project.dataset.table"
staging_location = "gs://gcs_bucket/"
- dataset = DatasetInfo("mydataset", table_id)
+ dataset = DatasetInfo("mydataset", full_table_id)
result = client.download_dataset(
dataset,
@@ -439,7 +439,7 @@ def test_download_dataset_as_file(self, client, mocker):
assert result == destination
table_dlder.download_table_as_file.assert_called_once_with(
- table_id, destination, staging_location, FileType.CSV)
+ full_table_id, destination, staging_location, FileType.CSV)
def _create_query_features_response(self, entity_name, entities):
response = QueryFeaturesResponse(entityName=entity_name)
diff --git a/sdk/python/tests/sdk/utils/test_bq_utils.py b/sdk/python/tests/sdk/utils/test_bq_utils.py
--- a/sdk/python/tests/sdk/utils/test_bq_utils.py
+++ b/sdk/python/tests/sdk/utils/test_bq_utils.py
@@ -48,6 +48,23 @@ def test_get_table_name_not_bq():
with pytest.raises(ValueError, match="storage spec is not BigQuery storage spec"):
get_table_name(feature_id, storage_spec)
+
[email protected](
+ os.getenv("SKIP_BIGQUERY_TEST") is not None,
+ reason="SKIP_BIGQUERY_TEST is set in the environment",
+)
+def test_query_to_dataframe():
+ with open(
+ os.path.join(testdata_path, "austin_bikeshare.bikeshare_stations.avro"), "rb"
+ ) as expected_file:
+ avro_reader = fastavro.reader(expected_file)
+ expected = pd.DataFrame.from_records(avro_reader)
+
+ query = "SELECT * FROM `bigquery-public-data.austin_bikeshare.bikeshare_stations`"
+ actual = query_to_dataframe(query)
+ assert expected.equals(actual)
+
+
@pytest.mark.skipif(
os.getenv("SKIP_BIGQUERY_TEST") is not None,
reason="SKIP_BIGQUERY_TEST is set in the environment",
@@ -67,7 +84,7 @@ def test_download_table_as_df(self, mocker):
staging_path = "gs://temp/"
staging_file_name = "temp_0"
- table_id = "project_id.dataset_id.table_id"
+ full_table_id = "project_id.dataset_id.table_id"
table_dldr = TableDownloader()
exp_staging_path = os.path.join(staging_path, staging_file_name)
@@ -75,11 +92,11 @@ def test_download_table_as_df(self, mocker):
table_dldr._bq = _Mock_BQ_Client()
mocker.patch.object(table_dldr._bq, "extract_table", return_value=_Job())
- table_dldr.download_table_as_df(table_id, staging_location=staging_path)
+ table_dldr.download_table_as_df(full_table_id, staging_location=staging_path)
assert len(table_dldr._bq.extract_table.call_args_list) == 1
args, kwargs = table_dldr._bq.extract_table.call_args_list[0]
- assert args[0].full_table_id == Table.from_string(table_id).full_table_id
+ assert args[0].full_table_id == Table.from_string(full_table_id).full_table_id
assert args[1] == exp_staging_path
assert kwargs["job_config"].destination_format == "CSV"
mocked_gcs_to_df.assert_called_once_with(exp_staging_path)
@@ -97,25 +114,25 @@ def test_download_json(self, mocker):
self._test_download_file(mocker, FileType.JSON)
def test_download_invalid_staging_url(self):
- table_id = "project_id.dataset_id.table_id"
+ full_table_id = "project_id.dataset_id.table_id"
table_dldr = TableDownloader()
with pytest.raises(
ValueError, match="staging_uri must be a directory in " "GCS"
):
table_dldr.download_table_as_file(
- table_id, "/tmp/dst", "/local/directory", FileType.CSV
+ full_table_id, "/tmp/dst", "/local/directory", FileType.CSV
)
with pytest.raises(
ValueError, match="staging_uri must be a directory in " "GCS"
):
- table_dldr.download_table_as_df(table_id, "/local/directory")
+ table_dldr.download_table_as_df(full_table_id, "/local/directory")
def _test_download_file(self, mocker, type):
staging_path = "gs://temp/"
staging_file_name = "temp_0"
dst_path = "/tmp/myfile.csv"
- table_id = "project_id.dataset_id.table_id"
+ full_table_id = "project_id.dataset_id.table_id"
table_dldr = TableDownloader()
mock_blob = _Blob()
@@ -128,13 +145,13 @@ def _test_download_file(self, mocker, type):
)
table_dldr.download_table_as_file(
- table_id, dst_path, staging_location=staging_path, file_type=type
+ full_table_id, dst_path, staging_location=staging_path, file_type=type
)
exp_staging_path = os.path.join(staging_path, staging_file_name)
assert len(table_dldr._bq.extract_table.call_args_list) == 1
args, kwargs = table_dldr._bq.extract_table.call_args_list[0]
- assert args[0].full_table_id == Table.from_string(table_id).full_table_id
+ assert args[0].full_table_id == Table.from_string(full_table_id).full_table_id
assert args[1] == exp_staging_path
assert kwargs["job_config"].destination_format == str(type)
| Python SDK create_dataset is actually creating dataset in BQ
## Expected Behavior
"Dataset" that was meant on the sdk is a collection of features selected by the user. While this term is also used in BigQuery, the SDK should *not* create a dataset whenever this function is called, rather just create a view in the feast dataset.
## Current Behavior
It creates a new BigQuery Dataset.
## Steps to reproduce
Use the [quickstart](https://github.com/gojek/feast/blob/master/sdk/python/examples/quickstart/Quickstart.ipynb):
```python
feature_set = FeatureSet(entity="ride",
features=["ride.log_trip_duration",
"ride.distance_haversine",
"ride.distance_dummy_manhattan",
"ride.direction",
"ride.month",
"ride.day_of_month",
"ride.hour",
"ride.day_of_week",
"ride.vi_1",
"ride.vi_2",
"ride.sf_n",
"ride.sf_y"])
dataset_info = fs.create_dataset(feature_set, "2016-06-01", "2016-08-01")
dataset = fs.download_dataset_to_df(dataset_info, staging_location=STAGING_LOCATION)
dataset.head()
```
### Specifications
- Version: latest
- Platform: as per suggested by [install guide](https://github.com/gojek/feast/blob/master/docs/install.md)
- Subsystem: as per suggested by [install guide](https://github.com/gojek/feast/blob/master/docs/install.md)
## Possible Solution
- Fix `create_dataset` to create a view in the feast dataset instead (see the sketch below)
- Write down the dataset definition
- Change to `create_view`?
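
To make the intended behaviour concrete, here is a minimal sketch using the BigQuery Python client purely for illustration (the actual fix presumably lands in Feast Core's Java `BigQueryTraningDatasetCreator`); the helper name, the dataset/view names and the query string are placeholders, and a reasonably recent `google-cloud-bigquery` client is assumed:

```python
from google.cloud import bigquery


def create_training_view(client: bigquery.Client, project: str, dataset: str,
                         view_name: str, query: str) -> str:
    """Create a view inside the shared feast dataset instead of creating a
    brand new BigQuery dataset for every training request."""
    view = bigquery.Table(f"{project}.{dataset}.{view_name}")
    view.view_query = query  # the templated training SQL
    created = client.create_table(view, exists_ok=True)
    return created.full_table_id


# Illustrative usage (all names are placeholders):
# client = bigquery.Client(project="the-project")
# create_training_view(client, "the-project", "feast",
#                      "feast_myentity_<uuid>",
#                      "SELECT * FROM `project.dataset.myentity`")
```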
| 2019-05-30T05:32:13 |
|
feast-dev/feast | 215 | feast-dev__feast-215 | [
"214"
] | 8a6b2fc53946fe2be11e0dbfa32bb1a831fe4c0a | diff --git a/sdk/python/feast/core/DatasetService_pb2.py b/sdk/python/feast/core/DatasetService_pb2.py
--- a/sdk/python/feast/core/DatasetService_pb2.py
+++ b/sdk/python/feast/core/DatasetService_pb2.py
@@ -20,13 +20,50 @@
package='feast.core',
syntax='proto3',
serialized_options=_b('\n\nfeast.coreB\023DatasetServiceProtoZ5github.com/gojek/feast/protos/generated/go/feast/core'),
- serialized_pb=_b('\n\x1f\x66\x65\x61st/core/DatasetService.proto\x12\nfeast.core\x1a\x1fgoogle/protobuf/timestamp.proto\"\xa0\x02\n\x13\x44\x61tasetServiceTypes\x1a\xc1\x01\n\x14\x43reateDatasetRequest\x12*\n\nfeatureSet\x18\x01 \x01(\x0b\x32\x16.feast.core.FeatureSet\x12-\n\tstartDate\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12+\n\x07\x65ndDate\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\r\n\x05limit\x18\x04 \x01(\x03\x12\x12\n\nnamePrefix\x18\x05 \x01(\t\x1a\x45\n\x15\x43reateDatasetResponse\x12,\n\x0b\x64\x61tasetInfo\x18\x01 \x01(\x0b\x32\x17.feast.core.DatasetInfo\"4\n\nFeatureSet\x12\x12\n\nentityName\x18\x01 \x01(\t\x12\x12\n\nfeatureIds\x18\x02 \x03(\t\"-\n\x0b\x44\x61tasetInfo\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08tableUrl\x18\x02 \x01(\t2\x90\x01\n\x0e\x44\x61tasetService\x12~\n\rCreateDataset\x12\x34.feast.core.DatasetServiceTypes.CreateDatasetRequest\x1a\x35.feast.core.DatasetServiceTypes.CreateDatasetResponse\"\x00\x42X\n\nfeast.coreB\x13\x44\x61tasetServiceProtoZ5github.com/gojek/feast/protos/generated/go/feast/coreb\x06proto3')
+ serialized_pb=_b('\n\x1f\x66\x65\x61st/core/DatasetService.proto\x12\nfeast.core\x1a\x1fgoogle/protobuf/timestamp.proto\"\xa4\x03\n\x13\x44\x61tasetServiceTypes\x1a\xc5\x02\n\x14\x43reateDatasetRequest\x12*\n\nfeatureSet\x18\x01 \x01(\x0b\x32\x16.feast.core.FeatureSet\x12-\n\tstartDate\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12+\n\x07\x65ndDate\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\r\n\x05limit\x18\x04 \x01(\x03\x12\x12\n\nnamePrefix\x18\x05 \x01(\t\x12R\n\x07\x66ilters\x18\x06 \x03(\x0b\x32\x41.feast.core.DatasetServiceTypes.CreateDatasetRequest.FiltersEntry\x1a.\n\x0c\x46iltersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\x1a\x45\n\x15\x43reateDatasetResponse\x12,\n\x0b\x64\x61tasetInfo\x18\x01 \x01(\x0b\x32\x17.feast.core.DatasetInfo\"4\n\nFeatureSet\x12\x12\n\nentityName\x18\x01 \x01(\t\x12\x12\n\nfeatureIds\x18\x02 \x03(\t\"-\n\x0b\x44\x61tasetInfo\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08tableUrl\x18\x02 \x01(\t2\x90\x01\n\x0e\x44\x61tasetService\x12~\n\rCreateDataset\x12\x34.feast.core.DatasetServiceTypes.CreateDatasetRequest\x1a\x35.feast.core.DatasetServiceTypes.CreateDatasetResponse\"\x00\x42X\n\nfeast.coreB\x13\x44\x61tasetServiceProtoZ5github.com/gojek/feast/protos/generated/go/feast/coreb\x06proto3')
,
dependencies=[google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,])
+_DATASETSERVICETYPES_CREATEDATASETREQUEST_FILTERSENTRY = _descriptor.Descriptor(
+ name='FiltersEntry',
+ full_name='feast.core.DatasetServiceTypes.CreateDatasetRequest.FiltersEntry',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ fields=[
+ _descriptor.FieldDescriptor(
+ name='key', full_name='feast.core.DatasetServiceTypes.CreateDatasetRequest.FiltersEntry.key', index=0,
+ number=1, type=9, cpp_type=9, label=1,
+ has_default_value=False, default_value=_b("").decode('utf-8'),
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR),
+ _descriptor.FieldDescriptor(
+ name='value', full_name='feast.core.DatasetServiceTypes.CreateDatasetRequest.FiltersEntry.value', index=1,
+ number=2, type=9, cpp_type=9, label=1,
+ has_default_value=False, default_value=_b("").decode('utf-8'),
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR),
+ ],
+ extensions=[
+ ],
+ nested_types=[],
+ enum_types=[
+ ],
+ serialized_options=_b('8\001'),
+ is_extendable=False,
+ syntax='proto3',
+ extension_ranges=[],
+ oneofs=[
+ ],
+ serialized_start=384,
+ serialized_end=430,
+)
+
_DATASETSERVICETYPES_CREATEDATASETREQUEST = _descriptor.Descriptor(
name='CreateDatasetRequest',
full_name='feast.core.DatasetServiceTypes.CreateDatasetRequest',
@@ -69,10 +106,17 @@
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
+ _descriptor.FieldDescriptor(
+ name='filters', full_name='feast.core.DatasetServiceTypes.CreateDatasetRequest.filters', index=5,
+ number=6, type=11, cpp_type=10, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
- nested_types=[],
+ nested_types=[_DATASETSERVICETYPES_CREATEDATASETREQUEST_FILTERSENTRY, ],
enum_types=[
],
serialized_options=None,
@@ -82,7 +126,7 @@
oneofs=[
],
serialized_start=105,
- serialized_end=298,
+ serialized_end=430,
)
_DATASETSERVICETYPES_CREATEDATASETRESPONSE = _descriptor.Descriptor(
@@ -111,8 +155,8 @@
extension_ranges=[],
oneofs=[
],
- serialized_start=300,
- serialized_end=369,
+ serialized_start=432,
+ serialized_end=501,
)
_DATASETSERVICETYPES = _descriptor.Descriptor(
@@ -135,7 +179,7 @@
oneofs=[
],
serialized_start=81,
- serialized_end=369,
+ serialized_end=501,
)
@@ -172,8 +216,8 @@
extension_ranges=[],
oneofs=[
],
- serialized_start=371,
- serialized_end=423,
+ serialized_start=503,
+ serialized_end=555,
)
@@ -210,13 +254,15 @@
extension_ranges=[],
oneofs=[
],
- serialized_start=425,
- serialized_end=470,
+ serialized_start=557,
+ serialized_end=602,
)
+_DATASETSERVICETYPES_CREATEDATASETREQUEST_FILTERSENTRY.containing_type = _DATASETSERVICETYPES_CREATEDATASETREQUEST
_DATASETSERVICETYPES_CREATEDATASETREQUEST.fields_by_name['featureSet'].message_type = _FEATURESET
_DATASETSERVICETYPES_CREATEDATASETREQUEST.fields_by_name['startDate'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_DATASETSERVICETYPES_CREATEDATASETREQUEST.fields_by_name['endDate'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
+_DATASETSERVICETYPES_CREATEDATASETREQUEST.fields_by_name['filters'].message_type = _DATASETSERVICETYPES_CREATEDATASETREQUEST_FILTERSENTRY
_DATASETSERVICETYPES_CREATEDATASETREQUEST.containing_type = _DATASETSERVICETYPES
_DATASETSERVICETYPES_CREATEDATASETRESPONSE.fields_by_name['datasetInfo'].message_type = _DATASETINFO
_DATASETSERVICETYPES_CREATEDATASETRESPONSE.containing_type = _DATASETSERVICETYPES
@@ -228,6 +274,13 @@
DatasetServiceTypes = _reflection.GeneratedProtocolMessageType('DatasetServiceTypes', (_message.Message,), dict(
CreateDatasetRequest = _reflection.GeneratedProtocolMessageType('CreateDatasetRequest', (_message.Message,), dict(
+
+ FiltersEntry = _reflection.GeneratedProtocolMessageType('FiltersEntry', (_message.Message,), dict(
+ DESCRIPTOR = _DATASETSERVICETYPES_CREATEDATASETREQUEST_FILTERSENTRY,
+ __module__ = 'feast.core.DatasetService_pb2'
+ # @@protoc_insertion_point(class_scope:feast.core.DatasetServiceTypes.CreateDatasetRequest.FiltersEntry)
+ ))
+ ,
DESCRIPTOR = _DATASETSERVICETYPES_CREATEDATASETREQUEST,
__module__ = 'feast.core.DatasetService_pb2'
# @@protoc_insertion_point(class_scope:feast.core.DatasetServiceTypes.CreateDatasetRequest)
@@ -246,6 +299,7 @@
))
_sym_db.RegisterMessage(DatasetServiceTypes)
_sym_db.RegisterMessage(DatasetServiceTypes.CreateDatasetRequest)
+_sym_db.RegisterMessage(DatasetServiceTypes.CreateDatasetRequest.FiltersEntry)
_sym_db.RegisterMessage(DatasetServiceTypes.CreateDatasetResponse)
FeatureSet = _reflection.GeneratedProtocolMessageType('FeatureSet', (_message.Message,), dict(
@@ -264,6 +318,7 @@
DESCRIPTOR._options = None
+_DATASETSERVICETYPES_CREATEDATASETREQUEST_FILTERSENTRY._options = None
_DATASETSERVICE = _descriptor.ServiceDescriptor(
name='DatasetService',
@@ -271,8 +326,8 @@
file=DESCRIPTOR,
index=0,
serialized_options=None,
- serialized_start=473,
- serialized_end=617,
+ serialized_start=605,
+ serialized_end=749,
methods=[
_descriptor.MethodDescriptor(
name='CreateDataset',
diff --git a/sdk/python/feast/sdk/client.py b/sdk/python/feast/sdk/client.py
--- a/sdk/python/feast/sdk/client.py
+++ b/sdk/python/feast/sdk/client.py
@@ -169,7 +169,8 @@ def run(
return response.jobId
def create_dataset(
- self, feature_set, start_date, end_date, limit=None, name_prefix=None
+ self, feature_set, start_date, end_date, limit=None,
+ name_prefix=None, filters=None
):
"""
Create training dataset for a feature set. The training dataset
@@ -187,11 +188,21 @@ def create_dataset(
limit (int, optional): (default: None) maximum number of row
returned
name_prefix (str, optional): (default: None) name prefix.
+ filters (dict, optional): (default: None) conditional clause
+ that will be used to filter dataset. Keys of filters could be
+ feature id or job_id.
:return:
feast.resources.feature_set.DatasetInfo: DatasetInfo containing
- the information of training dataset
+ the information of training dataset.
"""
- self._check_create_dataset_args(feature_set, start_date, end_date, limit)
+ self._check_create_dataset_args(feature_set, start_date, end_date,
+ limit, filters)
+
+ conv_filters = None
+ if filters is not None:
+ conv_filters = {}
+ for k, v in filters.items():
+ conv_filters[str(k)] = str(v)
req = DatasetServiceTypes.CreateDatasetRequest(
featureSet=feature_set.proto,
@@ -199,6 +210,7 @@ def create_dataset(
endDate=_timestamp_from_datetime(_parse_date(end_date)),
limit=limit,
namePrefix=name_prefix,
+ filters=conv_filters
)
if self.verbose:
print(
@@ -421,7 +433,8 @@ def _apply_storage(self, storage):
)
return response.storageId
- def _check_create_dataset_args(self, feature_set, start_date, end_date, limit):
+ def _check_create_dataset_args(self, feature_set, start_date, end_date,
+ limit, filters):
if len(feature_set.features) < 1:
raise ValueError("feature set is empty")
@@ -433,6 +446,9 @@ def _check_create_dataset_args(self, feature_set, start_date, end_date, limit):
if limit is not None and limit < 1:
raise ValueError("limit is not a positive integer")
+ if filters is not None and not isinstance(filters, dict):
+ raise ValueError("filters is not dictionary")
+
def _parse_date(date):
try:
| diff --git a/core/src/test/java/feast/core/grpc/DatasetServiceImplTest.java b/core/src/test/java/feast/core/grpc/DatasetServiceImplTest.java
--- a/core/src/test/java/feast/core/grpc/DatasetServiceImplTest.java
+++ b/core/src/test/java/feast/core/grpc/DatasetServiceImplTest.java
@@ -4,6 +4,7 @@
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyLong;
+import static org.mockito.ArgumentMatchers.anyMap;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@@ -21,6 +22,9 @@
import io.grpc.inprocess.InProcessServerBuilder;
import io.grpc.testing.GrpcCleanupRule;
import java.text.ParseException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
@@ -78,7 +82,8 @@ public void shouldCallcreateDatasetWithCorrectRequest() {
any(Timestamp.class),
any(Timestamp.class),
anyLong(),
- anyString()))
+ anyString(),
+ anyMap()))
.thenReturn(datasetInfo);
long limit = 9999;
@@ -95,7 +100,44 @@ public void shouldCallcreateDatasetWithCorrectRequest() {
client.createDataset(request);
verify(trainingDatasetCreator)
- .createDataset(validFeatureSet, validStartDate, validEndDate, limit, namePrefix);
+ .createDataset(validFeatureSet, validStartDate, validEndDate, limit, namePrefix, Collections
+ .emptyMap());
+ }
+
+ @SuppressWarnings("ResultOfMethodCallIgnored")
+ @Test
+ public void shouldCallcreateDatasetWithCorrectRequestWithFilters() {
+ DatasetInfo datasetInfo =
+ DatasetInfo.newBuilder().setName("mydataset").setTableUrl("project.dataset.table").build();
+ when(trainingDatasetCreator.createDataset(
+ any(FeatureSet.class),
+ any(Timestamp.class),
+ any(Timestamp.class),
+ anyLong(),
+ anyString(),
+ anyMap()))
+ .thenReturn(datasetInfo);
+
+ long limit = 9999;
+ String namePrefix = "mydataset";
+ Map<String, String> filters = new HashMap<>();
+ filters.put("key1", "value1");
+ filters.put("key2", "value2");
+ CreateDatasetRequest request =
+ CreateDatasetRequest.newBuilder()
+ .setFeatureSet(validFeatureSet)
+ .setStartDate(validStartDate)
+ .setEndDate(validEndDate)
+ .setLimit(limit)
+ .setNamePrefix(namePrefix)
+ .putAllFilters(filters)
+ .build();
+
+ client.createDataset(request);
+
+
+ verify(trainingDatasetCreator)
+ .createDataset(validFeatureSet, validStartDate, validEndDate, limit, namePrefix, filters);
}
@Test
@@ -107,7 +149,8 @@ public void shouldPropagateCreatedDatasetInfo() {
any(Timestamp.class),
any(Timestamp.class),
anyLong(),
- anyString()))
+ anyString(),
+ anyMap()))
.thenReturn(datasetInfo);
long limit = 9999;
diff --git a/core/src/test/java/feast/core/training/BigQueryDatasetTemplaterTest.java b/core/src/test/java/feast/core/training/BigQueryDatasetTemplaterTest.java
--- a/core/src/test/java/feast/core/training/BigQueryDatasetTemplaterTest.java
+++ b/core/src/test/java/feast/core/training/BigQueryDatasetTemplaterTest.java
@@ -33,16 +33,18 @@
import feast.core.model.EntityInfo;
import feast.core.model.FeatureInfo;
import feast.core.storage.BigQueryStorageManager;
-import feast.core.training.BigQueryDatasetTemplater.Features;
import feast.specs.EntitySpecProto.EntitySpec;
import feast.specs.FeatureSpecProto.FeatureSpec;
import feast.specs.StorageSpecProto.StorageSpec;
+import feast.types.ValueProto.ValueType;
+import feast.types.ValueProto.ValueType.Enum;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
+import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
@@ -60,26 +62,26 @@ public class BigQueryDatasetTemplaterTest {
private BigQueryDatasetTemplater templater;
private BasicFormatterImpl formatter = new BasicFormatterImpl();
- @Mock
- private FeatureInfoRepository featureInfoRespository;
+ @Mock private FeatureInfoRepository featureInfoRespository;
private String sqlTemplate;
@Before
public void setUp() throws Exception {
MockitoAnnotations.initMocks(this);
- StorageSpec storageSpec = StorageSpec.newBuilder()
- .setId("BIGQUERY1")
- .setType(BigQueryStorageManager.TYPE)
- .putOptions("project", "project")
- .putOptions("dataset", "dataset")
- .build();
+ StorageSpec storageSpec =
+ StorageSpec.newBuilder()
+ .setId("BIGQUERY1")
+ .setType(BigQueryStorageManager.TYPE)
+ .putOptions("project", "project")
+ .putOptions("dataset", "dataset")
+ .build();
Jinjava jinjava = new Jinjava();
Resource resource = new ClassPathResource("templates/bq_training.tmpl");
InputStream resourceInputStream = resource.getInputStream();
sqlTemplate = CharStreams.toString(new InputStreamReader(resourceInputStream, Charsets.UTF_8));
- templater = new BigQueryDatasetTemplater(jinjava, sqlTemplate, storageSpec,
- featureInfoRespository);
+ templater =
+ new BigQueryDatasetTemplater(jinjava, sqlTemplate, storageSpec, featureInfoRespository);
}
@Test(expected = NoSuchElementException.class)
@@ -89,21 +91,23 @@ public void shouldThrowNoSuchElementExceptionIfFeatureNotFound() {
.setEntityName("myentity")
.addAllFeatureIds(Arrays.asList("myentity.feature1", "myentity.feature2"))
.build();
- templater.createQuery(fs, Timestamps.fromSeconds(0), Timestamps.fromSeconds(1), 0);
+ templater.createQuery(
+ fs, Timestamps.fromSeconds(0), Timestamps.fromSeconds(1), 0, Collections.emptyMap());
}
@Test
public void shouldPassCorrectArgumentToTemplateEngine() {
- StorageSpec storageSpec = StorageSpec.newBuilder()
- .setId("BIGQUERY1")
- .setType(BigQueryStorageManager.TYPE)
- .putOptions("project", "project")
- .putOptions("dataset", "dataset")
- .build();
+ StorageSpec storageSpec =
+ StorageSpec.newBuilder()
+ .setId("BIGQUERY1")
+ .setType(BigQueryStorageManager.TYPE)
+ .putOptions("project", "project")
+ .putOptions("dataset", "dataset")
+ .build();
Jinjava jinjava = mock(Jinjava.class);
- templater = new BigQueryDatasetTemplater(jinjava, sqlTemplate, storageSpec,
- featureInfoRespository);
+ templater =
+ new BigQueryDatasetTemplater(jinjava, sqlTemplate, storageSpec, featureInfoRespository);
Timestamp startDate =
Timestamps.fromSeconds(Instant.parse("2018-01-01T00:00:00.00Z").getEpochSecond());
@@ -114,7 +118,7 @@ public void shouldPassCorrectArgumentToTemplateEngine() {
String featureName = "feature1";
when(featureInfoRespository.findAllById(any(List.class)))
- .thenReturn(Collections.singletonList(createFeatureInfo(featureId, featureName)));
+ .thenReturn(Collections.singletonList(createFeatureInfo(featureId, featureName, Enum.INT64)));
FeatureSet fs =
FeatureSet.newBuilder()
@@ -122,7 +126,7 @@ public void shouldPassCorrectArgumentToTemplateEngine() {
.addAllFeatureIds(Arrays.asList(featureId))
.build();
- templater.createQuery(fs, startDate, endDate, limit);
+ templater.createQuery(fs, startDate, endDate, limit, Collections.emptyMap());
ArgumentCaptor<String> templateArg = ArgumentCaptor.forClass(String.class);
ArgumentCaptor<Map<String, Object>> contextArg = ArgumentCaptor.forClass(Map.class);
@@ -136,9 +140,8 @@ public void shouldPassCorrectArgumentToTemplateEngine() {
assertThat(actualContext.get("end_date"), equalTo("2019-01-01"));
assertThat(actualContext.get("limit"), equalTo(String.valueOf(limit)));
- Features features = (Features) actualContext.get("feature_set");
- assertThat(features.getColumns().size(), equalTo(1));
- assertThat(features.getColumns().get(0), equalTo(featureName));
+ List<String> features = (List<String>) actualContext.get("features");
+ assertThat(features.get(0), equalTo(featureName));
}
@Test
@@ -148,12 +151,12 @@ public void shouldRenderCorrectQuery1() throws Exception {
String featureId2 = "myentity.feature2";
String featureName2 = "feature2";
- FeatureInfo featureInfo1 = createFeatureInfo(featureId1, featureName1);
- FeatureInfo featureInfo2 = createFeatureInfo(featureId2, featureName2);
+ FeatureInfo featureInfo1 = createFeatureInfo(featureId1, featureName1, Enum.INT64);
+ FeatureInfo featureInfo2 = createFeatureInfo(featureId2, featureName2, Enum.INT64);
String featureId3 = "myentity.feature3";
String featureName3 = "feature3";
- FeatureInfo featureInfo3 = createFeatureInfo(featureId3, featureName3);
+ FeatureInfo featureInfo3 = createFeatureInfo(featureId3, featureName3, Enum.INT64);
when(featureInfoRespository.findAllById(any(List.class)))
.thenReturn(Arrays.asList(featureInfo1, featureInfo2, featureInfo3));
@@ -169,7 +172,7 @@ public void shouldRenderCorrectQuery1() throws Exception {
Timestamps.fromSeconds(Instant.parse("2018-01-30T12:11:11.00Z").getEpochSecond());
int limit = 100;
- String query = templater.createQuery(fs, startDate, endDate, limit);
+ String query = templater.createQuery(fs, startDate, endDate, limit, Collections.emptyMap());
checkExpectedQuery(query, "expQuery1.sql");
}
@@ -182,7 +185,7 @@ public void shouldRenderCorrectQuery2() throws Exception {
String featureId = "myentity.feature1";
String featureName = "feature1";
- featureInfos.add(createFeatureInfo(featureId, featureName));
+ featureInfos.add(createFeatureInfo(featureId, featureName, Enum.INT64));
featureIds.add(featureId);
when(featureInfoRespository.findAllById(any(List.class))).thenReturn(featureInfos);
@@ -194,11 +197,149 @@ public void shouldRenderCorrectQuery2() throws Exception {
FeatureSet featureSet =
FeatureSet.newBuilder().setEntityName("myentity").addAllFeatureIds(featureIds).build();
- String query = templater.createQuery(featureSet, startDate, endDate, 1000);
+ String query =
+ templater.createQuery(featureSet, startDate, endDate, 1000, Collections.emptyMap());
checkExpectedQuery(query, "expQuery2.sql");
}
+ @Test
+ public void shouldRenderCorrectQueryWithNumberFilter() throws Exception {
+ List<FeatureInfo> featureInfos = new ArrayList<>();
+ List<String> featureIds = new ArrayList<>();
+
+ String featureId = "myentity.feature1";
+ String featureId2 = "myentity.feature2";
+ String featureName = "feature1";
+ String featureName2 = "feature2";
+
+ featureInfos.add(createFeatureInfo(featureId, featureName, Enum.INT64));
+ featureInfos.add(createFeatureInfo(featureId2, featureName2, Enum.INT64));
+ featureIds.add(featureId);
+ featureIds.add(featureId2);
+
+ when(featureInfoRespository.findAllById(any(List.class))).thenReturn(featureInfos);
+
+ Timestamp startDate =
+ Timestamps.fromSeconds(Instant.parse("2018-01-02T00:00:00.00Z").getEpochSecond());
+ Timestamp endDate =
+ Timestamps.fromSeconds(Instant.parse("2018-01-30T12:11:11.00Z").getEpochSecond());
+ FeatureSet featureSet =
+ FeatureSet.newBuilder().setEntityName("myentity").addAllFeatureIds(featureIds).build();
+
+ Map<String, String> filter = new HashMap<>();
+ filter.put("myentity.feature1", "10");
+
+ String query =
+ templater.createQuery(featureSet, startDate, endDate, 1000, filter);
+
+ checkExpectedQuery(query, "expQueryWithNumberFilter.sql");
+ }
+
+ @Test
+ public void shouldRenderCorrectQueryWithStringFilter() throws Exception {
+ List<FeatureInfo> featureInfos = new ArrayList<>();
+ List<String> featureIds = new ArrayList<>();
+
+ String featureId = "myentity.feature1";
+ String featureId2 = "myentity.feature2";
+ String featureName = "feature1";
+ String featureName2 = "feature2";
+
+ featureInfos.add(createFeatureInfo(featureId, featureName, Enum.STRING));
+ featureInfos.add(createFeatureInfo(featureId2, featureName2, Enum.STRING));
+ featureIds.add(featureId);
+ featureIds.add(featureId2);
+
+ when(featureInfoRespository.findAllById(any(List.class))).thenReturn(featureInfos);
+
+ Timestamp startDate =
+ Timestamps.fromSeconds(Instant.parse("2018-01-02T00:00:00.00Z").getEpochSecond());
+ Timestamp endDate =
+ Timestamps.fromSeconds(Instant.parse("2018-01-30T12:11:11.00Z").getEpochSecond());
+ FeatureSet featureSet =
+ FeatureSet.newBuilder().setEntityName("myentity").addAllFeatureIds(featureIds).build();
+
+ Map<String, String> filter = new HashMap<>();
+ filter.put("myentity.feature1", "10");
+
+ String query =
+ templater.createQuery(featureSet, startDate, endDate, 1000, filter);
+
+ checkExpectedQuery(query, "expQueryWithStringFilter.sql");
+ }
+
+
+ @Test
+ public void shouldRenderCorrectQueryWithStringAndNumberFilter() throws Exception {
+ List<FeatureInfo> featureInfos = new ArrayList<>();
+ List<String> featureIds = new ArrayList<>();
+
+ String featureId = "myentity.feature1";
+ String featureId2 = "myentity.feature2";
+ String featureName = "feature1";
+ String featureName2 = "feature2";
+
+ featureInfos.add(createFeatureInfo(featureId, featureName, Enum.INT64));
+ featureInfos.add(createFeatureInfo(featureId2, featureName2, Enum.STRING));
+ featureIds.add(featureId);
+ featureIds.add(featureId2);
+
+ when(featureInfoRespository.findAllById(any(List.class))).thenReturn(featureInfos);
+
+ Timestamp startDate =
+ Timestamps.fromSeconds(Instant.parse("2018-01-02T00:00:00.00Z").getEpochSecond());
+ Timestamp endDate =
+ Timestamps.fromSeconds(Instant.parse("2018-01-30T12:11:11.00Z").getEpochSecond());
+ FeatureSet featureSet =
+ FeatureSet.newBuilder().setEntityName("myentity").addAllFeatureIds(featureIds).build();
+
+ Map<String, String> filter = new HashMap<>();
+ filter.put("myentity.feature1", "10");
+ filter.put("myentity.feature2", "HELLO");
+
+ String query =
+ templater.createQuery(featureSet, startDate, endDate, 1000, filter);
+
+ checkExpectedQuery(query, "expQueryWithNumberAndStringFilter.sql");
+ }
+
+
+ @Test
+ public void shouldRenderCorrectQueryWithJobIdFilter() throws Exception {
+ List<FeatureInfo> featureInfos = new ArrayList<>();
+ List<String> featureIds = new ArrayList<>();
+
+ String featureId = "myentity.feature1";
+ String featureId2 = "myentity.feature2";
+ String featureName = "feature1";
+ String featureName2 = "feature2";
+
+ featureInfos.add(createFeatureInfo(featureId, featureName, Enum.INT64));
+ featureInfos.add(createFeatureInfo(featureId2, featureName2, Enum.STRING));
+ featureIds.add(featureId);
+ featureIds.add(featureId2);
+
+ when(featureInfoRespository.findAllById(any(List.class))).thenReturn(featureInfos);
+
+ Timestamp startDate =
+ Timestamps.fromSeconds(Instant.parse("2018-01-02T00:00:00.00Z").getEpochSecond());
+ Timestamp endDate =
+ Timestamps.fromSeconds(Instant.parse("2018-01-30T12:11:11.00Z").getEpochSecond());
+ FeatureSet featureSet =
+ FeatureSet.newBuilder().setEntityName("myentity").addAllFeatureIds(featureIds).build();
+
+ Map<String, String> filter = new HashMap<>();
+ filter.put("myentity.feature1", "10");
+ filter.put("myentity.feature2", "HELLO");
+ filter.put("job_id", "1234567890");
+
+ String query =
+ templater.createQuery(featureSet, startDate, endDate, 1000, filter);
+
+ checkExpectedQuery(query, "expQueryWithJobIdFilter.sql");
+ }
+
private void checkExpectedQuery(String query, String pathToExpQuery) throws Exception {
String tmpl =
CharStreams.toString(
@@ -212,11 +353,12 @@ private void checkExpectedQuery(String query, String pathToExpQuery) throws Exce
assertThat(query, equalTo(expQuery));
}
- private FeatureInfo createFeatureInfo(String featureId, String featureName) {
+ private FeatureInfo createFeatureInfo(String featureId, String featureName, ValueType.Enum valueType) {
FeatureSpec fs =
FeatureSpec.newBuilder()
.setId(featureId)
.setName(featureName)
+ .setValueType(valueType)
.build();
EntitySpec entitySpec = EntitySpec.newBuilder().setName(featureId.split("\\.")[0]).build();
diff --git a/core/src/test/java/feast/core/training/BigQueryTraningDatasetCreatorTest.java b/core/src/test/java/feast/core/training/BigQueryTraningDatasetCreatorTest.java
--- a/core/src/test/java/feast/core/training/BigQueryTraningDatasetCreatorTest.java
+++ b/core/src/test/java/feast/core/training/BigQueryTraningDatasetCreatorTest.java
@@ -22,9 +22,11 @@
import feast.core.DatasetServiceProto.DatasetInfo;
import feast.core.DatasetServiceProto.FeatureSet;
import feast.core.storage.BigQueryStorageManager;
+import feast.core.util.UuidProvider;
import feast.specs.StorageSpecProto.StorageSpec;
import java.time.Instant;
import java.util.Arrays;
+import java.util.Collections;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
@@ -34,6 +36,7 @@
import static org.junit.Assert.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyLong;
+import static org.mockito.ArgumentMatchers.anyMap;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@@ -55,6 +58,8 @@ public class BigQueryTraningDatasetCreatorTest {
private BigQueryDatasetTemplater templater;
@Mock
private BigQuery bq;
+ @Mock
+ private UuidProvider uuidProvider;
@Before
public void setUp() {
@@ -65,10 +70,11 @@ public void setUp() {
.putOptions("project", "project")
.putOptions("dataset", "dataset")
.build());
- creator = new BigQueryTraningDatasetCreator(templater, projectId, datasetPrefix, bq);
+ creator = new BigQueryTraningDatasetCreator(templater, projectId, datasetPrefix, uuidProvider, bq);
+ when(uuidProvider.getUuid()).thenReturn("b0009f0f7df634ddc130571319e0deb9742eb1da");
when(templater.createQuery(
- any(FeatureSet.class), any(Timestamp.class), any(Timestamp.class), anyLong()))
+ any(FeatureSet.class), any(Timestamp.class), any(Timestamp.class), anyLong(), anyMap()))
.thenReturn("SELECT * FROM `project.dataset.table`");
}
@@ -89,7 +95,8 @@ public void shouldCreateCorrectDatasetIfPrefixNotSpecified() {
long limit = 999;
String namePrefix = "";
- DatasetInfo dsInfo = creator.createDataset(featureSet, startDate, endDate, limit, namePrefix);
+ DatasetInfo dsInfo = creator.createDataset(featureSet, startDate, endDate, limit, namePrefix, Collections
+ .emptyMap());
assertThat(
dsInfo.getName(), equalTo("feast_myentity_b0009f0f7df634ddc130571319e0deb9742eb1da"));
assertThat(
@@ -117,7 +124,7 @@ public void shouldCreateCorrectDatasetIfPrefixIsSpecified() {
long limit = 999;
String namePrefix = "mydataset";
- DatasetInfo dsInfo = creator.createDataset(featureSet, startDate, endDate, limit, namePrefix);
+ DatasetInfo dsInfo = creator.createDataset(featureSet, startDate, endDate, limit, namePrefix, Collections.emptyMap());
assertThat(
dsInfo.getTableUrl(),
equalTo(
@@ -146,8 +153,8 @@ public void shouldPassArgumentToTemplater() {
long limit = 999;
String namePrefix = "";
- creator.createDataset(featureSet, startDate, endDate, limit, namePrefix);
+ creator.createDataset(featureSet, startDate, endDate, limit, namePrefix, Collections.emptyMap());
- verify(templater).createQuery(featureSet, startDate, endDate, limit);
+ verify(templater).createQuery(featureSet, startDate, endDate, limit, Collections.emptyMap());
}
}
diff --git a/core/src/test/resources/sql/expQueryWithJobIdFilter.sql b/core/src/test/resources/sql/expQueryWithJobIdFilter.sql
new file mode 100644
--- /dev/null
+++ b/core/src/test/resources/sql/expQueryWithJobIdFilter.sql
@@ -0,0 +1,8 @@
+SELECT
+ id,
+ event_timestamp,
+ feature1,feature2
+FROM
+ `project.dataset.myentity`
+WHERE event_timestamp >= TIMESTAMP("2018-01-02") AND event_timestamp <= TIMESTAMP(DATETIME_ADD("2018-01-30", INTERVAL 1 DAY)) AND feature1 = 10 AND feature2 = "HELLO" AND job_id = "1234567890"
+LIMIT 1000
\ No newline at end of file
diff --git a/core/src/test/resources/sql/expQueryWithNumberAndStringFilter.sql b/core/src/test/resources/sql/expQueryWithNumberAndStringFilter.sql
new file mode 100644
--- /dev/null
+++ b/core/src/test/resources/sql/expQueryWithNumberAndStringFilter.sql
@@ -0,0 +1,8 @@
+SELECT
+ id,
+ event_timestamp,
+ feature1,feature2
+FROM
+ `project.dataset.myentity`
+WHERE event_timestamp >= TIMESTAMP("2018-01-02") AND event_timestamp <= TIMESTAMP(DATETIME_ADD("2018-01-30", INTERVAL 1 DAY)) AND feature1 = 10 AND feature2 = "HELLO"
+LIMIT 1000
\ No newline at end of file
diff --git a/core/src/test/resources/sql/expQueryWithNumberFilter.sql b/core/src/test/resources/sql/expQueryWithNumberFilter.sql
new file mode 100644
--- /dev/null
+++ b/core/src/test/resources/sql/expQueryWithNumberFilter.sql
@@ -0,0 +1,8 @@
+SELECT
+ id,
+ event_timestamp,
+ feature1,feature2
+FROM
+ `project.dataset.myentity`
+WHERE event_timestamp >= TIMESTAMP("2018-01-02") AND event_timestamp <= TIMESTAMP(DATETIME_ADD("2018-01-30", INTERVAL 1 DAY)) AND feature1 = 10
+LIMIT 1000
\ No newline at end of file
diff --git a/core/src/test/resources/sql/expQueryWithStringFilter.sql b/core/src/test/resources/sql/expQueryWithStringFilter.sql
new file mode 100644
--- /dev/null
+++ b/core/src/test/resources/sql/expQueryWithStringFilter.sql
@@ -0,0 +1,8 @@
+SELECT
+ id,
+ event_timestamp,
+ feature1,feature2
+FROM
+ `project.dataset.myentity`
+WHERE event_timestamp >= TIMESTAMP("2018-01-02") AND event_timestamp <= TIMESTAMP(DATETIME_ADD("2018-01-30", INTERVAL 1 DAY)) AND feature1 = "10"
+LIMIT 1000
\ No newline at end of file
diff --git a/sdk/python/tests/sdk/test_client.py b/sdk/python/tests/sdk/test_client.py
--- a/sdk/python/tests/sdk/test_client.py
+++ b/sdk/python/tests/sdk/test_client.py
@@ -177,6 +177,10 @@ def test_create_dataset_invalid_args(self, client):
ValueError, match="limit is not a positive integer"):
client.create_dataset(feature_set, "2018-12-01", "2018-12-02", -1)
+ with pytest.raises(ValueError, match="filters is not dictionary"):
+ client.create_dataset(feature_set, "2018-12-01", "2018-12-02",
+ 10, filters="filter")
+
def test_create_dataset(self, client, mocker):
entity_name = "myentity"
feature_ids = ["myentity.feature1", "myentity.feature2"]
@@ -207,6 +211,40 @@ def test_create_dataset(self, client, mocker):
limit=None,
namePrefix=None))
+ def test_create_dataset_with_filters(self, client, mocker):
+ entity_name = "myentity"
+ feature_ids = ["myentity.feature1", "myentity.feature2"]
+ fs = FeatureSet(entity_name, feature_ids)
+ start_date = "2018-01-02"
+ end_date = "2018-12-31"
+
+ ds_pb = DatasetInfo_pb(
+ name="dataset_name", tableUrl="project.dataset.table")
+
+ mock_trn_stub = training.DatasetServiceStub(grpc.insecure_channel(""))
+ mocker.patch.object(
+ mock_trn_stub,
+ "CreateDataset",
+ return_value=DatasetServiceTypes.CreateDatasetResponse(
+ datasetInfo=ds_pb))
+ client._dataset_service_stub = mock_trn_stub
+
+ job_filter = {"job_id": 12345}
+ ds = client.create_dataset(fs, start_date, end_date,
+ filters=job_filter)
+
+ assert "dataset_name" == ds.name
+ assert "project.dataset.table" == ds.full_table_id
+ mock_trn_stub.CreateDataset.assert_called_once_with(
+ DatasetServiceTypes.CreateDatasetRequest(
+ featureSet=fs.proto,
+ startDate=_timestamp_from_datetime(_parse_date(start_date)),
+ endDate=_timestamp_from_datetime(_parse_date(end_date)),
+ limit=None,
+ namePrefix=None,
+ filters={"job_id": "12345"}))
+
+
def test_create_dataset_with_limit(self, client, mocker):
entity_name = "myentity"
feature_ids = ["myentity.feature1", "myentity.feature2"]
| Add filtering capability to create dataset api
**Is your feature request related to a problem? Please describe.**
Add support for querying training datasets with filters on feature columns (non-entity, non-timestamp). These are essentially selections (WHERE clauses) with specific values (not ranges). The use case requires at least filtering on the job_id, and also on feature columns.
This functionality has to be added to the Feast SDK as well as to the Feast Core API.
**Describe the solution you'd like**
Add a `filters` field to `CreateDatasetRequest` whose type is a map from string to string. This field is used to add a WHERE clause to the query that creates the dataset. If multiple filters exist, all conditions will be AND-ed.
```
message DatasetServiceTypes {
message CreateDatasetRequest {
// set of features for which its training data should be created
FeatureSet featureSet = 1;
// start date of the training data (inclusive)
google.protobuf.Timestamp startDate = 2;
// end date of the training data (inclusive)
google.protobuf.Timestamp endDate = 3;
// (optional) number of row that should be generated
// (default: none)
int64 limit = 4;
// (optional) prefix for dataset name
string namePrefix = 5;
// (optional) additional WHERE clause, all filter entry will be combined with logic AND
map<string, string> filters = 6;
}
message CreateDatasetResponse {
// information of the created training dataset
DatasetInfo datasetInfo = 1;
}
}
```
Also add an optional `filters` argument to the `Client.create_dataset` method in the Python SDK:
```
def create_dataset(
self, feature_set, start_date, end_date, limit=None,
name_prefix=None, filters=None
)
```
`filters` accepts a dictionary whose keys are either `"job_id"` or a feature id that exists in the feature store.
E.g.:
```
feature_set = FeatureSet(entity="ride",
features=["ride.log_trip_duration",
"ride.distance_haversine",
"ride.distance_dummy_manhattan",
"ride.month",
"ride.direction",
"ride.day_of_month",
"ride.hour",
"ride.day_of_week",
"ride.vi_1",
"ride.vi_2",
"ride.sf_n",
"ride.sf_y"])
dataset_info = fs.create_dataset(feature_set, "1970-01-01", "2019-01-01", filters={"job_id": job_id, "ride.sf_y": 0})
dataset = fs.download_dataset_to_df(dataset_info, staging_location=STAGING_LOCATION)
```
**Describe alternatives you've considered**
Provide a free-form WHERE clause; however, this would expose a risk of SQL injection.
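
To illustrate how the AND-ed `filters` described above end up in the query, here is a minimal Python sketch of the rendering step (the real rendering happens in Core's Jinjava template; the helper name, the `feature_types` lookup and the quoting rule are assumptions used only to mirror the expected SQL in the tests):

```python
from typing import Dict


def render_filter_clause(filters: Dict[str, str],
                         feature_types: Dict[str, str]) -> str:
    """AND together one equality condition per filter entry.

    String-typed features and job_id are quoted; numeric features are not.
    """
    conditions = []
    for key, value in filters.items():
        column = key.split(".")[-1]  # "myentity.feature1" -> "feature1"
        if key == "job_id" or feature_types.get(key) == "STRING":
            conditions.append(f'{column} = "{value}"')
        else:
            conditions.append(f"{column} = {value}")
    return " AND ".join(conditions)


# render_filter_clause(
#     {"myentity.feature1": "10", "myentity.feature2": "HELLO", "job_id": "1234567890"},
#     {"myentity.feature1": "INT64", "myentity.feature2": "STRING"},
# )
# -> 'feature1 = 10 AND feature2 = "HELLO" AND job_id = "1234567890"'
```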
| 2019-06-14T06:39:46 |
|
feast-dev/feast | 244 | feast-dev__feast-244 | [
"149",
"149"
] | e4d03b936474a63be06a50c44b67179e6885e60e | diff --git a/sdk/python/feast/config.py b/sdk/python/feast/config.py
--- a/sdk/python/feast/config.py
+++ b/sdk/python/feast/config.py
@@ -28,7 +28,7 @@
feast_configuration_properties = {"core_url": "URL", "serving_url": "URL"}
-CONFIGURATION_FILE_DIR = ".feast"
+CONFIGURATION_FILE_DIR = os.environ.get("FEAST_CONFIG", ".feast")
CONFIGURATION_FILE_NAME = "config.toml"
| Feast cli config file should be settable by an env var
**Is your feature request related to a problem? Please describe.**
If I have multiple Feast instances, I want to be able to point the CLI at different `.feast` config files.
**Describe the solution you'd like**
`export FEAST_CONFIG=path/to/feast/configfile`
It should default to `~/.feast`.
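
A minimal sketch of how the SDK could consume the override, mirroring the change to `feast/config.py` shown in the patch above (expanding the default to the user's home directory here is an assumption, not what the merged patch does):

```python
import os

# Directory holding the CLI configuration; overridable via FEAST_CONFIG.
CONFIGURATION_FILE_DIR = os.environ.get(
    "FEAST_CONFIG", os.path.join(os.path.expanduser("~"), ".feast")
)
CONFIGURATION_FILE_NAME = "config.toml"

config_path = os.path.join(CONFIGURATION_FILE_DIR, CONFIGURATION_FILE_NAME)
```

Shell usage would then simply be `export FEAST_CONFIG=/path/to/feast/configfile` before invoking the CLI.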
| This is a good issue. We can easily add a flag to the Python SDK to support this. For 0.3.0 the CLI is created from the SDK.
| 2019-09-04T02:00:51 |
|
feast-dev/feast | 271 | feast-dev__feast-271 | [
"275"
] | cc09e56372ac7021244d76b54a9b0da10746584c | diff --git a/sdk/python/feast/feature_set.py b/sdk/python/feast/feature_set.py
--- a/sdk/python/feast/feature_set.py
+++ b/sdk/python/feast/feature_set.py
@@ -69,7 +69,7 @@ def __init__(
if entities is not None:
self.entities = entities
if source is None:
- self._source = KafkaSource()
+ self._source = None
else:
self._source = source
self._max_age = max_age
@@ -504,7 +504,7 @@ def to_proto(self) -> FeatureSetSpecProto:
name=self.name,
version=self.version,
max_age=self.max_age,
- source=self.source.to_proto(),
+ source=self.source.to_proto() if self.source is not None else None,
features=[
field.to_proto()
for field in self._fields.values()
diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -106,7 +106,7 @@ def dtype_to_value_type(dtype):
# TODO: to pass test_importer
def pandas_dtype_to_feast_value_type(dtype: pd.DataFrame.dtypes) -> ValueType:
type_map = {
- "float64": ValueType.FLOAT,
+ "float64": ValueType.DOUBLE,
"float32": ValueType.FLOAT,
"int64": ValueType.INT64,
"uint64": ValueType.INT64,
@@ -247,7 +247,7 @@ def pd_value_to_proto_value(feast_value_type, value) -> ProtoValue:
return ProtoValue(float_val=float(value))
elif feast_value_type == ValueType.DOUBLE:
assert type(value) is float
- return ProtoValue(float_val=value)
+ return ProtoValue(double_val=value)
elif feast_value_type == ValueType.STRING:
return ProtoValue(string_val=str(value))
elif feast_value_type == ValueType.BYTES:
diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -36,7 +36,7 @@
"grpcio==1.*",
"pandas==0.*",
"pandavro==1.5.1",
- "protobuf==3.*",
+ "protobuf==3.10.*",
"PyYAML==5.1.2",
"fastavro==0.*",
"kafka-python==1.4.*",
| diff --git a/.prow/scripts/install_feast_and_run_e2e_test.sh b/.prow/scripts/install_feast_and_run_e2e_test.sh
deleted file mode 100755
--- a/.prow/scripts/install_feast_and_run_e2e_test.sh
+++ /dev/null
@@ -1,60 +0,0 @@
-#!/usr/bin/env bash
-
-set -e
-
-echo "============================================================"
-echo "Installing Feast Release"
-echo "============================================================"
-
-helm install --name ${FEAST_RELEASE_NAME} --wait --timeout 210 ${FEAST_HOME}/charts/feast -f integration-tests/feast-helm-values.yaml
-
-echo "============================================================"
-echo "Testing Batch Import"
-echo "============================================================"
-
-cd ${FEAST_HOME}/integration-tests/testdata
-
-feast apply entity entity_specs/entity_1.yaml
-feast apply feature feature_specs/entity_1*.yaml
-feast jobs run import_specs/batch_from_gcs.yaml --wait
-
-cd $FEAST_HOME/integration-tests
-
-python -m testutils.validate_feature_values \
- --entity_spec_file=testdata/entity_specs/entity_1.yaml \
- --feature_spec_files=testdata/feature_specs/entity_1*.yaml \
- --expected-warehouse-values-file=testdata/feature_values/ingestion_1.csv \
- --expected-serving-values-file=testdata/feature_values/serving_1.csv \
- --bigquery-dataset-for-warehouse=${FEAST_WAREHOUSE_DATASET} \
- --feast-serving-url=${FEAST_SERVING_URL}
-
-echo "============================================================"
-echo "Testing Streaming Import"
-echo "============================================================"
-
-cd $FEAST_HOME/integration-tests/testdata
-
-feast apply entity entity_specs/entity_2.yaml
-feast apply feature feature_specs/entity_2*.yaml
-feast jobs run import_specs/stream_from_kafka.yaml &
-
-IMPORT_JOB_PID=$!
-sleep 20
-
-cd $FEAST_HOME/integration-tests
-
-python -m testutils.kafka_producer \
- --bootstrap_servers=$KAFKA_BROKERS \
- --topic=$KAFKA_TOPICS \
- --entity_spec_file=testdata/entity_specs/entity_2.yaml \
- --feature_spec_files=testdata/feature_specs/entity_2*.yaml \
- --feature_values_file=testdata/feature_values/ingestion_2.csv
-sleep 20
-
-python -m testutils.validate_feature_values \
- --entity_spec_file=testdata/entity_specs/entity_2.yaml \
- --feature_spec_files=testdata/feature_specs/entity_2*.yaml \
- --expected-serving-values-file=testdata/feature_values/serving_2.csv \
- --feast-serving-url=$FEAST_SERVING_URL
-
-kill -9 ${IMPORT_JOB_PID}
diff --git a/.prow/scripts/install_test_tools.sh b/.prow/scripts/install_test_tools.sh
deleted file mode 100755
--- a/.prow/scripts/install_test_tools.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script installs the following Feast test utilities:
-# ============================================================
-# - gettext package so we can use envsubst command to provide values to helm template file
-# - Python 3.6 because Feast requires Python version 3.6 and above
-# - Golang if we need to build Feast CLI from source
-# - Helm if we want to install Feast release
-
-apt-get -qq update
-apt-get -y install curl wget gettext &> /dev/null
-
-curl -s https://repo.continuum.io/miniconda/Miniconda3-4.5.12-Linux-x86_64.sh -o /tmp/miniconda.sh
-bash /tmp/miniconda.sh -b -p /miniconda &> /dev/null
-export PATH=/miniconda/bin:$PATH
-
-wget -qO- https://dl.google.com/go/go1.12.5.linux-amd64.tar.gz | tar xzf -
-mv go /usr/local/
-export PATH=/usr/local/go/bin:$PATH
-export GO111MODULE=on
-
-wget -qO- https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz | tar xz
-mv linux-amd64/helm /usr/local/bin/helm
diff --git a/.prow/scripts/prepare_integration_test.sh b/.prow/scripts/prepare_integration_test.sh
deleted file mode 100755
--- a/.prow/scripts/prepare_integration_test.sh
+++ /dev/null
@@ -1,65 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-usage()
-{
- echo "usage: prepare_integration_test.sh [--skip-build true]"
-}
-
-while [ "$1" != "" ]; do
- case "$1" in
- --skip-build ) SKIP_BUILD=true; shift;;
- * ) usage; exit 1
- esac
- shift
-done
-
-# Authenticate to Google Cloud and GKE
-# ============================================================
-GOOGLE_PROJECT_ID=kf-feast
-KUBE_CLUSTER_NAME=primary-test-cluster
-KUBE_CLUSTER_ZONE=us-central1-a
-KEY_FILE=/etc/service-account/service-account.json
-
-gcloud -q auth activate-service-account --key-file=${KEY_FILE}
-gcloud -q auth configure-docker
-gcloud -q config set project ${GOOGLE_PROJECT_ID}
-gcloud -q container clusters get-credentials ${KUBE_CLUSTER_NAME} --zone ${KUBE_CLUSTER_ZONE} --project ${GOOGLE_PROJECT_ID}
-export GOOGLE_APPLICATION_CREDENTIALS=${KEY_FILE}
-
-# Install Python 3.6, Golang 1.12, Helm and Feast SDK
-# ============================================================
-. .prow/scripts/install_test_tools.sh
-. .prow/scripts/install_feast_sdk.sh
-.prow/scripts/prepare_maven_cache.sh --archive-uri gs://feast-templocation-kf-feast/.m2.tar --output-dir ${FEAST_HOME}
-
-# Prepare Feast test data and config
-# ============================================================
-
-bq -q mk --dataset ${FEAST_WAREHOUSE_DATASET}
-gsutil -q cp ${FEAST_HOME}/integration-tests/testdata/feature_values/ingestion_1.csv ${BATCH_IMPORT_DATA_GCS_PATH}
-
-BUILD_ID=${BUILD_ID:0:5}
-envsubst < integration-tests/feast-helm-values.yaml.template > integration-tests/feast-helm-values.yaml
-cd ${FEAST_HOME}/integration-tests/testdata/import_specs
-envsubst < batch_from_gcs.yaml.template > batch_from_gcs.yaml
-envsubst < stream_from_kafka.yaml.template > stream_from_kafka.yaml
-
-if [[ ! ${SKIP_BUILD} ]]; then
-
-echo "============================================================"
-echo "Building Feast for Testing"
-echo "============================================================"
-cd ${FEAST_HOME}
-docker build -t us.gcr.io/kf-feast/feast-core:${FEAST_IMAGE_TAG} -f Dockerfiles/core/Dockerfile . &
-docker build -t us.gcr.io/kf-feast/feast-serving:${FEAST_IMAGE_TAG} -f Dockerfiles/serving/Dockerfile . &
-wait
-docker push us.gcr.io/kf-feast/feast-core:${FEAST_IMAGE_TAG} &
-docker push us.gcr.io/kf-feast/feast-serving:${FEAST_IMAGE_TAG} &
-wait
-
-fi
-
-# Switch back context to original directory
-set +ex
-cd ${FEAST_HOME}
\ No newline at end of file
diff --git a/.prow/scripts/run_unit_test.sh b/.prow/scripts/run_unit_test.sh
deleted file mode 100755
--- a/.prow/scripts/run_unit_test.sh
+++ /dev/null
@@ -1,63 +0,0 @@
-#!/usr/bin/env bash
-
-# This script will run unit test for a specific Feast component:
-# - core, ingestion, serving or cli
-#
-# This script includes the pre and post test scripts, such as
-# - downloading maven cache repository
-# - saving the test output report so it can be viewed with Spyglass in Prow
-
-# Bucket in GCS used for running unit tests, when the unit tests need an
-# actual running GCS (e.g. because there is no existing mock implementation of the function to test)
-TEST_BUCKET=feast-templocation-kf-feast
-
-usage()
-{
- echo "usage: run_unit_test.sh
- --component {core, ingestion, serving, cli}"
-}
-
-while [ "$1" != "" ]; do
- case "$1" in
- --component ) COMPONENT="$2"; shift;;
- * ) usage; exit 1
- esac
- shift
-done
-
-if [[ ! ${COMPONENT} ]]; then
- usage; exit 1;
-fi
-
-. .prow/scripts/install_google_cloud_sdk.sh
-
-if [[ ${COMPONENT} == "core" ]] || [[ ${COMPONENT} == "ingestion" ]] || [[ ${COMPONENT} == "serving" ]]; then
-
- .prow/scripts/prepare_maven_cache.sh --archive-uri gs://feast-templocation-kf-feast/.m2.tar --output-dir /root/
- mvn --projects ${COMPONENT} -Dtestbucket=feast-templocation-kf-feast test
- TEST_EXIT_CODE=$?
- cp -r ${COMPONENT}/target/surefire-reports /logs/artifacts/surefire-reports
-
-elif [[ ${COMPONENT} == "cli" ]]; then
-
- # https://stackoverflow.com/questions/6871859/piping-command-output-to-tee-but-also-save-exit-code-of-command
- set -o pipefail
-
- go get -u github.com/jstemmer/go-junit-report
- go test -v ./cli/feast/... 2>&1 | tee test_output
- TEST_EXIT_CODE=$?
- cat test_output | ${GOPATH}/bin/go-junit-report > ${ARTIFACTS}/unittest-cli-report.xml
-
-elif [[ ${COMPONENT} == "python-sdk" ]]; then
-
- cd sdk/python
- pip install -r requirements-test.txt
- pip install .
- pytest ./tests --junitxml=${ARTIFACTS}/unittest-pythonsdk-report.xml
- TEST_EXIT_CODE=$?
-
-else
- usage; exit 1
-fi
-
-exit ${TEST_EXIT_CODE}
diff --git a/core/src/test/java/feast/core/job/direct/DirectRunnerJobManagerTest.java b/core/src/test/java/feast/core/job/direct/DirectRunnerJobManagerTest.java
--- a/core/src/test/java/feast/core/job/direct/DirectRunnerJobManagerTest.java
+++ b/core/src/test/java/feast/core/job/direct/DirectRunnerJobManagerTest.java
@@ -73,6 +73,7 @@ public void shouldStartDirectJobAndRegisterPipelineResult() throws IOException {
expectedPipelineOptions.setAppName("DirectRunnerJobManager");
expectedPipelineOptions.setRunner(DirectRunner.class);
expectedPipelineOptions.setBlockOnRun(false);
+ expectedPipelineOptions.setProject("");
expectedPipelineOptions.setStoreJson(Lists.newArrayList(printer.print(store)));
expectedPipelineOptions
.setFeatureSetSpecJson(Lists.newArrayList(printer.print(featureSetSpec)));
diff --git a/ingestion/src/test/java/feast/ingestion/ImportJobTest.java b/ingestion/src/test/java/feast/ingestion/ImportJobTest.java
--- a/ingestion/src/test/java/feast/ingestion/ImportJobTest.java
+++ b/ingestion/src/test/java/feast/ingestion/ImportJobTest.java
@@ -116,6 +116,7 @@ public void runPipeline_ShouldWriteToRedisCorrectlyGivenValidSpecAndFeatureRow()
options.setStoreJson(
Collections.singletonList(
JsonFormat.printer().omittingInsignificantWhitespace().print(redis)));
+ options.setProject("");
options.setBlockOnRun(false);
int inputSize = 4096;
diff --git a/ingestion/src/test/java/feast/test/TestUtil.java b/ingestion/src/test/java/feast/test/TestUtil.java
--- a/ingestion/src/test/java/feast/test/TestUtil.java
+++ b/ingestion/src/test/java/feast/test/TestUtil.java
@@ -92,7 +92,11 @@ public static void start(String kafkaHost, int kafkaPort, short kafkaReplication
public static void stop() {
if (server != null) {
- server.shutdown();
+ try {
+ server.shutdown();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
}
}
}
diff --git a/tests/e2e/conftest.py b/tests/e2e/conftest.py
--- a/tests/e2e/conftest.py
+++ b/tests/e2e/conftest.py
@@ -1,4 +1,4 @@
def pytest_addoption(parser):
parser.addoption("--core_url", action="store", default="localhost:6565")
- parser.addoption("--serving_url", action="store", default="localhost:6565")
- parser.addoption("--allow_dirty", action="store", default="false")
+ parser.addoption("--serving_url", action="store", default="localhost:6566")
+ parser.addoption("--allow_dirty", action="store", default="true")
diff --git a/tests/e2e/test_e2e.py b/tests/e2e/test_e2e.py
--- a/tests/e2e/test_e2e.py
+++ b/tests/e2e/test_e2e.py
@@ -71,8 +71,19 @@ def test_basic(client):
# Register feature set
client.apply(cust_trans_fs)
+ # Feast Core needs some time to fully commit the FeatureSet applied
+ # when there is no existing job yet for the Featureset
+ time.sleep(15)
cust_trans_fs = client.get_feature_set(name="customer_transactions", version=1)
+ if cust_trans_fs is None:
+ raise Exception(
+ "Client cannot retrieve 'customer_transactions' FeatureSet "
+ "after registration. Either Feast Core does not save the "
+ "FeatureSet correctly or the client needs to wait longer for FeatureSet "
+ "to be committed."
+ )
+
offset = random.randint(1000, 100000) # ensure a unique key space is used
customer_data = pd.DataFrame(
{
@@ -88,6 +99,8 @@ def test_basic(client):
# Poll serving for feature values until the correct values are returned
while True:
+ time.sleep(1)
+
response = client.get_online_features(
entity_rows=[
GetOnlineFeaturesRequest.EntityRow(
@@ -103,8 +116,8 @@ def test_basic(client):
"customer_transactions:1:total_transactions",
],
) # type: GetOnlineFeaturesResponse
+
if response is None:
- time.sleep(1)
continue
returned_daily_transactions = float(
@@ -124,7 +137,7 @@ def test_basic(client):
@pytest.mark.timeout(300)
def test_all_types(client):
- all_types_fs = client.get_feature_set(name="all_types", version="1")
+ all_types_fs = client.get_feature_set(name="all_types", version=1)
if all_types_fs is None:
# Register new feature set if it doesnt exist
@@ -152,7 +165,19 @@ def test_all_types(client):
# Register feature set
client.apply(all_types_fs)
- all_types_fs = client.get_feature_set(name="all_types", version="1")
+
+ # Feast Core needs some time to fully commit the FeatureSet applied
+ # when there is no existing job yet for the Featureset
+ time.sleep(10)
+ all_types_fs = client.get_feature_set(name="all_types", version=1)
+
+ if all_types_fs is None:
+ raise Exception(
+ "Client cannot retrieve 'all_types_fs' FeatureSet "
+ "after registration. Either Feast Core does not save the "
+ "FeatureSet correctly or the client needs to wait longer for FeatureSet "
+ "to be committed."
+ )
all_types_df = pd.DataFrame(
{
@@ -205,9 +230,12 @@ def test_all_types(client):
# Ingest user embedding data
all_types_fs.ingest(dataframe=all_types_df)
+ time.sleep(3)
# Poll serving for feature values until the correct values are returned
while True:
+ time.sleep(1)
+
response = client.get_online_features(
entity_rows=[
GetOnlineFeaturesRequest.EntityRow(
@@ -233,7 +261,6 @@ def test_all_types(client):
) # type: GetOnlineFeaturesResponse
if response is None:
- time.sleep(1)
continue
returned_float_list = (
@@ -268,10 +295,21 @@ def test_large_volume(client):
# Register feature set
client.apply(cust_trans_fs)
+ # Feast Core needs some time to fully commit the FeatureSet applied
+ # when there is no existing job yet for the Featureset
+ time.sleep(10)
cust_trans_fs = client.get_feature_set(
name="customer_transactions_large", version=1
)
+ if cust_trans_fs is None:
+ raise Exception(
+ "Client cannot retrieve 'customer_transactions' FeatureSet "
+ "after registration. Either Feast Core does not save the "
+ "FeatureSet correctly or the client needs to wait longer for FeatureSet "
+ "to be committed."
+ )
+
offset = random.randint(1000000, 10000000) # ensure a unique key space
customer_data = pd.DataFrame(
{
@@ -289,6 +327,8 @@ def test_large_volume(client):
# Poll serving for feature values until the correct values are returned
while True:
+ time.sleep(1)
+
response = client.get_online_features(
entity_rows=[
GetOnlineFeaturesRequest.EntityRow(
@@ -306,7 +346,6 @@ def test_large_volume(client):
) # type: GetOnlineFeaturesResponse
if response is None:
- time.sleep(1)
continue
returned_daily_transactions = float(
| Update Prow for Feast 0.3 to run tests for PRs on GitHub
Adding this issue to track running Prow tests for PRs on GitHub.
| 2019-10-24T08:59:28 |
|
feast-dev/feast | 456 | feast-dev__feast-456 | [
"415"
] | 1b47ac422f72ca06b8852277453bb4d1e7f4f366 | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -13,6 +13,7 @@
# limitations under the License.
import os
+import subprocess
from setuptools import find_packages, setup
@@ -48,7 +49,13 @@
]
# README file from Feast repo root directory
-README_FILE = os.path.join(os.path.dirname(__file__), "..", "..", "README.md")
+repo_root = (
+ subprocess.Popen(["git", "rev-parse", "--show-toplevel"], stdout=subprocess.PIPE)
+ .communicate()[0]
+ .rstrip()
+ .decode("utf-8")
+)
+README_FILE = os.path.join(repo_root, "README.md")
with open(os.path.join(README_FILE), "r") as f:
LONG_DESCRIPTION = f.read()
| Deduplicate example notebooks
Currently we have two sets of example notebooks for Feast
1. [Examples](https://github.com/gojek/feast/tree/master/examples/basic)
2. [Docker compose](https://github.com/gojek/feast/tree/master/infra/docker-compose/jupyter/notebooks)
The docker compose notebooks can be deduplicated so that all examples are only contained in the root of the project. This would make management easier.
| Hi @woop, I used the latest master branch and ingestion failed when I ran the [Examples](https://github.com/gojek/feast/tree/master/examples/basic) outside `http://localhost:8888/tree/feast-notebooks`.
So I tried to run it in `http://localhost:8888/tree/feast-notebooks` by changing the core-url and serving-url as in the [Docker compose](https://github.com/gojek/feast/tree/master/infra/docker-compose/jupyter/notebooks) notebooks, and it works.
Is it possible to run the [Examples](https://github.com/gojek/feast/tree/master/examples/basic) outside `http://localhost:8888/tree/feast-notebooks`, so I wouldn't have to enter the `jupyter` container to ingest data into Feast? Thank you
Hi @napitupulus, thank you for being so patient.
Would it be possible to create a new issue, perhaps? I don't think it's on topic for this issue (deduplication of notebooks).
You should be able to run ingestion from outside Jupyter. I suspect that the problem is that you have a firewall blocking Kafka. The important part is that you expose the ingestion port from Kafka correctly. If you are unable to connect to the Kafka cluster then ingestion will fail. | 2020-02-02T03:06:48 |
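To make the connectivity point above concrete, here is a minimal sketch (an editorial illustration, not part of the original thread) for checking that the Kafka ingestion port is reachable from wherever the ingestion script runs; the host and port (`localhost:9092`) are assumptions about how a docker-compose setup might publish the broker and will differ per environment.
```
import socket


def kafka_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the Kafka broker can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Assumed address: the broker port published to the host by docker-compose.
if not kafka_reachable("localhost", 9092):
    print("Kafka is not reachable from here; Feast ingestion will hang or fail.")
```
If this check fails from outside the container but succeeds inside it, the broker port is most likely not exposed (or is firewalled) on the host.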
|
feast-dev/feast | 536 | feast-dev__feast-536 | [
"463"
] | e5bc18c6833f742a954c1325764721b96f31faa1 | diff --git a/sdk/python/feast/feature.py b/sdk/python/feast/feature.py
--- a/sdk/python/feast/feature.py
+++ b/sdk/python/feast/feature.py
@@ -56,7 +56,7 @@ def from_proto(cls, feature_proto: FeatureProto):
Feature object
"""
feature = cls(
- name=feature_proto.name, dtype=ValueType(feature_proto.value_type)
+ name=feature_proto.name, dtype=ValueType(feature_proto.value_type),
)
feature.update_presence_constraints(feature_proto)
feature.update_shape_type(feature_proto)
diff --git a/sdk/python/feast/loaders/ingest.py b/sdk/python/feast/loaders/ingest.py
--- a/sdk/python/feast/loaders/ingest.py
+++ b/sdk/python/feast/loaders/ingest.py
@@ -25,7 +25,9 @@
KAFKA_CHUNK_PRODUCTION_TIMEOUT = 120 # type: int
-def _encode_pa_tables(file: str, fs: FeatureSet, row_group_idx: int) -> List[bytes]:
+def _encode_pa_tables(
+ file: str, feature_set: str, fields: dict, row_group_idx: int
+) -> List[bytes]:
"""
Helper function to encode a PyArrow table(s) read from parquet file(s) into
FeatureRows.
@@ -41,8 +43,11 @@ def _encode_pa_tables(file: str, fs: FeatureSet, row_group_idx: int) -> List[byt
File directory of all the parquet file to encode.
Parquet file must have more than one row group.
- fs (feast.feature_set.FeatureSet):
- FeatureSet describing parquet files.
+ feature_set (str):
+ Feature set reference in the format f"{project}/{name}:{version}".
+
+ fields (dict[str, enum.Enum.ValueType]):
+ A mapping of field names to their value types.
row_group_idx(int):
Row group index to read and encode into byte like FeatureRow
@@ -61,12 +66,10 @@ def _encode_pa_tables(file: str, fs: FeatureSet, row_group_idx: int) -> List[byt
# Preprocess the columns by converting all its values to Proto values
proto_columns = {
- field_name: pa_column_to_proto_column(field.dtype, table.column(field_name))
- for field_name, field in fs.fields.items()
+ field_name: pa_column_to_proto_column(dtype, table.column(field_name))
+ for field_name, dtype in fields.items()
}
- feature_set = f"{fs.project}/{fs.name}:{fs.version}"
-
# List to store result
feature_rows = []
@@ -120,8 +123,12 @@ def get_feature_row_chunks(
Iterable list of byte encoded FeatureRow(s).
"""
+ feature_set = f"{fs.project}/{fs.name}:{fs.version}"
+
+ field_map = {field.name: field.dtype for field in fs.fields.values()}
+
pool = Pool(max_workers)
- func = partial(_encode_pa_tables, file, fs)
+ func = partial(_encode_pa_tables, file, feature_set, field_map)
for chunk in pool.imap(func, row_groups):
yield chunk
return
| diff --git a/core/src/test/java/feast/core/service/JobServiceTest.java b/core/src/test/java/feast/core/service/JobServiceTest.java
--- a/core/src/test/java/feast/core/service/JobServiceTest.java
+++ b/core/src/test/java/feast/core/service/JobServiceTest.java
@@ -34,11 +34,8 @@
import feast.core.CoreServiceProto.RestartIngestionJobResponse;
import feast.core.CoreServiceProto.StopIngestionJobRequest;
import feast.core.CoreServiceProto.StopIngestionJobResponse;
-import feast.core.FeatureSetProto.FeatureSetStatus;
import feast.core.FeatureSetReferenceProto.FeatureSetReference;
import feast.core.IngestionJobProto.IngestionJob;
-import feast.core.SourceProto.KafkaSourceConfig;
-import feast.core.SourceProto.SourceType;
import feast.core.StoreProto.Store.RedisConfig;
import feast.core.StoreProto.Store.StoreType;
import feast.core.dao.JobRepository;
@@ -84,14 +81,7 @@ public void setup() {
// create mock objects for testing
// fake data source
- this.dataSource =
- new Source(
- SourceType.KAFKA,
- KafkaSourceConfig.newBuilder()
- .setBootstrapServers("kafka:9092")
- .setTopic("my-topic")
- .build(),
- true);
+ this.dataSource = TestObjectFactory.defaultSource;
// fake data store
this.dataStore =
new Store(
@@ -158,19 +148,12 @@ public void setupJobManager() {
// dummy model constructorss
private FeatureSet newDummyFeatureSet(String name, int version, String project) {
- Field feature = new Field(name + "_feature", Enum.INT64);
- Field entity = new Field(name + "_entity", Enum.STRING);
+ Field feature = TestObjectFactory.CreateFeatureField(name + "_feature", Enum.INT64);
+ Field entity = TestObjectFactory.CreateEntityField(name + "_entity", Enum.STRING);
FeatureSet fs =
- new FeatureSet(
- name,
- project,
- version,
- 100L,
- Arrays.asList(entity),
- Arrays.asList(feature),
- this.dataSource,
- FeatureSetStatus.STATUS_READY);
+ TestObjectFactory.CreateFeatureSet(
+ name, project, version, Arrays.asList(entity), Arrays.asList(feature));
fs.setCreated(Date.from(Instant.ofEpochSecond(10L)));
return fs;
}
diff --git a/core/src/test/java/feast/core/service/SpecServiceTest.java b/core/src/test/java/feast/core/service/SpecServiceTest.java
--- a/core/src/test/java/feast/core/service/SpecServiceTest.java
+++ b/core/src/test/java/feast/core/service/SpecServiceTest.java
@@ -39,10 +39,7 @@
import feast.core.FeatureSetProto;
import feast.core.FeatureSetProto.EntitySpec;
import feast.core.FeatureSetProto.FeatureSetSpec;
-import feast.core.FeatureSetProto.FeatureSetStatus;
import feast.core.FeatureSetProto.FeatureSpec;
-import feast.core.SourceProto.KafkaSourceConfig;
-import feast.core.SourceProto.SourceType;
import feast.core.StoreProto;
import feast.core.StoreProto.Store.RedisConfig;
import feast.core.StoreProto.Store.StoreType;
@@ -110,39 +107,24 @@ public class SpecServiceTest {
@Before
public void setUp() {
initMocks(this);
- defaultSource =
- new Source(
- SourceType.KAFKA,
- KafkaSourceConfig.newBuilder()
- .setBootstrapServers("kafka:9092")
- .setTopic("my-topic")
- .build(),
- true);
+ defaultSource = TestObjectFactory.defaultSource;
FeatureSet featureSet1v1 = newDummyFeatureSet("f1", 1, "project1");
FeatureSet featureSet1v2 = newDummyFeatureSet("f1", 2, "project1");
FeatureSet featureSet1v3 = newDummyFeatureSet("f1", 3, "project1");
FeatureSet featureSet2v1 = newDummyFeatureSet("f2", 1, "project1");
- Field f3f1 = new Field("f3f1", Enum.INT64);
- Field f3f2 = new Field("f3f2", Enum.INT64);
- Field f3e1 = new Field("f3e1", Enum.STRING);
+ Field f3f1 = TestObjectFactory.CreateFeatureField("f3f1", Enum.INT64);
+ Field f3f2 = TestObjectFactory.CreateFeatureField("f3f2", Enum.INT64);
+ Field f3e1 = TestObjectFactory.CreateEntityField("f3e1", Enum.STRING);
FeatureSet featureSet3v1 =
- new FeatureSet(
- "f3",
- "project1",
- 1,
- 100L,
- Arrays.asList(f3e1),
- Arrays.asList(f3f2, f3f1),
- defaultSource,
- FeatureSetStatus.STATUS_READY);
+ TestObjectFactory.CreateFeatureSet(
+ "f3", "project1", 1, Arrays.asList(f3e1), Arrays.asList(f3f2, f3f1));
featureSets =
Arrays.asList(featureSet1v1, featureSet1v2, featureSet1v3, featureSet2v1, featureSet3v1);
when(featureSetRepository.findAll()).thenReturn(featureSets);
when(featureSetRepository.findAllByOrderByNameAscVersionAsc()).thenReturn(featureSets);
-
when(featureSetRepository.findFeatureSetByNameAndProject_NameAndVersion("f1", "project1", 1))
.thenReturn(featureSets.get(0));
when(featureSetRepository.findAllByNameLikeAndProject_NameOrderByNameAscVersionAsc(
@@ -490,19 +472,12 @@ public void applyFeatureSetShouldIncrementFeatureSetVersionIfAlreadyExists()
public void applyFeatureSetShouldNotCreateFeatureSetIfFieldsUnordered()
throws InvalidProtocolBufferException {
- Field f3f1 = new Field("f3f1", Enum.INT64);
- Field f3f2 = new Field("f3f2", Enum.INT64);
- Field f3e1 = new Field("f3e1", Enum.STRING);
+ Field f3f1 = TestObjectFactory.CreateFeatureField("f3f1", Enum.INT64);
+ Field f3f2 = TestObjectFactory.CreateFeatureField("f3f2", Enum.INT64);
+ Field f3e1 = TestObjectFactory.CreateEntityField("f3e1", Enum.STRING);
FeatureSetProto.FeatureSet incomingFeatureSet =
- (new FeatureSet(
- "f3",
- "project1",
- 5,
- 100L,
- Arrays.asList(f3e1),
- Arrays.asList(f3f2, f3f1),
- defaultSource,
- FeatureSetStatus.STATUS_READY))
+ (TestObjectFactory.CreateFeatureSet(
+ "f3", "project1", 5, Arrays.asList(f3e1), Arrays.asList(f3f2, f3f1)))
.toProto();
ApplyFeatureSetResponse applyFeatureSetResponse =
@@ -630,16 +605,8 @@ public void applyFeatureSetShouldAcceptPresenceShapeAndDomainConstraints()
new ArrayList<>(appliedFeatureSetSpec.getFeaturesList());
appliedFeatureSpecs.sort(Comparator.comparing(FeatureSpec::getName));
- assertEquals(appliedEntitySpecs.size(), entitySpecs.size());
- assertEquals(appliedFeatureSpecs.size(), featureSpecs.size());
-
- for (int i = 0; i < appliedEntitySpecs.size(); i++) {
- assertEquals(entitySpecs.get(i), appliedEntitySpecs.get(i));
- }
-
- for (int i = 0; i < appliedFeatureSpecs.size(); i++) {
- assertEquals(featureSpecs.get(i), appliedFeatureSpecs.get(i));
- }
+ assertEquals(appliedEntitySpecs, entitySpecs);
+ assertEquals(appliedFeatureSpecs, featureSpecs);
}
@Test
@@ -713,19 +680,12 @@ public void applyFeatureSetShouldUpdateFeatureSetWhenConstraintsAreUpdated()
@Test
public void applyFeatureSetShouldCreateProjectWhenNotAlreadyExists()
throws InvalidProtocolBufferException {
- Field f3f1 = new Field("f3f1", Enum.INT64);
- Field f3f2 = new Field("f3f2", Enum.INT64);
- Field f3e1 = new Field("f3e1", Enum.STRING);
+ Field f3f1 = TestObjectFactory.CreateFeatureField("f3f1", Enum.INT64);
+ Field f3f2 = TestObjectFactory.CreateFeatureField("f3f2", Enum.INT64);
+ Field f3e1 = TestObjectFactory.CreateEntityField("f3e1", Enum.STRING);
FeatureSetProto.FeatureSet incomingFeatureSet =
- (new FeatureSet(
- "f3",
- "newproject",
- 5,
- 100L,
- Arrays.asList(f3e1),
- Arrays.asList(f3f2, f3f1),
- defaultSource,
- FeatureSetStatus.STATUS_READY))
+ (TestObjectFactory.CreateFeatureSet(
+ "f3", "newproject", 5, Arrays.asList(f3e1), Arrays.asList(f3f2, f3f1)))
.toProto();
ApplyFeatureSetResponse applyFeatureSetResponse =
@@ -739,19 +699,12 @@ public void applyFeatureSetShouldCreateProjectWhenNotAlreadyExists()
@Test
public void applyFeatureSetShouldFailWhenProjectIsArchived()
throws InvalidProtocolBufferException {
- Field f3f1 = new Field("f3f1", Enum.INT64);
- Field f3f2 = new Field("f3f2", Enum.INT64);
- Field f3e1 = new Field("f3e1", Enum.STRING);
+ Field f3f1 = TestObjectFactory.CreateFeatureField("f3f1", Enum.INT64);
+ Field f3f2 = TestObjectFactory.CreateFeatureField("f3f2", Enum.INT64);
+ Field f3e1 = TestObjectFactory.CreateEntityField("f3e1", Enum.STRING);
FeatureSetProto.FeatureSet incomingFeatureSet =
- (new FeatureSet(
- "f3",
- "archivedproject",
- 5,
- 100L,
- Arrays.asList(f3e1),
- Arrays.asList(f3f2, f3f1),
- defaultSource,
- FeatureSetStatus.STATUS_READY))
+ (TestObjectFactory.CreateFeatureSet(
+ "f3", "archivedproject", 5, Arrays.asList(f3e1), Arrays.asList(f3f2, f3f1)))
.toProto();
expectedException.expect(IllegalArgumentException.class);
@@ -759,6 +712,101 @@ public void applyFeatureSetShouldFailWhenProjectIsArchived()
specService.applyFeatureSet(incomingFeatureSet);
}
+ @Test
+ public void applyFeatureSetShouldAcceptFeatureLabels() throws InvalidProtocolBufferException {
+ List<EntitySpec> entitySpecs = new ArrayList<>();
+ entitySpecs.add(EntitySpec.newBuilder().setName("entity1").setValueType(Enum.INT64).build());
+
+ Map<String, String> featureLabels0 =
+ new HashMap<>() {
+ {
+ put("label1", "feast1");
+ }
+ };
+
+ Map<String, String> featureLabels1 =
+ new HashMap<>() {
+ {
+ put("label1", "feast1");
+ put("label2", "feast2");
+ }
+ };
+
+ List<Map<String, String>> featureLabels = new ArrayList<>();
+ featureLabels.add(featureLabels0);
+ featureLabels.add(featureLabels1);
+
+ List<FeatureSpec> featureSpecs = new ArrayList<>();
+ featureSpecs.add(
+ FeatureSpec.newBuilder()
+ .setName("feature1")
+ .setValueType(Enum.INT64)
+ .putAllLabels(featureLabels.get(0))
+ .build());
+ featureSpecs.add(
+ FeatureSpec.newBuilder()
+ .setName("feature2")
+ .setValueType(Enum.INT64)
+ .putAllLabels(featureLabels.get(1))
+ .build());
+
+ FeatureSetSpec featureSetSpec =
+ FeatureSetSpec.newBuilder()
+ .setProject("project1")
+ .setName("featureSetWithConstraints")
+ .addAllEntities(entitySpecs)
+ .addAllFeatures(featureSpecs)
+ .build();
+ FeatureSetProto.FeatureSet featureSet =
+ FeatureSetProto.FeatureSet.newBuilder().setSpec(featureSetSpec).build();
+
+ ApplyFeatureSetResponse applyFeatureSetResponse = specService.applyFeatureSet(featureSet);
+ FeatureSetSpec appliedFeatureSetSpec = applyFeatureSetResponse.getFeatureSet().getSpec();
+
+ // appliedEntitySpecs needs to be sorted because the list returned by specService may not
+ // follow the order in the request
+ List<EntitySpec> appliedEntitySpecs = new ArrayList<>(appliedFeatureSetSpec.getEntitiesList());
+ appliedEntitySpecs.sort(Comparator.comparing(EntitySpec::getName));
+
+ // appliedFeatureSpecs needs to be sorted because the list returned by specService may not
+ // follow the order in the request
+ List<FeatureSpec> appliedFeatureSpecs =
+ new ArrayList<>(appliedFeatureSetSpec.getFeaturesList());
+ appliedFeatureSpecs.sort(Comparator.comparing(FeatureSpec::getName));
+
+ var featureSpecsLabels =
+ featureSpecs.stream().map(e -> e.getLabelsMap()).collect(Collectors.toList());
+ assertEquals(appliedEntitySpecs, entitySpecs);
+ assertEquals(appliedFeatureSpecs, featureSpecs);
+ assertEquals(featureSpecsLabels, featureLabels);
+ }
+
+ @Test
+ public void applyFeatureSetShouldAcceptFeatureSetLabels() throws InvalidProtocolBufferException {
+ Map<String, String> featureSetLabels =
+ new HashMap<>() {
+ {
+ put("description", "My precious feature set");
+ }
+ };
+
+ FeatureSetSpec featureSetSpec =
+ FeatureSetSpec.newBuilder()
+ .setProject("project1")
+ .setName("preciousFeatureSet")
+ .putAllLabels(featureSetLabels)
+ .build();
+ FeatureSetProto.FeatureSet featureSet =
+ FeatureSetProto.FeatureSet.newBuilder().setSpec(featureSetSpec).build();
+
+ ApplyFeatureSetResponse applyFeatureSetResponse = specService.applyFeatureSet(featureSet);
+ FeatureSetSpec appliedFeatureSetSpec = applyFeatureSetResponse.getFeatureSet().getSpec();
+
+ var appliedLabels = appliedFeatureSetSpec.getLabelsMap();
+
+ assertEquals(featureSetLabels, appliedLabels);
+ }
+
@Test
public void shouldUpdateStoreIfConfigChanges() throws InvalidProtocolBufferException {
when(storeRepository.findById("SERVING")).thenReturn(Optional.of(stores.get(0)));
@@ -806,19 +854,18 @@ public void shouldFailIfGetFeatureSetWithoutProject() throws InvalidProtocolBuff
}
private FeatureSet newDummyFeatureSet(String name, int version, String project) {
- Field feature = new Field("feature", Enum.INT64);
- Field entity = new Field("entity", Enum.STRING);
+ FeatureSpec f1 =
+ FeatureSpec.newBuilder()
+ .setName("feature")
+ .setValueType(Enum.STRING)
+ .putLabels("key", "value")
+ .build();
+ Field feature = new Field(f1);
+ Field entity = TestObjectFactory.CreateEntityField("entity", Enum.STRING);
FeatureSet fs =
- new FeatureSet(
- name,
- project,
- version,
- 100L,
- Arrays.asList(entity),
- Arrays.asList(feature),
- defaultSource,
- FeatureSetStatus.STATUS_READY);
+ TestObjectFactory.CreateFeatureSet(
+ name, project, version, Arrays.asList(entity), Arrays.asList(feature));
fs.setCreated(Date.from(Instant.ofEpochSecond(10L)));
return fs;
}
diff --git a/core/src/test/java/feast/core/service/TestObjectFactory.java b/core/src/test/java/feast/core/service/TestObjectFactory.java
new file mode 100644
--- /dev/null
+++ b/core/src/test/java/feast/core/service/TestObjectFactory.java
@@ -0,0 +1,62 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ * Copyright 2018-2020 The Feast Authors
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package feast.core.service;
+
+import feast.core.FeatureSetProto;
+import feast.core.SourceProto;
+import feast.core.model.FeatureSet;
+import feast.core.model.Field;
+import feast.core.model.Source;
+import feast.types.ValueProto;
+import java.util.HashMap;
+import java.util.List;
+
+public class TestObjectFactory {
+
+ public static Source defaultSource =
+ new Source(
+ SourceProto.SourceType.KAFKA,
+ SourceProto.KafkaSourceConfig.newBuilder()
+ .setBootstrapServers("kafka:9092")
+ .setTopic("my-topic")
+ .build(),
+ true);
+
+ public static FeatureSet CreateFeatureSet(
+ String name, String project, int version, List<Field> entities, List<Field> features) {
+ return new FeatureSet(
+ name,
+ project,
+ version,
+ 100L,
+ entities,
+ features,
+ defaultSource,
+ new HashMap<>(),
+ FeatureSetProto.FeatureSetStatus.STATUS_READY);
+ }
+
+ public static Field CreateFeatureField(String name, ValueProto.ValueType.Enum valueType) {
+ return new Field(
+ FeatureSetProto.FeatureSpec.newBuilder().setName(name).setValueType(valueType).build());
+ }
+
+ public static Field CreateEntityField(String name, ValueProto.ValueType.Enum valueType) {
+ return new Field(
+ FeatureSetProto.EntitySpec.newBuilder().setName(name).setValueType(valueType).build());
+ }
+}
diff --git a/core/src/test/java/feast/core/util/TypeConversionTest.java b/core/src/test/java/feast/core/util/TypeConversionTest.java
--- a/core/src/test/java/feast/core/util/TypeConversionTest.java
+++ b/core/src/test/java/feast/core/util/TypeConversionTest.java
@@ -18,8 +18,7 @@
import static com.jayway.jsonpath.matchers.JsonPathMatchers.hasJsonPath;
import static org.hamcrest.Matchers.equalTo;
-import static org.junit.Assert.assertThat;
-import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.*;
import com.google.protobuf.Timestamp;
import java.util.*;
@@ -70,6 +69,12 @@ public void convertMapToJsonStringShouldReturnJsonStringForGivenMap() {
TypeConversion.convertMapToJsonString(input), hasJsonPath("$.key", equalTo("value")));
}
+ @Test
+ public void convertMapToJsonStringShouldReturnEmptyJsonForAnEmptyMap() {
+ Map<String, String> input = new HashMap<>();
+ assertThat(TypeConversion.convertMapToJsonString(input), equalTo("{}"));
+ }
+
@Test
public void convertJsonStringToArgsShouldReturnCorrectListOfArgs() {
Map<String, String> input = new HashMap<>();
diff --git a/core/src/test/java/feast/core/validators/FeatureSetValidatorTest.java b/core/src/test/java/feast/core/validators/FeatureSetValidatorTest.java
new file mode 100644
--- /dev/null
+++ b/core/src/test/java/feast/core/validators/FeatureSetValidatorTest.java
@@ -0,0 +1,87 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ * Copyright 2018-2020 The Feast Authors
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package feast.core.validators;
+
+import feast.core.FeatureSetProto;
+import feast.types.ValueProto;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+public class FeatureSetValidatorTest {
+
+ @Rule public final ExpectedException expectedException = ExpectedException.none();
+
+ @Test
+ public void shouldThrowExceptionForFeatureLabelsWithAnEmptyKey() {
+ Map<String, String> featureLabels =
+ new HashMap<>() {
+ {
+ put("", "empty_key");
+ }
+ };
+
+ List<FeatureSetProto.FeatureSpec> featureSpecs = new ArrayList<>();
+ featureSpecs.add(
+ FeatureSetProto.FeatureSpec.newBuilder()
+ .setName("feature1")
+ .setValueType(ValueProto.ValueType.Enum.INT64)
+ .putAllLabels(featureLabels)
+ .build());
+
+ FeatureSetProto.FeatureSetSpec featureSetSpec =
+ FeatureSetProto.FeatureSetSpec.newBuilder()
+ .setProject("project1")
+ .setName("featureSetWithConstraints")
+ .addAllFeatures(featureSpecs)
+ .build();
+ FeatureSetProto.FeatureSet featureSet =
+ FeatureSetProto.FeatureSet.newBuilder().setSpec(featureSetSpec).build();
+
+ expectedException.expect(IllegalArgumentException.class);
+ expectedException.expectMessage("Feature label keys must not be empty");
+ FeatureSetValidator.validateSpec(featureSet);
+ }
+
+ @Test
+ public void shouldThrowExceptionForFeatureSetLabelsWithAnEmptyKey() {
+
+ Map<String, String> featureSetLabels =
+ new HashMap<>() {
+ {
+ put("", "empty_key");
+ }
+ };
+
+ FeatureSetProto.FeatureSetSpec featureSetSpec =
+ FeatureSetProto.FeatureSetSpec.newBuilder()
+ .setProject("project1")
+ .setName("featureSetWithConstraints")
+ .putAllLabels(featureSetLabels)
+ .build();
+ FeatureSetProto.FeatureSet featureSet =
+ FeatureSetProto.FeatureSet.newBuilder().setSpec(featureSetSpec).build();
+
+ expectedException.expect(IllegalArgumentException.class);
+ expectedException.expectMessage("Feature set label keys must not be empty");
+ FeatureSetValidator.validateSpec(featureSet);
+ }
+}
diff --git a/sdk/python/tests/test_client.py b/sdk/python/tests/test_client.py
--- a/sdk/python/tests/test_client.py
+++ b/sdk/python/tests/test_client.py
@@ -559,7 +559,16 @@ def test_apply_feature_set_success(self, test_client):
and feature_sets[0].name == "my-feature-set-1"
and feature_sets[0].features[0].name == "fs1-my-feature-1"
and feature_sets[0].features[0].dtype == ValueType.INT64
+ and feature_sets[0].features[1].name == "fs1-my-feature-2"
+ and feature_sets[0].features[1].dtype == ValueType.STRING
+ and feature_sets[0].entities[0].name == "fs1-my-entity-1"
+ and feature_sets[0].entities[0].dtype == ValueType.INT64
+ and feature_sets[1].features[0].name == "fs2-my-feature-1"
+ and feature_sets[1].features[0].dtype == ValueType.STRING_LIST
+ and feature_sets[1].features[1].name == "fs2-my-feature-2"
and feature_sets[1].features[1].dtype == ValueType.BYTES_LIST
+ and feature_sets[1].entities[0].name == "fs2-my-entity-1"
+ and feature_sets[1].entities[0].dtype == ValueType.INT64
)
@pytest.mark.parametrize(
diff --git a/tests/e2e/basic-ingest-redis-serving.py b/tests/e2e/basic-ingest-redis-serving.py
--- a/tests/e2e/basic-ingest-redis-serving.py
+++ b/tests/e2e/basic-ingest-redis-serving.py
@@ -2,12 +2,15 @@
import math
import random
import time
+import grpc
from feast.entity import Entity
from feast.serving.ServingService_pb2 import (
GetOnlineFeaturesRequest,
GetOnlineFeaturesResponse,
)
from feast.core.IngestionJob_pb2 import IngestionJobStatus
+from feast.core.CoreService_pb2_grpc import CoreServiceStub
+from feast.core import CoreService_pb2
from feast.types.Value_pb2 import Value as Value
from feast.client import Client
from feast.feature_set import FeatureSet, FeatureSetRef
@@ -26,6 +29,7 @@
FLOAT_TOLERANCE = 0.00001
PROJECT_NAME = 'basic_' + uuid.uuid4().hex.upper()[0:6]
+
@pytest.fixture(scope='module')
def core_url(pytestconfig):
return pytestconfig.getoption("core_url")
@@ -109,6 +113,7 @@ def test_basic_ingest_success(client, basic_dataframe):
client.ingest(cust_trans_fs, basic_dataframe)
time.sleep(5)
+
@pytest.mark.timeout(45)
@pytest.mark.run(order=12)
def test_basic_retrieve_online_success(client, basic_dataframe):
@@ -146,12 +151,13 @@ def test_basic_retrieve_online_success(client, basic_dataframe):
basic_dataframe.iloc[0]["daily_transactions"])
if math.isclose(
- sent_daily_transactions,
- returned_daily_transactions,
- abs_tol=FLOAT_TOLERANCE,
+ sent_daily_transactions,
+ returned_daily_transactions,
+ abs_tol=FLOAT_TOLERANCE,
):
break
+
@pytest.mark.timeout(300)
@pytest.mark.run(order=19)
def test_basic_ingest_jobs(client, basic_dataframe):
@@ -319,20 +325,20 @@ def test_all_types_retrieve_online_success(client, all_types_dataframe):
if response is None:
continue
-
returned_float_list = (
response.field_values[0]
- .fields[PROJECT_NAME+"/float_list_feature"]
+ .fields[PROJECT_NAME + "/float_list_feature"]
.float_list_val.val
)
sent_float_list = all_types_dataframe.iloc[0]["float_list_feature"]
if math.isclose(
- returned_float_list[0], sent_float_list[0], abs_tol=FLOAT_TOLERANCE
+ returned_float_list[0], sent_float_list[0], abs_tol=FLOAT_TOLERANCE
):
break
+
@pytest.mark.timeout(300)
@pytest.mark.run(order=29)
def test_all_types_ingest_jobs(client, all_types_dataframe):
@@ -355,6 +361,7 @@ def test_all_types_ingest_jobs(client, all_types_dataframe):
ingest_job.wait(IngestionJobStatus.ABORTED)
assert ingest_job.status == IngestionJobStatus.ABORTED
+
@pytest.fixture(scope='module')
def large_volume_dataframe():
ROW_COUNT = 100000
@@ -445,9 +452,9 @@ def test_large_volume_retrieve_online_success(client, large_volume_dataframe):
large_volume_dataframe.iloc[0]["daily_transactions_large"])
if math.isclose(
- sent_daily_transactions,
- returned_daily_transactions,
- abs_tol=FLOAT_TOLERANCE,
+ sent_daily_transactions,
+ returned_daily_transactions,
+ abs_tol=FLOAT_TOLERANCE,
):
break
@@ -462,14 +469,14 @@ def all_types_parquet_file():
"customer_id": [np.int32(random.randint(0, 10000)) for _ in
range(COUNT)],
"int32_feature_parquet": [np.int32(random.randint(0, 10000)) for _ in
- range(COUNT)],
+ range(COUNT)],
"int64_feature_parquet": [np.int64(random.randint(0, 10000)) for _ in
- range(COUNT)],
+ range(COUNT)],
"float_feature_parquet": [np.float(random.random()) for _ in range(COUNT)],
"double_feature_parquet": [np.float64(random.random()) for _ in
- range(COUNT)],
+ range(COUNT)],
"string_feature_parquet": ["one" + str(random.random()) for _ in
- range(COUNT)],
+ range(COUNT)],
"bytes_feature_parquet": [b"one" for _ in range(COUNT)],
"int32_list_feature_parquet": [
np.array([1, 2, 3, random.randint(0, 10000)], dtype=np.int32)
@@ -509,6 +516,7 @@ def all_types_parquet_file():
df.to_parquet(file_path, allow_truncated_timestamps=True)
return file_path
+
@pytest.mark.timeout(300)
@pytest.mark.run(order=40)
def test_all_types_parquet_register_feature_set_success(client):
@@ -539,10 +547,86 @@ def test_all_types_parquet_register_feature_set_success(client):
@pytest.mark.timeout(600)
@pytest.mark.run(order=41)
def test_all_types_infer_register_ingest_file_success(client,
- all_types_parquet_file):
+ all_types_parquet_file):
# Get feature set
all_types_fs = client.get_feature_set(name="all_types_parquet")
# Ingest user embedding data
client.ingest(feature_set=all_types_fs, source=all_types_parquet_file,
force_update=True)
+
+
+# TODO: rewrite these using python SDK once the labels are implemented there
+class TestsBasedOnGrpc:
+ LAST_VERSION = 0
+ GRPC_CONNECTION_TIMEOUT = 3
+ LABEL_KEY = "my"
+ LABEL_VALUE = "label"
+
+ @pytest.fixture(scope="module")
+ def core_service_stub(self, core_url):
+ if core_url.endswith(":443"):
+ core_channel = grpc.secure_channel(
+ core_url, grpc.ssl_channel_credentials()
+ )
+ else:
+ core_channel = grpc.insecure_channel(core_url)
+
+ try:
+ grpc.channel_ready_future(core_channel).result(timeout=self.GRPC_CONNECTION_TIMEOUT)
+ except grpc.FutureTimeoutError:
+ raise ConnectionError(
+ f"Connection timed out while attempting to connect to Feast "
+ f"Core gRPC server {core_url} "
+ )
+ core_service_stub = CoreServiceStub(core_channel)
+ return core_service_stub
+
+ def apply_feature_set(self, core_service_stub, feature_set_proto):
+ try:
+ apply_fs_response = core_service_stub.ApplyFeatureSet(
+ CoreService_pb2.ApplyFeatureSetRequest(feature_set=feature_set_proto),
+ timeout=self.GRPC_CONNECTION_TIMEOUT,
+ ) # type: ApplyFeatureSetResponse
+ except grpc.RpcError as e:
+ raise grpc.RpcError(e.details())
+ return apply_fs_response.feature_set
+
+ def get_feature_set(self, core_service_stub, name, project):
+ try:
+ get_feature_set_response = core_service_stub.GetFeatureSet(
+ CoreService_pb2.GetFeatureSetRequest(
+ project=project, name=name.strip(), version=self.LAST_VERSION
+ )
+ ) # type: GetFeatureSetResponse
+ except grpc.RpcError as e:
+ raise grpc.RpcError(e.details())
+ return get_feature_set_response.feature_set
+
+ @pytest.mark.timeout(45)
+ @pytest.mark.run(order=51)
+ def test_register_feature_set_with_labels(self, core_service_stub):
+ feature_set_name = "test_feature_set_labels"
+ feature_set_proto = FeatureSet(feature_set_name, PROJECT_NAME).to_proto()
+ feature_set_proto.spec.labels[self.LABEL_KEY] = self.LABEL_VALUE
+ self.apply_feature_set(core_service_stub, feature_set_proto)
+
+ retrieved_feature_set = self.get_feature_set(core_service_stub, feature_set_name, PROJECT_NAME)
+
+ assert self.LABEL_KEY in retrieved_feature_set.spec.labels
+ assert retrieved_feature_set.spec.labels[self.LABEL_KEY] == self.LABEL_VALUE
+
+ @pytest.mark.timeout(45)
+ @pytest.mark.run(order=52)
+ def test_register_feature_with_labels(self, core_service_stub):
+ feature_set_name = "test_feature_labels"
+ feature_set_proto = FeatureSet(feature_set_name, PROJECT_NAME, features=[Feature("rating", ValueType.INT64)]) \
+ .to_proto()
+ feature_set_proto.spec.features[0].labels[self.LABEL_KEY] = self.LABEL_VALUE
+ self.apply_feature_set(core_service_stub, feature_set_proto)
+
+ retrieved_feature_set = self.get_feature_set(core_service_stub, feature_set_name, PROJECT_NAME)
+ retrieved_feature = retrieved_feature_set.spec.features[0]
+
+ assert self.LABEL_KEY in retrieved_feature.labels
+ assert retrieved_feature.labels[self.LABEL_KEY] == self.LABEL_VALUE
| Extend feature set and/or feature metadata
This issue tracks the addition of new fields to the current feature set specification that allow a user to add metadata to either the feature set or its features. These fields are optional and are intended to provide users with the flexibility to include feature set level or feature level information.
The current proposal is to only add a single string field called `description` to `FeatureSpec`
| We would like to integrate Feast with a Data Governance Tool (one of [these](https://www.softwaretestinghelp.com/data-governance-tools/)).
It would be helpful to have additional metadata other than "description". E.g.
- owner
- team
- scope
- deprecated
- source
- sensitive
- pii-level
- relationships
How about just adding a `metadata` field of type `map<string, string>`?
> We would like to integrate Feast with a Data Governance Tool (one of [these](https://www.softwaretestinghelp.com/data-governance-tools/)).
>
> It would be helpful to have additional metadata other than "description". E.g.
>
> * owner
> * team
> * scope
> * deprecated
> * source
> * sensitive
> * pii-level
> * relationships
>
> How about just adding a `metadata` field of type `map<string, string>`?
Hi @Yanson, thanks for the input!
We'd love to support that use case; in fact, @ches and some of our other users have also asked for this. In those discussions the idea was brought up that we could add a label/annotation/tags field (a string map) to either a feature set or a feature. That would allow users to add any number of properties to their spec. That sounds very similar to what you are describing above.
The challenge there is not so much in capturing the information, but more in how we expose it. In your use case, were you looking for something like this (and perhaps something similar on a UI based search):
```
client.list_features(meta={"source":"my-db"})
```
or how would you end up consuming the meta/labels/annotations?
Our basic needs would be that you do `list_features()` with no args and you get everything with associated metadata. We would have a scheduled process that does this and pushes the results to the Data Governance Tool.
While the ultimate goal is to use the Data Governance Tool for discovery, I have a feeling it will be rather heavyweight so a Feast UI (which itself would require the "search" functionality you describe) would probably provide immediate value to Data Scientists.
One very important reason for using a Feature Store is to share and discover Features, but I don't think _search_ should be the ultimate goal of the Feast project itself. Keep it simple (API only) and create another "app" that can act as a UI, search, rating, discussion tool etc which users can either deploy too or integrate something else as preferred.
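As a rough sketch of the scheduled export flow mentioned above (a hypothetical illustration, not part of the thread): a job could walk the registry through the client and dump names plus labels for the governance tool to pick up. The endpoint addresses and the output file name are assumptions; the client method and the `labels` fields follow the API pieces discussed in this thread.
```
import json

from feast.client import Client

# Assumed endpoints; point these at your Feast Core/Serving deployment.
client = Client(core_url="localhost:6565", serving_url="localhost:6566")

export = []
for fs in client.list_feature_sets():
    export.append(
        {
            "feature_set": fs.name,
            "labels": dict(fs.labels),
            "features": [
                {"name": f.name, "type": f.dtype.name, "labels": dict(f.labels)}
                for f in fs.features
            ],
        }
    )

# Hypothetical output file consumed by the governance tool.
with open("governance_export.json", "w") as out:
    json.dump(export, out, indent=2)
```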
> Keep it simple (API only) and create another "app" that can act as a UI, search, rating, discussion tool etc which users can either deploy too or integrate something else as preferred.
Yip, this is what we had in mind as well.
The question is just if the `description` and `metadata` should be the same thing or separate. We planned on adding the `metadata` type field (aka label/annotation/tags) in 0.6, but I am happy to accelerate this if it's needed by folks already.
> The question is just if the `description` and `metadata` should be the same thing or separate.
Honestly, I don't have much of an opinion. I wouldn't want to see the description "misused" though, if it's the only field (think: custom CSV or JSON content in there).
> We planned on adding the `metadata` type field (aka label/annotation/tags) in 0.6, but I am happy to accelerate this if it's needed by folks already.
Not in a desperate hurry. We can contribute if it's that urgent for us.
Personally, I would suggest that they should be separate. If you don't provide people the option to add metadata, but only a description, then I expect people will abuse it.
> Personally, I would suggest that they should be separate. If you don't provide people the option to add metadata, but only a description, then I expect people will abuse it.
I was thinking that we could start with the reverse. Metadata first, and add the description field later. The description field is only valuable above metadata in the case where we want to encourage users to set that specific key, and we perhaps want to print out the contents on a user interface. There is no use case for it right now, but it seems we do have one for metadata.
Yeah, that makes sense to me.
Thanks!
I'll try to chime in here shortly since, yes, this topic comes up over and over for us. I will go ahead and cross-reference #363 as a thread that I wanted to find to refer back to on this.
A possible elephant in the room too… This issue title is explicitly "feature set metadata" and we may want to keep it limited to that, but we've touched on some potential use cases for feature-level metadata as well (governance tags and descriptions for humans are both relevant to me, at feature level also). Clearly the complexity of registration might explode with that, but perhaps it's essential complexity, and could be optional.
Especially if feature sets might increasingly be downplayed (for similar reasons that consumers don't want to care about feature sets, they are probably less interesting in a registry browsing UI than entities, features, and projects), perhaps it'd be worth bringing feature-level into scope from the outset of metadata discussion.
> I'll try to chime in here shortly since, yes, this topic comes up over and over for us. I will go ahead and cross-reference #363 as a thread that I wanted to find to refer back to on this.
>
> A possible elephant in the room too… This issue title is explicitly "feature set metadata" and we may want to keep it limited to that, but we've touched on some potential use cases for feature-level metadata as well (governance tags and descriptions for humans are both relevant to me, at feature level also). Clearly the complexity of registration might explode with that, but perhaps it's essential complexity, and could be optional.
Agreed on the title being a bit misleading, I will update it to include both in scope. I think the feature level discussion is much more relevant right now.
> Especially if feature sets might increasingly be downplayed (for similar reasons that consumers don't want to care about feature sets, they are probably less interesting in a registry browsing UI than entities, features, and projects), perhaps it'd be worth bringing feature-level into scope from the outset of metadata discussion.
I agree on downplaying feature sets here. It seems like we can immediately add value by providing a means of capturing metadata at the feature level.
I want to try and gauge the appetite for including feature level tags/meta in 0.5. @ches @tfurmston @Yanson do we need to spec this out at a higher level with discovery and exploration, or are we comfortable with the addition of a field to feature specs and a means of configuring it, and leaving the higher level APIs to future versions?
So potentially a proposal could be as above
```
message FeatureSpec {
  string name = 1;
  feast.types.ValueType.Enum value_type = 2;
  // other fields
  map<string, string> labels = 19;
}
```
with the Python SDK having a `set_label(key, value)` method and a `remove_label(key)` method on the Feature class. `list_feature_sets()` would print out this information as well, but filtering will be left for a future release.
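As a rough usage sketch of the label API proposed here (the names follow the Python SDK changes that eventually landed in this thread's follow-up PRs, such as `labels=` on `Feature` and `set_label`/`remove_label` on `FeatureSet`; treat it as illustrative rather than the final API):
```
from feast.feature import Feature
from feast.feature_set import FeatureSet
from feast.value_type import ValueType

# Feature-level labels supplied at construction time.
rating = Feature(name="rating", dtype=ValueType.INT64, labels={"team": "ranking", "pii": "false"})

# Feature-set-level labels managed through setters.
fs = FeatureSet("driver_features", features=[rating])
fs.set_label("owner", "data-platform")
fs.set_label("deprecated", "true")
fs.remove_label("deprecated")

assert fs.labels["owner"] == "data-platform"
assert rating.labels["team"] == "ranking"
```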
In terms of names, I am open to suggestions. The following have been proposed
- meta
- tags
- labels
- annotations
My preference is `labels`, mostly because it mirrors the way that it has been used in [Prometheus](https://prometheus.io/docs/practices/naming/#labels) and [Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
I would rule-out `tags` because that doesn't sound like key+value.
Kubernetes has `labels` **and** `annotations`, both of which are set under `metadata`.
> Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects.
Regardless, I am happy with `labels`, with search, filtering, etc. left until later.
Completely agree that feature level makes sense. As an end user, that is likely how I would expect to use it.
From my point of view, I don't think we are at the point of having a more concrete set of requirements, so I'm happy to leave that for later.
Unless there are any objections, we will implement `labels` as a `map<string, string>` at the feature level for 0.5. We will also add basic getters/setters at the feature set level. We will leave the discovery implementation for future releases.
Please vote with a thumbs down if you want to discuss this further. | 2020-03-16T04:42:15 |
feast-dev/feast | 636 | feast-dev__feast-636 | [
"626"
] | dfc81b9e7530b176df607defb4926dfd87df9820 | diff --git a/sdk/python/feast/loaders/abstract_producer.py b/sdk/python/feast/loaders/abstract_producer.py
--- a/sdk/python/feast/loaders/abstract_producer.py
+++ b/sdk/python/feast/loaders/abstract_producer.py
@@ -25,8 +25,6 @@ class AbstractProducer:
def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
self.brokers = brokers
self.row_count = row_count
- self.error_count = 0
- self.last_exception = ""
# Progress bar will always display average rate
self.pbar = tqdm(
@@ -45,8 +43,7 @@ def _inc_pbar(self, meta):
self.pbar.update(1)
def _set_error(self, exception: str):
- self.error_count += 1
- self.last_exception = exception
+ raise Exception(exception)
def print_results(self) -> None:
"""
@@ -62,24 +59,7 @@ def print_results(self) -> None:
print("Ingestion complete!")
- failed_message = (
- ""
- if self.error_count == 0
- else f"\nFail: {self.error_count / self.row_count}"
- )
-
- last_exception_message = (
- ""
- if self.last_exception == ""
- else f"\nLast exception:\n{self.last_exception}"
- )
-
- print(
- f"\nIngestion statistics:"
- f"\nSuccess: {self.pbar.n}/{self.row_count}"
- f"{failed_message}"
- f"{last_exception_message}"
- )
+ print(f"\nIngestion statistics:" f"\nSuccess: {self.pbar.n}/{self.row_count}")
return None
@@ -129,7 +109,10 @@ def flush(self, timeout: Optional[int]):
Returns:
int: Number of messages still in queue.
"""
- return self.producer.flush(timeout=timeout)
+ messages = self.producer.flush(timeout=timeout)
+ if messages:
+ raise Exception("Not all Kafka messages are successfully delivered.")
+ return messages
def _delivery_callback(self, err: str, msg) -> None:
"""
@@ -200,7 +183,10 @@ def flush(self, timeout: Optional[int]):
KafkaTimeoutError: failure to flush buffered records within the
provided timeout
"""
- return self.producer.flush(timeout=timeout)
+ messages = self.producer.flush(timeout=timeout)
+ if messages:
+ raise Exception("Not all Kafka messages are successfully delivered.")
+ return messages
def get_producer(
| diff --git a/sdk/python/tests/test_client.py b/sdk/python/tests/test_client.py
--- a/sdk/python/tests/test_client.py
+++ b/sdk/python/tests/test_client.py
@@ -601,6 +601,38 @@ def test_feature_set_ingest_success(self, dataframe, test_client, mocker):
# Ingest data into Feast
test_client.ingest("driver-feature-set", dataframe)
+ @pytest.mark.parametrize(
+ "dataframe,test_client,exception",
+ [(dataframes.GOOD, pytest.lazy_fixture("client"), Exception)],
+ )
+ def test_feature_set_ingest_throws_exception_if_kafka_down(
+ self, dataframe, test_client, exception, mocker
+ ):
+
+ test_client.set_project("project1")
+ driver_fs = FeatureSet(
+ "driver-feature-set",
+ source=KafkaSource(brokers="localhost:4412", topic="test"),
+ )
+ driver_fs.add(Feature(name="feature_1", dtype=ValueType.FLOAT))
+ driver_fs.add(Feature(name="feature_2", dtype=ValueType.STRING))
+ driver_fs.add(Feature(name="feature_3", dtype=ValueType.INT64))
+ driver_fs.add(Entity(name="entity_id", dtype=ValueType.INT64))
+
+ # Register with Feast core
+ test_client.apply(driver_fs)
+ driver_fs = driver_fs.to_proto()
+ driver_fs.meta.status = FeatureSetStatusProto.STATUS_READY
+
+ mocker.patch.object(
+ test_client._core_service_stub,
+ "GetFeatureSet",
+ return_value=GetFeatureSetResponse(feature_set=driver_fs),
+ )
+
+ with pytest.raises(exception):
+ test_client.ingest("driver-feature-set", dataframe)
+
@pytest.mark.parametrize(
"dataframe,exception,test_client",
[
| No exception when connecting to Kafka fails
Hey guys,
I have installed Feast (0.4.4) via Helm 3 on GKE.
The Basic example works until the ingestion (online serving) part.
There I get:
Waiting for feature set to be ready for ingestion...
0%| | 0/15 [00:00<?, ?rows/s]
but there is no progress.
When I look at GCP's BigQuery interface I can see that the project "customer_project" is created with the correct columns in "customer_transactions".
But there is definitely no data.
get_feature_set gives me:
{
  "spec": {
    "name": "customer_transactions",
    "version": 1,
    "entities": [
      {
        "name": "customer_id",
        "valueType": "INT64"
      }
    ],
    "features": [
      {
        "name": "daily_transactions",
        "valueType": "DOUBLE"
      },
      {
        "name": "total_transactions",
        "valueType": "INT64"
      }
    ],
    "maxAge": "432000s",
    "source": {
      "type": "KAFKA",
      "kafkaSourceConfig": {
        "bootstrapServers": "feast-kafka:9092",
        "topic": "feast"
      }
    },
    "project": "customer_project_1"
  },
  "meta": {
    "createdTimestamp": "2020-04-15T10:26:51Z",
    "status": "STATUS_READY"
  }
}
I had to modify some port/service setup in the chart, so it could be that some Feast components have connection issues with Kafka, etc.
But there are no errors in the logs of the core and serving pods.
What could be the problem, and what is a way to debug it?
| 2020-04-20T03:12:15 |
|
feast-dev/feast | 707 | feast-dev__feast-707 | [
"663"
] | 1e12d3f8c67ed6f657b8472e16693af2162b4eea | diff --git a/sdk/python/feast/feature.py b/sdk/python/feast/feature.py
--- a/sdk/python/feast/feature.py
+++ b/sdk/python/feast/feature.py
@@ -28,6 +28,7 @@ def to_proto(self) -> FeatureProto:
return FeatureProto(
name=self.name,
value_type=value_type,
+ labels=self.labels,
presence=self.presence,
group_presence=self.group_presence,
shape=self.shape,
@@ -57,7 +58,9 @@ def from_proto(cls, feature_proto: FeatureProto):
Feature object
"""
feature = cls(
- name=feature_proto.name, dtype=ValueType(feature_proto.value_type),
+ name=feature_proto.name,
+ dtype=ValueType(feature_proto.value_type),
+ labels=feature_proto.labels,
)
feature.update_presence_constraints(feature_proto)
feature.update_shape_type(feature_proto)
diff --git a/sdk/python/feast/feature_set.py b/sdk/python/feast/feature_set.py
--- a/sdk/python/feast/feature_set.py
+++ b/sdk/python/feast/feature_set.py
@@ -13,7 +13,7 @@
# limitations under the License.
import warnings
from collections import OrderedDict
-from typing import Dict, List, Optional
+from typing import Dict, List, MutableMapping, Optional
import pandas as pd
from google.protobuf import json_format
@@ -56,6 +56,7 @@ def __init__(
entities: List[Entity] = None,
source: Source = None,
max_age: Optional[Duration] = None,
+ labels: Optional[MutableMapping[str, str]] = None,
):
self._name = name
self._project = project
@@ -68,6 +69,10 @@ def __init__(
self._source = None
else:
self._source = source
+ if labels is None:
+ self._labels = OrderedDict()
+ else:
+ self._labels = labels
self._max_age = max_age
self._status = None
self._created_timestamp = None
@@ -84,7 +89,8 @@ def __eq__(self, other):
return False
if (
- self.name != other.name
+ self.labels != other.labels
+ or self.name != other.name
or self.project != other.project
or self.max_age != other.max_age
):
@@ -217,6 +223,21 @@ def max_age(self, max_age):
"""
self._max_age = max_age
+ @property
+ def labels(self):
+ """
+ Returns the labels of this feature set. This is the user defined metadata
+ defined as a dictionary.
+ """
+ return self._labels
+
+ @labels.setter
+ def labels(self, labels: MutableMapping[str, str]):
+ """
+ Set the labels for this feature set
+ """
+ self._labels = labels
+
@property
def status(self):
"""
@@ -245,6 +266,18 @@ def created_timestamp(self, created_timestamp):
"""
self._created_timestamp = created_timestamp
+ def set_label(self, key: str, value: str):
+ """
+ Sets the label value for a given key
+ """
+ self.labels[key] = value
+
+ def remove_label(self, key: str):
+ """
+ Removes a label based on key
+ """
+ del self.labels[key]
+
def add(self, resource):
"""
Adds a resource (Feature, Entity) to this Feature Set.
@@ -279,11 +312,7 @@ def drop(self, name: str):
Args:
name: Name of Feature or Entity to be removed
"""
- if name not in self._fields:
- raise ValueError("Could not find field " + name + ", no action taken")
- if name in self._fields:
- del self._fields[name]
- return
+ del self._fields[name]
def _add_fields(self, fields: List[Field]):
"""
@@ -796,6 +825,7 @@ def from_proto(cls, feature_set_proto: FeatureSetProto):
and feature_set_proto.spec.max_age.nanos == 0
else feature_set_proto.spec.max_age
),
+ labels=feature_set_proto.spec.labels,
source=(
None
if feature_set_proto.spec.source.type == 0
@@ -825,6 +855,7 @@ def to_proto(self) -> FeatureSetProto:
name=self.name,
project=self.project,
max_age=self.max_age,
+ labels=self.labels,
source=self.source.to_proto() if self.source is not None else None,
features=[
field.to_proto()
diff --git a/sdk/python/feast/field.py b/sdk/python/feast/field.py
--- a/sdk/python/feast/field.py
+++ b/sdk/python/feast/field.py
@@ -11,7 +11,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-from typing import Union
+from collections import OrderedDict
+from typing import MutableMapping, Optional, Union
from feast.core.FeatureSet_pb2 import EntitySpec, FeatureSpec
from feast.value_type import ValueType
@@ -24,11 +25,20 @@ class Field:
features.
"""
- def __init__(self, name: str, dtype: ValueType):
+ def __init__(
+ self,
+ name: str,
+ dtype: ValueType,
+ labels: Optional[MutableMapping[str, str]] = None,
+ ):
self._name = name
if not isinstance(dtype, ValueType):
raise ValueError("dtype is not a valid ValueType")
self._dtype = dtype
+ if labels is None:
+ self._labels = OrderedDict()
+ else:
+ self._labels = labels
self._presence = None
self._group_presence = None
self._shape = None
@@ -47,7 +57,11 @@ def __init__(self, name: str, dtype: ValueType):
self._time_of_day_domain = None
def __eq__(self, other):
- if self.name != other.name or self.dtype != other.dtype:
+ if (
+ self.name != other.name
+ or self.dtype != other.dtype
+ or self.labels != other.labels
+ ):
return False
return True
@@ -65,6 +79,13 @@ def dtype(self) -> ValueType:
"""
return self._dtype
+ @property
+ def labels(self) -> MutableMapping[str, str]:
+ """
+ Getter for labels of this field
+ """
+ return self._labels
+
@property
def presence(self) -> schema_pb2.FeaturePresence:
"""
| diff --git a/sdk/python/tests/test_client.py b/sdk/python/tests/test_client.py
--- a/sdk/python/tests/test_client.py
+++ b/sdk/python/tests/test_client.py
@@ -276,6 +276,7 @@ def test_get_feature_set(self, mocked_client, mocker):
spec=FeatureSetSpecProto(
name="my_feature_set",
max_age=Duration(seconds=3600),
+ labels={"key1": "val1", "key2": "val2"},
features=[
FeatureSpecProto(
name="my_feature_1",
@@ -308,6 +309,10 @@ def test_get_feature_set(self, mocked_client, mocker):
assert (
feature_set.name == "my_feature_set"
+ and "key1" in feature_set.labels
+ and feature_set.labels["key1"] == "val1"
+ and "key2" in feature_set.labels
+ and feature_set.labels["key2"] == "val2"
and feature_set.fields["my_feature_1"].name == "my_feature_1"
and feature_set.fields["my_feature_1"].dtype == ValueType.FLOAT
and feature_set.fields["my_entity_1"].name == "my_entity_1"
diff --git a/sdk/python/tests/test_feature_set.py b/sdk/python/tests/test_feature_set.py
--- a/sdk/python/tests/test_feature_set.py
+++ b/sdk/python/tests/test_feature_set.py
@@ -13,6 +13,7 @@
# limitations under the License.
import pathlib
+from collections import OrderedDict
from concurrent import futures
from datetime import datetime
@@ -62,7 +63,7 @@ def test_add_remove_features_success(self):
assert len(fs.features) == 1 and fs.features[0].name == "my-feature-2"
def test_remove_feature_failure(self):
- with pytest.raises(ValueError):
+ with pytest.raises(KeyError):
fs = FeatureSet("my-feature-set")
fs.drop(name="my-feature-1")
@@ -287,6 +288,98 @@ def make_tfx_schema_domain_info_inline(schema):
feature.int_domain.MergeFrom(domain_ref_to_int_domain[domain_ref])
+def test_feature_set_class_contains_labels():
+ fs = FeatureSet("my-feature-set", labels={"key1": "val1", "key2": "val2"})
+ assert "key1" in fs.labels.keys() and fs.labels["key1"] == "val1"
+ assert "key2" in fs.labels.keys() and fs.labels["key2"] == "val2"
+
+
+def test_feature_class_contains_labels():
+ fs = FeatureSet("my-feature-set", labels={"key1": "val1", "key2": "val2"})
+ fs.add(
+ Feature(
+ name="my-feature-1",
+ dtype=ValueType.INT64,
+ labels={"feature_key1": "feature_val1"},
+ )
+ )
+ assert "feature_key1" in fs.features[0].labels.keys()
+ assert fs.features[0].labels["feature_key1"] == "feature_val1"
+
+
+def test_feature_set_without_labels_empty_dict():
+ fs = FeatureSet("my-feature-set")
+ assert fs.labels == OrderedDict()
+ assert len(fs.labels) == 0
+
+
+def test_feature_without_labels_empty_dict():
+ f = Feature("my feature", dtype=ValueType.INT64)
+ assert f.labels == OrderedDict()
+ assert len(f.labels) == 0
+
+
+def test_set_label_feature_set():
+ fs = FeatureSet("my-feature-set")
+ fs.set_label("k1", "v1")
+ assert fs.labels["k1"] == "v1"
+
+
+def test_set_labels_overwrites_existing():
+ fs = FeatureSet("my-feature-set")
+ fs.set_label("k1", "v1")
+ fs.set_label("k1", "v2")
+ assert fs.labels["k1"] == "v2"
+
+
+def test_remove_labels_empty_failure():
+ fs = FeatureSet("my-feature-set")
+ with pytest.raises(KeyError):
+ fs.remove_label("key1")
+
+
+def test_remove_labels_invalid_key_failure():
+ fs = FeatureSet("my-feature-set")
+ fs.set_label("k1", "v1")
+ with pytest.raises(KeyError):
+ fs.remove_label("key1")
+
+
+def test_unequal_feature_based_on_labels():
+ f1 = Feature(name="feature-1", dtype=ValueType.INT64, labels={"k1": "v1"})
+ f2 = Feature(name="feature-1", dtype=ValueType.INT64, labels={"k1": "v1"})
+ assert f1 == f2
+ f3 = Feature(name="feature-1", dtype=ValueType.INT64)
+ assert f1 != f3
+ f4 = Feature(name="feature-1", dtype=ValueType.INT64, labels={"k1": "notv1"})
+ assert f1 != f4
+
+
+def test_unequal_feature_set_based_on_labels():
+ fs1 = FeatureSet("my-feature-set")
+ fs2 = FeatureSet("my-feature-set")
+ assert fs1 == fs2
+ fs1.set_label("k1", "v1")
+ fs2.set_label("k1", "v1")
+ assert fs1 == fs2
+ fs2.set_label("k1", "unequal")
+ assert not fs1 == fs2
+
+
+def test_unequal_feature_set_other_has_no_labels():
+ fs1 = FeatureSet("my-feature-set")
+ fs2 = FeatureSet("my-feature-set")
+ assert fs1 == fs2
+ fs1.set_label("k1", "v1")
+ assert not fs1 == fs2
+
+
+def test_unequal_feature_other_has_no_labels():
+ f1 = Feature(name="feature-1", dtype=ValueType.INT64, labels={"k1": "v1"})
+ f2 = Feature(name="feature-1", dtype=ValueType.INT64)
+ assert f1 != f2
+
+
class TestFeatureSetRef:
def test_from_feature_set(self):
feature_set = FeatureSet("test", "test")
diff --git a/tests/e2e/redis/basic-ingest-redis-serving.py b/tests/e2e/redis/basic-ingest-redis-serving.py
--- a/tests/e2e/redis/basic-ingest-redis-serving.py
+++ b/tests/e2e/redis/basic-ingest-redis-serving.py
@@ -688,11 +688,16 @@ def get_feature_set(self, core_service_stub, name, project):
@pytest.mark.run(order=51)
def test_register_feature_set_with_labels(self, core_service_stub):
feature_set_name = "test_feature_set_labels"
- feature_set_proto = FeatureSet(feature_set_name, PROJECT_NAME).to_proto()
- feature_set_proto.spec.labels[self.LABEL_KEY] = self.LABEL_VALUE
+ feature_set_proto = FeatureSet(
+ name=feature_set_name,
+ project=PROJECT_NAME,
+ labels={self.LABEL_KEY: self.LABEL_VALUE},
+ ).to_proto()
self.apply_feature_set(core_service_stub, feature_set_proto)
- retrieved_feature_set = self.get_feature_set(core_service_stub, feature_set_name, PROJECT_NAME)
+ retrieved_feature_set = self.get_feature_set(
+ core_service_stub, feature_set_name, PROJECT_NAME
+ )
assert self.LABEL_KEY in retrieved_feature_set.spec.labels
assert retrieved_feature_set.spec.labels[self.LABEL_KEY] == self.LABEL_VALUE
@@ -701,12 +706,22 @@ def test_register_feature_set_with_labels(self, core_service_stub):
@pytest.mark.run(order=52)
def test_register_feature_with_labels(self, core_service_stub):
feature_set_name = "test_feature_labels"
- feature_set_proto = FeatureSet(feature_set_name, PROJECT_NAME, features=[Feature("rating", ValueType.INT64)]) \
- .to_proto()
- feature_set_proto.spec.features[0].labels[self.LABEL_KEY] = self.LABEL_VALUE
+ feature_set_proto = FeatureSet(
+ name=feature_set_name,
+ project=PROJECT_NAME,
+ features=[
+ Feature(
+ name="rating",
+ dtype=ValueType.INT64,
+ labels={self.LABEL_KEY: self.LABEL_VALUE},
+ )
+ ],
+ ).to_proto()
self.apply_feature_set(core_service_stub, feature_set_proto)
- retrieved_feature_set = self.get_feature_set(core_service_stub, feature_set_name, PROJECT_NAME)
+ retrieved_feature_set = self.get_feature_set(
+ core_service_stub, feature_set_name, PROJECT_NAME
+ )
retrieved_feature = retrieved_feature_set.spec.features[0]
assert self.LABEL_KEY in retrieved_feature.labels
| Update Python SDK to support labels
This issue follows #463 which has a partial implementation in #536.
One additional PR is needed to get #436 into a consistent and complete state for end users. Currently #536 makes changes to Feast Core by adding support for labels to feature sets and features, but it doesn't add support for labels to feature sets in the Python SDK.
The task here is to add this support and update any end-to-end tests that test both the Python SDK and Feast Core w.r.t label metadata.
| Unless there is any objection then I think we can push this out to 0.6. We'd be introducing new API methods and not taking anything away? I don't see a compelling reason to block 0.5 over this.
Sounds reasonable, I also think this could be a great community contribution if there's someone wishing to use this functionality from the Python SDK who can prioritize it before the maintainers are able to.
I would be happy to pick this up sometime next week. From #463:
> with the Python SDK having a set_label(key, value) method and a remove_label(key) method on the Feature class. list_feature_sets() would print out this information as well, but filtering will be left for a future release.
The idea is still to implement the 2 suggested functions and update `list_feature_sets()` right?
Hi @Joostrothweiler, that would be great!
Correct, it's about adding those to the `feature set` as opposed to the `feature`. Methods like `list_feature_sets()` should also be tested to see that they expose this information. | 2020-05-14T21:09:39 |
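For illustration, a minimal usage sketch of the label support discussed above, based on the constructor arguments and the `set_label(key, value)` / `remove_label(key)` helpers added in the patch (the feature set and feature names here are made up):

```python
# Usage sketch only — names are illustrative; behaviour mirrors the patch and tests above.
from feast.feature import Feature
from feast.feature_set import FeatureSet
from feast.value_type import ValueType

# Labels can be passed at construction time, for both feature sets and features.
fs = FeatureSet(
    "customer_transactions",
    labels={"team": "matchmaking"},
    features=[
        Feature(name="daily_transactions", dtype=ValueType.FLOAT, labels={"owner": "risk"}),
    ],
)

# ...or managed afterwards with the new helpers.
fs.set_label("env", "staging")   # adds or overwrites the label
fs.remove_label("env")           # raises KeyError if the key does not exist
```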
feast-dev/feast | 731 | feast-dev__feast-731 | [
"727"
] | 85e8b1de9b676c775e8d00f895bf2188d89d50fc | diff --git a/sdk/python/feast/cli.py b/sdk/python/feast/cli.py
--- a/sdk/python/feast/cli.py
+++ b/sdk/python/feast/cli.py
@@ -156,12 +156,20 @@ def feature_set_create(filename):
@feature_set.command("describe")
@click.argument("name", type=click.STRING)
-def feature_set_describe(name: str):
[email protected](
+ "--project",
+ "-p",
+ help="Project that feature set belongs to",
+ type=click.STRING,
+ default="default",
+)
+def feature_set_describe(name: str, project: str):
"""
Describe a feature set
"""
feast_client = Client() # type: Client
- fs = feast_client.get_feature_set(name=name)
+ fs = feast_client.get_feature_set(name=name, project=project)
+
if not fs:
print(f'Feature set with name "{name}" could not be found')
return
| feast cli feature-sets describe broken
## Expected Behavior
`feast feature-sets describe <name>` cannot be used because it doesn't allow the user to set a project, nor does it default to any value.
## Steps to reproduce
Call `feast feature-sets describe <some_feature_set>`
### Specifications
- Version: 0.5.0
## Possible Solution
The method calls `fs = feast_client.get_feature_set(name=name)`. Since no project is provided to `get_feature_set`, a default project needs to be set in the client.
Either
1. Allow users to pass feature set ids with projects specified (`project/feature_set`) or
2. Allow users to set a default project.
The method should fall back to a default project (`default`) should one not be provided.
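A condensed sketch of what the fixed command looks like with that fallback applied (this mirrors the change in the patch above; only the final `print` is simplified here):

```python
# Sketch of `feast feature-sets describe` with an optional --project falling back to "default".
import click

from feast.client import Client


@click.command("describe")
@click.argument("name", type=click.STRING)
@click.option(
    "--project", "-p", type=click.STRING, default="default",
    help="Project that the feature set belongs to",
)
def feature_set_describe(name: str, project: str):
    feast_client = Client()
    fs = feast_client.get_feature_set(name=name, project=project)
    if not fs:
        print(f'Feature set with name "{name}" could not be found')
        return
    print(fs)  # the real command pretty-prints the spec as YAML
```

With this in place, `feast feature-sets describe <name>` resolves against the `default` project, and `feast feature-sets describe <name> --project <project>` targets a specific one.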
| 2020-05-22T13:46:09 |
||
feast-dev/feast | 983 | feast-dev__feast-983 | [
"824"
] | fe8488a626df9a304f515db75a1dd29dd2deac40 | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -13,6 +13,7 @@
# limitations under the License.
import os
+import re
import subprocess
from setuptools import find_packages, setup
@@ -59,6 +60,13 @@
with open(os.path.join(README_FILE), "r") as f:
LONG_DESCRIPTION = f.read()
+# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.
+# Regex modified from default tag regex in:
+# https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9
+TAG_REGEX = re.compile(
+ r"^(?:[\/\w-]+)?(?P<version>[vV]?\d+(?:\.\d+){0,2}[^\+]*)(?:\+.*)?$"
+)
+
setup(
name=NAME,
author=AUTHOR,
@@ -83,6 +91,6 @@
"Programming Language :: Python :: 3.6",
],
entry_points={"console_scripts": ["feast=feast.cli:cli"]},
- use_scm_version={"root": "../..", "relative_to": __file__},
+ use_scm_version={"root": "../..", "relative_to": __file__, "tag_regex": TAG_REGEX},
setup_requires=["setuptools_scm"],
)
| Go SDK: go get by Version Tag Does Not Work
## Problem & Expected Behaviour
`go get` to install the Feast Go SDK should work by version tag:
```
go get github.com/feast-dev/feast/sdk/[email protected]
```
Returns:
```
go get github.com/feast-dev/feast/sdk/[email protected]: module github.com/feast-dev/[email protected] found, but does not contain package github.com/feast-dev/feast/sdk/go
```
This error is returned instead of installing the Go SDK at the v0.5.1 release tag.
## Proposed Solution
- Update [setup.py](https://github.com/feast-dev/feast/blob/89883d418b4935d595585689d63d246ad133cabe/sdk/python/setup.py#L86) to support non-semver tags (i.e. `sdk/go/v0.5.1`); a quick check of the relaxed regex is sketched below.
- Introduce an additional `sdk/go/v<VERSION>` tag on each release. This tag allows the Go module to be pulled by a versioned tag. [Reference](https://github.com/golang/go/wiki/Modules#faqs--multi-module-repositories)
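As a quick sanity check of the first point, the relaxed `tag_regex` from the patch can be exercised directly with plain Python (no Feast install required); a prefixed tag such as `sdk/go/v0.5.1` should yield a usable version component:

```python
import re

# The modified setuptools_scm tag regex from the patch, which allows '/'-prefixed tags.
TAG_REGEX = re.compile(
    r"^(?:[\/\w-]+)?(?P<version>[vV]?\d+(?:\.\d+){0,2}[^\+]*)(?:\+.*)?$"
)

for tag in ("v0.5.1", "sdk/go/v0.5.1", "sdk/go/v0.5.1+dirty"):
    match = TAG_REGEX.match(tag)
    print(tag, "->", match.group("version") if match else None)
# Each tag should match, with the version group capturing "0.5.1".
```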
| > Introduce additional tag sdk/go/v<VERSION> each release
Does this mean you have 2 tags on one commit?
> Does this mean you have 2 tags on one commit?
Yes two tags pointing to the tagged release commit.
As a side note, there are `go.mod` and `go.sum` files in the project root. They appear to be used to track the dependencies of `make compile-protos-go`. Whatever their purpose is, Go will treat the project root as one module and the actual Go SDK as another, making this a [multi-module repository as described in this Go FAQ](https://github.com/golang/go/wiki/Modules#faqs--multi-module-repositories)
> > Does this mean you have 2 tags on one commit?
>
> Yes two tags pointing to the tagged release commit.
>
> As a side note, there are `go.mod` and `go.sum` files in the project root. They appear to be used to track the dependencies of `make compile-protos-go`. Whatever their purpose is, Go will treat the project root as one module and the actual Go SDK as another, making this a [multi-module repository as described in this Go FAQ](https://github.com/golang/go/wiki/Modules#faqs--multi-module-repositories)
Would removing the `go.mod` and `go.sum` in the root be possible and lead to any benefit?
From my own testing, removing the root `go.mod` and `go.sum` would entirely break the ability to `go get` the Go SDK module (`sdk/go`).
So that is not an option.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
| 2020-09-04T04:31:18 |
|
feast-dev/feast | 1,002 | feast-dev__feast-1002 | [
"944"
] | 046bf6f2aa2c910badb9f495e465767584d59465 | diff --git a/sdk/python/feast/job.py b/sdk/python/feast/job.py
--- a/sdk/python/feast/job.py
+++ b/sdk/python/feast/job.py
@@ -134,13 +134,9 @@ def to_dataframe(
) -> pd.DataFrame:
"""
Wait until a job is done to get an iterable rows of result. This method
- will split the response into chunked DataFrame of a specified size to
- to be yielded to the instance calling it.
+ will return the response as a DataFrame.
Args:
- max_chunk_size (int):
- Maximum number of rows that the DataFrame should contain.
-
timeout_sec (int):
Max no of seconds to wait until job is done. If "timeout_sec"
is exceeded, an exception will be raised.
@@ -180,14 +176,14 @@ def to_chunked_dataframe(
# Max chunk size defined by user
for result in self.result(timeout_sec=timeout_sec):
- result.append(records)
+ records.append(result)
if len(records) == max_chunk_size:
df = pd.DataFrame.from_records(records)
records.clear() # Empty records array
yield df
# Handle for last chunk that is < max_chunk_size
- if not records:
+ if records:
yield pd.DataFrame.from_records(records)
def __iter__(self):
| AttributeError: 'dict' object has no attribute 'append' in job.to_chunked_dataframe()
## Expected Behavior
Return a generator that yields the results as chunked DataFrames.
## Current Behavior
The following error is raised:
```
/home/dev/feast-venv/lib/python3.7/site-packages/feast/job.py in to_chunked_dataframe(self, max_chunk_size, timeout_sec)
187 records = []
188 for result in self.result(timeout_sec=timeout_sec):
--> 189 result.append(records)
190 if len(records) == max_chunk_size:
191 df = pd.DataFrame.from_records(records)
AttributeError: 'dict' object has no attribute 'append'
```
## Steps to reproduce
```
test = job.to_chunked_dataframe(10)
next(test)
```
### Specifications
- Version: 0.5.0
- Platform: Python 3.7
- Subsystem:
## Possible Solution
In line 189, it should be `records.append(result)` instead of `result.append(records)`
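A standalone sketch of the corrected chunking behaviour (a hypothetical helper, not the actual `RetrievalJob` method) that also covers the related `if not records:` check fixed in the patch above:

```python
from typing import Dict, Iterable, Iterator, List

import pandas as pd


def to_chunked_dataframe(rows: Iterable[Dict], max_chunk_size: int) -> Iterator[pd.DataFrame]:
    records: List[Dict] = []
    for result in rows:
        records.append(result)                    # was: result.append(records)
        if len(records) == max_chunk_size:
            yield pd.DataFrame.from_records(records)
            records = []
    if records:                                   # was: if not records
        yield pd.DataFrame.from_records(records)  # emit the final, smaller chunk


# e.g. 25 rows with max_chunk_size=10 -> DataFrames of 10, 10 and 5 rows
chunks = list(to_chunked_dataframe(({"value": i} for i in range(25)), max_chunk_size=10))
assert [len(c) for c in chunks] == [10, 10, 5]
```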
| 2020-09-14T07:32:29 |
||
feast-dev/feast | 1,014 | feast-dev__feast-1014 | [
"405"
] | f5a5640f9c93898898f8cc90be22f8f813c149ad | diff --git a/sdk/python/feast/cli.py b/sdk/python/feast/cli.py
--- a/sdk/python/feast/cli.py
+++ b/sdk/python/feast/cli.py
@@ -25,6 +25,7 @@
from feast.config import Config
from feast.contrib.job_controller.client import Client as JCClient
from feast.core.IngestionJob_pb2 import IngestionJobStatus
+from feast.entity import EntityV2
from feast.feature_set import FeatureSet, FeatureSetRef
from feast.loaders.yaml import yaml_loader
@@ -114,6 +115,99 @@ def config_set(prop, value):
sys.exit(1)
[email protected](name="entities")
+def entity():
+ """
+ Create and manage entities
+ """
+ pass
+
+
[email protected]("apply")
[email protected](
+ "--filename",
+ "-f",
+ help="Path to an entity configuration file that will be applied",
+ type=click.Path(exists=True),
+)
[email protected](
+ "--project",
+ "-p",
+ help="Project that entity belongs to",
+ type=click.STRING,
+ default="default",
+)
+def entity_create(filename, project):
+ """
+ Create or update an entity
+ """
+
+ entities = [
+ EntityV2.from_dict(entity_dict) for entity_dict in yaml_loader(filename)
+ ]
+ feast_client = Client() # type: Client
+ feast_client.apply_entity(entities, project)
+
+
[email protected]("describe")
[email protected]("name", type=click.STRING)
[email protected](
+ "--project",
+ "-p",
+ help="Project that entity belongs to",
+ type=click.STRING,
+ default="default",
+)
+def entity_describe(name: str, project: str):
+ """
+ Describe an entity
+ """
+ feast_client = Client() # type: Client
+ entity = feast_client.get_entity(name=name, project=project)
+
+ if not entity:
+ print(f'Entity with name "{name}" could not be found')
+ return
+
+ print(
+ yaml.dump(
+ yaml.safe_load(str(entity)), default_flow_style=False, sort_keys=False
+ )
+ )
+
+
[email protected](name="list")
[email protected](
+ "--project",
+ "-p",
+ help="Project that entity belongs to",
+ type=click.STRING,
+ default="",
+)
[email protected](
+ "--labels",
+ "-l",
+ help="Labels to filter for entities",
+ type=click.STRING,
+ default="",
+)
+def entity_list(project: str, labels: str):
+ """
+ List all entities
+ """
+ feast_client = Client() # type: Client
+
+ labels_dict = _get_labels_dict(labels)
+
+ table = []
+ for entity in feast_client.list_entities(project=project, labels=labels_dict):
+ table.append([entity.name, entity.description, entity.value_type])
+
+ from tabulate import tabulate
+
+ print(tabulate(table, headers=["NAME", "DESCRIPTION", "TYPE"], tablefmt="plain"))
+
+
@cli.group(name="features")
def feature():
"""
diff --git a/sdk/python/feast/client.py b/sdk/python/feast/client.py
--- a/sdk/python/feast/client.py
+++ b/sdk/python/feast/client.py
@@ -19,7 +19,6 @@
import tempfile
import time
import uuid
-from collections import OrderedDict
from math import ceil
from typing import Any, Dict, List, Optional, Tuple, Union, cast
@@ -43,16 +42,22 @@
FEAST_DEFAULT_OPTIONS,
)
from feast.core.CoreService_pb2 import (
+ ApplyEntityRequest,
+ ApplyEntityResponse,
ApplyFeatureSetRequest,
ApplyFeatureSetResponse,
ArchiveProjectRequest,
ArchiveProjectResponse,
CreateProjectRequest,
CreateProjectResponse,
+ GetEntityRequest,
+ GetEntityResponse,
GetFeastCoreVersionRequest,
GetFeatureSetRequest,
GetFeatureSetResponse,
GetFeatureStatisticsRequest,
+ ListEntitiesRequest,
+ ListEntitiesResponse,
ListFeatureSetsRequest,
ListFeatureSetsResponse,
ListFeaturesRequest,
@@ -62,8 +67,9 @@
)
from feast.core.CoreService_pb2_grpc import CoreServiceStub
from feast.core.FeatureSet_pb2 import FeatureSetStatus
+from feast.entity import EntityV2
from feast.feature import Feature, FeatureRef
-from feast.feature_set import Entity, FeatureSet
+from feast.feature_set import FeatureSet
from feast.grpc import auth as feast_auth
from feast.grpc.grpc import create_grpc_channel
from feast.job import RetrievalJob
@@ -287,6 +293,8 @@ def project(self) -> Union[str, None]:
Returns:
Project name
"""
+ if not self._config.get(CONFIG_PROJECT_KEY):
+ raise ValueError("No project has been configured.")
return self._config.get(CONFIG_PROJECT_KEY)
def set_project(self, project: Optional[str] = None):
@@ -353,6 +361,130 @@ def archive_project(self, project):
if self._project == project:
self._project = FEAST_DEFAULT_OPTIONS[CONFIG_PROJECT_KEY]
+ def apply_entity(
+ self, entities: Union[List[EntityV2], EntityV2], project: str = None
+ ):
+ """
+ Idempotently registers entities with Feast Core. Either a single
+ entity or a list can be provided.
+
+ Args:
+ entities: List of entities that will be registered
+
+ Examples:
+ >>> from feast import Client
+ >>> from feast.entity import EntityV2
+ >>> from feast.value_type import ValueType
+ >>>
+ >>> feast_client = Client(core_url="localhost:6565")
+ >>> entity = EntityV2(
+ >>> name="driver_entity",
+ >>> description="Driver entity for car rides",
+ >>> value_type=ValueType.STRING,
+ >>> labels={
+ >>> "key": "val"
+ >>> }
+ >>> )
+ >>> feast_client.apply_entity(entity)
+ """
+
+ if project is None:
+ project = self.project
+
+ if not isinstance(entities, list):
+ entities = [entities]
+ for entity in entities:
+ if isinstance(entity, EntityV2):
+ self._apply_entity(project, entity) # type: ignore
+ continue
+ raise ValueError(f"Could not determine entity type to apply {entity}")
+
+ def _apply_entity(self, project: str, entity: EntityV2):
+ """
+ Registers a single entity with Feast
+
+ Args:
+ entity: Entity that will be registered
+ """
+
+ entity.is_valid()
+ entity_proto = entity.to_spec_proto()
+
+ # Convert the entity to a request and send to Feast Core
+ try:
+ apply_entity_response = self._core_service.ApplyEntity(
+ ApplyEntityRequest(project=project, spec=entity_proto), # type: ignore
+ timeout=self._config.getint(CONFIG_GRPC_CONNECTION_TIMEOUT_DEFAULT_KEY),
+ metadata=self._get_grpc_metadata(),
+ ) # type: ApplyEntityResponse
+ except grpc.RpcError as e:
+ raise grpc.RpcError(e.details())
+
+ # Extract the returned entity
+ applied_entity = EntityV2.from_proto(apply_entity_response.entity)
+
+ # Deep copy from the returned entity to the local entity
+ entity._update_from_entity(applied_entity)
+
+ def list_entities(
+ self, project: str = None, labels: Dict[str, str] = dict()
+ ) -> List[EntityV2]:
+ """
+ Retrieve a list of entities from Feast Core
+
+ Args:
+ project: Filter entities based on project name
+ labels: User-defined labels that these entities are associated with
+
+ Returns:
+ List of entities
+ """
+
+ if project is None:
+ project = self.project
+
+ filter = ListEntitiesRequest.Filter(project=project, labels=labels)
+
+ # Get latest entities from Feast Core
+ entity_protos = self._core_service.ListEntities(
+ ListEntitiesRequest(filter=filter), metadata=self._get_grpc_metadata(),
+ ) # type: ListEntitiesResponse
+
+ # Extract entities and return
+ entities = []
+ for entity_proto in entity_protos.entities:
+ entity = EntityV2.from_proto(entity_proto)
+ entity._client = self
+ entities.append(entity)
+ return entities
+
+ def get_entity(self, name: str, project: str = None) -> Union[EntityV2, None]:
+ """
+ Retrieves an entity.
+
+ Args:
+ project: Feast project that this entity belongs to
+ name: Name of entity
+
+ Returns:
+ Returns either the specified entity, or raises an exception if
+ none is found
+ """
+
+ if project is None:
+ project = self.project
+
+ try:
+ get_entity_response = self._core_service.GetEntity(
+ GetEntityRequest(project=project, name=name.strip()),
+ metadata=self._get_grpc_metadata(),
+ ) # type: GetEntityResponse
+ except grpc.RpcError as e:
+ raise grpc.RpcError(e.details())
+ entity = EntityV2.from_proto(get_entity_response.entity)
+
+ return entity
+
def apply(self, feature_sets: Union[List[FeatureSet], FeatureSet]):
"""
Idempotently registers feature set(s) with Feast Core. Either a single
@@ -528,18 +660,6 @@ def list_features_by_ref(
return features_dict
- def list_entities(self) -> Dict[str, Entity]:
- """
- Returns a dictionary of entities across all feature sets
- Returns:
- Dictionary of entities, indexed by name
- """
- entities_dict = OrderedDict()
- for fs in self.list_feature_sets():
- for entity in fs.entities:
- entities_dict[entity.name] = entity
- return entities_dict
-
def get_historical_features(
self,
feature_refs: List[str],
diff --git a/sdk/python/feast/entity.py b/sdk/python/feast/entity.py
--- a/sdk/python/feast/entity.py
+++ b/sdk/python/feast/entity.py
@@ -12,8 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from typing import Dict, MutableMapping, Optional
+
+import yaml
+from google.protobuf import json_format
+from google.protobuf.json_format import MessageToDict, MessageToJson
+from google.protobuf.timestamp_pb2 import Timestamp
+
+from feast.core.Entity_pb2 import Entity as EntityV2Proto
+from feast.core.Entity_pb2 import EntityMeta as EntityMetaProto
+from feast.core.Entity_pb2 import EntitySpecV2 as EntitySpecProto
from feast.core.FeatureSet_pb2 import EntitySpec as EntityProto
from feast.field import Field
+from feast.loaders import yaml as feast_yaml
from feast.types import Value_pb2 as ValueTypeProto
from feast.value_type import ValueType
@@ -44,3 +55,273 @@ def from_proto(cls, entity_proto: EntityProto):
"""
entity = cls(name=entity_proto.name, dtype=ValueType(entity_proto.value_type))
return entity
+
+
+class EntityV2:
+ """
+ Represents a collection of entities and associated metadata.
+ """
+
+ def __init__(
+ self,
+ name: str,
+ description: str,
+ value_type: ValueType,
+ labels: Optional[MutableMapping[str, str]] = None,
+ ):
+ self._name = name
+ self._description = description
+ self._value_type = value_type
+ if labels is None:
+ self._labels = dict() # type: MutableMapping[str, str]
+ else:
+ self._labels = labels
+
+ self._created_timestamp: Optional[Timestamp] = None
+ self._last_updated_timestamp: Optional[Timestamp] = None
+
+ def __eq__(self, other):
+ if not isinstance(other, EntityV2):
+ raise TypeError("Comparisons should only involve EntityV2 class objects.")
+
+ if isinstance(self.value_type, int):
+ self.value_type = ValueType(self.value_type).name
+ if isinstance(other.value_type, int):
+ other.value_type = ValueType(other.value_type).name
+
+ if (
+ self.labels != other.labels
+ or self.name != other.name
+ or self.description != other.description
+ or self.value_type != other.value_type
+ ):
+ return False
+
+ return True
+
+ def __str__(self):
+ return str(MessageToJson(self.to_proto()))
+
+ @property
+ def name(self):
+ """
+ Returns the name of this entity
+ """
+ return self._name
+
+ @name.setter
+ def name(self, name):
+ """
+ Sets the name of this entity
+ """
+ self._name = name
+
+ @property
+ def description(self):
+ """
+ Returns the description of this entity
+ """
+ return self._description
+
+ @description.setter
+ def description(self, description):
+ """
+ Sets the description of this entity
+ """
+ self._description = description
+
+ @property
+ def value_type(self):
+ """
+ Returns the type of this entity
+ """
+ return self._value_type
+
+ @value_type.setter
+ def value_type(self, value_type: ValueType):
+ """
+ Set the type for this entity
+ """
+ self._value_type = value_type
+
+ @property
+ def labels(self):
+ """
+ Returns the labels of this entity. This is the user defined metadata
+ defined as a dictionary.
+ """
+ return self._labels
+
+ @labels.setter
+ def labels(self, labels: MutableMapping[str, str]):
+ """
+ Set the labels for this entity
+ """
+ self._labels = labels
+
+ @property
+ def created_timestamp(self):
+ """
+ Returns the created_timestamp of this entity
+ """
+ return self._created_timestamp
+
+ @property
+ def last_updated_timestamp(self):
+ """
+ Returns the last_updated_timestamp of this entity
+ """
+ return self._last_updated_timestamp
+
+ def is_valid(self):
+ """
+ Validates the state of a entity locally. Raises an exception
+ if entity is invalid.
+ """
+
+ if not self.name:
+ raise ValueError("No name found in entity.")
+
+ if not self.value_type:
+ raise ValueError("No type found in entity {self.value_type}")
+
+ @classmethod
+ def from_yaml(cls, yml: str):
+ """
+ Creates an entity from a YAML string body or a file path
+
+ Args:
+ yml: Either a file path containing a yaml file or a YAML string
+
+ Returns:
+ Returns a EntityV2 object based on the YAML file
+ """
+
+ return cls.from_dict(feast_yaml.yaml_loader(yml, load_single=True))
+
+ @classmethod
+ def from_dict(cls, entity_dict):
+ """
+ Creates an entity from a dict
+
+ Args:
+ entity_dict: A dict representation of an entity
+
+ Returns:
+ Returns a EntityV2 object based on the entity dict
+ """
+
+ entity_proto = json_format.ParseDict(
+ entity_dict, EntityV2Proto(), ignore_unknown_fields=True
+ )
+ return cls.from_proto(entity_proto)
+
+ @classmethod
+ def from_proto(cls, entity_proto: EntityV2Proto):
+ """
+ Creates an entity from a protobuf representation of an entity
+
+ Args:
+ entity_proto: A protobuf representation of an entity
+
+ Returns:
+ Returns a EntityV2 object based on the entity protobuf
+ """
+
+ entity = cls(
+ name=entity_proto.spec.name,
+ description=entity_proto.spec.description,
+ value_type=ValueType(entity_proto.spec.value_type).name, # type: ignore
+ labels=entity_proto.spec.labels,
+ )
+
+ entity._created_timestamp = entity_proto.meta.created_timestamp
+ entity._last_updated_timestamp = entity_proto.meta.last_updated_timestamp
+
+ return entity
+
+ def to_proto(self) -> EntityV2Proto:
+ """
+ Converts an entity object to its protobuf representation
+
+ Returns:
+ EntityV2Proto protobuf
+ """
+
+ meta = EntityMetaProto(
+ created_timestamp=self.created_timestamp,
+ last_updated_timestamp=self.last_updated_timestamp,
+ )
+ if isinstance(self.value_type, ValueType):
+ self.value_type = self.value_type.value
+
+ spec = EntitySpecProto(
+ name=self.name,
+ description=self.description,
+ value_type=self.value_type,
+ labels=self.labels,
+ )
+
+ return EntityV2Proto(spec=spec, meta=meta)
+
+ def to_dict(self) -> Dict:
+ """
+ Converts entity to dict
+
+ Returns:
+ Dictionary object representation of entity
+ """
+
+ entity_dict = MessageToDict(self.to_proto())
+
+ # Remove meta when empty for more readable exports
+ if entity_dict["meta"] == {}:
+ del entity_dict["meta"]
+
+ return entity_dict
+
+ def to_yaml(self):
+ """
+ Converts a entity to a YAML string.
+
+ Returns:
+ Entity string returned in YAML format
+ """
+ entity_dict = self.to_dict()
+ return yaml.dump(entity_dict, allow_unicode=True, sort_keys=False)
+
+ def to_spec_proto(self) -> EntitySpecProto:
+ """
+ Converts an EntityV2 object to its protobuf representation.
+ Used when passing EntitySpecV2 object to Feast request.
+
+ Returns:
+ EntitySpecV2 protobuf
+ """
+
+ if isinstance(self.value_type, ValueType):
+ self.value_type = self.value_type.value
+
+ spec = EntitySpecProto(
+ name=self.name,
+ description=self.description,
+ value_type=self.value_type,
+ labels=self.labels,
+ )
+
+ return spec
+
+ def _update_from_entity(self, entity):
+ """
+ Deep replaces one entity with another
+
+ Args:
+ entity: Entity to use as a source of configuration
+ """
+
+ self.name = entity.name
+ self.description = entity.description
+ self.value_type = entity.value_type
+ self.labels = entity.labels
+ self._created_timestamp = entity.created_timestamp
+ self._last_updated_timestamp = entity.last_updated_timestamp
| diff --git a/core/src/test/java/feast/core/service/SpecServiceIT.java b/core/src/test/java/feast/core/service/SpecServiceIT.java
--- a/core/src/test/java/feast/core/service/SpecServiceIT.java
+++ b/core/src/test/java/feast/core/service/SpecServiceIT.java
@@ -68,6 +68,27 @@ public static void globalSetUp(@Value("${grpc.server.port}") int port) {
public void initState() {
SourceProto.Source source = DataGenerator.getDefaultSource();
+ apiClient.simpleApplyEntity(
+ "default",
+ DataGenerator.createEntitySpecV2(
+ "entity1",
+ "Entity 1 description",
+ ValueProto.ValueType.Enum.STRING,
+ ImmutableMap.of("label_key", "label_value")));
+ apiClient.simpleApplyEntity(
+ "default",
+ DataGenerator.createEntitySpecV2(
+ "entity2",
+ "Entity 2 description",
+ ValueProto.ValueType.Enum.STRING,
+ ImmutableMap.of("label_key2", "label_value2")));
+ apiClient.simpleApplyEntity(
+ "project1",
+ DataGenerator.createEntitySpecV2(
+ "entity3",
+ "Entity 3 description",
+ ValueProto.ValueType.Enum.STRING,
+ ImmutableMap.of("label_key2", "label_value2")));
apiClient.simpleApplyFeatureSet(
DataGenerator.createFeatureSet(
source,
@@ -88,7 +109,7 @@ public void initState() {
"project1",
"fs3",
ImmutableList.of(
- DataGenerator.createEntity("user_id", ValueProto.ValueType.Enum.STRING)),
+ DataGenerator.createEntitySpec("user_id", ValueProto.ValueType.Enum.STRING)),
ImmutableList.of(
DataGenerator.createFeature(
"feature1", ValueProto.ValueType.Enum.INT32, Collections.emptyMap()),
@@ -101,7 +122,7 @@ public void initState() {
"project1",
"fs4",
ImmutableList.of(
- DataGenerator.createEntity("customer_id", ValueProto.ValueType.Enum.STRING)),
+ DataGenerator.createEntitySpec("customer_id", ValueProto.ValueType.Enum.STRING)),
ImmutableList.of(
DataGenerator.createFeature(
"feature2",
@@ -114,7 +135,7 @@ public void initState() {
"project1",
"fs5",
ImmutableList.of(
- DataGenerator.createEntity("customer_id", ValueProto.ValueType.Enum.STRING)),
+ DataGenerator.createEntitySpec("customer_id", ValueProto.ValueType.Enum.STRING)),
ImmutableList.of(
DataGenerator.createFeature(
"feature3",
@@ -199,6 +220,51 @@ public void shouldThrowExceptionGivenMissingFeatureSetName() {
}
}
+ @Nested
+ class ListEntities {
+ @Test
+ public void shouldFilterEntitiesByLabels() {
+ List<EntityProto.Entity> entities =
+ apiClient.simpleListEntities("", ImmutableMap.of("label_key2", "label_value2"));
+
+ assertThat(entities, hasSize(1));
+ assertThat(entities, hasItem(hasProperty("spec", hasProperty("name", equalTo("entity2")))));
+ }
+
+ @Test
+ public void shouldUseDefaultProjectIfProjectUnspecified() {
+ List<EntityProto.Entity> entities = apiClient.simpleListEntities("");
+
+ assertThat(entities, hasSize(2));
+ assertThat(entities, hasItem(hasProperty("spec", hasProperty("name", equalTo("entity1")))));
+ }
+
+ @Test
+ public void shouldFilterEntitiesByProjectAndLabels() {
+ List<EntityProto.Entity> entities =
+ apiClient.simpleListEntities("project1", ImmutableMap.of("label_key2", "label_value2"));
+
+ assertThat(entities, hasSize(1));
+ assertThat(entities, hasItem(hasProperty("spec", hasProperty("name", equalTo("entity3")))));
+ }
+
+ @Test
+ public void shouldThrowExceptionGivenWildcardProject() {
+ CoreServiceProto.ListEntitiesRequest.Filter filter =
+ CoreServiceProto.ListEntitiesRequest.Filter.newBuilder().setProject("default*").build();
+ StatusRuntimeException exc =
+ assertThrows(StatusRuntimeException.class, () -> apiClient.simpleListEntities(filter));
+
+ assertThat(
+ exc.getMessage(),
+ equalTo(
+ String.format(
+ "INVALID_ARGUMENT: invalid value for project resource, %s: "
+ + "argument must only contain alphanumeric characters and underscores.",
+ filter.getProject())));
+ }
+ }
+
@Nested
class ApplyFeatureSet {
@Test
@@ -547,7 +613,8 @@ public void shouldUpdateLabels() {
"project1",
"fs4",
ImmutableList.of(
- DataGenerator.createEntity("customer_id", ValueProto.ValueType.Enum.STRING)),
+ DataGenerator.createEntitySpec(
+ "customer_id", ValueProto.ValueType.Enum.STRING)),
ImmutableList.of(
DataGenerator.createFeature(
"feature2",
@@ -574,7 +641,8 @@ public void shouldAcceptFeatureSetLabels() {
"",
"some",
ImmutableList.of(
- DataGenerator.createEntity("customer_id", ValueProto.ValueType.Enum.STRING)),
+ DataGenerator.createEntitySpec(
+ "customer_id", ValueProto.ValueType.Enum.STRING)),
ImmutableList.of(),
ImmutableMap.of("label", "some")));
@@ -584,6 +652,139 @@ public void shouldAcceptFeatureSetLabels() {
}
}
+ @Nested
+ class ApplyEntity {
+ @Test
+ public void shouldThrowExceptionGivenEntityWithDash() {
+ StatusRuntimeException exc =
+ assertThrows(
+ StatusRuntimeException.class,
+ () ->
+ apiClient.simpleApplyEntity(
+ "default",
+ DataGenerator.createEntitySpecV2(
+ "dash-entity",
+ "Dash Entity description",
+ ValueProto.ValueType.Enum.STRING,
+ ImmutableMap.of("test_key", "test_value"))));
+
+ assertThat(
+ exc.getMessage(),
+ equalTo(
+ String.format(
+ "INTERNAL: invalid value for %s resource, %s: %s",
+ "entity",
+ "dash-entity",
+ "argument must only contain alphanumeric characters and underscores.")));
+ }
+
+ @Test
+ public void shouldThrowExceptionIfTypeChanged() {
+ String projectName = "default";
+
+ EntityProto.EntitySpecV2 spec =
+ DataGenerator.createEntitySpecV2(
+ "entity1",
+ "Entity description",
+ ValueProto.ValueType.Enum.FLOAT,
+ ImmutableMap.of("label_key", "label_value"));
+
+ StatusRuntimeException exc =
+ assertThrows(
+ StatusRuntimeException.class, () -> apiClient.simpleApplyEntity("default", spec));
+
+ assertThat(
+ exc.getMessage(),
+ equalTo(
+ String.format(
+ "INTERNAL: You are attempting to change the type of this entity in %s project from %s to %s. This isn't allowed. Please create a new entity.",
+ "default", "STRING", spec.getValueType())));
+ }
+
+ @Test
+ public void shouldReturnEntityIfEntityHasNotChanged() {
+ String projectName = "default";
+ EntityProto.EntitySpecV2 spec = apiClient.simpleGetEntity(projectName, "entity1").getSpec();
+
+ CoreServiceProto.ApplyEntityResponse response =
+ apiClient.simpleApplyEntity(projectName, spec);
+
+ assertThat(response.getEntity().getSpec().getName(), equalTo(spec.getName()));
+ assertThat(response.getEntity().getSpec().getDescription(), equalTo(spec.getDescription()));
+ assertThat(response.getEntity().getSpec().getLabelsMap(), equalTo(spec.getLabelsMap()));
+ assertThat(response.getEntity().getSpec().getValueType(), equalTo(spec.getValueType()));
+ }
+
+ @Test
+ public void shouldApplyEntityIfNotExists() {
+ String projectName = "default";
+ EntityProto.EntitySpecV2 spec =
+ DataGenerator.createEntitySpecV2(
+ "new_entity",
+ "Entity description",
+ ValueProto.ValueType.Enum.STRING,
+ ImmutableMap.of("label_key", "label_value"));
+
+ CoreServiceProto.ApplyEntityResponse response =
+ apiClient.simpleApplyEntity(projectName, spec);
+
+ assertThat(response.getEntity().getSpec().getName(), equalTo(spec.getName()));
+ assertThat(response.getEntity().getSpec().getDescription(), equalTo(spec.getDescription()));
+ assertThat(response.getEntity().getSpec().getLabelsMap(), equalTo(spec.getLabelsMap()));
+ assertThat(response.getEntity().getSpec().getValueType(), equalTo(spec.getValueType()));
+ }
+
+ @Test
+ public void shouldCreateProjectWhenNotAlreadyExists() {
+ EntityProto.EntitySpecV2 spec =
+ DataGenerator.createEntitySpecV2(
+ "new_entity2",
+ "Entity description",
+ ValueProto.ValueType.Enum.STRING,
+ ImmutableMap.of("key1", "val1"));
+ CoreServiceProto.ApplyEntityResponse response =
+ apiClient.simpleApplyEntity("new_project", spec);
+
+ assertThat(response.getEntity().getSpec().getName(), equalTo(spec.getName()));
+ assertThat(response.getEntity().getSpec().getDescription(), equalTo(spec.getDescription()));
+ assertThat(response.getEntity().getSpec().getLabelsMap(), equalTo(spec.getLabelsMap()));
+ assertThat(response.getEntity().getSpec().getValueType(), equalTo(spec.getValueType()));
+ }
+
+ @Test
+ public void shouldFailWhenProjectIsArchived() {
+ apiClient.createProject("archived");
+ apiClient.archiveProject("archived");
+
+ StatusRuntimeException exc =
+ assertThrows(
+ StatusRuntimeException.class,
+ () ->
+ apiClient.simpleApplyEntity(
+ "archived",
+ DataGenerator.createEntitySpecV2(
+ "new_entity3",
+ "Entity description",
+ ValueProto.ValueType.Enum.STRING,
+ ImmutableMap.of("key1", "val1"))));
+ assertThat(exc.getMessage(), equalTo("INTERNAL: Project is archived: archived"));
+ }
+
+ @Test
+ public void shouldUpdateLabels() {
+ EntityProto.EntitySpecV2 spec =
+ DataGenerator.createEntitySpecV2(
+ "entity1",
+ "Entity description",
+ ValueProto.ValueType.Enum.STRING,
+ ImmutableMap.of("label_key", "label_value", "label_key2", "label_value2"));
+
+ CoreServiceProto.ApplyEntityResponse response = apiClient.simpleApplyEntity("default", spec);
+
+ assertThat(response.getEntity().getSpec().getLabelsMap(), equalTo(spec.getLabelsMap()));
+ }
+ }
+
@Nested
class UpdateStore {
@Test
@@ -624,6 +825,25 @@ public void shouldThrowExceptionGivenMissingFeatureSet() {
}
}
+ @Nested
+ class GetEntity {
+ @Test
+ public void shouldThrowExceptionGivenMissingEntity() {
+ StatusRuntimeException exc =
+ assertThrows(
+ StatusRuntimeException.class, () -> apiClient.simpleGetEntity("default", ""));
+
+ assertThat(exc.getMessage(), equalTo("INVALID_ARGUMENT: No entity name provided"));
+ }
+
+ public void shouldRetrieveFromDefaultIfProjectNotSpecified() {
+ String entityName = "entity1";
+ EntityProto.Entity entity = apiClient.simpleGetEntity("", entityName);
+
+ assertThat(entity.getSpec().getName(), equalTo(entityName));
+ }
+ }
+
@Nested
class ListStores {
@Test
diff --git a/sdk/python/tests/test_entity.py b/sdk/python/tests/test_entity.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/test_entity.py
@@ -0,0 +1,86 @@
+# Copyright 2020 The Feast Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import socket
+from concurrent import futures
+from contextlib import closing
+
+import grpc
+import pytest
+
+from feast.client import Client
+from feast.core import CoreService_pb2_grpc as Core
+from feast.entity import EntityV2
+from feast.value_type import ValueType
+from feast_core_server import CoreServicer
+
+
+def find_free_port():
+ with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
+ s.bind(("", 0))
+ s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ return s.getsockname()[1]
+
+
+free_port = find_free_port()
+
+
+class TestEntity:
+ @pytest.fixture(scope="function")
+ def server(self):
+ server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
+ Core.add_CoreServiceServicer_to_server(CoreServicer(), server)
+ server.add_insecure_port(f"[::]:{free_port}")
+ server.start()
+ yield server
+ server.stop(0)
+
+ @pytest.fixture
+ def client(self, server):
+ return Client(core_url=f"localhost:{free_port}")
+
+ def test_entity_import_export_yaml(self):
+
+ test_entity = EntityV2(
+ name="car_driver_entity",
+ description="Driver entity for car rides",
+ value_type=ValueType.STRING,
+ labels={"team": "matchmaking"},
+ )
+
+ # Create a string YAML representation of the entity
+ string_yaml = test_entity.to_yaml()
+
+ # Create a new entity object from the YAML string
+ actual_entity_from_string = EntityV2.from_yaml(string_yaml)
+
+ # Ensure equality is upheld to original entity
+ assert test_entity == actual_entity_from_string
+
+
+def test_entity_class_contains_labels():
+ entity = EntityV2(
+ "my-entity",
+ description="My entity",
+ value_type=ValueType.STRING,
+ labels={"key1": "val1", "key2": "val2"},
+ )
+ assert "key1" in entity.labels.keys() and entity.labels["key1"] == "val1"
+ assert "key2" in entity.labels.keys() and entity.labels["key2"] == "val2"
+
+
+def test_entity_without_labels_empty_dict():
+ entity = EntityV2("my-entity", description="My entity", value_type=ValueType.STRING)
+ assert entity.labels == dict()
+ assert len(entity.labels) == 0
| Entity types as a higher-level concept
## Introduction
Currently an entity, or more formally an entity type, is treated as a special type of field within a feature set. There has been an attempt to simplify the creation and management of entities and to keep them consistent with features, however some challenges exist with our current approach.
Note: The terms entity and entity type will be used interchangeably in this issue.
## How are entities created?
- Users define an entity as part of a feature set. An entity in this case is a field like any other within the feature set. More than one entity can exist within a feature set.
- An entity's name must be unique within a feature set.
- There are no constraints on entities outside of a feature set, either at the project or global level. This means that multiple feature sets can define the same entities again.
## How are entities used?
- Retrieving feature values: Entities are used as a key for retrieving features. In order to retrieve feature values within a feature set, all entities must be provided as part of the lookup.
- Joining feature sets: In the event that feature values are being retrieved from multiple feature sets, entities are used to look up these feature values. Entities are also used to join across these feature sets to construct a single result set.
## What is the problem?
1. Discovery: It seems intuitive that users would start their discovery experience from the point of view of an entity type, since their business problem is generally framed around one or more entities. Because entities are nested within feature sets and projects, and no discovery mechanism is provided for them, discovery is harder than it needs to be.
2. Consistency: Entities are typically consistent across all projects and systems in most organizations. This consistency is not enforced in Feast at the moment. Users are bound to redefine entities in their local projects if no consistency is enforced at an organizational level. Failure would occur when lookups happen or when joins happen across feature sets, especially when joins need to happen across projects.
3. Key building: If entities and features maintain mutual compatibility in terms of supported data types, then support must be maintained for building keys from all feature value types. This adds a lot of complexity to key building since support must be maintained to serialize complex composite data structures in order to build these keys.
## Proposals
### 1. Project-level entities
Functionality
- Entities are created outside of feature sets, but they still reside in a specific project namespace.
- Entities have their own distinct API and supported data types (which may be more limited than features)
- Entities must be unique within a project namespace, but can be duplicated across an organization. Uniqueness is ensured through a full entity reference (`gojek/customer`).
- Entities are still defined as part of a feature set, but this is a selection process instead of creation.
Advantages
- Entities receive all the sharing and isolation benefits of "projects". Entities would not have to be treated separately from a logical and/or development standpoint. There would also be no explosion of a global entity namespace
- Users are free to experiment and develop within their projects without affecting other users, since duplication is allowed across projects.
- No need for a central team to gate-keep the creation of entities.
Disadvantages
- By not elevating entities to the global level, end users would be required to know which projects contain the entities they should be referencing. This means an organizational process must exist in order to select these entities.
- Most projects would have to reference entities from another more authoritative project. In fact, it's likely that an organization will have a central project which contains only entities. This could be a little counter-intuitive if a feature set contains fields that are referencing an external project.
### 2. Global-level entities
Functionality
- Entities are defined globally for a Feast deployment.
- Entities have their own distinct API and supported data types (which may be more limited than features).
- Entities must be globally unique.
- Entities are still defined as part of a feature set, but this is a selection process instead of creation.
Advantages
- Central authoritative listing of entities within an organization.
- Easier to discover which entities should be used, without needing an organizational policy.
- Easy to reason about and easier to understand when referencing an entity within a feature set.
Disadvantages
- Requires development of separate logic from projects, feature sets, and features.
- Requires a team and process to manage the creation of entities.
- No way to isolate conflicts. If one team wants to use a `float` and another wants to use a `string` for an entity data type, then it would likely result in two entities being created. This would still be the case in the Project-level entity proposal, but at least in that proposal the unorthodox approach (maybe `string`) could be isolated to a specific project.
### 3. Default project entities
Functionality
- If a user does not specify a project, then they are automatically located inside of the `default` project. This would be similar to how Kubernetes does namespacing.
- All other functionality would be the same as the `project level entities` proposal, except users don't actually have to create an entity inside of a named project.
- Feature references could be created that allow users to reference entities without a project. So instead of having `my_company/customer`, it would be possible to refer to "global" entities by either using `customer` or `default/customer` (see the resolution sketch after this proposal).
Advantages
- All of the advantages of `project-level entities`.
- Most of the advantages of `global-level entities`, except that this default project would still not be a true global namespace. There would still need to be an organizational process that informs users to use the entities in this `project`.
- Simplifies development since `project-level` sharing and isolation can be reused.
Disadvantages
- Still requires access control on the default namespace.
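To make the reference semantics of proposal 3 concrete, here is a purely illustrative sketch (not existing Feast code) of how an entity reference could resolve to a project and name, with bare names falling back to the `default` project:

```python
from typing import Tuple


def resolve_entity_ref(ref: str, current_project: str = "default") -> Tuple[str, str]:
    """Resolve 'project/entity' references; bare names fall back to the default project."""
    if "/" in ref:
        project, name = ref.split("/", 1)
        return project, name
    return current_project, ref


assert resolve_entity_ref("my_company/customer") == ("my_company", "customer")
assert resolve_entity_ref("customer") == ("default", "customer")  # same as "default/customer"
```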
| > Most projects would have to reference entities from another more authoritative project
What would be an example scenario where this approach is the most sensible? For Gojek at least, i would imagine that project based entities make more sense. One project per service type (food, ride, gopay), each having entities which might share the same name (customer id, driver id).
> > Most projects would have to reference entities from another more authoritative project
>
> What would be an example scenario where this approach is the most sensible? For Gojek at least, i would imagine that project based entities make more sense. One project per service type (food, ride, gopay), each having entities which might share the same name (customer id, driver id).
The example you are referring to would be for `project-level entities`. Meaning an organization could have authoritative projects like:
- gojek/customer
- gopay/customer
It seems to provide a cleaner isolation, but it is also the case that "users" would have to define their own projects and feature sets from which they would reference these authoritative entities.
So I am only seeing one option here, not two. The disadvantage comes from having to know whether to use either of these two projects.
Another possible solution would be a hybrid model between global and project level entities. I have added this as (3) in the comment above, titled `3. Default project entities`
I am in favour of 3. Option 2 (unique global entity names) may lead to complicated entity management in some cases. For example, let's say we have drivers in different countries. Option 2 dictates that we cannot have the same entity for all countries (e.g. driver), but instead need multiple different entities (e.g. driver_vn, driver_th, driver_sg). It is likely that in an end-to-end machine learning workflow, the code involving drivers will be similar regardless of country (e.g. extracting the driver entity value from the JSON request during the prediction step). So, for option 2, the pipeline will need to know that driver_vn, driver_sg and driver_th all belong to the same group and should be handled the same way, which leads to extra configuration on the user side.
Though, if we go for option 3, we might want to explore whether the concept of a default project should be extended to feature retrieval as well, for consistency. For example, if no project or default project has been set and the project is not explicitly specified in the feature ref, then the fallback would be the 'default' project.
> I am in favour of 3. Option 2 (unique global entity name) may lead to complicated entity management for some cases. For example, let say we have drivers for different countries. Option no 2 dictates that we cannot have the same entity for all country (eg. driver), but instead, multiple different entities. (eg. driver_vn, driver_th, driver_sg). It is likely that in an end to end machine learning workflow, the code section involving the drivers will be similar regardless of country (eg. Extracting driver entity value from JSON request during prediction step). So, for option no 2, the pipeline will need to know that driver_vn, driver_sg and driver_ th all belongs to the same group and should be handled the same way, which leads to extra configurations on the user side.
Its not clear what you mean here. What prevents you from having simply `driver` as a global entity?
> Though, if we go for option 3, we might want to explore if the concept of default project should be extended to feature retrieval as well, for consistency. For example, if no project / default project has been set and project is not explicitly specified in feature ref, then the fallback would be the 'default' project.
Absolutely, that was my hope as well!
> > I am in favour of 3. Option 2 (unique global entity name) may lead to complicated entity management for some cases. For example, let say we have drivers for different countries. Option no 2 dictates that we cannot have the same entity for all country (eg. driver), but instead, multiple different entities. (eg. driver_vn, driver_th, driver_sg). It is likely that in an end to end machine learning workflow, the code section involving the drivers will be similar regardless of country (eg. Extracting driver entity value from JSON request during prediction step). So, for option no 2, the pipeline will need to know that driver_vn, driver_sg and driver_ th all belongs to the same group and should be handled the same way, which leads to extra configurations on the user side.
>
> Its not clear what you mean here. What prevents you from having simply `driver` as a global entity?
Actually, yeah you are correct, I can just have driver in a global project instead of having the entity defined in each regional project. Too entrenched in the code base that I am currently working on and didn't consider this possibility.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Moving this out of the 0.6 milestone because I think we can live without it for the time being.
Isn't 3. the same as 1. with just a special project called default? The fact there is a special default project doesn't change the fact that all entities are scoped to a project ie 1. Right?
> Isn't 3. the same as 1. with just a special project called default? The fact there is a special default project doesn't change the fact that all entities are scoped to a project ie 1. Right?
Correct. | 2020-09-23T05:42:28 |
feast-dev/feast | 1,214 | feast-dev__feast-1214 | [
"1183"
] | 7abaaace319fa11d691a030e97d9aa7bdeb362e8 | diff --git a/sdk/python/feast/cli.py b/sdk/python/feast/cli.py
--- a/sdk/python/feast/cli.py
+++ b/sdk/python/feast/cli.py
@@ -145,7 +145,7 @@ def entity_create(filename, project):
entities = [Entity.from_dict(entity_dict) for entity_dict in yaml_loader(filename)]
feast_client = Client() # type: Client
- feast_client.apply_entity(entities, project)
+ feast_client.apply(entities, project)
@entity.command("describe")
@@ -252,7 +252,7 @@ def feature_table_create(filename):
FeatureTable.from_dict(ft_dict) for ft_dict in yaml_loader(filename)
]
feast_client = Client() # type: Client
- feast_client.apply_feature_table(feature_tables)
+ feast_client.apply(feature_tables)
@feature_table.command("describe")
diff --git a/sdk/python/feast/client.py b/sdk/python/feast/client.py
--- a/sdk/python/feast/client.py
+++ b/sdk/python/feast/client.py
@@ -16,6 +16,7 @@
import os
import shutil
import uuid
+import warnings
from datetime import datetime
from itertools import groupby
from typing import Any, Dict, List, Optional, Union
@@ -105,6 +106,8 @@
CPU_COUNT: int = multiprocessing.cpu_count()
+warnings.simplefilter("once", DeprecationWarning)
+
class Client:
"""
@@ -442,13 +445,17 @@ def archive_project(self, project):
if self._project == project:
self._project = opt().PROJECT
- def apply_entity(self, entities: Union[List[Entity], Entity], project: str = None):
+ def apply(
+ self,
+ objects: Union[List[Union[Entity, FeatureTable]], Entity, FeatureTable],
+ project: str = None,
+ ):
"""
- Idempotently registers entities with Feast Core. Either a single
- entity or a list can be provided.
+ Idempotently registers entities and feature tables with Feast Core. Either a single
+ entity or feature table or a list can be provided.
Args:
- entities: List of entities that will be registered
+ objects: List of entities and/or feature tables that will be registered
Examples:
>>> from feast import Client
@@ -464,9 +471,33 @@ def apply_entity(self, entities: Union[List[Entity], Entity], project: str = Non
>>> "key": "val"
>>> }
>>> )
- >>> feast_client.apply_entity(entity)
+ >>> feast_client.apply(entity)
"""
+ if project is None:
+ project = self.project
+
+ if not isinstance(objects, list):
+ objects = [objects]
+ for obj in objects:
+ if isinstance(obj, Entity):
+ self._apply_entity(project, obj) # type: ignore
+ elif isinstance(obj, FeatureTable):
+ self._apply_feature_table(project, obj) # type: ignore
+ else:
+ raise ValueError(
+ f"Could not determine object type to apply {obj} with type {type(obj)}. Type must be Entity or FeatureTable."
+ )
+
+ def apply_entity(self, entities: Union[List[Entity], Entity], project: str = None):
+ """
+ Deprecated. Please see apply().
+ """
+ warnings.warn(
+ "The method apply_entity() is being deprecated. Please use apply() instead. Feast 0.10 and onwards will not support apply_entity().",
+ DeprecationWarning,
+ )
+
if project is None:
project = self.project
@@ -570,12 +601,12 @@ def apply_feature_table(
project: str = None,
):
"""
- Idempotently registers feature tables with Feast Core. Either a single
- feature table or a list can be provided.
-
- Args:
- feature_tables: List of feature tables that will be registered
+ Deprecated. Please see apply().
"""
+ warnings.warn(
+ "The method apply_feature_table() is being deprecated. Please use apply() instead. Feast 0.10 and onwards will not support apply_feature_table().",
+ DeprecationWarning,
+ )
if project is None:
project = self.project
| diff --git a/sdk/python/tests/test_client.py b/sdk/python/tests/test_client.py
--- a/sdk/python/tests/test_client.py
+++ b/sdk/python/tests/test_client.py
@@ -377,7 +377,7 @@ def test_apply_entity_success(self, test_client):
)
# Register Entity with Core
- test_client.apply_entity(entity)
+ test_client.apply(entity)
entities = test_client.list_entities()
@@ -429,7 +429,7 @@ def test_apply_feature_table_success(self, test_client):
)
# Register Feature Table with Core
- test_client.apply_feature_table(ft1)
+ test_client.apply(ft1)
feature_tables = test_client.list_feature_tables()
diff --git a/sdk/python/tests/test_historical_feature_retrieval.py b/sdk/python/tests/test_historical_feature_retrieval.py
--- a/sdk/python/tests/test_historical_feature_retrieval.py
+++ b/sdk/python/tests/test_historical_feature_retrieval.py
@@ -109,12 +109,12 @@ def client_with_local_spark(tmpdir):
@pytest.fixture()
def driver_entity(client):
- return client.apply_entity(Entity("driver_id", "description", ValueType.INT32))
+ return client.apply(Entity("driver_id", "description", ValueType.INT32))
@pytest.fixture()
def customer_entity(client):
- return client.apply_entity(Entity("customer_id", "description", ValueType.INT32))
+ return client.apply(Entity("customer_id", "description", ValueType.INT32))
def create_temp_parquet_file(
@@ -191,7 +191,7 @@ def transactions_feature_table(spark, client):
feature_table = FeatureTable(
"transactions", ["customer_id"], features, batch_source=file_source
)
- yield client.apply_feature_table(feature_table)
+ yield client.apply(feature_table)
shutil.rmtree(temp_dir)
@@ -239,7 +239,7 @@ def bookings_feature_table(spark, client):
feature_table = FeatureTable(
"bookings", ["driver_id"], features, batch_source=file_source, max_age=max_age
)
- yield client.apply_feature_table(feature_table)
+ yield client.apply(feature_table)
shutil.rmtree(temp_dir)
@@ -288,7 +288,7 @@ def bookings_feature_table_with_mapping(spark, client):
feature_table = FeatureTable(
"bookings", ["driver_id"], features, batch_source=file_source, max_age=max_age
)
- yield client.apply_feature_table(feature_table)
+ yield client.apply(feature_table)
shutil.rmtree(temp_dir)
diff --git a/sdk/python/tests/test_streaming_control_loop.py b/sdk/python/tests/test_streaming_control_loop.py
--- a/sdk/python/tests/test_streaming_control_loop.py
+++ b/sdk/python/tests/test_streaming_control_loop.py
@@ -66,7 +66,7 @@ def _create_ft(self, client: Client, features) -> None:
)
# Register Entity with Core
- client.apply_entity(entity)
+ client.apply(entity)
# Create Feature Tables
batch_source = FileSource(
@@ -95,7 +95,7 @@ def _create_ft(self, client: Client, features) -> None:
)
# Register Feature Table with Core
- client.apply_feature_table(ft1)
+ client.apply(ft1)
def _delete_ft(self, client: Client):
client.delete_feature_table(self.table_name)
diff --git a/tests/e2e/test_historical_features.py b/tests/e2e/test_historical_features.py
--- a/tests/e2e/test_historical_features.py
+++ b/tests/e2e/test_historical_features.py
@@ -70,7 +70,7 @@ def test_historical_features(
customer_entity = Entity(
name="user_id", description="Customer", value_type=ValueType.INT64
)
- feast_client.apply_entity(customer_entity)
+ feast_client.apply(customer_entity)
max_age = Duration()
max_age.FromSeconds(2 * 86400)
@@ -86,7 +86,7 @@ def test_historical_features(
max_age=max_age,
)
- feast_client.apply_feature_table(transactions_feature_table)
+ feast_client.apply(transactions_feature_table)
transactions_df, customers_df = generate_data()
feast_client.ingest(transactions_feature_table, transactions_df)
diff --git a/tests/e2e/test_online_features.py b/tests/e2e/test_online_features.py
--- a/tests/e2e/test_online_features.py
+++ b/tests/e2e/test_online_features.py
@@ -55,8 +55,8 @@ def test_offline_ingestion(
batch_source=batch_source,
)
- feast_client.apply_entity(entity)
- feast_client.apply_feature_table(feature_table)
+ feast_client.apply(entity)
+ feast_client.apply(feature_table)
original = generate_data()
feast_client.ingest(feature_table, original) # write to batch (offline) storage
@@ -95,8 +95,8 @@ def test_offline_ingestion_from_bq_view(pytestconfig, bq_dataset, feast_client:
),
)
- feast_client.apply_entity(entity)
- feast_client.apply_feature_table(feature_table)
+ feast_client.apply(entity)
+ feast_client.apply(feature_table)
ingest_and_verify(feast_client, feature_table, original)
@@ -126,8 +126,8 @@ def test_streaming_ingestion(
),
)
- feast_client.apply_entity(entity)
- feast_client.apply_feature_table(feature_table)
+ feast_client.apply(entity)
+ feast_client.apply(feature_table)
job = feast_client.start_stream_to_online_ingestion(feature_table)
diff --git a/tests/e2e/test_register.py b/tests/e2e/test_register.py
--- a/tests/e2e/test_register.py
+++ b/tests/e2e/test_register.py
@@ -143,8 +143,8 @@ def test_get_list_basic(
):
# ApplyEntity
- feast_client.apply_entity(customer_entity)
- feast_client.apply_entity(driver_entity)
+ feast_client.apply(customer_entity)
+ feast_client.apply(driver_entity)
# GetEntity Check
assert feast_client.get_entity(name="customer_id") == customer_entity
@@ -162,7 +162,7 @@ def test_get_list_basic(
assert len(actual_matchmaking_entities) == 1
# ApplyFeatureTable
- feast_client.apply_feature_table(basic_featuretable)
+ feast_client.apply(basic_featuretable)
# GetFeatureTable Check
actual_get_feature_table = feast_client.get_feature_table(name="basic_featuretable")
@@ -181,7 +181,7 @@ def test_get_list_alltypes(
feast_client: Client, alltypes_entity: Entity, alltypes_featuretable: FeatureTable
):
# ApplyEntity
- feast_client.apply_entity(alltypes_entity)
+ feast_client.apply(alltypes_entity)
# GetEntity Check
assert feast_client.get_entity(name="alltypes_id") == alltypes_entity
@@ -194,7 +194,7 @@ def test_get_list_alltypes(
assert len(actual_alltypes_entities) == 1
# ApplyFeatureTable
- feast_client.apply_feature_table(alltypes_featuretable)
+ feast_client.apply(alltypes_featuretable)
# GetFeatureTable Check
actual_get_feature_table = feast_client.get_feature_table(name="alltypes")
@@ -234,11 +234,11 @@ def test_ingest_into_bq(
)
# ApplyEntity
- feast_client.apply_entity(customer_entity)
- feast_client.apply_entity(driver_entity)
+ feast_client.apply(customer_entity)
+ feast_client.apply(driver_entity)
# ApplyFeatureTable
- feast_client.apply_feature_table(ft)
+ feast_client.apply(ft)
feast_client.ingest(ft, bq_dataframe, timeout=120)
bq_client = bigquery.Client(project=bq_project)
| Add support for feature tables to the apply method
We need to deprecate [client.apply_entity()](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/client.py#L443) and [client.apply_feature_table()](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/client.py#L565). Both of these methods were added because `client.apply()` was previously used for applying feature sets.
In Feast 0.9 we should
- Add support for applying feature tables and entities through `client.apply()`. This method doesn't currently exist. Please have a look at [Feast 0.7](https://github.com/feast-dev/feast/blob/v0.7.1/sdk/python/feast/client.py#L359) for a reference implementation. We can follow a similar approach.
- Add deprecation warnings on `apply_feature_table()` and `apply_entity()`
- In Feast 0.10 we will remove `apply_feature_table()` and `apply_entity()`, and `apply()` will only work on feature tables and entities.
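For illustration, a minimal sketch of the unified `apply()` described above, written against a Feast 0.9/0.10-era Python SDK; the Core URL, file path, and feature/entity names are placeholders rather than part of the original issue:
```python
from feast import Client, Entity, Feature, FeatureTable, ValueType
from feast.data_format import ParquetFormat
from feast.data_source import FileSource

client = Client(core_url="localhost:6565")  # placeholder Feast Core endpoint

driver = Entity(
    name="driver_id", description="Driver identifier", value_type=ValueType.INT64
)
bookings = FeatureTable(
    name="bookings",
    entities=["driver_id"],
    features=[Feature("total_bookings", ValueType.INT32)],
    batch_source=FileSource(
        file_format=ParquetFormat(),
        file_url="file:///data/bookings",  # placeholder path
        event_timestamp_column="event_timestamp",
    ),
)

# One call registers both kinds of objects, in order:
client.apply([driver, bookings])

# The old per-type methods keep working for now, but emit a DeprecationWarning:
client.apply_entity(driver)
client.apply_feature_table(bookings)
```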
| Hi @woop, I would like to help with this one. Since this will be my first contribution to Feast, hopefully you'll help me out.
Let me raise an MR for this, based on the reference you pointed to, by this weekend.
Hi @priyankvex, feel free to go ahead. Please ask if you have any questions! If you get stuck at any point, please reach out on Slack (#Feast) | 2020-12-03T20:16:24 |
feast-dev/feast | 1,227 | feast-dev__feast-1227 | [
"1167"
] | 8256b05b817ff44f98348a300823ed981ed7ffe5 | diff --git a/sdk/python/feast/pyspark/launchers/aws/emr_utils.py b/sdk/python/feast/pyspark/launchers/aws/emr_utils.py
--- a/sdk/python/feast/pyspark/launchers/aws/emr_utils.py
+++ b/sdk/python/feast/pyspark/launchers/aws/emr_utils.py
@@ -107,7 +107,7 @@ def _sync_offline_to_online_step(
"--class",
"feast.ingestion.IngestionJob",
"--packages",
- "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.17.2",
+ "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.18.0",
jar_path,
]
+ args,
@@ -332,7 +332,7 @@ def _stream_ingestion_step(
+ jars_args
+ [
"--packages",
- "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.17.2",
+ "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.18.0",
jar_path,
]
+ args,
diff --git a/sdk/python/feast/pyspark/launchers/standalone/local.py b/sdk/python/feast/pyspark/launchers/standalone/local.py
--- a/sdk/python/feast/pyspark/launchers/standalone/local.py
+++ b/sdk/python/feast/pyspark/launchers/standalone/local.py
@@ -222,7 +222,7 @@ class StandaloneClusterLauncher(JobLauncher):
Submits jobs to a standalone Spark cluster in client mode.
"""
- BQ_CONNECTOR_VERSION = "2.12:0.17.3"
+ BQ_CONNECTOR_VERSION = "2.12:0.18.0"
def __init__(self, master_url: str, spark_home: str = None):
"""
diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -36,12 +36,12 @@
"pandas~=1.0.0",
"pandavro==1.5.*",
"protobuf>=3.10",
- "PyYAML==5.1.*",
+ "PyYAML==5.3.*",
"fastavro>=0.22.11,<0.23",
"tabulate==0.8.*",
"toml==0.10.*",
"tqdm==4.*",
- "pyarrow<0.16.0,>=0.15.1",
+ "pyarrow==2.0.0",
"numpy",
"google",
]
| diff --git a/spark/ingestion/src/test/scala/feast/ingestion/SparkSpec.scala b/spark/ingestion/src/test/scala/feast/ingestion/SparkSpec.scala
--- a/spark/ingestion/src/test/scala/feast/ingestion/SparkSpec.scala
+++ b/spark/ingestion/src/test/scala/feast/ingestion/SparkSpec.scala
@@ -39,6 +39,7 @@ class SparkSpec extends UnitSpec with BeforeAndAfter {
)
.set("spark.metrics.conf.*.sink.statsd.host", "localhost")
.set("spark.metrics.conf.*.sink.statsd.port", "8125")
+ .set("spark.sql.legacy.allowUntypedScalaUDF", "true")
sparkSession = SparkSession
.builder()
| Spark 3+ support
## Expected Behavior
Can use spark 3+ clusters to run jobs.
## Current Behavior
If used together with spark 3+, there is an exception
```Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/StreamWriteSupport```
which is caused by the old (2.x) Spark dependencies of the IngestionJob.
## Steps to reproduce
I guess `pip install pyspark>=3` and starting any job will be enough.
### Specifications
- Version: 0.8.0
- Platform: linux
- Subsystem:
## Possible Solution
Bump spark.version in dependencies and update job code accordingly.
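A hedged sketch of what "update job code accordingly" can look like from pyspark, assuming `pyspark>=3` is installed; the connector version matches the bump in the patch above, and the legacy-UDF flag mirrors the Scala test change:
```python
from pyspark.sql import SparkSession

# Spark 3.x disables untyped Scala UDFs by default, so the ingestion job needs the
# legacy flag re-enabled; the BigQuery connector must also be a 3.x-compatible release.
spark = (
    SparkSession.builder.appName("feast-ingestion")
    .config("spark.sql.legacy.allowUntypedScalaUDF", "true")
    .config(
        "spark.jars.packages",
        "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.18.0",
    )
    .getOrCreate()
)
```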
| @oavdeev @pyalex I believe there was a specific reason why we couldn't use Spark 3.0 right? Protobuf/grpc dependencies?
EDIT: So I think the question is whether we can write the job for Spark 2.0 but make it compatible with 3+
FYI https://github.com/RedisLabs/spark-redis/issues/253
The main reason for staying on Spark 2.4 was that the [bq connector](https://github.com/GoogleCloudDataproc/spark-bigquery-connector) did not support 3.x at that moment, and the Dataproc stable version is still 2.4.
The BQ connector no longer seems to be an issue. | 2020-12-14T04:31:34 |
feast-dev/feast | 1,284 | feast-dev__feast-1284 | [
"1275"
] | 775d673776ca0d47be2365bdbac66af0818bea3c | diff --git a/sdk/python/feast/pyspark/launchers/aws/emr_utils.py b/sdk/python/feast/pyspark/launchers/aws/emr_utils.py
--- a/sdk/python/feast/pyspark/launchers/aws/emr_utils.py
+++ b/sdk/python/feast/pyspark/launchers/aws/emr_utils.py
@@ -81,7 +81,11 @@ def _random_string(length) -> str:
def _upload_jar(jar_s3_prefix: str, jar_path: str) -> str:
- if jar_path.startswith("https://"):
+ if (
+ jar_path.startswith("s3://")
+ or jar_path.startswith("s3a://")
+ or jar_path.startswith("https://")
+ ):
return jar_path
with open(jar_path, "rb") as f:
uri = urlparse(os.path.join(jar_s3_prefix, os.path.basename(jar_path)))
| Python SDK: start_offline_to_online_ingestion Fails with default ingestion jar configuration
## Expected Behavior
In the [minimal_ride_hailing.ipynb](https://github.com/feast-dev/feast/blob/11e1fa7/examples/minimal/minimal_ride_hailing.ipynb) example notebook, I expect the following cell to run:
```
job = client.start_offline_to_online_ingestion(
driver_statistics,
datetime(2020, 10, 10),
datetime(2020, 10, 20)
)
# expect offline to online ingestion job to run
```
## Current Behavior
In the minimal_ride_hailing.ipynb example notebook, the following cell:
```
job = client.start_offline_to_online_ingestion(
driver_statistics,
datetime(2020, 10, 10),
datetime(2020, 10, 20)
)
```
Produces the following error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-72-e6363419621a> in <module>
2 driver_statistics,
3 datetime(2020, 10, 10),
----> 4 datetime(2020, 10, 20)
5 )
~/.local/lib/python3.7/site-packages/feast/client.py in start_offline_to_online_ingestion(self, feature_table, start, end)
1065 feature_table=feature_table,
1066 start=start,
-> 1067 end=end,
1068 )
1069 else:
~/.local/lib/python3.7/site-packages/feast/pyspark/launcher.py in start_offline_to_online_ingestion(client, project, feature_table, start, end)
250 ),
251 deadletter_path=client._config.get(opt.DEADLETTER_PATH),
--> 252 stencil_url=client._config.get(opt.STENCIL_URL),
253 )
254 )
~/.local/lib/python3.7/site-packages/feast/pyspark/launchers/aws/emr.py in offline_to_online_ingestion(self, ingestion_job_params)
256
257 jar_s3_path = _upload_jar(
--> 258 self._staging_location, ingestion_job_params.get_main_file_path()
259 )
260 step = _sync_offline_to_online_step(
~/.local/lib/python3.7/site-packages/feast/pyspark/launchers/aws/emr_utils.py in _upload_jar(jar_s3_prefix, local_path)
127
128 def _upload_jar(jar_s3_prefix: str, local_path: str) -> str:
--> 129 with open(local_path, "rb") as f:
130 return _s3_upload(
131 f,
FileNotFoundError: [Errno 2] No such file or directory: 'https://storage.googleapis.com/feast-jobs/spark/ingestion/feast-ingestion-spark-develop.jar'
```
## Steps to reproduce
1. install feast
```
pip install feast==0.8.2
```
2. Run the cells in the example notebook. Note that I have configured the following `Client` fields, but not the `spark_ingestion_jar` config (this config works fine for defining features in feast):
```
client = Client(
core_url='feast-feast-core.feast-dev:6565',
spark_launcher="emr",
emr_cluster_id="<redacted>",
emr_region="<redacted>",
spark_staging_location="<redacted>",
emr_log_location="<redacted>",
historical_feature_output_location="<redacted>"
)
```
### Specifications
- Version: 0.8.2
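For reference, a hedged sketch of the client configuration that the pass-through fix above is meant to support, with an ingestion jar already staged in S3 (the bucket, cluster id, and region values are placeholders):
```python
from feast import Client

client = Client(
    core_url="feast-feast-core.feast-dev:6565",
    spark_launcher="emr",
    emr_cluster_id="j-XXXXXXXXXXXXX",                          # placeholder
    emr_region="us-east-1",                                    # placeholder
    spark_staging_location="s3://my-bucket/staging/",          # placeholder
    emr_log_location="s3://my-bucket/logs/",                   # placeholder
    spark_ingestion_jar="s3://my-bucket/feast-ingestion.jar",  # used as-is after the fix
)
```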
| I now see this appears related to #1266
I get a similar error even when defining `spark_ingestion_jar` configuration on the `client`, i.e. `spark_ingestion_jar=s3://my-bucket/feast-ingestion.jar` (with a valid jar of course):
1. The jar exists:
```
$ aws s3 ls s3://my-bucket/feast-ingestion.jar
2021-01-19 23:44:12 45031646 feast-ingestion.jar
```
2. Failed attempt to kick off ingestion:
```
job = client.start_offline_to_online_ingestion(
driver_statistics,
datetime(2020, 10, 10),
datetime(2020, 10, 20)
)
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-127-e6363419621a> in <module>
2 driver_statistics,
3 datetime(2020, 10, 10),
----> 4 datetime(2020, 10, 20)
5 )
~/.local/lib/python3.7/site-packages/feast/client.py in start_offline_to_online_ingestion(self, feature_table, start, end)
1167 feature_table=feature_table,
1168 start=start,
-> 1169 end=end,
1170 )
1171 else:
~/.local/lib/python3.7/site-packages/feast/pyspark/launcher.py in start_offline_to_online_ingestion(client, project, feature_table, start, end)
280 ),
281 deadletter_path=client._config.get(opt.DEADLETTER_PATH),
--> 282 stencil_url=client._config.get(opt.STENCIL_URL),
283 )
284 )
~/.local/lib/python3.7/site-packages/feast/pyspark/launchers/aws/emr.py in offline_to_online_ingestion(self, ingestion_job_params)
269
270 jar_s3_path = _upload_jar(
--> 271 self._staging_location, ingestion_job_params.get_main_file_path()
272 )
273 step = _sync_offline_to_online_step(
~/.local/lib/python3.7/site-packages/feast/pyspark/launchers/aws/emr_utils.py in _upload_jar(jar_s3_prefix, local_path)
82
83 def _upload_jar(jar_s3_prefix: str, local_path: str) -> str:
---> 84 with open(local_path, "rb") as f:
85 uri = urlparse(os.path.join(jar_s3_prefix, os.path.basename(local_path)))
86 return urlunparse(
FileNotFoundError: [Errno 2] No such file or directory: 's3://my-bucket/feast-ingestion.jar'
``` | 2021-01-23T01:49:16 |
|
feast-dev/feast | 1,310 | feast-dev__feast-1310 | [
"1238"
] | 5a655375921db78ab4bca321b12ef4f3e47598c4 | diff --git a/sdk/python/feast/entity.py b/sdk/python/feast/entity.py
--- a/sdk/python/feast/entity.py
+++ b/sdk/python/feast/entity.py
@@ -53,11 +53,6 @@ def __eq__(self, other):
if not isinstance(other, Entity):
raise TypeError("Comparisons should only involve Entity class objects.")
- if isinstance(self.value_type, int):
- self.value_type = ValueType(self.value_type).name
- if isinstance(other.value_type, int):
- other.value_type = ValueType(other.value_type).name
-
if (
self.labels != other.labels
or self.name != other.name
@@ -100,7 +95,7 @@ def description(self, description):
self._description = description
@property
- def value_type(self):
+ def value_type(self) -> ValueType:
"""
Returns the type of this entity
"""
@@ -200,7 +195,7 @@ def from_proto(cls, entity_proto: EntityV2Proto):
entity = cls(
name=entity_proto.spec.name,
description=entity_proto.spec.description,
- value_type=ValueType(entity_proto.spec.value_type).name, # type: ignore
+ value_type=ValueType(entity_proto.spec.value_type),
labels=entity_proto.spec.labels,
)
@@ -221,13 +216,11 @@ def to_proto(self) -> EntityV2Proto:
created_timestamp=self.created_timestamp,
last_updated_timestamp=self.last_updated_timestamp,
)
- if isinstance(self.value_type, ValueType):
- self.value_type = self.value_type.value
spec = EntitySpecProto(
name=self.name,
description=self.description,
- value_type=self.value_type,
+ value_type=self.value_type.value,
labels=self.labels,
)
@@ -268,13 +261,10 @@ def to_spec_proto(self) -> EntitySpecProto:
EntitySpecV2 protobuf
"""
- if isinstance(self.value_type, ValueType):
- self.value_type = self.value_type.value
-
spec = EntitySpecProto(
name=self.name,
description=self.description,
- value_type=self.value_type,
+ value_type=self.value_type.value,
labels=self.labels,
)
diff --git a/sdk/python/feast/pyspark/launcher.py b/sdk/python/feast/pyspark/launcher.py
--- a/sdk/python/feast/pyspark/launcher.py
+++ b/sdk/python/feast/pyspark/launcher.py
@@ -151,7 +151,7 @@ def _feature_table_to_argument(
"project": project,
"name": feature_table.name,
"entities": [
- {"name": n, "type": client.get_entity(n, project=project).value_type}
+ {"name": n, "type": client.get_entity(n, project=project).value_type.name}
for n in feature_table.entities
],
"max_age": feature_table.max_age.ToSeconds() if feature_table.max_age else None,
| diff --git a/sdk/python/tests/test_client.py b/sdk/python/tests/test_client.py
--- a/sdk/python/tests/test_client.py
+++ b/sdk/python/tests/test_client.py
@@ -385,7 +385,7 @@ def test_apply_entity_success(self, test_client):
assert (
len(entities) == 1
and entity.name == "driver_car_id"
- and entity.value_type == ValueType(ValueProto.ValueType.STRING).name
+ and entity.value_type == ValueType(ValueProto.ValueType.STRING)
and entity.description == "Car driver id"
and "team" in entity.labels
and entity.labels["team"] == "matchmaking"
| Entity from_yaml() initialises with value_type as name string representation instead of enum value
## Expected Behavior
When loading an entity from `yaml`, one would expect `value_type` to be set to the enum value as described in the python class `__init__`:
```python
class Entity:
...
def __init__(
self,
name: str,
description: str,
value_type: ValueType,
labels: Optional[MutableMapping[str, str]] = None,
):
self._name = name
self._description = description
self._value_type = value_type
...
```
## Current Behavior
Instead, it's initialised with `value_type` as the name of the type (e.g. `"STRING"` instead of `ValueType.STRING`).
For `FeatureTable`, the `dtype` is initialised to the python enum.
```python
>>> from feast import FeatureTable
>>> ft = FeatureTable.from_yaml("path/to/feature_table.yaml")
>>> ft.features[0].dtype
<ValueType.DOUBLE: 5>
```
## Steps to reproduce
```python
>>> from feast import Entity
>>> entity = Entity.from_yaml("path/to/entity.yaml")
>>> entity.value_type
'STRING'
```
### Specifications
- Version: 0.8.0
- Platform: Mac
- Subsystem:
## Possible Solution
In `Entity.from_proto()` we initialise with the `.name` value and explicitly ignore the type error:
```python
entity = cls(
name=entity_proto.spec.name,
description=entity_proto.spec.description,
value_type=ValueType(entity_proto.spec.value_type).name, # type: ignore
labels=entity_proto.spec.labels,
)
```
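A minimal round-trip sketch of the expected behaviour once the `.name` conversion is dropped and the enum is kept (the entity name and description are placeholders):
```python
from feast import Entity, ValueType

entity = Entity(
    name="driver_id", description="Driver identifier", value_type=ValueType.STRING
)
restored = Entity.from_proto(entity.to_proto())

# After the fix, the enum survives serialization instead of decaying to "STRING".
assert isinstance(restored.value_type, ValueType)
assert restored.value_type == ValueType.STRING
```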
| 2021-02-03T02:00:47 |
|
feast-dev/feast | 1,481 | feast-dev__feast-1481 | [
"1480"
] | b8ef24ea1a880045aa793e5b55724690464c13ae | diff --git a/sdk/python/feast/infra/provider.py b/sdk/python/feast/infra/provider.py
--- a/sdk/python/feast/infra/provider.py
+++ b/sdk/python/feast/infra/provider.py
@@ -257,7 +257,7 @@ def _coerce_datetime(ts):
feature_dict = {}
for feature in feature_view.features:
idx = table.column_names.index(feature.name)
- value = python_value_to_proto_value(row[idx])
+ value = python_value_to_proto_value(row[idx], feature.dtype)
feature_dict[feature.name] = value
event_timestamp_idx = table.column_names.index(
feature_view.input.event_timestamp_column
diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -313,8 +313,10 @@ def _python_value_to_proto_value(feast_value_type, value) -> ProtoValue:
raise Exception(f"Unsupported data type: ${str(type(value))}")
-def python_value_to_proto_value(value: Any) -> ProtoValue:
- value_type = python_type_to_feast_value_type("", value)
+def python_value_to_proto_value(
+ value: Any, feature_type: ValueType = None
+) -> ProtoValue:
+ value_type = python_type_to_feast_value_type("", value) if value else feature_type
return _python_value_to_proto_value(value_type, value)
| diff --git a/sdk/python/tests/test_offline_online_store_consistency.py b/sdk/python/tests/test_offline_online_store_consistency.py
--- a/sdk/python/tests/test_offline_online_store_consistency.py
+++ b/sdk/python/tests/test_offline_online_store_consistency.py
@@ -4,7 +4,7 @@
import uuid
from datetime import datetime, timedelta
from pathlib import Path
-from typing import Iterator, Tuple, Union
+from typing import Iterator, Optional, Tuple, Union
import pandas as pd
import pytest
@@ -26,7 +26,7 @@ def create_dataset() -> pd.DataFrame:
ts = pd.Timestamp(now).round("ms")
data = {
"id": [1, 2, 1, 3, 3],
- "value": [0.1, 0.2, 0.3, 4, 5],
+ "value": [0.1, None, 0.3, 4, 5],
"ts_1": [
ts - timedelta(hours=4),
ts,
@@ -147,13 +147,17 @@ def check_offline_and_online_features(
fv: FeatureView,
driver_id: int,
event_timestamp: datetime,
- expected_value: float,
+ expected_value: Optional[float],
) -> None:
# Check online store
response_dict = fs.get_online_features(
[f"{fv.name}:value"], [{"driver": driver_id}]
).to_dict()
- assert abs(response_dict[f"{fv.name}__value"][0] - expected_value) < 1e-6
+
+ if expected_value:
+ assert abs(response_dict[f"{fv.name}__value"][0] - expected_value) < 1e-6
+ else:
+ assert response_dict[f"{fv.name}__value"][0] is None
# Check offline store
df = fs.get_historical_features(
@@ -163,7 +167,11 @@ def check_offline_and_online_features(
feature_refs=[f"{fv.name}:value"],
).to_df()
- assert abs(df.to_dict()[f"{fv.name}__value"][0] - expected_value) < 1e-6
+ if expected_value:
+ assert abs(df.to_dict()[f"{fv.name}__value"][0] - expected_value) < 1e-6
+ else:
+ df = df.where(pd.notnull(df), None)
+ assert df.to_dict()[f"{fv.name}__value"][0] is None
def run_offline_online_store_consistency_test(
@@ -181,6 +189,10 @@ def run_offline_online_store_consistency_test(
fs=fs, fv=fv, driver_id=1, event_timestamp=end_date, expected_value=0.3
)
+ check_offline_and_online_features(
+ fs=fs, fv=fv, driver_id=2, event_timestamp=end_date, expected_value=None
+ )
+
# check prior value for materialize_incremental()
check_offline_and_online_features(
fs=fs, fv=fv, driver_id=3, event_timestamp=end_date, expected_value=4
| Error during feast materialize-incremental: AttributeError: 'NoneType' object has no attribute 'dtype'
## Expected Behavior
It should be possible to materialize a FeatureView if some values in some rows are None
## Current Behavior
In the current version, an error appears when any value in the historical store is None.
On command:
```
feast materialize-incremental 2021-05-21T23:46:43
```
I have error:
```
Materializing feature view app_features from 2021-04-19 00:03:27.010110+00:00 to 2021-04-21 21:55:52+00:00
Traceback (most recent call last):
File "/opt/conda/bin/feast", line 8, in <module>
sys.exit(cli())
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/feast/cli.py", line 247, in materialize_incremental_command
store.materialize_incremental(
File "/opt/conda/lib/python3.8/site-packages/feast/feature_store.py", line 337, in materialize_incremental
provider.materialize_single_feature_view(
File "/opt/conda/lib/python3.8/site-packages/feast/infra/local.py", line 188, in materialize_single_feature_view
rows_to_write = _convert_arrow_to_proto(table, feature_view, join_keys)
File "/opt/conda/lib/python3.8/site-packages/feast/infra/provider.py", line 260, in _convert_arrow_to_proto
value = python_value_to_proto_value(row[idx])
File "/opt/conda/lib/python3.8/site-packages/feast/type_map.py", line 317, in python_value_to_proto_value
value_type = python_type_to_feast_value_type("", value)
File "/opt/conda/lib/python3.8/site-packages/feast/type_map.py", line 160, in python_type_to_feast_value_type
return type_map[value.dtype.__str__()]
AttributeError: 'NoneType' object has no attribute 'dtype'
```
## Steps to reproduce
The error will appear for any input which contains a None value in any row.
### Specifications
- Version: Feast SDK Version: "feast 0.10.0"
- Platform: Python
- Subsystem: Linux
## Possible Solution
We can use feature ValueType instead of parsing type from value if value is None
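A minimal sketch of that intended fallback, assuming the conversion helper accepts the feature's declared type as a second argument (as in the patch above):
```python
from feast import ValueType
from feast.type_map import python_value_to_proto_value

# With the fallback in place, a None cell produces an empty proto Value using the
# feature's declared dtype, instead of failing on value.dtype for a NoneType.
empty_value = python_value_to_proto_value(None, ValueType.DOUBLE)
```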
| 2021-04-20T00:15:24 |
|
feast-dev/feast | 1,504 | feast-dev__feast-1504 | [
"1502"
] | 762aeb05bc818455f327821f83902bca5d41a28f | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -35,7 +35,7 @@
DESCRIPTION = "Python SDK for Feast"
URL = "https://github.com/feast-dev/feast"
AUTHOR = "Feast"
-REQUIRES_PYTHON = ">=3.6.0"
+REQUIRES_PYTHON = ">=3.7.0"
REQUIRED = [
"Click==7.*",
@@ -191,7 +191,7 @@ def run(self):
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3.7",
],
entry_points={"console_scripts": ["feast=feast.cli:cli"]},
use_scm_version={"root": "../..", "relative_to": __file__, "tag_regex": TAG_REGEX},
| Python 3.6 Local Mode breaks due to usage of pathlib.Path and sqlite3
## Expected Behavior
Python 3.6 in local mode works when following the quickstart
## Current Behavior
Unable to follow the quickstart due to the error shown below
## Steps to reproduce
1. Create a Python 3.6 environment. E.g. `conda create --name=feast36 python=3.6`
2. Install feast and other deps `pip install feast`
3. Follow the quickstart
When running the quickstart, it will fail with the following message.
```
(feast36) ➜ temp_feat$ feast init feat1
Creating a new Feast repository in /home/user/Documents/temp_feat/feat1.
(feast36) ➜ temp_feat$ cd feat1
(feast36) ➜ feat1$ feast apply
Registered entity driver_id
Registered feature view driver_hourly_stats
Deploying infrastructure for driver_hourly_stats
Traceback (most recent call last):
File "/home/user/anaconda3/envs/feast36/bin/feast", line 8, in <module>
sys.exit(cli())
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/feast/cli.py", line 160, in apply_total_command
apply_total(repo_config, Path.cwd())
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/feast/repo_operations.py", line 148, in apply_total
partial=False,
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/feast/infra/local.py", line 55, in update_infra
conn = self._get_conn()
File "/home/user/anaconda3/envs/feast36/lib/python3.6/site-packages/feast/infra/local.py", line 45, in _get_conn
self._db_path, detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES
TypeError: argument 1 must be str, not PosixPath
```
### Specifications
- Version: 3.6
- Platform: Ubuntu 20.04, also tested on Ubuntu 18.05
- Subsystem:
## Possible Solution
The sqlite3 issue is resolved in Python 3.7 as shown here:
https://bugs.python.org/issue33496
A solution could be to add `self._db_path = str(self._db_path)` or similar in the `infra/local.py` file
I couldn't find a similar issue, in case it's resolved in an upstream commit...
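For reference, a minimal sketch of the suggested `str()` coercion (note the merged patch instead dropped Python 3.6 support); the database filename is a placeholder:
```python
import sqlite3
from pathlib import Path

db_path = Path("online_store.db")  # placeholder path, normally derived from the repo config

# Python 3.6's sqlite3.connect() only accepts str; 3.7+ accepts os.PathLike (bpo-33496).
conn = sqlite3.connect(
    str(db_path), detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES
)
```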
| 2021-04-26T17:34:42 |
||
feast-dev/feast | 1,509 | feast-dev__feast-1509 | [
"1507"
] | be44e0a60b6a4250576ad1e58202d0f15dcd61b4 | diff --git a/sdk/python/feast/cli.py b/sdk/python/feast/cli.py
--- a/sdk/python/feast/cli.py
+++ b/sdk/python/feast/cli.py
@@ -39,7 +39,13 @@
@click.group()
-def cli():
[email protected](
+ "--chdir",
+ "-c",
+ help="Switch to a different feature repository directory before executing the given subcommand.",
+)
[email protected]_context
+def cli(ctx: click.Context, chdir: str):
"""
Feast CLI
@@ -47,6 +53,8 @@ def cli():
For any questions, you can reach us at https://slack.feast.dev/
"""
+ ctx.ensure_object(dict)
+ ctx.obj["CHDIR"] = Path.cwd() if chdir is None else Path(chdir).absolute()
pass
@@ -68,12 +76,14 @@ def entities_cmd():
@entities_cmd.command("describe")
@click.argument("name", type=click.STRING)
-def entity_describe(name: str):
[email protected]_context
+def entity_describe(ctx: click.Context, name: str):
"""
Describe an entity
"""
- cli_check_repo(Path.cwd())
- store = FeatureStore(repo_path=str(Path.cwd()))
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ store = FeatureStore(repo_path=str(repo))
try:
entity = store.get_entity(name)
@@ -89,12 +99,14 @@ def entity_describe(name: str):
@entities_cmd.command(name="list")
-def entity_list():
[email protected]_context
+def entity_list(ctx: click.Context):
"""
List all entities
"""
- cli_check_repo(Path.cwd())
- store = FeatureStore(repo_path=str(Path.cwd()))
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ store = FeatureStore(repo_path=str(repo))
table = []
for entity in store.list_entities():
table.append([entity.name, entity.description, entity.value_type])
@@ -114,12 +126,14 @@ def feature_views_cmd():
@feature_views_cmd.command("describe")
@click.argument("name", type=click.STRING)
-def feature_view_describe(name: str):
[email protected]_context
+def feature_view_describe(ctx: click.Context, name: str):
"""
Describe a feature view
"""
- cli_check_repo(Path.cwd())
- store = FeatureStore(repo_path=str(Path.cwd()))
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ store = FeatureStore(repo_path=str(repo))
try:
feature_view = store.get_feature_view(name)
@@ -135,12 +149,14 @@ def feature_view_describe(name: str):
@feature_views_cmd.command(name="list")
-def feature_view_list():
[email protected]_context
+def feature_view_list(ctx: click.Context):
"""
List all feature views
"""
- cli_check_repo(Path.cwd())
- store = FeatureStore(repo_path=str(Path.cwd()))
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ store = FeatureStore(repo_path=str(repo))
table = []
for feature_view in store.list_feature_views():
table.append([feature_view.name, feature_view.entities])
@@ -151,44 +167,50 @@ def feature_view_list():
@cli.command("apply")
-def apply_total_command():
[email protected]_context
+def apply_total_command(ctx: click.Context):
"""
Create or update a feature store deployment
"""
- cli_check_repo(Path.cwd())
- repo_config = load_repo_config(Path.cwd())
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ repo_config = load_repo_config(repo)
tele = Telemetry()
tele.log("apply")
try:
- apply_total(repo_config, Path.cwd())
+ apply_total(repo_config, repo)
except FeastProviderLoginError as e:
print(str(e))
@cli.command("teardown")
-def teardown_command():
[email protected]_context
+def teardown_command(ctx: click.Context):
"""
Tear down deployed feature store infrastructure
"""
- cli_check_repo(Path.cwd())
- repo_config = load_repo_config(Path.cwd())
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ repo_config = load_repo_config(repo)
tele = Telemetry()
tele.log("teardown")
- teardown(repo_config, Path.cwd())
+ teardown(repo_config, repo)
@cli.command("registry-dump")
-def registry_dump_command():
[email protected]_context
+def registry_dump_command(ctx: click.Context):
"""
Print contents of the metadata registry
"""
- cli_check_repo(Path.cwd())
- repo_config = load_repo_config(Path.cwd())
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ repo_config = load_repo_config(repo)
tele = Telemetry()
tele.log("registry-dump")
- registry_dump(repo_config, repo_path=Path.cwd())
+ registry_dump(repo_config, repo_path=repo)
@cli.command("materialize")
@@ -197,7 +219,10 @@ def registry_dump_command():
@click.option(
"--views", "-v", help="Feature views to materialize", multiple=True,
)
-def materialize_command(start_ts: str, end_ts: str, views: List[str]):
[email protected]_context
+def materialize_command(
+ ctx: click.Context, start_ts: str, end_ts: str, views: List[str]
+):
"""
Run a (non-incremental) materialization job to ingest data into the online store. Feast
will read all data between START_TS and END_TS from the offline store and write it to the
@@ -206,8 +231,9 @@ def materialize_command(start_ts: str, end_ts: str, views: List[str]):
START_TS and END_TS should be in ISO 8601 format, e.g. '2021-07-16T19:20:01'
"""
- cli_check_repo(Path.cwd())
- store = FeatureStore(repo_path=str(Path.cwd()))
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ store = FeatureStore(repo_path=str(repo))
store.materialize(
feature_views=None if not views else views,
start_date=datetime.fromisoformat(start_ts),
@@ -220,7 +246,8 @@ def materialize_command(start_ts: str, end_ts: str, views: List[str]):
@click.option(
"--views", "-v", help="Feature views to incrementally materialize", multiple=True,
)
-def materialize_incremental_command(end_ts: str, views: List[str]):
[email protected]_context
+def materialize_incremental_command(ctx: click.Context, end_ts: str, views: List[str]):
"""
Run an incremental materialization job to ingest new data into the online store. Feast will read
all data from the previously ingested point to END_TS from the offline store and write it to the
@@ -229,8 +256,9 @@ def materialize_incremental_command(end_ts: str, views: List[str]):
END_TS should be in ISO 8601 format, e.g. '2021-07-16T19:20:01'
"""
- cli_check_repo(Path.cwd())
- store = FeatureStore(repo_path=str(Path.cwd()))
+ repo = ctx.obj["CHDIR"]
+ cli_check_repo(repo)
+ store = FeatureStore(repo_path=str(repo))
store.materialize_incremental(
feature_views=None if not views else views,
end_date=datetime.fromisoformat(end_ts),
| diff --git a/sdk/python/tests/test_cli_chdir.py b/sdk/python/tests/test_cli_chdir.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/test_cli_chdir.py
@@ -0,0 +1,56 @@
+import tempfile
+from datetime import datetime, timedelta
+from pathlib import Path
+
+from tests.cli_utils import CliRunner
+
+
+def test_cli_chdir() -> None:
+ """
+ This test simply makes sure that you can run 'feast --chdir COMMAND'
+ to switch to a feature repository before running a COMMAND.
+ """
+ runner = CliRunner()
+ with tempfile.TemporaryDirectory() as temp_dir:
+ # Make sure the path is absolute by resolving any symlinks
+ temp_path = Path(temp_dir).resolve()
+ result = runner.run(["init", "my_project"], cwd=temp_path)
+ repo_path = temp_path / "my_project"
+ assert result.returncode == 0
+
+ result = runner.run(["--chdir", repo_path, "apply"], cwd=temp_path)
+ assert result.returncode == 0
+
+ result = runner.run(["--chdir", repo_path, "entities", "list"], cwd=temp_path)
+ assert result.returncode == 0
+
+ result = runner.run(
+ ["--chdir", repo_path, "feature-views", "list"], cwd=temp_path
+ )
+ assert result.returncode == 0
+
+ end_date = datetime.utcnow()
+ start_date = end_date - timedelta(days=100)
+ result = runner.run(
+ [
+ "--chdir",
+ repo_path,
+ "materialize",
+ start_date.isoformat(),
+ end_date.isoformat(),
+ ],
+ cwd=temp_path,
+ )
+ assert result.returncode == 0
+
+ result = runner.run(
+ ["--chdir", repo_path, "materialize-incremental", end_date.isoformat()],
+ cwd=temp_path,
+ )
+ assert result.returncode == 0
+
+ result = runner.run(["--chdir", repo_path, "registry-dump"], cwd=temp_path)
+ assert result.returncode == 0
+
+ result = runner.run(["--chdir", repo_path, "teardown"], cwd=temp_path)
+ assert result.returncode == 0
| Run feast cli command without cd into feature repo directory
**Is your feature request related to a problem? Please describe.**
Is it possible to run a feast command, e.g. `feast apply`, without cd-ing into the feature repo directory?
**Describe the solution you'd like**
feast -p feature_repo apply
feast -p feature_repo materialize-incremental $CURRENT_TIME
**Possible solution**
PR #1509
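Note that the merged change (PR #1509, patched above) exposes this as a global `--chdir` / `-c` option rather than `-p`. A hedged sketch of driving it from Python, assuming `feast` is on the PATH and a repository exists at `./feature_repo`:
```python
import subprocess

# Run CLI subcommands against a feature repository without changing directory first.
subprocess.run(["feast", "--chdir", "feature_repo", "apply"], check=True)
subprocess.run(
    ["feast", "-c", "feature_repo", "materialize-incremental", "2021-04-27T00:00:00"],
    check=True,
)
```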
| 2021-04-27T09:08:02 |
|
feast-dev/feast | 1,523 | feast-dev__feast-1523 | [
"1500"
] | f6f00fc0a16e90a75e61699cf6cf833cf8b04604 | diff --git a/sdk/python/feast/data_source.py b/sdk/python/feast/data_source.py
--- a/sdk/python/feast/data_source.py
+++ b/sdk/python/feast/data_source.py
@@ -14,10 +14,15 @@
import enum
-from typing import Dict, Optional
+import re
+from typing import Callable, Dict, Iterable, Optional, Tuple
+from pyarrow.parquet import ParquetFile
+
+from feast import type_map
from feast.data_format import FileFormat, StreamFormat
from feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto
+from feast.value_type import ValueType
class SourceType(enum.Enum):
@@ -515,11 +520,45 @@ def to_proto(self) -> DataSourceProto:
"""
raise NotImplementedError
+ def _infer_event_timestamp_column(self, ts_column_type_regex_pattern):
+ ERROR_MSG_PREFIX = "Unable to infer DataSource event_timestamp_column"
+ USER_GUIDANCE = "Please specify event_timestamp_column explicitly."
+
+ if isinstance(self, FileSource) or isinstance(self, BigQuerySource):
+ event_timestamp_column, matched_flag = None, False
+ for col_name, col_datatype in self.get_table_column_names_and_types():
+ if re.match(ts_column_type_regex_pattern, col_datatype):
+ if matched_flag:
+ raise TypeError(
+ f"""
+ {ERROR_MSG_PREFIX} due to multiple possible columns satisfying
+ the criteria. {USER_GUIDANCE}
+ """
+ )
+ matched_flag = True
+ event_timestamp_column = col_name
+ if matched_flag:
+ return event_timestamp_column
+ else:
+ raise TypeError(
+ f"""
+ {ERROR_MSG_PREFIX} due to an absence of columns that satisfy the criteria.
+ {USER_GUIDANCE}
+ """
+ )
+ else:
+ raise TypeError(
+ f"""
+ {ERROR_MSG_PREFIX} because this DataSource currently does not support this inference.
+ {USER_GUIDANCE}
+ """
+ )
+
class FileSource(DataSource):
def __init__(
self,
- event_timestamp_column: str,
+ event_timestamp_column: Optional[str] = None,
file_url: Optional[str] = None,
path: Optional[str] = None,
file_format: FileFormat = None,
@@ -543,12 +582,6 @@ def __init__(
Examples:
>>> FileSource(path="/data/my_features.parquet", event_timestamp_column="datetime")
"""
- super().__init__(
- event_timestamp_column,
- created_timestamp_column,
- field_mapping,
- date_partition_column,
- )
if path is None and file_url is None:
raise ValueError(
'No "path" argument provided. Please set "path" to the location of your file source.'
@@ -561,8 +594,17 @@ def __init__(
)
else:
file_url = path
+
self._file_options = FileOptions(file_format=file_format, file_url=file_url)
+ super().__init__(
+ event_timestamp_column
+ or self._infer_event_timestamp_column(r"timestamp\[\w\w\]"),
+ created_timestamp_column,
+ field_mapping,
+ date_partition_column,
+ )
+
def __eq__(self, other):
if not isinstance(other, FileSource):
raise TypeError("Comparisons should only involve FileSource class objects.")
@@ -609,24 +651,34 @@ def to_proto(self) -> DataSourceProto:
return data_source_proto
+ @staticmethod
+ def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:
+ return type_map.pa_to_feast_value_type
+
+ def get_table_column_names_and_types(self) -> Iterable[Tuple[str, str]]:
+ schema = ParquetFile(self.path).schema_arrow
+ return zip(schema.names, map(str, schema.types))
+
class BigQuerySource(DataSource):
def __init__(
self,
- event_timestamp_column: str,
+ event_timestamp_column: Optional[str] = None,
table_ref: Optional[str] = None,
created_timestamp_column: Optional[str] = "",
field_mapping: Optional[Dict[str, str]] = None,
date_partition_column: Optional[str] = "",
query: Optional[str] = None,
):
+ self._bigquery_options = BigQueryOptions(table_ref=table_ref, query=query)
+
super().__init__(
- event_timestamp_column,
+ event_timestamp_column
+ or self._infer_event_timestamp_column("TIMESTAMP|DATETIME"),
created_timestamp_column,
field_mapping,
date_partition_column,
)
- self._bigquery_options = BigQueryOptions(table_ref=table_ref, query=query)
def __eq__(self, other):
if not isinstance(other, BigQuerySource):
@@ -684,6 +736,39 @@ def get_table_query_string(self) -> str:
else:
return f"({self.query})"
+ @staticmethod
+ def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:
+ return type_map.bq_to_feast_value_type
+
+ def get_table_column_names_and_types(self) -> Iterable[Tuple[str, str]]:
+ from google.cloud import bigquery
+
+ client = bigquery.Client()
+ bq_columns_query = ""
+ name_type_pairs = []
+ if self.table_ref is not None:
+ project_id, dataset_id, table_id = self.table_ref.split(".")
+ bq_columns_query = f"""
+ SELECT COLUMN_NAME, DATA_TYPE FROM {project_id}.{dataset_id}.INFORMATION_SCHEMA.COLUMNS
+ WHERE TABLE_NAME = '{table_id}'
+ """
+ table_schema = (
+ client.query(bq_columns_query).result().to_dataframe_iterable()
+ )
+ for df in table_schema:
+ name_type_pairs.extend(
+ list(zip(df["COLUMN_NAME"].to_list(), df["DATA_TYPE"].to_list()))
+ )
+ else:
+ bq_columns_query = f"SELECT * FROM ({self.query}) LIMIT 1"
+ queryRes = client.query(bq_columns_query).result()
+ name_type_pairs = [
+ (schema_field.name, schema_field.field_type)
+ for schema_field in queryRes.schema
+ ]
+
+ return name_type_pairs
+
class KafkaSource(DataSource):
def __init__(
diff --git a/sdk/python/feast/feature_view.py b/sdk/python/feast/feature_view.py
--- a/sdk/python/feast/feature_view.py
+++ b/sdk/python/feast/feature_view.py
@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+import re
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Tuple, Union
@@ -55,12 +56,35 @@ def __init__(
self,
name: str,
entities: List[str],
- features: List[Feature],
ttl: Optional[Union[Duration, timedelta]],
input: Union[BigQuerySource, FileSource],
+ features: List[Feature] = [],
tags: Optional[Dict[str, str]] = None,
online: bool = True,
):
+ if not features:
+ features = [] # to handle python's mutable default arguments
+ columns_to_exclude = {
+ input.event_timestamp_column,
+ input.created_timestamp_column,
+ } | set(entities)
+
+ for col_name, col_datatype in input.get_table_column_names_and_types():
+ if col_name not in columns_to_exclude and not re.match(
+ "^__|__$", col_name
+ ):
+ features.append(
+ Feature(
+ col_name,
+ input.source_datatype_to_feast_value_type()(col_datatype),
+ )
+ )
+
+ if not features:
+ raise ValueError(
+ f"Could not infer Features for the FeatureView named {name}. Please specify Features explicitly for this FeatureView."
+ )
+
cols = [entity for entity in entities] + [feat.name for feat in features]
for col in cols:
if input.field_mapping is not None and col in input.field_mapping.keys():
diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -417,7 +417,7 @@ def pa_to_value_type(pa_type: object):
return type_map[pa_type.__str__()]
-def pa_to_feast_value_type(value: pa.lib.ChunkedArray) -> ValueType:
+def pa_to_feast_value_type(value: Union[pa.lib.ChunkedArray, str]) -> ValueType:
type_map = {
"timestamp[ms]": ValueType.INT64,
"int32": ValueType.INT32,
@@ -435,7 +435,9 @@ def pa_to_feast_value_type(value: pa.lib.ChunkedArray) -> ValueType:
"list<item: binary>": ValueType.BYTES_LIST,
"list<item: bool>": ValueType.BOOL_LIST,
}
- return type_map[value.type.__str__()]
+ return type_map[
+ value.type.__str__() if isinstance(value, pa.lib.ChunkedArray) else value
+ ]
def pa_column_to_timestamp_proto_column(column: pa.lib.ChunkedArray) -> List[Timestamp]:
@@ -480,3 +482,24 @@ def pa_column_to_proto_column(
]
else:
return [ProtoValue(**{value: x.as_py()}) for x in column]
+
+
+def bq_to_feast_value_type(bq_type_as_str):
+ type_map: Dict[ValueType, Union[str, Dict[str, Any]]] = {
+ "DATETIME": ValueType.STRING, # Update to ValueType.UNIX_TIMESTAMP once #1520 lands.
+ "TIMESTAMP": ValueType.STRING, # Update to ValueType.UNIX_TIMESTAMP once #1520 lands.
+ "INTEGER": ValueType.INT64,
+ "INT64": ValueType.INT64,
+ "STRING": ValueType.STRING,
+ "FLOAT": ValueType.DOUBLE,
+ "FLOAT64": ValueType.DOUBLE,
+ "BYTES": ValueType.BYTES,
+ "BOOL": ValueType.BOOL,
+ "ARRAY<INT64>": ValueType.INT64_LIST,
+ "ARRAY<FLOAT64>": ValueType.DOUBLE_LIST,
+ "ARRAY<STRING>": ValueType.STRING_LIST,
+ "ARRAY<BYTES>": ValueType.BYTES_LIST,
+ "ARRAY<BOOL>": ValueType.BOOL_LIST,
+ }
+
+ return type_map[bq_type_as_str]
| diff --git a/sdk/python/tests/example_feature_repo_with_inference.py b/sdk/python/tests/example_feature_repo_with_inference.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/example_feature_repo_with_inference.py
@@ -0,0 +1,21 @@
+from google.protobuf.duration_pb2 import Duration
+
+from feast import Entity, FeatureView, ValueType
+from feast.data_source import FileSource
+
+driver_hourly_stats = FileSource(
+ path="%PARQUET_PATH%", # placeholder to be replaced by the test
+ created_timestamp_column="created",
+)
+
+driver = Entity(name="driver_id", value_type=ValueType.INT64, description="driver id",)
+
+# features are inferred from columns of data source
+driver_hourly_stats_view = FeatureView(
+ name="driver_hourly_stats",
+ entities=["driver_id"],
+ ttl=Duration(seconds=86400 * 1),
+ online=True,
+ input=driver_hourly_stats,
+ tags={},
+)
diff --git a/sdk/python/tests/fixtures/data_source_fixtures.py b/sdk/python/tests/fixtures/data_source_fixtures.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/fixtures/data_source_fixtures.py
@@ -0,0 +1,79 @@
+import contextlib
+import tempfile
+from datetime import datetime, timedelta
+
+import pandas as pd
+import pytest
+from google.cloud import bigquery
+
+from feast.data_format import ParquetFormat
+from feast.data_source import BigQuerySource, FileSource
+
+
[email protected]
+def simple_dataset_1() -> pd.DataFrame:
+ now = datetime.utcnow()
+ ts = pd.Timestamp(now).round("ms")
+ data = {
+ "id": [1, 2, 1, 3, 3],
+ "float_col": [0.1, 0.2, 0.3, 4, 5],
+ "int64_col": [1, 2, 3, 4, 5],
+ "string_col": ["a", "b", "c", "d", "e"],
+ "ts_1": [
+ ts,
+ ts - timedelta(hours=4),
+ ts - timedelta(hours=3),
+ ts - timedelta(hours=2),
+ ts - timedelta(hours=1),
+ ],
+ }
+ return pd.DataFrame.from_dict(data)
+
+
[email protected]
+def prep_file_source(df, event_timestamp_column="") -> FileSource:
+ with tempfile.NamedTemporaryFile(suffix=".parquet") as f:
+ f.close()
+ df.to_parquet(f.name)
+ file_source = FileSource(
+ file_format=ParquetFormat(),
+ file_url=f.name,
+ event_timestamp_column=event_timestamp_column,
+ )
+ yield file_source
+
+
+def simple_bq_source_using_table_ref_arg(
+ df, event_timestamp_column=""
+) -> BigQuerySource:
+ client = bigquery.Client()
+ gcp_project = client.project
+ bigquery_dataset = "ds"
+ dataset = bigquery.Dataset(f"{gcp_project}.{bigquery_dataset}")
+ client.create_dataset(dataset, exists_ok=True)
+ dataset.default_table_expiration_ms = (
+ 1000
+ * 60
+ * 60 # 60 minutes in milliseconds (seems to be minimum limit for gcloud)
+ )
+ client.update_dataset(dataset, ["default_table_expiration_ms"])
+ table_ref = f"{gcp_project}.{bigquery_dataset}.table_1"
+
+ job = client.load_table_from_dataframe(
+ df, table_ref, job_config=bigquery.LoadJobConfig()
+ )
+ job.result()
+
+ return BigQuerySource(
+ table_ref=table_ref, event_timestamp_column=event_timestamp_column,
+ )
+
+
+def simple_bq_source_using_query_arg(df, event_timestamp_column="") -> BigQuerySource:
+ bq_source_using_table_ref = simple_bq_source_using_table_ref_arg(
+ df, event_timestamp_column
+ )
+ return BigQuerySource(
+ query=f"SELECT * FROM {bq_source_using_table_ref.table_ref}",
+ event_timestamp_column=event_timestamp_column,
+ )
diff --git a/sdk/python/tests/test_feature_store.py b/sdk/python/tests/test_feature_store.py
--- a/sdk/python/tests/test_feature_store.py
+++ b/sdk/python/tests/test_feature_store.py
@@ -16,6 +16,12 @@
from tempfile import mkstemp
import pytest
+from fixtures.data_source_fixtures import simple_dataset_1 # noqa: F401
+from fixtures.data_source_fixtures import (
+ prep_file_source,
+ simple_bq_source_using_query_arg,
+ simple_bq_source_using_table_ref_arg,
+)
from pytest_lazyfixture import lazy_fixture
from feast.data_format import ParquetFormat
@@ -178,6 +184,70 @@ def test_apply_feature_view_success(test_feature_store):
)
[email protected]
[email protected](
+ "test_feature_store", [lazy_fixture("feature_store_with_local_registry")],
+)
[email protected]("dataframe_source", [lazy_fixture("simple_dataset_1")])
+def test_feature_view_inference_success(test_feature_store, dataframe_source):
+ with prep_file_source(
+ df=dataframe_source, event_timestamp_column="ts_1"
+ ) as file_source:
+ fv1 = FeatureView(
+ name="fv1",
+ entities=["id"],
+ ttl=timedelta(minutes=5),
+ online=True,
+ input=file_source,
+ tags={},
+ )
+
+ fv2 = FeatureView(
+ name="fv2",
+ entities=["id"],
+ ttl=timedelta(minutes=5),
+ online=True,
+ input=simple_bq_source_using_table_ref_arg(dataframe_source, "ts_1"),
+ tags={},
+ )
+
+ fv3 = FeatureView(
+ name="fv3",
+ entities=["id"],
+ ttl=timedelta(minutes=5),
+ online=True,
+ input=simple_bq_source_using_query_arg(dataframe_source, "ts_1"),
+ tags={},
+ )
+
+ test_feature_store.apply([fv1, fv2, fv3]) # Register Feature Views
+ feature_view_1 = test_feature_store.list_feature_views()[0]
+ feature_view_2 = test_feature_store.list_feature_views()[1]
+ feature_view_3 = test_feature_store.list_feature_views()[2]
+
+ actual_file_source = {
+ (feature.name, feature.dtype) for feature in feature_view_1.features
+ }
+ actual_bq_using_table_ref_arg_source = {
+ (feature.name, feature.dtype) for feature in feature_view_2.features
+ }
+ actual_bq_using_query_arg_source = {
+ (feature.name, feature.dtype) for feature in feature_view_3.features
+ }
+ expected = {
+ ("float_col", ValueType.DOUBLE),
+ ("int64_col", ValueType.INT64),
+ ("string_col", ValueType.STRING),
+ }
+
+ assert (
+ expected
+ == actual_file_source
+ == actual_bq_using_table_ref_arg_source
+ == actual_bq_using_query_arg_source
+ )
+
+
@pytest.mark.integration
@pytest.mark.parametrize(
"test_feature_store", [lazy_fixture("feature_store_with_gcs_registry")],
@@ -245,6 +315,22 @@ def test_apply_feature_view_integration(test_feature_store):
assert len(feature_views) == 0
[email protected]
[email protected]("dataframe_source", [lazy_fixture("simple_dataset_1")])
+def test_data_source_ts_col_inference_success(dataframe_source):
+ with prep_file_source(df=dataframe_source) as file_source:
+ actual_file_source = file_source.event_timestamp_column
+ actual_bq_1 = simple_bq_source_using_table_ref_arg(
+ dataframe_source
+ ).event_timestamp_column
+ actual_bq_2 = simple_bq_source_using_query_arg(
+ dataframe_source
+ ).event_timestamp_column
+ expected = "ts_1"
+
+ assert expected == actual_file_source == actual_bq_1 == actual_bq_2
+
+
@pytest.mark.parametrize(
"test_feature_store", [lazy_fixture("feature_store_with_local_registry")],
)
| Add schema inferencing to feature views
Please see [[RFC] FeatureView schema inference](https://docs.google.com/document/d/1MkWvexE4e5nYWcQLELFnJ5o9OlJDKC2rn_USHMDT9dg/edit#)
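A hedged sketch of the proposed usage, mirroring the example repository added in the tests above: no `features=` list and no `event_timestamp_column`, both inferred from the parquet schema (the parquet path is a placeholder and must point at an existing file with entity, timestamp, and feature columns):
```python
from google.protobuf.duration_pb2 import Duration

from feast import Entity, FeatureView, ValueType
from feast.data_source import FileSource

driver_hourly_stats = FileSource(
    path="data/driver_stats.parquet",    # placeholder parquet file
    created_timestamp_column="created",  # the event timestamp column is inferred
)

driver = Entity(name="driver_id", value_type=ValueType.INT64, description="driver id")

# No `features=` argument: the schema is inferred from the parquet columns,
# excluding entity, timestamp, and dunder columns.
driver_hourly_stats_view = FeatureView(
    name="driver_hourly_stats",
    entities=["driver_id"],
    ttl=Duration(seconds=86400),
    online=True,
    input=driver_hourly_stats,
)
```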
| Will work on this!! | 2021-04-29T16:30:38 |
feast-dev/feast | 1,526 | feast-dev__feast-1526 | [
"1524"
] | 3d3f806a0a6846b483e475db6efa8c9b39ec6f8a | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -43,15 +43,14 @@
"fastavro>=0.22.11,<0.23",
"google-api-core>=1.23.0",
"googleapis-common-protos==1.52.*",
- "grpcio>=1.32.0",
+ "grpcio>=1.34.0",
"Jinja2>=2.0.0",
"jsonschema",
"mmh3",
- "numpy<1.20.0",
- "pandas~=1.0.0",
+ "pandas>=1.0.0",
"pandavro==1.5.*",
"protobuf>=3.10",
- "pyarrow==2.0.0",
+ "pyarrow>=2.0.0",
"pydantic>=1.0.0",
"PyYAML==5.3.*",
"tabulate==0.8.*",
@@ -72,8 +71,8 @@
"flake8",
"black==19.10b0",
"isort>=5",
- "grpcio-tools>=1.32.0",
- "grpcio-testing>=1.32.0",
+ "grpcio-tools==1.34.0",
+ "grpcio-testing==1.34.0",
"mock==2.0.0",
"moto",
"mypy==0.790",
@@ -205,7 +204,7 @@ def run(self):
],
entry_points={"console_scripts": ["feast=feast.cli:cli"]},
use_scm_version={"root": "../..", "relative_to": __file__, "tag_regex": TAG_REGEX},
- setup_requires=["setuptools_scm", "grpcio", "grpcio-tools>=1.32.0", "mypy-protobuf", "sphinx"],
+ setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf", "sphinx"],
package_data={
"": [
"protos/feast/**/*.proto",
| diff --git a/.github/workflows/integration_tests.yml b/.github/workflows/integration_tests.yml
--- a/.github/workflows/integration_tests.yml
+++ b/.github/workflows/integration_tests.yml
@@ -7,10 +7,19 @@ on:
jobs:
integration-test-python:
- runs-on: ubuntu-latest
- container: python:3.7
+ runs-on: ${{ matrix.os }}
+ strategy:
+ fail-fast: false
+ matrix:
+ python-version: [ 3.7, 3.8, 3.9 ]
+ os: [ ubuntu-latest, macOS-latest ]
steps:
- uses: actions/checkout@v2
+ - name: Setup Python
+ uses: actions/setup-python@v2
+ with:
+ python-version: ${{ matrix.python-version }}
+ architecture: x64
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@master
with:
diff --git a/.github/workflows/pr_integration_tests.yml b/.github/workflows/pr_integration_tests.yml
--- a/.github/workflows/pr_integration_tests.yml
+++ b/.github/workflows/pr_integration_tests.yml
@@ -12,8 +12,12 @@ jobs:
# all jobs MUST have this if check for 'ok-to-test' or 'approved' for security purposes.
if: (github.event.action == 'labeled' && (github.event.label.name == 'approved' || github.event.label.name == 'ok-to-test'))
|| (github.event.action != 'labeled' && (contains(github.event.pull_request.labels.*.name, 'ok-to-test') || contains(github.event.pull_request.labels.*.name, 'approved')))
- runs-on: ubuntu-latest
- container: python:3.7
+ runs-on: ${{ matrix.os }}
+ strategy:
+ fail-fast: false
+ matrix:
+ python-version: [ 3.7, 3.8, 3.9 ]
+ os: [ ubuntu-latest, macOS-latest ]
services:
redis:
image: redis
@@ -30,6 +34,11 @@ jobs:
# code from the PR.
ref: refs/pull/${{ github.event.pull_request.number }}/merge
submodules: recursive
+ - name: Setup Python
+ uses: actions/setup-python@v2
+ with:
+ python-version: ${{ matrix.python-version }}
+ architecture: x64
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@master
with:
diff --git a/.github/workflows/unit_tests.yml b/.github/workflows/unit_tests.yml
--- a/.github/workflows/unit_tests.yml
+++ b/.github/workflows/unit_tests.yml
@@ -1,13 +1,21 @@
name: unit-tests
on: [push, pull_request]
-
jobs:
unit-test-python:
- runs-on: ubuntu-latest
- container: python:3.7
+ runs-on: ${{ matrix.os }}
+ strategy:
+ fail-fast: false
+ matrix:
+ python-version: [ 3.7, 3.8, 3.9 ]
+ os: [ ubuntu-latest, macOS-latest]
steps:
- uses: actions/checkout@v2
+ - name: Setup Python
+ uses: actions/setup-python@v2
+ with:
+ python-version: ${{ matrix.python-version }}
+ architecture: x64
- name: Install dependencies
run: make install-python-ci-dependencies
- name: Test Python
| Add cross-environment testing to GitHub Actions
Instead of just testing `python:3.7`, we should test
* Multiple operating systems
* Multiple versions of Python
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idruns-on
| 2021-04-29T23:13:32 |
|
feast-dev/feast | 1,550 | feast-dev__feast-1550 | [
"1531"
] | 77a79755561132fa68f17916f29e928522a9fa52 | diff --git a/sdk/python/feast/cli.py b/sdk/python/feast/cli.py
--- a/sdk/python/feast/cli.py
+++ b/sdk/python/feast/cli.py
@@ -38,6 +38,19 @@
DATETIME_ISO = "%Y-%m-%dT%H:%M:%s"
+class NoOptionDefaultFormat(click.Command):
+ def format_options(self, ctx: click.Context, formatter: click.HelpFormatter):
+ """Writes all the options into the formatter if they exist."""
+ opts = []
+ for param in self.get_params(ctx):
+ rv = param.get_help_record(ctx)
+ if rv is not None:
+ opts.append(rv)
+ if opts:
+ with formatter.section("Options(No current command options)"):
+ formatter.write_dl(opts)
+
+
@click.group()
@click.option(
"--chdir",
@@ -166,7 +179,7 @@ def feature_view_list(ctx: click.Context):
print(tabulate(table, headers=["NAME", "ENTITIES"], tablefmt="plain"))
[email protected]("apply")
[email protected]("apply", cls=NoOptionDefaultFormat)
@click.pass_context
def apply_total_command(ctx: click.Context):
"""
@@ -183,7 +196,7 @@ def apply_total_command(ctx: click.Context):
print(str(e))
[email protected]("teardown")
[email protected]("teardown", cls=NoOptionDefaultFormat)
@click.pass_context
def teardown_command(ctx: click.Context):
"""
| feast cli minor bug
## Expected Behavior
`feast teardown --help` should list the options for this subcommand, like:
> (feast) ➜ examples git:(br_jsd_mlflow_feast_example) ✗ feast init --help
> Usage: feast init [OPTIONS] [PROJECT_DIRECTORY]
>
> Create a new Feast repository
>
> Options:
> -m, --minimal Create an empty project repository
> -t, --template [local|gcp] Specify a template for the created project
> --help Show this message and exit.
## Current Behavior
> (base) ➜ delta-lake git:(master) ✗ feast teardown --help
> Usage: feast teardown [OPTIONS]
>
> Tear down deployed feature store infrastructure
>
> Options:
> --help Show this message and exit.
I reckon this is a generic message shown when the subcommand has no options. For example, `feast apply --help` exhibits similar behavior.
## Steps to reproduce
`feast teardown --help`
### Specifications
- Version: 0.10.3
- Platform: MacOS Mojave
- Subsystem:
| Thanks @dmatrix. Definitely a few rough edges around the new CLI. Let's keep this issue open until resolved!
@dmatrix I'd like to fix this. What exactly should the `feast teardown --help` and `feast apply --help` print ?
@tedhtchang Generally speaking, most usage messages would print out options if available. For example, the
`feast init --help` will print out:
> (base) ➜ delta-lake git:(master) ✗ feast init --help
> Usage: feast init [OPTIONS] [PROJECT_DIRECTORY]
>
> Create a new Feast repository
>
> Options:
> -m, --minimal Create an empty project repository
> -t, --template [local|gcp] Specify a template for the created project
> --help Show this message and exit.
If no options exist for a subcommand, then it's not unreasonable to indicate none exists, IMHO. For example, one can
envision something such as:
> Usage: feast teardown [OPTIONS]
>
> Tear down deployed feature store infrastructure
>
> Options:(No current command options)
> --help Show this message and exit.
The best place to start would be [here]( https://github.com/feast-dev/feast/blob/master/sdk/python/feast/cli.py#L168).
cc: @woop can recommend or opine on what should the message be or what should it indicate.
Currently we don't have options for this command (teardown and apply), so following @dmatrix's second recommendation may make sense. | 2021-05-08T00:05:00 |
|
feast-dev/feast | 1,558 | feast-dev__feast-1558 | [
"1545"
] | 77a79755561132fa68f17916f29e928522a9fa52 | diff --git a/sdk/python/feast/repo_operations.py b/sdk/python/feast/repo_operations.py
--- a/sdk/python/feast/repo_operations.py
+++ b/sdk/python/feast/repo_operations.py
@@ -1,6 +1,7 @@
import importlib
import os
import random
+import re
import sys
from datetime import timedelta
from importlib.abc import Loader
@@ -8,6 +9,7 @@
from typing import List, NamedTuple, Set, Union
import click
+from click.exceptions import BadParameter
from feast import Entity, FeatureTable
from feast.feature_view import FeatureView
@@ -110,6 +112,12 @@ def apply_total(repo_config: RepoConfig, repo_path: Path):
sys.path.append("")
registry_config = repo_config.get_registry_config()
project = repo_config.project
+ if not_valid_name(project):
+ print(
+ f"{project} is not valid. Project name should only have "
+ f"alphanumerical values and underscores."
+ )
+ sys.exit(1)
registry = Registry(
registry_path=registry_config.path,
repo_path=repo_path,
@@ -262,6 +270,11 @@ def init_repo(repo_name: str, template: str):
from colorama import Fore, Style
+ if not_valid_name(repo_name):
+ raise BadParameter(
+ message="Name should be alphanumeric values and underscores",
+ param_hint="PROJECT_DIRECTORY",
+ )
repo_path = Path(os.path.join(Path.cwd(), repo_name))
repo_path.mkdir(exist_ok=True)
repo_config_path = repo_path / "feature_store.yaml"
@@ -314,6 +327,11 @@ def init_repo(repo_name: str, template: str):
click.echo()
+def not_valid_name(name: str) -> bool:
+ """Test project or repo names. True if names have characters other than alphanumeric values and underscores"""
+ return re.compile(r"\W+").search(name) is not None
+
+
def replace_str_in_file(file_path, match_str, sub_str):
with open(file_path, "r") as f:
contents = f.read()
| Failure when there is a hyphen in project names
## Expected Behavior
Able to run `feast apply` when there is a hyphen in the project name
## Current Behavior
```
Registered entity year_month_day
Registered feature view serve_weather_features
Deploying infrastructure for serve_weather_features
Traceback (most recent call last):
File "/home/willem/.pyenv/versions/3.7.7/bin/feast", line 33, in <module>
sys.exit(load_entry_point('feast', 'console_scripts', 'feast')())
File "/home/willem/.pyenv/versions/3.7.7/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/willem/.pyenv/versions/3.7.7/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/willem/.pyenv/versions/3.7.7/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/willem/.pyenv/versions/3.7.7/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/willem/.pyenv/versions/3.7.7/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/willem/Projects/feast/sdk/python/feast/cli.py", line 163, in apply_total_command
apply_total(repo_config, Path.cwd())
File "/home/willem/Projects/feast/sdk/python/feast/repo_operations.py", line 208, in apply_total
partial=False,
File "/home/willem/Projects/feast/sdk/python/feast/infra/local.py", line 61, in update_infra
f"CREATE TABLE IF NOT EXISTS {_table_id(project, table)} (entity_key BLOB, feature_name TEXT, value BLOB, event_ts timestamp, created_ts timestamp, PRIMARY KEY(entity_key, feature_name))"
sqlite3.OperationalError: near "-": syntax error
```
## Steps to reproduce
Clone https://github.com/dmatrix/mlflow-tests/tree/master/py/delta-lake
Run `feast apply`
### Specifications
- Version: 0.10.3
## Possible Solution
The problem is that we don't support hyphens in project names. We should either prevent the use of hyphens or support them at the storage layer.
| @woop As you know, SQLite disallows "-" in a table name. For example:
```
sqlite3 bug.db 'create table table-with-dash ("col" text)'
Error: near "-": syntax error
```
SQLite does not support this at the storage layer, albeit creates the "bug.db" database without a table.
Since the `feast apply` uses the `project:<name>` table from the `feature_store.yaml`, created by `feast init <name>`, does it make sense to parse the <name> argument to the `feast init <name>` and flag an error, disallowing its usage?
Preventing the use of "-" might be an easier workaround early on, while initializing a feast repo, unless you want to support "-" at both local and remote storage layers.
WDYT?
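A minimal sketch of the kind of name check being discussed here (the helper name and the exact rule are illustrative, not the final Feast implementation):
```python
import re

def is_valid_project_name(name: str) -> bool:
    # Illustrative rule: allow only letters, digits and underscores
    return re.fullmatch(r"\w+", name) is not None

assert is_valid_project_name("delta_lake")
assert not is_valid_project_name("delta-lake")  # the hyphen would later break the SQLite table name
```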
> @woop As you know, SQLite disallows "-" in a table name. For example:
>
> ```
> sqlite3 bug.db 'create table table-with-dash ("col" text)'
> Error: near "-": syntax error
> ```
>
> SQLite does not support this at the storage layer, albeit creates the "bug.db" database without a table.
>
> Since the `feast apply` uses the `project:<name>` table from the `feature_store.yaml`, created by `feast init <name>`, does it make sense to parse the argument to the `feast init <name>` and flag an error, disallowing its usage?
>
> Preventing the use of "-" might be an easier workaround early on, while initializing a feast repo, unless you want to support "-" at both local and remote storage layers.
>
> WDYT?
Spot on. It may be easiest to just allow alphanumeric values and underscores in project names as a start. As you say, validation would have to happen during init and also during apply. | 2021-05-13T10:10:29 |
|
feast-dev/feast | 1,580 | feast-dev__feast-1580 | [
"1574"
] | 6e243b821f77f6b58128472b876d8e5979ca049e | diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -330,7 +330,11 @@ def _python_value_to_proto_value(feast_value_type, value) -> ProtoValue:
def python_value_to_proto_value(
value: Any, feature_type: ValueType = None
) -> ProtoValue:
- value_type = python_type_to_feast_value_type("", value) if value else feature_type
+ value_type = (
+ python_type_to_feast_value_type("", value)
+ if value is not None
+ else feature_type
+ )
return _python_value_to_proto_value(value_type, value)
| Zeroed entity value causes materialization to fail
If the user uses an INT32 entity type, and sets their entity value to `0`, then type_map.py will throw an exception here https://github.com/feast-dev/feast/blob/master/sdk/python/feast/type_map.py#L207 that says `feast_value_type` is `None` and does not contain a `name` field. The root cause is the truthiness check `if value`: `0` is falsy, so type inference is skipped and the (possibly `None`) declared feature type is used instead.
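A tiny, self-contained illustration of that pitfall (the inference helper below is a stand-in, not the real Feast function):
```python
def infer_type(value):
    return type(value).__name__  # stand-in for Feast's actual type inference

value, feature_type = 0, None    # entity value of 0, no declared type

buggy = infer_type(value) if value else feature_type          # 0 is falsy -> falls back to None
fixed = infer_type(value) if value is not None else feature_type

assert buggy is None   # the later `.name` access is what blows up
assert fixed == "int"
```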
| 2021-05-24T17:31:33 |
||
feast-dev/feast | 1,582 | feast-dev__feast-1582 | [
"1575"
] | dfb029df97ab04f6a26e87505e0379caadc4f069 | diff --git a/sdk/python/feast/infra/gcp.py b/sdk/python/feast/infra/gcp.py
--- a/sdk/python/feast/infra/gcp.py
+++ b/sdk/python/feast/infra/gcp.py
@@ -42,6 +42,8 @@ def __init__(self, config: RepoConfig):
assert isinstance(config.online_store, DatastoreOnlineStoreConfig)
self._gcp_project_id = config.online_store.project_id
self._namespace = config.online_store.namespace
+ self._write_concurrency = config.online_store.write_concurrency
+ self._write_batch_size = config.online_store.write_batch_size
assert config.offline_store is not None
self.offline_store = get_offline_store_from_config(config.offline_store)
@@ -72,7 +74,9 @@ def update_infra(
for table in tables_to_keep:
key = client.key("Project", project, "Table", table.name)
- entity = datastore.Entity(key=key)
+ entity = datastore.Entity(
+ key=key, exclude_from_indexes=("created_ts", "event_ts", "values")
+ )
entity.update({"created_ts": datetime.utcnow()})
client.put(entity)
@@ -113,10 +117,10 @@ def online_write_batch(
) -> None:
client = self._initialize_client()
- pool = ThreadPool(processes=40)
+ pool = ThreadPool(processes=self._write_concurrency)
pool.map(
lambda b: _write_minibatch(client, project, table, b, progress),
- _to_minibatches(data),
+ _to_minibatches(data, batch_size=self._write_batch_size),
)
def online_read(
@@ -217,7 +221,7 @@ def get_historical_features(
]
-def _to_minibatches(data: ProtoBatch, batch_size=50) -> Iterator[ProtoBatch]:
+def _to_minibatches(data: ProtoBatch, batch_size) -> Iterator[ProtoBatch]:
"""
Split data into minibatches, making sure we stay under GCP datastore transaction size
limits.
@@ -247,7 +251,9 @@ def _write_minibatch(
key = client.key("Project", project, "Table", table.name, "Row", document_id,)
- entity = datastore.Entity(key=key)
+ entity = datastore.Entity(
+ key=key, exclude_from_indexes=("created_ts", "event_ts", "values")
+ )
entity.update(
dict(
diff --git a/sdk/python/feast/repo_config.py b/sdk/python/feast/repo_config.py
--- a/sdk/python/feast/repo_config.py
+++ b/sdk/python/feast/repo_config.py
@@ -1,7 +1,14 @@
from pathlib import Path
import yaml
-from pydantic import BaseModel, StrictInt, StrictStr, ValidationError, root_validator
+from pydantic import (
+ BaseModel,
+ PositiveInt,
+ StrictInt,
+ StrictStr,
+ ValidationError,
+ root_validator,
+)
from pydantic.error_wrappers import ErrorWrapper
from pydantic.typing import Dict, Literal, Optional, Union
@@ -38,6 +45,12 @@ class DatastoreOnlineStoreConfig(FeastBaseModel):
namespace: Optional[StrictStr] = None
""" (optional) Datastore namespace """
+ write_concurrency: Optional[PositiveInt] = 40
+ """ (optional) Amount of threads to use when writing batches of feature rows into Datastore"""
+
+ write_batch_size: Optional[PositiveInt] = 50
+ """ (optional) Amount of feature rows per batch being written into Datastore"""
+
OnlineStoreConfig = Union[DatastoreOnlineStoreConfig, SqliteOnlineStoreConfig]
| Materialization into Datastore leads to contention error
This issue was reported directly to us by a user. Documenting the failure here for posterity:
```
Materializing 1 feature views from 2021-04-29 21:19:00-07:00 to 2021-04-29 21:19:05-07:00 into the datastore online store.
my_fv:
8%|████▋ | 2720/33120 [00:07<01:18, 388.10it/s]
[SNIP]
google.api_core.exceptions.Aborted: 409 too much contention on these datastore entities. please try again. entity groups:
```
Example of the data used for ingestion https://pastebin.com/raw/GCJ2CLSb (this is just a sample, the actual ingestion contains many more rows)
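The fix makes the write parallelism configurable; a sketch of how those knobs could be dialed down when contention shows up (values here are illustrative, the defaults in the patch are 40 threads and batches of 50):
```python
from feast.repo_config import DatastoreOnlineStoreConfig

# Assumes a Feast version that includes this change; lower settings trade
# ingestion throughput for fewer conflicting Datastore transactions.
online_store = DatastoreOnlineStoreConfig(
    project_id="my-gcp-project",  # placeholder project
    write_concurrency=10,
    write_batch_size=25,
)
```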
| 2021-05-24T18:34:50 |
||
feast-dev/feast | 1,585 | feast-dev__feast-1585 | [
"1584"
] | a3d7c8661039eb3abc1beb9cb48718a6ad24e3ef | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -40,7 +40,7 @@
REQUIRED = [
"Click==7.*",
"colorama>=0.3.9",
- "fastavro>=0.22.11,<0.23",
+ "fastavro>=1.1.0",
"google-api-core>=1.23.0",
"googleapis-common-protos==1.52.*",
"grpcio>=1.34.0",
| Bump fastavro version
**Is your feature request related to a problem? Please describe.**
The version of Fastavro that we're using is kinda old and may be buggy soon. It's also causing some version conflicts with packages that have already upgraded to the newer (1.xx) versions.
**Describe the solution you'd like**
Bump Fastavro to 1.x.x
| Sounds good to me. @kevinhu do you want to try to take a stab at it? | 2021-05-25T22:10:27 |
|
feast-dev/feast | 1,623 | feast-dev__feast-1623 | [
"1613"
] | b32e76602d15ab1b3d8e499bfd6079156c62c2b1 | diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -474,7 +474,7 @@ def get_online_features(
Note: This method will download the full feature registry the first time it is run. If you are using a
remote registry like GCS or S3 then that may take a few seconds. The registry remains cached up to a TTL
- duration (which can be set to infinitey). If the cached registry is stale (more time than the TTL has
+ duration (which can be set to infinity). If the cached registry is stale (more time than the TTL has
passed), then a new registry will be downloaded synchronously by this method. This download may
introduce latency to online feature retrieval. In order to avoid synchronous downloads, please call
refresh_registry() prior to the TTL being reached. Remember it is possible to set the cache TTL to
diff --git a/sdk/python/feast/online_response.py b/sdk/python/feast/online_response.py
--- a/sdk/python/feast/online_response.py
+++ b/sdk/python/feast/online_response.py
@@ -14,6 +14,8 @@
from typing import Any, Dict, List, cast
+import pandas as pd
+
from feast.protos.feast.serving.ServingService_pb2 import (
GetOnlineFeaturesRequestV2,
GetOnlineFeaturesResponse,
@@ -61,6 +63,13 @@ def to_dict(self) -> Dict[str, Any]:
return features_dict
+ def to_df(self) -> pd.DataFrame:
+ """
+ Converts GetOnlineFeaturesResponse features into Panda dataframe form.
+ """
+
+ return pd.DataFrame(self.to_dict())
+
def _infer_online_entity_rows(
entity_rows: List[Dict[str, Any]]
| diff --git a/sdk/python/tests/test_online_retrieval.py b/sdk/python/tests/test_online_retrieval.py
--- a/sdk/python/tests/test_online_retrieval.py
+++ b/sdk/python/tests/test_online_retrieval.py
@@ -2,7 +2,9 @@
import time
from datetime import datetime
+import pandas as pd
import pytest
+from pandas.testing import assert_frame_equal
from feast import FeatureStore, RepoConfig
from feast.errors import FeatureViewNotFoundException
@@ -234,3 +236,163 @@ def test_online() -> None:
# Restore registry.db so that teardown works
os.rename(store.config.registry + "_fake", store.config.registry)
+
+
+def test_online_to_df():
+ """
+ Test dataframe conversion. Make sure the response columns and rows are
+ the same order as the request.
+ """
+
+ driver_ids = [1, 2, 3]
+ customer_ids = [4, 5, 6]
+ name = "foo"
+ lon_multiply = 1.0
+ lat_multiply = 0.1
+ age_multiply = 10
+ avg_order_day_multiply = 1.0
+
+ runner = CliRunner()
+ with runner.local_repo(
+ get_example_repo("example_feature_repo_1.py"), "bigquery"
+ ) as store:
+ # Write three tables to online store
+ driver_locations_fv = store.get_feature_view(name="driver_locations")
+ customer_profile_fv = store.get_feature_view(name="customer_profile")
+ customer_driver_combined_fv = store.get_feature_view(
+ name="customer_driver_combined"
+ )
+ provider = store._get_provider()
+
+ for (d, c) in zip(driver_ids, customer_ids):
+ """
+ driver table:
+ driver driver_locations__lon driver_locations__lat
+ 1 1.0 0.1
+ 2 2.0 0.2
+ 3 3.0 0.3
+ """
+ driver_key = EntityKeyProto(
+ join_keys=["driver"], entity_values=[ValueProto(int64_val=d)]
+ )
+ provider.online_write_batch(
+ config=store.config,
+ table=driver_locations_fv,
+ data=[
+ (
+ driver_key,
+ {
+ "lat": ValueProto(double_val=d * lat_multiply),
+ "lon": ValueProto(string_val=str(d * lon_multiply)),
+ },
+ datetime.utcnow(),
+ datetime.utcnow(),
+ )
+ ],
+ progress=None,
+ )
+
+ """
+ customer table
+ customer customer_profile__avg_orders_day customer_profile__name customer_profile__age
+ 4 4.0 foo4 40
+ 5 5.0 foo5 50
+ 6 6.0 foo6 60
+ """
+ customer_key = EntityKeyProto(
+ join_keys=["customer"], entity_values=[ValueProto(int64_val=c)]
+ )
+ provider.online_write_batch(
+ config=store.config,
+ table=customer_profile_fv,
+ data=[
+ (
+ customer_key,
+ {
+ "avg_orders_day": ValueProto(
+ float_val=c * avg_order_day_multiply
+ ),
+ "name": ValueProto(string_val=name + str(c)),
+ "age": ValueProto(int64_val=c * age_multiply),
+ },
+ datetime.utcnow(),
+ datetime.utcnow(),
+ )
+ ],
+ progress=None,
+ )
+ """
+ customer_driver_combined table
+ customer driver customer_driver_combined__trips
+ 4 1 4
+ 5 2 10
+ 6 3 18
+ """
+ combo_keys = EntityKeyProto(
+ join_keys=["customer", "driver"],
+ entity_values=[ValueProto(int64_val=c), ValueProto(int64_val=d)],
+ )
+ provider.online_write_batch(
+ config=store.config,
+ table=customer_driver_combined_fv,
+ data=[
+ (
+ combo_keys,
+ {"trips": ValueProto(int64_val=c * d)},
+ datetime.utcnow(),
+ datetime.utcnow(),
+ )
+ ],
+ progress=None,
+ )
+
+ # Get online features in dataframe
+ result_df = store.get_online_features(
+ feature_refs=[
+ "driver_locations:lon",
+ "driver_locations:lat",
+ "customer_profile:avg_orders_day",
+ "customer_profile:name",
+ "customer_profile:age",
+ "customer_driver_combined:trips",
+ ],
+ # Reverse the row order
+ entity_rows=[
+ {"driver": d, "customer": c}
+ for (d, c) in zip(reversed(driver_ids), reversed(customer_ids))
+ ],
+ ).to_df()
+ """
+ Construct the expected dataframe with reversed row order like so:
+ driver customer driver_locations__lon driver_locations__lat customer_profile__avg_orders_day customer_profile__name customer_profile__age customer_driver_combined__trips
+ 3 6 3.0 0.3 6.0 foo6 60 18
+ 2 5 2.0 0.2 5.0 foo5 50 10
+ 1 4 1.0 0.1 4.0 foo4 40 4
+ """
+ df_dict = {
+ "driver": driver_ids,
+ "customer": customer_ids,
+ "driver_locations__lon": [str(d * lon_multiply) for d in driver_ids],
+ "driver_locations__lat": [d * lat_multiply for d in driver_ids],
+ "customer_profile__avg_orders_day": [
+ c * avg_order_day_multiply for c in customer_ids
+ ],
+ "customer_profile__name": [name + str(c) for c in customer_ids],
+ "customer_profile__age": [c * age_multiply for c in customer_ids],
+ "customer_driver_combined__trips": [
+ d * c for (d, c) in zip(driver_ids, customer_ids)
+ ],
+ }
+ # Requested column order
+ ordered_column = [
+ "driver",
+ "customer",
+ "driver_locations__lon",
+ "driver_locations__lat",
+ "customer_profile__avg_orders_day",
+ "customer_profile__name",
+ "customer_profile__age",
+ "customer_driver_combined__trips",
+ ]
+ expected_df = pd.DataFrame({k: reversed(v) for (k, v) in df_dict.items()})
+ assert_frame_equal(result_df[ordered_column], expected_df)
| Add `to_df()` to OnlineResponse
We need to provide a Pandas Dataframe based return type from `get_online_features()` for consistency with `get_historical_features()`.
| @woop You mean adding a method `to_df()` to the class OnlineResponse so we can call `get_online_features(...).to_df()`?
yea, something like that.
@woop I will work on it if no one is working on it.
Sure, that would be great :) | 2021-06-07T10:53:39 |
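A sketch of the intended call pattern (the feature refs and entity keys mirror the accompanying test; the repo path is a placeholder):
```python
from feast import FeatureStore

store = FeatureStore(repo_path=".")
online_df = store.get_online_features(
    feature_refs=["driver_locations:lat", "driver_locations:lon"],
    entity_rows=[{"driver": 1}, {"driver": 2}],
).to_df()
print(online_df.head())
```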
feast-dev/feast | 1,627 | feast-dev__feast-1627 | [
"1596"
] | c50a36ec1ad5b8d81c6f773c23204db7c7a7d218 | diff --git a/sdk/python/feast/data_source.py b/sdk/python/feast/data_source.py
--- a/sdk/python/feast/data_source.py
+++ b/sdk/python/feast/data_source.py
@@ -20,6 +20,7 @@
from feast import type_map
from feast.data_format import FileFormat, StreamFormat
+from feast.errors import DataSourceNotFoundException
from feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto
from feast.value_type import ValueType
@@ -519,6 +520,12 @@ def to_proto(self) -> DataSourceProto:
"""
raise NotImplementedError
+ def validate(self):
+ """
+ Validates the underlying data source.
+ """
+ raise NotImplementedError
+
class FileSource(DataSource):
def __init__(
@@ -615,6 +622,10 @@ def to_proto(self) -> DataSourceProto:
return data_source_proto
+ def validate(self):
+ # TODO: validate a FileSource
+ pass
+
@staticmethod
def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:
return type_map.pa_to_feast_value_type
@@ -692,6 +703,17 @@ def to_proto(self) -> DataSourceProto:
return data_source_proto
+ def validate(self):
+ if not self.query:
+ from google.api_core.exceptions import NotFound
+ from google.cloud import bigquery
+
+ client = bigquery.Client()
+ try:
+ client.get_table(self.table_ref)
+ except NotFound:
+ raise DataSourceNotFoundException(self.table_ref)
+
def get_table_query_string(self) -> str:
"""Returns a string that can directly be used to reference this table in SQL"""
if self.table_ref:
diff --git a/sdk/python/feast/errors.py b/sdk/python/feast/errors.py
--- a/sdk/python/feast/errors.py
+++ b/sdk/python/feast/errors.py
@@ -3,6 +3,13 @@
from colorama import Fore, Style
+class DataSourceNotFoundException(Exception):
+ def __init__(self, path):
+ super().__init__(
+ f"Unable to find table at '{path}'. Please check that table exists."
+ )
+
+
class FeastObjectNotFoundException(Exception):
pass
diff --git a/sdk/python/feast/repo_operations.py b/sdk/python/feast/repo_operations.py
--- a/sdk/python/feast/repo_operations.py
+++ b/sdk/python/feast/repo_operations.py
@@ -154,11 +154,12 @@ def apply_total(repo_config: RepoConfig, repo_path: Path):
data_sources = [t.input for t in repo.feature_views]
- # Make sure the data source used by this feature view is supported by
+ # Make sure the data source used by this feature view is supported by Feast
for data_source in data_sources:
assert_offline_store_supports_data_source(
repo_config.offline_store, data_source
)
+ data_source.validate()
update_data_sources_with_inferred_event_timestamp_col(data_sources)
for view in repo.feature_views:
| diff --git a/sdk/python/tests/example_feature_repo_with_missing_bq_source.py b/sdk/python/tests/example_feature_repo_with_missing_bq_source.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/example_feature_repo_with_missing_bq_source.py
@@ -0,0 +1,20 @@
+from datetime import timedelta
+
+from feast import BigQuerySource, Entity, Feature, FeatureView, ValueType
+
+nonexistent_source = BigQuerySource(
+ table_ref="project.dataset.nonexistent_table", event_timestamp_column=""
+)
+
+driver = Entity(name="driver", value_type=ValueType.INT64, description="driver id",)
+
+nonexistent_features = FeatureView(
+ name="driver_locations",
+ entities=["driver"],
+ ttl=timedelta(days=1),
+ features=[
+ Feature(name="lat", dtype=ValueType.FLOAT),
+ Feature(name="lon", dtype=ValueType.STRING),
+ ],
+ input=nonexistent_source,
+)
diff --git a/sdk/python/tests/test_cli_gcp.py b/sdk/python/tests/test_cli_gcp.py
--- a/sdk/python/tests/test_cli_gcp.py
+++ b/sdk/python/tests/test_cli_gcp.py
@@ -53,3 +53,38 @@ def test_basic() -> None:
result = runner.run(["teardown"], cwd=repo_path)
assert result.returncode == 0
+
+
+@pytest.mark.integration
+def test_missing_bq_source_fail() -> None:
+ project_id = "".join(
+ random.choice(string.ascii_lowercase + string.digits) for _ in range(10)
+ )
+ runner = CliRunner()
+ with tempfile.TemporaryDirectory() as repo_dir_name, tempfile.TemporaryDirectory() as data_dir_name:
+
+ repo_path = Path(repo_dir_name)
+ data_path = Path(data_dir_name)
+
+ repo_config = repo_path / "feature_store.yaml"
+
+ repo_config.write_text(
+ dedent(
+ f"""
+ project: {project_id}
+ registry: {data_path / "registry.db"}
+ provider: gcp
+ """
+ )
+ )
+
+ repo_example = repo_path / "example.py"
+ repo_example.write_text(
+ (
+ Path(__file__).parent / "example_feature_repo_with_missing_bq_source.py"
+ ).read_text()
+ )
+
+ returncode, output = runner.run_with_output(["apply"], cwd=repo_path)
+ assert returncode == 1
+ assert b"DataSourceNotFoundException" in output
diff --git a/sdk/python/tests/test_cli_local.py b/sdk/python/tests/test_cli_local.py
--- a/sdk/python/tests/test_cli_local.py
+++ b/sdk/python/tests/test_cli_local.py
@@ -4,12 +4,14 @@
from textwrap import dedent
import assertpy
+import pytest
from feast.feature_store import FeatureStore
from tests.cli_utils import CliRunner
from tests.online_read_write_test import basic_rw_test
+@pytest.mark.integration
def test_workflow() -> None:
"""
Test running apply on a sample repo, and make sure the infra gets created.
@@ -78,6 +80,7 @@ def test_workflow() -> None:
assertpy.assert_that(result.returncode).is_equal_to(0)
+@pytest.mark.integration
def test_non_local_feature_repo() -> None:
"""
Test running apply on a sample repo, and make sure the infra gets created.
diff --git a/sdk/python/tests/test_online_retrieval.py b/sdk/python/tests/test_online_retrieval.py
--- a/sdk/python/tests/test_online_retrieval.py
+++ b/sdk/python/tests/test_online_retrieval.py
@@ -14,6 +14,7 @@
from tests.cli_utils import CliRunner, get_example_repo
+@pytest.mark.integration
def test_online() -> None:
"""
Test reading from the online store in local mode.
@@ -238,6 +239,7 @@ def test_online() -> None:
os.rename(store.config.registry + "_fake", store.config.registry)
+@pytest.mark.integration
def test_online_to_df():
"""
Test dataframe conversion. Make sure the response columns and rows are
diff --git a/sdk/python/tests/test_partial_apply.py b/sdk/python/tests/test_partial_apply.py
--- a/sdk/python/tests/test_partial_apply.py
+++ b/sdk/python/tests/test_partial_apply.py
@@ -1,3 +1,4 @@
+import pytest
from google.protobuf.duration_pb2 import Duration
from feast import BigQuerySource, Feature, FeatureView, ValueType
@@ -5,6 +6,7 @@
from tests.online_read_write_test import basic_rw_test
+@pytest.mark.integration
def test_partial() -> None:
"""
Add another table to existing repo using partial apply API. Make sure both the table
| BigQuerySource does not validate whether the table exists or not
## Expected Behavior
When specifying an invalid table, feast should surface an error that explicitly states this.
## Current Behavior
We currently get an error that implies that the schema cannot be parsed.
```
$ feast apply
Traceback (most recent call last):
File "/Users/achal/tecton/feast/.direnv/python-3.7.10/bin/feast", line 33, in <module>
sys.exit(load_entry_point('feast', 'console_scripts', 'feast')())
File "/Users/achal/tecton/feast/.direnv/python-3.7.10/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/achal/tecton/feast/.direnv/python-3.7.10/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/achal/tecton/feast/.direnv/python-3.7.10/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/achal/tecton/feast/.direnv/python-3.7.10/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/achal/tecton/feast/.direnv/python-3.7.10/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/achal/tecton/feast/.direnv/python-3.7.10/lib/python3.7/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/achal/tecton/feast/sdk/python/feast/cli.py", line 192, in apply_total_command
apply_total(repo_config, repo)
File "/Users/achal/tecton/feast/sdk/python/feast/telemetry.py", line 151, in exception_logging_wrapper
result = func(*args, **kwargs)
File "/Users/achal/tecton/feast/sdk/python/feast/repo_operations.py", line 131, in apply_total
repo = parse_repo(repo_path)
File "/Users/achal/tecton/feast/sdk/python/feast/repo_operations.py", line 97, in parse_repo
module = importlib.import_module(module_path)
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/achal/tecton/feast/splendid_lark/example.py", line 46, in <module>
table_ref="feast-oss.demo_data.driver_activity"
File "/Users/achal/tecton/feast/sdk/python/feast/data_source.py", line 677, in __init__
or self._infer_event_timestamp_column("TIMESTAMP|DATETIME"),
File "/Users/achal/tecton/feast/sdk/python/feast/data_source.py", line 547, in _infer_event_timestamp_column
"""
TypeError:
Unable to infer DataSource event_timestamp_column due to an absence of columns that satisfy the criteria.
Please specify event_timestamp_column explicitly.
```
## Steps to reproduce
- Create a feature view with an invalid BigQuery source.
```
driver_stats_fv = FeatureView(
name="driver_activity",
entities=["driver_id"],
input=BigQuerySource(
table_ref="feast-oss.demo_data.driver_activity"
)
)
```
and then run `feast apply`
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
Run `client.get_table` as validation before any other operations.
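A minimal sketch of that check (the exception raised here is illustrative; the patch wires the same `client.get_table` probe into `BigQuerySource.validate()`):
```python
from google.api_core.exceptions import NotFound
from google.cloud import bigquery

def assert_table_exists(table_ref: str) -> None:
    client = bigquery.Client()
    try:
        client.get_table(table_ref)  # raises NotFound if the table is missing
    except NotFound:
        raise ValueError(f"Unable to find table at '{table_ref}'. Please check that the table exists.")
```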
| I'll be working on this, supervised by David Liu @mavysavydav from Twitter
Sounds good! | 2021-06-08T01:14:17 |
feast-dev/feast | 1,634 | feast-dev__feast-1634 | [
"1637"
] | d71b45251beb009b9f5923489deccf880c12875e | diff --git a/sdk/python/feast/infra/offline_stores/bigquery.py b/sdk/python/feast/infra/offline_stores/bigquery.py
--- a/sdk/python/feast/infra/offline_stores/bigquery.py
+++ b/sdk/python/feast/infra/offline_stores/bigquery.py
@@ -1,11 +1,13 @@
import time
+import uuid
from dataclasses import asdict, dataclass
-from datetime import datetime, timedelta
+from datetime import date, datetime, timedelta
from typing import List, Optional, Set, Union
import pandas
import pyarrow
from jinja2 import BaseLoader, Environment
+from tenacity import retry, stop_after_delay, wait_fixed
from feast import errors
from feast.data_source import BigQuerySource, DataSource
@@ -118,7 +120,7 @@ def get_historical_features(
entity_df_event_timestamp_col=entity_df_event_timestamp_col,
)
- job = BigQueryRetrievalJob(query=query, client=client)
+ job = BigQueryRetrievalJob(query=query, client=client, config=config)
return job
@@ -206,15 +208,41 @@ def _infer_event_timestamp_from_dataframe(entity_df: pandas.DataFrame) -> str:
class BigQueryRetrievalJob(RetrievalJob):
- def __init__(self, query, client):
+ def __init__(self, query, client, config):
self.query = query
self.client = client
+ self.config = config
def to_df(self):
# TODO: Ideally only start this job when the user runs "get_historical_features", not when they run to_df()
df = self.client.query(self.query).to_dataframe(create_bqstorage_client=True)
return df
+ def to_bigquery(self, dry_run=False) -> Optional[str]:
+ @retry(wait=wait_fixed(10), stop=stop_after_delay(1800), reraise=True)
+ def _block_until_done():
+ return self.client.get_job(bq_job.job_id).state in ["PENDING", "RUNNING"]
+
+ today = date.today().strftime("%Y%m%d")
+ rand_id = str(uuid.uuid4())[:7]
+ path = f"{self.client.project}.{self.config.offline_store.dataset}.historical_{today}_{rand_id}"
+ job_config = bigquery.QueryJobConfig(destination=path, dry_run=dry_run)
+ bq_job = self.client.query(self.query, job_config=job_config)
+
+ if dry_run:
+ print(
+ "This query will process {} bytes.".format(bq_job.total_bytes_processed)
+ )
+ return None
+
+ _block_until_done()
+
+ if bq_job.exception():
+ raise bq_job.exception()
+
+ print(f"Done writing to '{path}'.")
+ return path
+
@dataclass(frozen=True)
class FeatureViewQueryContext:
diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -54,6 +54,7 @@
"pydantic>=1.0.0",
"PyYAML==5.3.*",
"tabulate==0.8.*",
+ "tenacity>=7.*",
"toml==0.10.*",
"tqdm==4.*",
]
@@ -92,7 +93,6 @@
"pytest-mock==1.10.4",
"Sphinx!=4.0.0",
"sphinx-rtd-theme",
- "tenacity",
"adlfs==0.5.9",
"firebase-admin==4.5.2",
"pre-commit",
| diff --git a/sdk/python/tests/test_historical_retrieval.py b/sdk/python/tests/test_historical_retrieval.py
--- a/sdk/python/tests/test_historical_retrieval.py
+++ b/sdk/python/tests/test_historical_retrieval.py
@@ -441,6 +441,22 @@ def test_historical_features_from_bigquery_sources(
],
)
+ # Just a dry run, should not create table
+ bq_dry_run = job_from_sql.to_bigquery(dry_run=True)
+ assert bq_dry_run is None
+
+ bq_temp_table_path = job_from_sql.to_bigquery()
+ assert bq_temp_table_path.split(".")[0] == gcp_project
+
+ if provider_type == "gcp_custom_offline_config":
+ assert bq_temp_table_path.split(".")[1] == "foo"
+ else:
+ assert bq_temp_table_path.split(".")[1] == bigquery_dataset
+
+ # Check that this table actually exists
+ actual_bq_temp_table = bigquery.Client().get_table(bq_temp_table_path)
+ assert actual_bq_temp_table.table_id == bq_temp_table_path.split(".")[-1]
+
start_time = datetime.utcnow()
actual_df_from_sql_entities = job_from_sql.to_df()
end_time = datetime.utcnow()
| Add to_bq() to BigQueryRetrievalJob
**Is your feature request related to a problem? Please describe.**
Data too big for dataframes.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Have the feature data join write the training dataset directly into a BQ table. The user may be able to choose the table name, whether to create or replace it, expiration, etc.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
N/A
**Additional context**
Add any other context or screenshots about the feature request here.
| I am currently working on this 🤚
Edit: I will let @vtao2 continue working on this in #1634 (see below comments)
I believe @vtao2 is also working on this over here https://github.com/feast-dev/feast/pull/1634/files
> User may be able to choose table name, whether to replace, expiration, etc
What we were thinking is that the user would define some kind of offline store location like
```
offline_store:
type: BigQuery
dataset: my_feast_dataset
```
and that `to_bq()` would create a table within that dataset, but without providing the option to set the name, expiration, etc.
Do you have a use case for more granular control?
Got it, thanks for the heads up. I've polled some ppl on which controls they need. For now, please proceed and i'll update u guys once I have the info | 2021-06-10T00:42:54 |
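A sketch of the usage that falls out of this approach (repo path, entity SQL and feature refs are placeholders):
```python
from feast import FeatureStore

store = FeatureStore(repo_path=".")
job = store.get_historical_features(
    entity_df="SELECT driver_id, event_timestamp FROM my_project.my_dataset.entity_rows",
    feature_refs=["driver_hourly_stats:conv_rate"],
)
job.to_bigquery(dry_run=True)   # dry run: only reports the bytes the query would process
table_path = job.to_bigquery()  # writes to <project>.<dataset>.historical_<date>_<id> and returns that path
```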
feast-dev/feast | 1,651 | feast-dev__feast-1651 | [
"1625"
] | 8099ea7320d5c1fbe0a6bd2f209d4b4ea9277d69 | diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -275,6 +275,7 @@ def apply(
assert isinstance(objects, list)
views_to_update = [ob for ob in objects if isinstance(ob, FeatureView)]
+ _validate_feature_views(views_to_update)
entities_to_update = [ob for ob in objects if isinstance(ob, Entity)]
services_to_update = [ob for ob in objects if isinstance(ob, FeatureService)]
@@ -814,3 +815,15 @@ def _print_materialization_log(
f" to {Style.BRIGHT + Fore.GREEN}{end_date.replace(microsecond=0).astimezone()}{Style.RESET_ALL}"
f" into the {Style.BRIGHT + Fore.GREEN}{online_store}{Style.RESET_ALL} online store.\n"
)
+
+
+def _validate_feature_views(feature_views: List[FeatureView]):
+ """ Verify feature views have unique names"""
+ name_to_fv_dict = {}
+ for fv in feature_views:
+ if fv.name in name_to_fv_dict:
+ raise ValueError(
+ f"More than one feature view with name {fv.name} found. Please ensure that all feature view names are unique. It may be necessary to ignore certain files in your feature repository by using a .feastignore file."
+ )
+ else:
+ name_to_fv_dict[fv.name] = fv
diff --git a/sdk/python/feast/repo_operations.py b/sdk/python/feast/repo_operations.py
--- a/sdk/python/feast/repo_operations.py
+++ b/sdk/python/feast/repo_operations.py
@@ -13,7 +13,7 @@
from feast import Entity, FeatureTable
from feast.feature_service import FeatureService
-from feast.feature_store import FeatureStore
+from feast.feature_store import FeatureStore, _validate_feature_views
from feast.feature_view import FeatureView
from feast.inference import (
update_data_sources_with_inferred_event_timestamp_col,
@@ -103,7 +103,6 @@ def parse_repo(repo_root: Path) -> ParsedRepo:
for repo_file in get_repo_files(repo_root):
module_path = py_path_to_module(repo_file, repo_root)
module = importlib.import_module(module_path)
-
for attr_name in dir(module):
obj = getattr(module, attr_name)
if isinstance(obj, FeatureTable):
@@ -162,6 +161,7 @@ def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation
registry._initialize_registry()
sys.dont_write_bytecode = True
repo = parse_repo(repo_path)
+ _validate_feature_views(repo.feature_views)
data_sources = [t.batch_source for t in repo.feature_views]
if not skip_source_validation:
| diff --git a/sdk/python/tests/example_repos/example_feature_repo_with_duplicated_featureview_names.py b/sdk/python/tests/example_repos/example_feature_repo_with_duplicated_featureview_names.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/example_repos/example_feature_repo_with_duplicated_featureview_names.py
@@ -0,0 +1,25 @@
+from google.protobuf.duration_pb2 import Duration
+
+from feast import FeatureView, FileSource
+
+driver_hourly_stats = FileSource(
+ path="driver_stats.parquet", # this parquet is not real and will not be read
+)
+
+driver_hourly_stats_view = FeatureView(
+ name="driver_hourly_stats", # Intentionally use the same FeatureView name
+ entities=["driver_id"],
+ online=False,
+ input=driver_hourly_stats,
+ ttl=Duration(seconds=10),
+ tags={},
+)
+
+driver_hourly_stats_view_dup1 = FeatureView(
+ name="driver_hourly_stats", # Intentionally use the same FeatureView name
+ entities=["driver_id"],
+ online=False,
+ input=driver_hourly_stats,
+ ttl=Duration(seconds=10),
+ tags={},
+)
diff --git a/sdk/python/tests/integration/registration/test_cli_apply_duplicated_featureview_names.py b/sdk/python/tests/integration/registration/test_cli_apply_duplicated_featureview_names.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/integration/registration/test_cli_apply_duplicated_featureview_names.py
@@ -0,0 +1,80 @@
+import tempfile
+from pathlib import Path
+from textwrap import dedent
+
+from tests.utils.cli_utils import CliRunner, get_example_repo
+
+
+def test_cli_apply_duplicated_featureview_names() -> None:
+ """
+ Test apply feature views with duplicated names and single py file in a feature repo using CLI
+ """
+
+ with tempfile.TemporaryDirectory() as repo_dir_name, tempfile.TemporaryDirectory() as data_dir_name:
+ runner = CliRunner()
+ # Construct an example repo in a temporary dir
+ repo_path = Path(repo_dir_name)
+ data_path = Path(data_dir_name)
+
+ repo_config = repo_path / "feature_store.yaml"
+
+ repo_config.write_text(
+ dedent(
+ f"""
+ project: foo
+ registry: {data_path / "registry.db"}
+ provider: local
+ online_store:
+ path: {data_path / "online_store.db"}
+ """
+ )
+ )
+
+ repo_example = repo_path / "example.py"
+ repo_example.write_text(
+ get_example_repo(
+ "example_feature_repo_with_duplicated_featureview_names.py"
+ )
+ )
+ rc, output = runner.run_with_output(["apply"], cwd=repo_path)
+
+ assert (
+ rc != 0
+ and b"Please ensure that all feature view names are unique" in output
+ )
+
+
+def test_cli_apply_duplicated_featureview_names_multiple_py_files() -> None:
+ """
+ Test apply feature views with duplicated names from multiple py files in a feature repo using CLI
+ """
+
+ with tempfile.TemporaryDirectory() as repo_dir_name, tempfile.TemporaryDirectory() as data_dir_name:
+ runner = CliRunner()
+ # Construct an example repo in a temporary dir
+ repo_path = Path(repo_dir_name)
+ data_path = Path(data_dir_name)
+
+ repo_config = repo_path / "feature_store.yaml"
+
+ repo_config.write_text(
+ dedent(
+ f"""
+ project: foo
+ registry: {data_path / "registry.db"}
+ provider: local
+ online_store:
+ path: {data_path / "online_store.db"}
+ """
+ )
+ )
+ # Create multiple py files containing the same feature view name
+ for i in range(3):
+ repo_example = repo_path / f"example{i}.py"
+ repo_example.write_text(get_example_repo("example_feature_repo_2.py"))
+ rc, output = runner.run_with_output(["apply"], cwd=repo_path)
+
+ assert (
+ rc != 0
+ and b"Please ensure that all feature view names are unique" in output
+ )
diff --git a/sdk/python/tests/integration/registration/test_feature_store.py b/sdk/python/tests/integration/registration/test_feature_store.py
--- a/sdk/python/tests/integration/registration/test_feature_store.py
+++ b/sdk/python/tests/integration/registration/test_feature_store.py
@@ -480,3 +480,36 @@ def test_reapply_feature_view_success(test_feature_store, dataframe_source):
assert len(fv_stored.materialization_intervals) == 0
test_feature_store.teardown()
+
+
+def test_apply_duplicated_featureview_names(feature_store_with_local_registry):
+ """ Test applying feature views with duplicated names"""
+
+ driver_stats = FeatureView(
+ name="driver_hourly_stats",
+ entities=["driver_id"],
+ ttl=timedelta(seconds=10),
+ online=False,
+ input=FileSource(path="driver_stats.parquet"),
+ tags={},
+ )
+
+ customer_stats = FeatureView(
+ name="driver_hourly_stats",
+ entities=["id"],
+ ttl=timedelta(seconds=10),
+ online=False,
+ input=FileSource(path="customer_stats.parquet"),
+ tags={},
+ )
+ try:
+ feature_store_with_local_registry.apply([driver_stats, customer_stats])
+ error = None
+ except ValueError as e:
+ error = e
+ assert (
+ isinstance(error, ValueError)
+ and "Please ensure that all feature view names are unique" in error.args[0]
+ )
+
+ feature_store_with_local_registry.teardown()
| Validate against duplicate FeatureView names
If you happen to name a FeatureView the same name, you'll get:
```
$ feast apply
Registered entity user_id
Registered feature view user_account_features
Registered feature view user_account_features
Registered feature view user_transaction_count_7d
Deploying infrastructure for user_account_features
Deploying infrastructure for user_account_features
Deploying infrastructure for user_transaction_count_7d
```
We should throw an error instead. FeatureView names should be unique in a feature repo.
| 2021-06-17T00:51:00 |
|
feast-dev/feast | 1,661 | feast-dev__feast-1661 | [
"1654"
] | 3cb303f020300c1a9f1ed1f2d424ecaa90b5a657 | diff --git a/sdk/python/feast/infra/offline_stores/bigquery.py b/sdk/python/feast/infra/offline_stores/bigquery.py
--- a/sdk/python/feast/infra/offline_stores/bigquery.py
+++ b/sdk/python/feast/infra/offline_stores/bigquery.py
@@ -224,31 +224,51 @@ def to_df(self):
df = self.client.query(self.query).to_dataframe(create_bqstorage_client=True)
return df
- def to_bigquery(self, dry_run=False) -> Optional[str]:
+ def to_sql(self) -> str:
+ """
+ Returns the SQL query that will be executed in BigQuery to build the historical feature table.
+ """
+ return self.query
+
+ def to_bigquery(self, job_config: bigquery.QueryJobConfig = None) -> Optional[str]:
+ """
+ Triggers the execution of a historical feature retrieval query and exports the results to a BigQuery table.
+
+ Args:
+ job_config: An optional bigquery.QueryJobConfig to specify options like destination table, dry run, etc.
+
+ Returns:
+ Returns the destination table name or returns None if job_config.dry_run is True.
+ """
+
@retry(wait=wait_fixed(10), stop=stop_after_delay(1800), reraise=True)
def _block_until_done():
return self.client.get_job(bq_job.job_id).state in ["PENDING", "RUNNING"]
- today = date.today().strftime("%Y%m%d")
- rand_id = str(uuid.uuid4())[:7]
- dataset_project = self.config.offline_store.project_id or self.client.project
- path = f"{dataset_project}.{self.config.offline_store.dataset}.historical_{today}_{rand_id}"
- job_config = bigquery.QueryJobConfig(destination=path, dry_run=dry_run)
- bq_job = self.client.query(self.query, job_config=job_config)
-
- if dry_run:
- print(
- "This query will process {} bytes.".format(bq_job.total_bytes_processed)
+ if not job_config:
+ today = date.today().strftime("%Y%m%d")
+ rand_id = str(uuid.uuid4())[:7]
+ dataset_project = (
+ self.config.offline_store.project_id or self.client.project
)
- return None
+ path = f"{dataset_project}.{self.config.offline_store.dataset}.historical_{today}_{rand_id}"
+ job_config = bigquery.QueryJobConfig(destination=path)
+
+ bq_job = self.client.query(self.query, job_config=job_config)
_block_until_done()
if bq_job.exception():
raise bq_job.exception()
- print(f"Done writing to '{path}'.")
- return path
+ if job_config.dry_run:
+ print(
+ "This query will process {} bytes.".format(bq_job.total_bytes_processed)
+ )
+ return None
+
+ print(f"Done writing to '{job_config.destination}'.")
+ return str(job_config.destination)
@dataclass(frozen=True)
| diff --git a/sdk/python/tests/test_historical_retrieval.py b/sdk/python/tests/test_historical_retrieval.py
--- a/sdk/python/tests/test_historical_retrieval.py
+++ b/sdk/python/tests/test_historical_retrieval.py
@@ -438,29 +438,13 @@ def test_historical_features_from_bigquery_sources(
],
)
- # Just a dry run, should not create table
- bq_dry_run = job_from_sql.to_bigquery(dry_run=True)
- assert bq_dry_run is None
-
- bq_temp_table_path = job_from_sql.to_bigquery()
- assert bq_temp_table_path.split(".")[0] == gcp_project
-
- if provider_type == "gcp_custom_offline_config":
- assert bq_temp_table_path.split(".")[1] == "foo"
- else:
- assert bq_temp_table_path.split(".")[1] == bigquery_dataset
-
- # Check that this table actually exists
- actual_bq_temp_table = bigquery.Client().get_table(bq_temp_table_path)
- assert actual_bq_temp_table.table_id == bq_temp_table_path.split(".")[-1]
-
start_time = datetime.utcnow()
actual_df_from_sql_entities = job_from_sql.to_df()
end_time = datetime.utcnow()
with capsys.disabled():
print(
str(
- f"\nTime to execute job_from_df.to_df() = '{(end_time - start_time)}'"
+ f"\nTime to execute job_from_sql.to_df() = '{(end_time - start_time)}'"
)
)
| Provide ability to set more options when outputting joined data to bq table.
**Is your feature request related to a problem? Please describe.**
We need to be able to set expiries for the tables, project.dataset.tableName, and also option for create vs replace.
**Describe the solution you'd like**
Ideally the user can optionally pass in a config object like QueryJobConfig. QueryJobConfig has many options such as `destination` which allows one to set the destination of table and name, but it doesn't seem like it can handle create/replace (it has create_disposition which offers some limited control) and expiry.
**Describe alternatives you've considered**
N/A. Willem and a few of us briefly discussed this and he played around with the idea of passing in and out sql queries. So one could modify the query and pass it back in.
| @mavysavydav the problem statement makes sense. `get_historical_features()` is a general (non-implementation specific) method, but the retrieval jobs are specific to the implementation. So roughly there are options like
1.
```
job = get_historical_features(..., kwargs)
```
where those options get passed all the way to the offline store and the offline store then knows how to use them
2.
```
job = get_historical_features(...)
job.set_config(QueryJobConfig)
job.to_bigquery()
```
3.
```
job = get_historical_features(...)
job.to_bigquery(..., config=QueryJobConfig)
```
4.
```
job = get_historical_features(...)
sql = job.to_sql()
job_config = bigquery.QueryJobConfig(default_dataset="bigquery-public-data.stackoverflow")
client.query(sql, job_config=job_config)
```
My preference is (3) or (4) right now, but don't feel super strongly.
:raising_hand: working on this now! Please assign me.
yea agreed 3 or 4 is good. cody will look into it
solution 4 would work well with in-house wrappers. The wrapper could still present a clean api for users even tho it's changing the sql to add things like expiry. Though expiry seems like something feast would want to support natively. Tho maybe we'll wait for more needs to emerge b4 natively supporting it. | 2021-06-22T21:54:18 |
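A sketch of how options (3) and (4) could look once implemented (repo path, table names and feature refs are placeholders):
```python
from feast import FeatureStore
from google.cloud import bigquery

store = FeatureStore(repo_path=".")
job = store.get_historical_features(
    entity_df="SELECT driver_id, event_timestamp FROM my_project.my_dataset.entity_rows",
    feature_refs=["driver_hourly_stats:conv_rate"],
)

# Option 3: hand a QueryJobConfig to to_bigquery() to control the destination table
job_config = bigquery.QueryJobConfig(destination="my_project.my_dataset.training_data")
job.to_bigquery(job_config=job_config)

# Option 4: take the generated SQL and run/modify it yourself, e.g. to set an expiry afterwards
sql = job.to_sql()
```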
feast-dev/feast | 1,697 | feast-dev__feast-1697 | [
"1696"
] | 0d2179d9d3d67b3928201f6ca879666a6ae6a4e6 | diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -24,6 +24,7 @@
from feast import utils
from feast.entity import Entity
from feast.errors import FeatureNameCollisionError, FeatureViewNotFoundException
+from feast.feature_table import FeatureTable
from feast.feature_view import FeatureView
from feast.inference import (
update_data_sources_with_inferred_event_timestamp_col,
@@ -254,6 +255,23 @@ def apply(
partial=True,
)
+ @log_exceptions_and_usage
+ def teardown(self):
+ tables: List[Union[FeatureView, FeatureTable]] = []
+ feature_views = self.list_feature_views()
+ feature_tables = self._registry.list_feature_tables(self.project)
+
+ tables.extend(feature_views)
+ tables.extend(feature_tables)
+
+ entities = self.list_entities()
+
+ self._get_provider().teardown_infra(self.project, tables, entities)
+ for feature_view in feature_views:
+ self.delete_feature_view(feature_view.name)
+ for feature_table in feature_tables:
+ self._registry.delete_feature_table(feature_table.name, self.project)
+
@log_exceptions_and_usage
def get_historical_features(
self,
| diff --git a/sdk/python/tests/test_offline_online_store_consistency.py b/sdk/python/tests/test_offline_online_store_consistency.py
--- a/sdk/python/tests/test_offline_online_store_consistency.py
+++ b/sdk/python/tests/test_offline_online_store_consistency.py
@@ -111,6 +111,8 @@ def prep_bq_fs_and_fv(
yield fs, fv
+ fs.teardown()
+
@contextlib.contextmanager
def prep_local_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
@@ -133,10 +135,13 @@ def prep_local_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
join_key="driver_id",
value_type=ValueType.INT32,
)
+ project = f"test_local_correctness_{str(uuid.uuid4()).replace('-', '')}"
+ print(f"Using project: {project}")
+
with tempfile.TemporaryDirectory() as repo_dir_name, tempfile.TemporaryDirectory() as data_dir_name:
config = RepoConfig(
registry=str(Path(repo_dir_name) / "registry.db"),
- project=f"test_bq_correctness_{str(uuid.uuid4()).replace('-', '')}",
+ project=project,
provider="local",
online_store=SqliteOnlineStoreConfig(
path=str(Path(data_dir_name) / "online_store.db")
@@ -147,6 +152,8 @@ def prep_local_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
yield fs, fv
+ fs.teardown()
+
@contextlib.contextmanager
def prep_redis_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
@@ -169,10 +176,12 @@ def prep_redis_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
join_key="driver_id",
value_type=ValueType.INT32,
)
+ project = f"test_redis_correctness_{str(uuid.uuid4()).replace('-', '')}"
+ print(f"Using project: {project}")
with tempfile.TemporaryDirectory() as repo_dir_name:
config = RepoConfig(
registry=str(Path(repo_dir_name) / "registry.db"),
- project=f"test_bq_correctness_{str(uuid.uuid4()).replace('-', '')}",
+ project=project,
provider="local",
online_store=RedisOnlineStoreConfig(
type="redis",
@@ -185,6 +194,8 @@ def prep_redis_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
yield fs, fv
+ fs.teardown()
+
@contextlib.contextmanager
def prep_dynamodb_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
@@ -207,10 +218,12 @@ def prep_dynamodb_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
join_key="driver_id",
value_type=ValueType.INT32,
)
+ project = f"test_dynamo_correctness_{str(uuid.uuid4()).replace('-', '')}"
+ print(f"Using project {project}")
with tempfile.TemporaryDirectory() as repo_dir_name:
config = RepoConfig(
registry=str(Path(repo_dir_name) / "registry.db"),
- project=f"test_bq_correctness_{str(uuid.uuid4()).replace('-', '')}",
+ project=project,
provider="aws",
online_store=DynamoDBOnlineStoreConfig(region="us-west-2"),
offline_store=FileOfflineStoreConfig(),
@@ -220,6 +233,8 @@ def prep_dynamodb_fs_and_fv() -> Iterator[Tuple[FeatureStore, FeatureView]]:
yield fs, fv
+ fs.teardown()
+
# Checks that both offline & online store values are as expected
def check_offline_and_online_features(
| The FeatureStore class should expose a `teardown` method
## Expected Behavior
A user should be able to use the FeatureStore class to tear down any infrastructure that was set up when `apply` was called.
## Current Behavior
There's no user-facing API to allow this. Users must call `feast teardown` as a CLI command.
## Possible Solution
The FeatureStore class should expose a `teardown` method. All CLI commands should delegate to FeatureStore methods.
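A minimal sketch of how the proposed method might be called from Python (assuming a feature repo in the current directory; `teardown` is the method this issue asks for, not an existing API):
```python
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # assumes feature_store.yaml lives in the current directory
store.teardown()  # proposed: removes provisioned infra and deletes registered feature views/tables
```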
| 2021-07-08T03:56:12 |
|
feast-dev/feast | 1,742 | feast-dev__feast-1742 | [
"1738"
] | 95a245abbd49abbfec5746fa9ad8c64d9c1434ef | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -52,7 +52,7 @@
"protobuf>=3.10",
"pyarrow>=2.0.0",
"pydantic>=1.0.0",
- "PyYAML==5.3.*",
+ "PyYAML>=5.4.*",
"tabulate==0.8.*",
"tenacity>=7.*",
"toml==0.10.*",
| Dependency PyYAML 5.3.* has vulnerability issues
## Expected Behavior
According to [CVE-2020-14343](https://nvd.nist.gov/vuln/detail/CVE-2020-14343):
> A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the python/object/new constructor. This flaw is due to an incomplete fix for CVE-2020-1747. See CVE-2020-14343.
## Current Behavior
The Feast Python SDK currently pins `PyYAML==5.3.*`, a vulnerable version.
This not only affects Feast, but also any app depending on it, since dependencies are shared.
## Steps to reproduce
N/A
### Specifications
N/A
## Possible Solution
Bump PyYAML to a ">=5.4" version.
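Independent of the version bump, note that the CVE only applies to `full_load`/`FullLoader`; a consuming application can avoid the vulnerable path by using `safe_load` on untrusted input. A minimal sketch (the file name is just an example):
```python
import yaml

with open("feature_store.yaml") as f:
    config = yaml.safe_load(f)  # safe_load never constructs arbitrary Python objects
print(config)
```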
| 2021-07-28T23:18:29 |
||
feast-dev/feast | 1,766 | feast-dev__feast-1766 | [
"1755"
] | 54bbe5f04c9cdbc808a7f739ee93351ab43213a5 | diff --git a/sdk/python/feast/repo_config.py b/sdk/python/feast/repo_config.py
--- a/sdk/python/feast/repo_config.py
+++ b/sdk/python/feast/repo_config.py
@@ -2,7 +2,14 @@
from typing import Any
import yaml
-from pydantic import BaseModel, StrictInt, StrictStr, ValidationError, root_validator
+from pydantic import (
+ BaseModel,
+ StrictInt,
+ StrictStr,
+ ValidationError,
+ root_validator,
+ validator,
+)
from pydantic.error_wrappers import ErrorWrapper
from pydantic.typing import Dict, Optional, Union
@@ -180,6 +187,17 @@ def _validate_offline_store_config(cls, values):
return values
+ @validator("project")
+ def _validate_project_name(cls, v):
+ from feast.repo_operations import is_valid_name
+
+ if not is_valid_name(v):
+ raise ValueError(
+ f"Project name, {v}, should only have "
+ f"alphanumerical values and underscores but not start with an underscore."
+ )
+ return v
+
class FeastConfigError(Exception):
def __init__(self, error_message, config_path):
| diff --git a/sdk/python/tests/integration/scaffolding/test_repo_config.py b/sdk/python/tests/integration/scaffolding/test_repo_config.py
--- a/sdk/python/tests/integration/scaffolding/test_repo_config.py
+++ b/sdk/python/tests/integration/scaffolding/test_repo_config.py
@@ -153,3 +153,27 @@ def test_no_project():
"project\n"
" field required (type=value_error.missing)",
)
+
+
+def test_invalid_project_name():
+ _test_config(
+ dedent(
+ """
+ project: foo-1
+ registry: "registry.db"
+ provider: local
+ """
+ ),
+ expect_error="alphanumerical values ",
+ )
+
+ _test_config(
+ dedent(
+ """
+ project: _foo
+ registry: "registry.db"
+ provider: local
+ """
+ ),
+ expect_error="alphanumerical values ",
+ )
| Add validation for project name
**Is your feature request related to a problem? Please describe.**
A follow-up to https://github.com/feast-dev/feast/pull/1752 - I think the project name should definitely be validated early on. Otherwise, users could run into an error message (https://github.com/feast-dev/feast/pull/1752#issue-700598472) that is not obviously related to the project name.
**Describe the solution you'd like**
Validation upon `feast apply` that checks the project name. The name validation function [here](https://github.com/feast-dev/feast/blob/a548c48927e6f6858d91a93cf356b43fe7c67aad/sdk/python/feast/repo_operations.py#L390) can be used.
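A minimal sketch of the kind of check this implies (the regex is an assumption mirroring the linked `is_valid_name` rule: alphanumeric characters and underscores, not starting with an underscore):
```python
import re

def is_valid_name(name: str) -> bool:
    # letters/digits/underscores only, and must not start with an underscore
    return re.fullmatch(r"[a-zA-Z0-9][a-zA-Z0-9_]*", name) is not None

assert is_valid_name("driver_ranking")
assert not is_valid_name("_driver")
assert not is_valid_name("driver-ranking")
```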
**Describe alternatives you've considered**
There's probably more validation coverage to be done, and it could be done at a later time if there are higher priorities at the moment.
**Additional context**
@tedhtchang to take this simple addition on per https://github.com/feast-dev/feast/pull/1752#issuecomment-891339661
| 2021-08-07T00:48:48 |
|
feast-dev/feast | 1,771 | feast-dev__feast-1771 | [
"1770"
] | 316aabb47951ed2528715030f091ffc0ea8c5346 | diff --git a/sdk/python/feast/feature_service.py b/sdk/python/feast/feature_service.py
--- a/sdk/python/feast/feature_service.py
+++ b/sdk/python/feast/feature_service.py
@@ -31,6 +31,7 @@ class FeatureService:
name: str
features: List[FeatureViewProjection]
tags: Dict[str, str]
+ description: Optional[str] = None
created_timestamp: Optional[datetime] = None
last_updated_timestamp: Optional[datetime] = None
@@ -39,6 +40,7 @@ def __init__(
name: str,
features: List[Union[FeatureTable, FeatureView, FeatureViewProjection]],
tags: Optional[Dict[str, str]] = None,
+ description: Optional[str] = None,
):
"""
Creates a FeatureService object.
@@ -56,6 +58,7 @@ def __init__(
else:
raise ValueError(f"Unexpected type: {type(feature)}")
self.tags = tags or {}
+ self.description = description
self.created_timestamp = None
self.last_updated_timestamp = None
@@ -97,6 +100,11 @@ def from_proto(feature_service_proto: FeatureServiceProto):
for fp in feature_service_proto.spec.features
],
tags=dict(feature_service_proto.spec.tags),
+ description=(
+ feature_service_proto.spec.description
+ if feature_service_proto.spec.description != ""
+ else None
+ ),
)
if feature_service_proto.meta.HasField("created_timestamp"):
@@ -137,6 +145,8 @@ def to_proto(self) -> FeatureServiceProto:
if self.tags:
spec.tags.update(self.tags)
+ if self.description:
+ spec.description = self.description
feature_service_proto = FeatureServiceProto(spec=spec, meta=meta)
return feature_service_proto
| diff --git a/sdk/python/tests/unit/test_feature_service.py b/sdk/python/tests/unit/test_feature_service.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/unit/test_feature_service.py
@@ -0,0 +1,14 @@
+from feast import FeatureService
+
+
+def test_feature_service_with_description():
+ feature_service = FeatureService(
+ name="my-feature-service", features=[], description="a clear description"
+ )
+ assert feature_service.to_proto().spec.description == "a clear description"
+
+
+def test_feature_service_without_description():
+ feature_service = FeatureService(name="my-feature-service", features=[])
+ #
+ assert feature_service.to_proto().spec.description == ""
| FeatureService should support "description" as a first-class named-value argument, as Entity does
## Expected Behavior
...
```
# Define your feature service and the features it will serve
driver_fs = FeatureService(name="driver_ranking_fv_svc",
                           features=[driver_hourly_stats_view],
                           description="Used for training an ElasticNetCV model")
```
```
feast feature-services describe driver_ranking_fv_svc
spec:
  name: driver_ranking_fv_svc
  features:
  - featureViewName: driver_hourly_stats
    featureColumns:
    - name: conv_rate
      valueType: FLOAT
    - name: acc_rate
      valueType: FLOAT
    - name: avg_daily_trips
      valueType: INT64
  description: Used for training an ElasticNetCV model
meta: {}
```
Extend the argument list in [FeatureService](https://github.com/feast-dev/feast/blob/7dff49a194a25a62927c1ee7022caf0651f68f38/sdk/python/feast/feature_service.py#L24) to support `description=None` as a default.
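A rough sketch of the constructor change this implies (simplified; the real class accepts feature views/tables and does more bookkeeping):
```python
from typing import Dict, List, Optional

class FeatureService:
    def __init__(
        self,
        name: str,
        features: List,
        tags: Optional[Dict[str, str]] = None,
        description: Optional[str] = None,  # new first-class argument
    ):
        self.name = name
        self.features = features
        self.tags = tags or {}
        self.description = description
```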
## Current Behavior
```
# Define your feature service and the features it will serve
driver_fs = FeatureService(name="driver_ranking_fv_svc",
                           features=[driver_hourly_stats_view],
                           tags={"description": "Used for training an ElasticNetCV model"})
```
Then query the feature service from the CLI:
```
feast feature-services describe driver_ranking_fv_svc
spec:
  name: driver_ranking_fv_svc
  features:
  - featureViewName: driver_hourly_stats
    featureColumns:
    - name: conv_rate
      valueType: FLOAT
    - name: acc_rate
      valueType: FLOAT
    - name: avg_daily_trips
      valueType: INT64
  tags:
    description: Used for training an ElasticNetCV model
meta: {}
```
## Steps to reproduce
Follow the steps above in the current behavior
### Specifications
- Version: v0.12
| 2021-08-09T20:53:17 |
|
feast-dev/feast | 1,789 | feast-dev__feast-1789 | [
"1548"
] | 2e5f6e8e49009d64781ad6d3311aa5bf04cbe966 | diff --git a/sdk/python/feast/infra/offline_stores/file.py b/sdk/python/feast/infra/offline_stores/file.py
--- a/sdk/python/feast/infra/offline_stores/file.py
+++ b/sdk/python/feast/infra/offline_stores/file.py
@@ -112,7 +112,11 @@ def evaluate_historical_retrieval():
)
# Read offline parquet data in pyarrow format.
- table = pyarrow.parquet.read_table(feature_view.batch_source.path)
+ filesystem, path = FileSource.create_filesystem_and_path(
+ feature_view.batch_source.path,
+ feature_view.batch_source.file_options.s3_endpoint_override,
+ )
+ table = pyarrow.parquet.read_table(path, filesystem=filesystem)
# Rename columns by the field mapping dictionary if it exists
if feature_view.batch_source.field_mapping is not None:
@@ -238,7 +242,10 @@ def pull_latest_from_table_or_query(
# Create lazy function that is only called from the RetrievalJob object
def evaluate_offline_job():
- source_df = pd.read_parquet(data_source.path)
+ filesystem, path = FileSource.create_filesystem_and_path(
+ data_source.path, data_source.file_options.s3_endpoint_override
+ )
+ source_df = pd.read_parquet(path, filesystem=filesystem)
# Make sure all timestamp fields are tz-aware. We default tz-naive fields to UTC
source_df[event_timestamp_column] = source_df[event_timestamp_column].apply(
lambda x: x if x.tzinfo is not None else x.replace(tzinfo=pytz.utc)
diff --git a/sdk/python/feast/infra/offline_stores/file_source.py b/sdk/python/feast/infra/offline_stores/file_source.py
--- a/sdk/python/feast/infra/offline_stores/file_source.py
+++ b/sdk/python/feast/infra/offline_stores/file_source.py
@@ -1,5 +1,7 @@
from typing import Callable, Dict, Iterable, Optional, Tuple
+from pyarrow._fs import FileSystem
+from pyarrow._s3fs import S3FileSystem
from pyarrow.parquet import ParquetFile
from feast import type_map
@@ -20,6 +22,7 @@ def __init__(
created_timestamp_column: Optional[str] = "",
field_mapping: Optional[Dict[str, str]] = None,
date_partition_column: Optional[str] = "",
+ s3_endpoint_override: Optional[str] = None,
):
"""Create a FileSource from a file containing feature data. Only Parquet format supported.
@@ -33,6 +36,7 @@ def __init__(
file_format (optional): Explicitly set the file format. Allows Feast to bypass inferring the file format.
field_mapping: A dictionary mapping of column names in this data source to feature names in a feature table
or view. Only used for feature columns, not entities or timestamp columns.
+ s3_endpoint_override (optional): Overrides AWS S3 enpoint with custom S3 storage
Examples:
>>> from feast import FileSource
@@ -51,7 +55,11 @@ def __init__(
else:
file_url = path
- self._file_options = FileOptions(file_format=file_format, file_url=file_url)
+ self._file_options = FileOptions(
+ file_format=file_format,
+ file_url=file_url,
+ s3_endpoint_override=s3_endpoint_override,
+ )
super().__init__(
event_timestamp_column,
@@ -70,6 +78,8 @@ def __eq__(self, other):
and self.event_timestamp_column == other.event_timestamp_column
and self.created_timestamp_column == other.created_timestamp_column
and self.field_mapping == other.field_mapping
+ and self.file_options.s3_endpoint_override
+ == other.file_options.s3_endpoint_override
)
@property
@@ -102,6 +112,7 @@ def from_proto(data_source: DataSourceProto):
event_timestamp_column=data_source.event_timestamp_column,
created_timestamp_column=data_source.created_timestamp_column,
date_partition_column=data_source.date_partition_column,
+ s3_endpoint_override=data_source.file_options.s3_endpoint_override,
)
def to_proto(self) -> DataSourceProto:
@@ -128,9 +139,26 @@ def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:
def get_table_column_names_and_types(
self, config: RepoConfig
) -> Iterable[Tuple[str, str]]:
- schema = ParquetFile(self.path).schema_arrow
+ filesystem, path = FileSource.create_filesystem_and_path(
+ self.path, self._file_options.s3_endpoint_override
+ )
+ schema = ParquetFile(
+ path if filesystem is None else filesystem.open_input_file(path)
+ ).schema_arrow
return zip(schema.names, map(str, schema.types))
+ @staticmethod
+ def create_filesystem_and_path(
+ path: str, s3_endpoint_override: str
+ ) -> Tuple[Optional[FileSystem], str]:
+ if path.startswith("s3://"):
+ s3fs = S3FileSystem(
+ endpoint_override=s3_endpoint_override if s3_endpoint_override else None
+ )
+ return s3fs, path.replace("s3://", "")
+ else:
+ return None, path
+
class FileOptions:
"""
@@ -138,10 +166,22 @@ class FileOptions:
"""
def __init__(
- self, file_format: Optional[FileFormat], file_url: Optional[str],
+ self,
+ file_format: Optional[FileFormat],
+ file_url: Optional[str],
+ s3_endpoint_override: Optional[str],
):
+ """
+ FileOptions initialization method
+
+ Args:
+ file_format (FileFormat, optional): file source format eg. parquet
+ file_url (str, optional): file source url eg. s3:// or local file
+ s3_endpoint_override (str, optional): custom s3 endpoint (used only with s3 file_url)
+ """
self._file_format = file_format
self._file_url = file_url
+ self._s3_endpoint_override = s3_endpoint_override
@property
def file_format(self):
@@ -171,6 +211,20 @@ def file_url(self, file_url):
"""
self._file_url = file_url
+ @property
+ def s3_endpoint_override(self):
+ """
+ Returns the s3 endpoint override
+ """
+ return None if self._s3_endpoint_override == "" else self._s3_endpoint_override
+
+ @s3_endpoint_override.setter
+ def s3_endpoint_override(self, s3_endpoint_override):
+ """
+ Sets the s3 endpoint override
+ """
+ self._s3_endpoint_override = s3_endpoint_override
+
@classmethod
def from_proto(cls, file_options_proto: DataSourceProto.FileOptions):
"""
@@ -185,6 +239,7 @@ def from_proto(cls, file_options_proto: DataSourceProto.FileOptions):
file_options = cls(
file_format=FileFormat.from_proto(file_options_proto.file_format),
file_url=file_options_proto.file_url,
+ s3_endpoint_override=file_options_proto.s3_endpoint_override,
)
return file_options
@@ -201,6 +256,7 @@ def to_proto(self) -> DataSourceProto.FileOptions:
None if self.file_format is None else self.file_format.to_proto()
),
file_url=self.file_url,
+ s3_endpoint_override=self.s3_endpoint_override,
)
return file_options_proto
diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -82,6 +82,7 @@
"isort>=5",
"grpcio-tools==1.34.0",
"grpcio-testing==1.34.0",
+ "minio==7.1.0",
"mock==2.0.0",
"moto",
"mypy==0.790",
@@ -99,6 +100,7 @@
"pytest-mock==1.10.4",
"Sphinx!=4.0.0",
"sphinx-rtd-theme",
+ "testcontainers==3.4.2",
"adlfs==0.5.9",
"firebase-admin==4.5.2",
"pre-commit",
| diff --git a/sdk/python/tests/integration/feature_repos/test_repo_configuration.py b/sdk/python/tests/integration/feature_repos/test_repo_configuration.py
--- a/sdk/python/tests/integration/feature_repos/test_repo_configuration.py
+++ b/sdk/python/tests/integration/feature_repos/test_repo_configuration.py
@@ -115,7 +115,7 @@ def customer_feature_view(self) -> FeatureView:
customer_table_id = self.data_source_creator.get_prefixed_table_name(
self.name, "customer_profile"
)
- ds = self.data_source_creator.create_data_sources(
+ ds = self.data_source_creator.create_data_source(
customer_table_id,
self.customer_df,
event_timestamp_column="event_timestamp",
@@ -129,7 +129,7 @@ def driver_stats_feature_view(self) -> FeatureView:
driver_table_id = self.data_source_creator.get_prefixed_table_name(
self.name, "driver_hourly"
)
- ds = self.data_source_creator.create_data_sources(
+ ds = self.data_source_creator.create_data_source(
driver_table_id,
self.driver_df,
event_timestamp_column="event_timestamp",
@@ -145,7 +145,7 @@ def orders_table(self) -> Optional[str]:
orders_table_id = self.data_source_creator.get_prefixed_table_name(
self.name, "orders"
)
- ds = self.data_source_creator.create_data_sources(
+ ds = self.data_source_creator.create_data_source(
orders_table_id,
self.orders_df,
event_timestamp_column="event_timestamp",
@@ -221,7 +221,7 @@ def construct_test_environment(
offline_creator: DataSourceCreator = importer.get_class_from_type(
module_name, config_class_name, "DataSourceCreator"
)(project)
- ds = offline_creator.create_data_sources(
+ ds = offline_creator.create_data_source(
project, df, field_mapping={"ts_1": "ts", "id": "driver_id"}
)
offline_store = offline_creator.create_offline_store_config()
diff --git a/sdk/python/tests/integration/feature_repos/universal/data_source_creator.py b/sdk/python/tests/integration/feature_repos/universal/data_source_creator.py
--- a/sdk/python/tests/integration/feature_repos/universal/data_source_creator.py
+++ b/sdk/python/tests/integration/feature_repos/universal/data_source_creator.py
@@ -9,7 +9,7 @@
class DataSourceCreator(ABC):
@abstractmethod
- def create_data_sources(
+ def create_data_source(
self,
destination: str,
df: pd.DataFrame,
diff --git a/sdk/python/tests/integration/feature_repos/universal/data_sources/bigquery.py b/sdk/python/tests/integration/feature_repos/universal/data_sources/bigquery.py
--- a/sdk/python/tests/integration/feature_repos/universal/data_sources/bigquery.py
+++ b/sdk/python/tests/integration/feature_repos/universal/data_sources/bigquery.py
@@ -40,7 +40,7 @@ def teardown(self):
def create_offline_store_config(self):
return BigQueryOfflineStoreConfig()
- def create_data_sources(
+ def create_data_source(
self,
destination: str,
df: pd.DataFrame,
diff --git a/sdk/python/tests/integration/feature_repos/universal/data_sources/file.py b/sdk/python/tests/integration/feature_repos/universal/data_sources/file.py
--- a/sdk/python/tests/integration/feature_repos/universal/data_sources/file.py
+++ b/sdk/python/tests/integration/feature_repos/universal/data_sources/file.py
@@ -2,6 +2,9 @@
from typing import Any, Dict
import pandas as pd
+from minio import Minio
+from testcontainers.core.generic import DockerContainer
+from testcontainers.core.waiting_utils import wait_for_logs
from feast import FileSource
from feast.data_format import ParquetFormat
@@ -19,7 +22,7 @@ class FileDataSourceCreator(DataSourceCreator):
def __init__(self, _: str):
pass
- def create_data_sources(
+ def create_data_source(
self,
destination: str,
df: pd.DataFrame,
@@ -46,3 +49,79 @@ def create_offline_store_config(self) -> FeastConfigBaseModel:
def teardown(self):
self.f.close()
+
+
+class S3FileDataSourceCreator(DataSourceCreator):
+ f: Any
+ minio: DockerContainer
+ bucket = "feast-test"
+ access_key = "AKIAIOSFODNN7EXAMPLE"
+ secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
+ minio_image = "minio/minio:RELEASE.2021-08-17T20-53-08Z"
+
+ def __init__(self, _: str):
+ self._setup_minio()
+
+ def _setup_minio(self):
+ self.minio = DockerContainer(self.minio_image)
+ self.minio.with_exposed_ports(9000).with_exposed_ports(9001).with_env(
+ "MINIO_ROOT_USER", self.access_key
+ ).with_env("MINIO_ROOT_PASSWORD", self.secret).with_command(
+ 'server /data --console-address ":9001"'
+ )
+ self.minio.start()
+ log_string_to_wait_for = (
+ "API" # The minio container will print "API: ..." when ready.
+ )
+ wait_for_logs(container=self.minio, predicate=log_string_to_wait_for, timeout=5)
+
+ def _upload_parquet_file(self, df, file_name, minio_endpoint):
+ self.f = tempfile.NamedTemporaryFile(suffix=".parquet", delete=False)
+ df.to_parquet(self.f.name)
+
+ client = Minio(
+ minio_endpoint,
+ access_key=self.access_key,
+ secret_key=self.secret,
+ secure=False,
+ )
+ if not client.bucket_exists(self.bucket):
+ client.make_bucket(self.bucket)
+ client.fput_object(
+ self.bucket, file_name, self.f.name,
+ )
+
+ def create_data_source(
+ self,
+ destination: str,
+ df: pd.DataFrame,
+ event_timestamp_column="ts",
+ created_timestamp_column="created_ts",
+ field_mapping: Dict[str, str] = None,
+ ) -> DataSource:
+ filename = f"{destination}.parquet"
+ port = self.minio.get_exposed_port("9000")
+ host = self.minio.get_container_host_ip()
+ minio_endpoint = f"{host}:{port}"
+
+ self._upload_parquet_file(df, filename, minio_endpoint)
+
+ return FileSource(
+ file_format=ParquetFormat(),
+ path=f"s3://{self.bucket}/{filename}",
+ event_timestamp_column=event_timestamp_column,
+ created_timestamp_column=created_timestamp_column,
+ date_partition_column="",
+ field_mapping=field_mapping or {"ts_1": "ts"},
+ s3_endpoint_override=f"http://{host}:{port}",
+ )
+
+ def get_prefixed_table_name(self, name: str, suffix: str) -> str:
+ return f"{suffix}"
+
+ def create_offline_store_config(self) -> FeastConfigBaseModel:
+ return FileOfflineStoreConfig()
+
+ def teardown(self):
+ self.minio.stop()
+ self.f.close()
diff --git a/sdk/python/tests/integration/feature_repos/universal/data_sources/redshift.py b/sdk/python/tests/integration/feature_repos/universal/data_sources/redshift.py
--- a/sdk/python/tests/integration/feature_repos/universal/data_sources/redshift.py
+++ b/sdk/python/tests/integration/feature_repos/universal/data_sources/redshift.py
@@ -31,7 +31,7 @@ def __init__(self, project_name: str):
iam_role="arn:aws:iam::402087665549:role/redshift_s3_access_role",
)
- def create_data_sources(
+ def create_data_source(
self,
destination: str,
df: pd.DataFrame,
diff --git a/sdk/python/tests/integration/feature_repos/universal/feature_views.py b/sdk/python/tests/integration/feature_repos/universal/feature_views.py
--- a/sdk/python/tests/integration/feature_repos/universal/feature_views.py
+++ b/sdk/python/tests/integration/feature_repos/universal/feature_views.py
@@ -20,11 +20,6 @@ def create_driver_hourly_stats_feature_view(source):
driver_stats_feature_view = FeatureView(
name="driver_stats",
entities=["driver"],
- features=[
- Feature(name="conv_rate", dtype=ValueType.FLOAT),
- Feature(name="acc_rate", dtype=ValueType.FLOAT),
- Feature(name="avg_daily_trips", dtype=ValueType.INT32),
- ],
batch_source=source,
ttl=timedelta(hours=2),
)
diff --git a/sdk/python/tests/integration/offline_store/test_s3_custom_endpoint.py b/sdk/python/tests/integration/offline_store/test_s3_custom_endpoint.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/integration/offline_store/test_s3_custom_endpoint.py
@@ -0,0 +1,35 @@
+import pytest
+
+from tests.integration.feature_repos.test_repo_configuration import (
+ TestRepoConfig,
+ construct_test_environment,
+)
+
+
[email protected]
+def test_registration_and_retrieval_from_custom_s3_endpoint():
+ config = TestRepoConfig(
+ offline_store_creator="tests.integration.feature_repos.universal.data_sources.file.S3FileDataSourceCreator"
+ )
+ import os
+
+ if "AWS_ACCESS_KEY_ID" in os.environ:
+ raise Exception(
+ "AWS_ACCESS_KEY_ID has already been set in the environment. Setting it again may cause a conflict. "
+ "It may be better to deduplicate AWS configuration or use sub-processes for isolation"
+ )
+
+ os.environ["AWS_ACCESS_KEY_ID"] = "AKIAIOSFODNN7EXAMPLE"
+ os.environ["AWS_SECRET_ACCESS_KEY"] = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
+
+ with construct_test_environment(
+ config, create_and_apply=True, materialize=True
+ ) as environment:
+ fs = environment.feature_store
+ out = fs.get_online_features(
+ features=["driver_stats:conv_rate"], entity_rows=[{"driver": 5001}]
+ ).to_dict()
+ assert out["conv_rate"][0] is not None
+
+ del os.environ["AWS_ACCESS_KEY_ID"]
+ del os.environ["AWS_SECRET_ACCESS_KEY"]
| Add S3 support to FileSource
**Is your feature request related to a problem? Please describe.**
It should be possible to use an S3 file path directly in ```FileSource```,
e.g.
``` python
driver_hourly_stats = FileSource(
    path="s3://driver/driver_stats.parquet",
    event_timestamp_column="datetime",
    created_timestamp_column="created",
)
```
Additionally, it should be possible to use custom S3 endpoints.
**Describe the solution you'd like**
I have added S3 support, which is handled internally by pyarrow and pandas.
It is also possible to use custom S3 endpoints via the ```FEAST_S3_ENDPOINT_URL``` environment variable.
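The patch above also ends up exposing this as an `s3_endpoint_override` argument on `FileSource`; a sketch of that usage (the endpoint URL is a placeholder, e.g. a local MinIO):
```python
from feast import FileSource
from feast.data_format import ParquetFormat

driver_hourly_stats = FileSource(
    file_format=ParquetFormat(),
    path="s3://driver/driver_stats.parquet",
    event_timestamp_column="datetime",
    created_timestamp_column="created",
    s3_endpoint_override="http://localhost:9000",  # placeholder custom S3 endpoint
)
```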
| Thanks for this @qooba. Also, thank you for being so patient with our delayed reviews. We're hoping to speed things up going forward! | 2021-08-18T21:16:05 |
feast-dev/feast | 1,814 | feast-dev__feast-1814 | [
"1759"
] | 5857a553d975237ae25f8e76cf2111be959ada63 | diff --git a/sdk/python/feast/infra/online_stores/datastore.py b/sdk/python/feast/infra/online_stores/datastore.py
--- a/sdk/python/feast/infra/online_stores/datastore.py
+++ b/sdk/python/feast/infra/online_stores/datastore.py
@@ -30,6 +30,7 @@
try:
from google.auth.exceptions import DefaultCredentialsError
from google.cloud import datastore
+ from google.cloud.datastore.client import Key
except ImportError as e:
from feast.errors import FeastExtrasDependencyImportError, FeastProviderLoginError
@@ -233,22 +234,32 @@ def online_read(
feast_project = config.project
+ keys: List[Key] = []
result: List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]] = []
for entity_key in entity_keys:
document_id = compute_entity_id(entity_key)
key = client.key(
"Project", feast_project, "Table", table.name, "Row", document_id
)
- value = client.get(key)
- if value is not None:
+ keys.append(key)
+
+ values = client.get_multi(keys)
+
+ if values is not None:
+ keys_missing_from_response = set(keys) - set([v.key for v in values])
+ values = sorted(values, key=lambda v: keys.index(v.key))
+ for value in values:
res = {}
for feature_name, value_bin in value["values"].items():
val = ValueProto()
val.ParseFromString(value_bin)
res[feature_name] = val
result.append((value["event_ts"], res))
- else:
- result.append((None, None))
+ for missing_key_idx in sorted(
+ [keys.index(k) for k in keys_missing_from_response]
+ ):
+ result.insert(missing_key_idx, (None, None))
+
return result
| Datastore online request makes a call once for each entity
**Is your feature request related to a problem? Please describe.**
There are performance issues with Datastore when fetching features for multiple entities: an individual fetch request is made for each entity.
**Describe the solution you'd like**
I would like more efficient access to Datastore for multiple entities. The Python API for Datastore offers a `get_multi()` method, which allows fetching multiple keys in one request.
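A minimal sketch of what a batched read has to handle, since `get_multi` responses are unordered and omit missing keys (`batch_get` is an illustrative name, not Feast code):
```python
from typing import List, Optional
from google.cloud import datastore

def batch_get(client: datastore.Client, keys: List[datastore.Key]) -> List[Optional[datastore.Entity]]:
    found = {entity.key: entity for entity in client.get_multi(keys)}  # response order is not guaranteed
    return [found.get(key) for key in keys]  # None where Datastore has no row for the key
```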
**Describe alternatives you've considered**
---
**Additional context**
I have done some basic tests comparing `get_multi` with `get`, and it seems that an improvement can be made by using `get_multi`.
Here is part of the code I used for testing:
```
import numpy as np
import timeit
from google.cloud import datastore


def multi_online_read(num_keys=5):
    client = datastore.Client(
        project=project_name, namespace=None,
    )
    feast_project = "feature_repo"
    key = client.key("Project", feast_project, "Table", table, "Row", row)
    keys = []
    response = []
    for i in range(num_keys):
        keys.append(key)
    response = client.get_multi(keys)
    return response


def online_read(num_keys=5):
    client = datastore.Client(
        project=project_name, namespace=None,
    )
    feast_project = "feature_repo"
    key = client.key("Project", feast_project, "Table", table, "Row", row)
    response = []
    for i in range(num_keys):
        response.append(client.get(key))
    return response


if __name__ == "__main__":
    results = timeit.repeat(online_read, repeat=10, number=1)
    print(results)
    print(np.mean(results))
    results = timeit.repeat(multi_online_read, repeat=10, number=1)
    print(results)
    print(np.mean(results))
```
| We'd love to see a contribution here if you have time :) | 2021-08-30T18:41:11 |
|
feast-dev/feast | 1,844 | feast-dev__feast-1844 | [
"1842"
] | 0dc13f096b0a1547b3dc0b442d7d023b9391cd34 | diff --git a/sdk/python/feast/infra/offline_stores/bigquery.py b/sdk/python/feast/infra/offline_stores/bigquery.py
--- a/sdk/python/feast/infra/offline_stores/bigquery.py
+++ b/sdk/python/feast/infra/offline_stores/bigquery.py
@@ -261,11 +261,10 @@ def block_until_done(
if is_test:
retry_cadence = 0.1
- def _wait_until_done(job_id):
- if client.get_job(job_id).state in ["PENDING", "RUNNING"]:
- raise BigQueryJobStillRunning(job_id=job_id)
+ def _wait_until_done(bq_job):
+ if client.get_job(bq_job).state in ["PENDING", "RUNNING"]:
+ raise BigQueryJobStillRunning(job_id=bq_job.job_id)
- job_id = bq_job.job_id
try:
retryer = Retrying(
wait=wait_fixed(retry_cadence),
@@ -273,12 +272,12 @@ def _wait_until_done(job_id):
retry=retry_if_exception_type(BigQueryJobStillRunning),
reraise=True,
)
- retryer(_wait_until_done, job_id)
+ retryer(_wait_until_done, bq_job)
finally:
- if client.get_job(job_id).state in ["PENDING", "RUNNING"]:
- client.cancel_job(job_id)
- raise BigQueryJobCancelled(job_id=job_id)
+ if client.get_job(bq_job).state in ["PENDING", "RUNNING"]:
+ client.cancel_job(bq_job)
+ raise BigQueryJobCancelled(job_id=bq_job.job_id)
if bq_job.exception():
raise bq_job.exception()
| Bigquery job id is not found in locations outside of US
## Expected Behavior
When querying for offline features with a BigQuery source, the job id is used to get the status of the BigQuery job.
## Current Behavior
For a location outside of the US (South America), the job id used to get the status is not found.
## Steps to reproduce
### Specifications
## Possible Solution
Pass the `bigquery.job.query.QueryJob` object to `client.get_job(...)` instead of the job id as a string when checking `.state`; `get_job` will then pick up the correct location automatically.
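A short sketch of the suggested call pattern (the query is a trivial placeholder):
```python
from google.cloud import bigquery

client = bigquery.Client()
bq_job = client.query("SELECT 1")      # the QueryJob carries its own location
state = client.get_job(bq_job).state   # resolves the job in that location instead of assuming US
```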
| Thanks for raising this @LarsKlingen. Would you be open to submitting a pull request? | 2021-09-07T21:41:15 |
|
feast-dev/feast | 1,847 | feast-dev__feast-1847 | [
"1767"
] | 622792868a456a139c946bac1d1d0342f8422da8 | diff --git a/sdk/python/feast/infra/offline_stores/redshift_source.py b/sdk/python/feast/infra/offline_stores/redshift_source.py
--- a/sdk/python/feast/infra/offline_stores/redshift_source.py
+++ b/sdk/python/feast/infra/offline_stores/redshift_source.py
@@ -13,11 +13,27 @@ def __init__(
self,
event_timestamp_column: Optional[str] = "",
table: Optional[str] = None,
+ schema: Optional[str] = None,
created_timestamp_column: Optional[str] = "",
field_mapping: Optional[Dict[str, str]] = None,
date_partition_column: Optional[str] = "",
query: Optional[str] = None,
):
+ """
+ Creates a RedshiftSource object.
+
+ Args:
+ event_timestamp_column (optional): Event timestamp column used for point in
+ time joins of feature values.
+ table (optional): Redshift table where the features are stored.
+ schema (optional): Redshift schema in which the table is located.
+ created_timestamp_column (optional): Timestamp column indicating when the
+ row was created, used for deduplicating rows.
+ field_mapping (optional): A dictionary mapping of column names in this data
+ source to column names in a feature table or view.
+ date_partition_column (optional): Timestamp column used for partitioning.
+ query (optional): The query to be executed to obtain the features.
+ """
super().__init__(
event_timestamp_column,
created_timestamp_column,
@@ -25,13 +41,28 @@ def __init__(
date_partition_column,
)
- self._redshift_options = RedshiftOptions(table=table, query=query)
+ # The default Redshift schema is named "public".
+ _schema = "public" if table and not schema else schema
+
+ self._redshift_options = RedshiftOptions(
+ table=table, schema=_schema, query=query
+ )
@staticmethod
def from_proto(data_source: DataSourceProto):
+ """
+ Creates a RedshiftSource from a protobuf representation of a RedshiftSource.
+
+ Args:
+ data_source: A protobuf representation of a RedshiftSource
+
+ Returns:
+ A RedshiftSource object based on the data_source protobuf.
+ """
return RedshiftSource(
field_mapping=dict(data_source.field_mapping),
table=data_source.redshift_options.table,
+ schema=data_source.redshift_options.schema,
event_timestamp_column=data_source.event_timestamp_column,
created_timestamp_column=data_source.created_timestamp_column,
date_partition_column=data_source.date_partition_column,
@@ -46,6 +77,7 @@ def __eq__(self, other):
return (
self.redshift_options.table == other.redshift_options.table
+ and self.redshift_options.schema == other.redshift_options.schema
and self.redshift_options.query == other.redshift_options.query
and self.event_timestamp_column == other.event_timestamp_column
and self.created_timestamp_column == other.created_timestamp_column
@@ -54,27 +86,36 @@ def __eq__(self, other):
@property
def table(self):
+ """Returns the table of this Redshift source."""
return self._redshift_options.table
+ @property
+ def schema(self):
+ """Returns the schema of this Redshift source."""
+ return self._redshift_options.schema
+
@property
def query(self):
+ """Returns the Redshift options of this Redshift source."""
return self._redshift_options.query
@property
def redshift_options(self):
- """
- Returns the Redshift options of this data source
- """
+ """Returns the Redshift options of this Redshift source."""
return self._redshift_options
@redshift_options.setter
def redshift_options(self, _redshift_options):
- """
- Sets the Redshift options of this data source
- """
+ """Sets the Redshift options of this Redshift source."""
self._redshift_options = _redshift_options
def to_proto(self) -> DataSourceProto:
+ """
+ Converts a RedshiftSource object to its protobuf representation.
+
+ Returns:
+ A DataSourceProto object.
+ """
data_source_proto = DataSourceProto(
type=DataSourceProto.BATCH_REDSHIFT,
field_mapping=self.field_mapping,
@@ -93,9 +134,9 @@ def validate(self, config: RepoConfig):
self.get_table_column_names_and_types(config)
def get_table_query_string(self) -> str:
- """Returns a string that can directly be used to reference this table in SQL"""
+ """Returns a string that can directly be used to reference this table in SQL."""
if self.table:
- return f'"{self.table}"'
+ return f'"{self.schema}"."{self.table}"'
else:
return f"({self.query})"
@@ -106,6 +147,12 @@ def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:
def get_table_column_names_and_types(
self, config: RepoConfig
) -> Iterable[Tuple[str, str]]:
+ """
+ Returns a mapping of column names to types for this Redshift source.
+
+ Args:
+ config: A RepoConfig describing the feature repo
+ """
from botocore.exceptions import ClientError
from feast.infra.offline_stores.redshift import RedshiftOfflineStoreConfig
@@ -122,6 +169,7 @@ def get_table_column_names_and_types(
Database=config.offline_store.database,
DbUser=config.offline_store.user,
Table=self.table,
+ Schema=self.schema,
)
except ClientError as e:
if e.response["Error"]["Code"] == "ValidationException":
@@ -150,55 +198,61 @@ def get_table_column_names_and_types(
class RedshiftOptions:
"""
- DataSource Redshift options used to source features from Redshift query
+ DataSource Redshift options used to source features from Redshift query.
"""
- def __init__(self, table: Optional[str], query: Optional[str]):
+ def __init__(
+ self, table: Optional[str], schema: Optional[str], query: Optional[str]
+ ):
self._table = table
+ self._schema = schema
self._query = query
@property
def query(self):
- """
- Returns the Redshift SQL query referenced by this source
- """
+ """Returns the Redshift SQL query referenced by this source."""
return self._query
@query.setter
def query(self, query):
- """
- Sets the Redshift SQL query referenced by this source
- """
+ """Sets the Redshift SQL query referenced by this source."""
self._query = query
@property
def table(self):
- """
- Returns the table name of this Redshift table
- """
+ """Returns the table name of this Redshift table."""
return self._table
@table.setter
def table(self, table_name):
- """
- Sets the table ref of this Redshift table
- """
+ """Sets the table ref of this Redshift table."""
self._table = table_name
+ @property
+ def schema(self):
+ """Returns the schema name of this Redshift table."""
+ return self._schema
+
+ @schema.setter
+ def schema(self, schema):
+ """Sets the schema of this Redshift table."""
+ self._schema = schema
+
@classmethod
def from_proto(cls, redshift_options_proto: DataSourceProto.RedshiftOptions):
"""
- Creates a RedshiftOptions from a protobuf representation of a Redshift option
+ Creates a RedshiftOptions from a protobuf representation of a Redshift option.
Args:
redshift_options_proto: A protobuf representation of a DataSource
Returns:
- Returns a RedshiftOptions object based on the redshift_options protobuf
+ A RedshiftOptions object based on the redshift_options protobuf.
"""
-
redshift_options = cls(
- table=redshift_options_proto.table, query=redshift_options_proto.query,
+ table=redshift_options_proto.table,
+ schema=redshift_options_proto.schema,
+ query=redshift_options_proto.query,
)
return redshift_options
@@ -208,11 +262,10 @@ def to_proto(self) -> DataSourceProto.RedshiftOptions:
Converts an RedshiftOptionsProto object to its protobuf representation.
Returns:
- RedshiftOptionsProto protobuf
+ A RedshiftOptionsProto protobuf.
"""
-
redshift_options_proto = DataSourceProto.RedshiftOptions(
- table=self.table, query=self.query,
+ table=self.table, schema=self.schema, query=self.query,
)
return redshift_options_proto
| Add schema parameter to RedshiftSource to distinguish between database schemas
**Is your feature request related to a problem? Please describe.**
There is no way to specify a schema via the `table` parameter.
When you prefix the table name with a schema, e.g. `schema_sample.feature_table`, it throws an exception when applying, with the message:
```
raise DataSourceNotFoundException(self.table)
feast.errors.DataSourceNotFoundException: Unable to find table at 'schema_sample.feature_table'. Please check that table exists.
```
**Describe the solution you'd like**
Add an optional `schema` parameter to uniquely identify which schema to use to reference the source table.
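A sketch of the resulting usage (schema name taken from the example above; per the patch, `schema` defaults to `public` when a table is given without one):
```python
from feast.infra.offline_stores.redshift_source import RedshiftSource

feature_source = RedshiftSource(
    table="feature_table",
    schema="schema_sample",
    event_timestamp_column="event_timestamp",  # column name is an assumption
)
```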
**Describe alternatives you've considered**
The `query` parameter works just fine.
| 2021-09-08T20:56:27 |
||
feast-dev/feast | 1,850 | feast-dev__feast-1850 | [
"1816"
] | 32d5c6c527d5c4b11959ef63a0812c41cab91dc6 | diff --git a/sdk/python/feast/infra/offline_stores/redshift.py b/sdk/python/feast/infra/offline_stores/redshift.py
--- a/sdk/python/feast/infra/offline_stores/redshift.py
+++ b/sdk/python/feast/infra/offline_stores/redshift.py
@@ -377,9 +377,9 @@ def _upload_entity_df_and_get_entity_schema(
{{entity_df_event_timestamp_col}} AS entity_timestamp
{% for featureview in featureviews %}
{% if featureview.entities %}
- ,CONCAT(
+ ,(
{% for entity in featureview.entities %}
- CAST({{entity}} AS VARCHAR),
+ CAST({{entity}} as VARCHAR) ||
{% endfor %}
CAST({{entity_df_event_timestamp_col}} AS VARCHAR)
) AS {{featureview.name}}__entity_row_unique_id
diff --git a/sdk/python/feast/infra/online_stores/helpers.py b/sdk/python/feast/infra/online_stores/helpers.py
--- a/sdk/python/feast/infra/online_stores/helpers.py
+++ b/sdk/python/feast/infra/online_stores/helpers.py
@@ -1,13 +1,12 @@
import importlib
import struct
-from typing import Any
+from typing import Any, List
import mmh3
from feast import errors
from feast.infra.key_encoding_utils import serialize_entity_key
from feast.infra.online_stores.online_store import OnlineStore
-from feast.protos.feast.storage.Redis_pb2 import RedisKeyV2 as RedisKeyProto
from feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto
@@ -37,13 +36,9 @@ def get_online_store_from_config(online_store_config: Any,) -> OnlineStore:
return online_store_class()
-def _redis_key(project: str, entity_key: EntityKeyProto):
- redis_key = RedisKeyProto(
- project=project,
- entity_names=entity_key.join_keys,
- entity_values=entity_key.entity_values,
- )
- return redis_key.SerializeToString()
+def _redis_key(project: str, entity_key: EntityKeyProto) -> bytes:
+ key: List[bytes] = [serialize_entity_key(entity_key), project.encode("utf-8")]
+ return b"".join(key)
def _mmh3(key: str):
| diff --git a/sdk/python/tests/integration/feature_repos/repo_configuration.py b/sdk/python/tests/integration/feature_repos/repo_configuration.py
--- a/sdk/python/tests/integration/feature_repos/repo_configuration.py
+++ b/sdk/python/tests/integration/feature_repos/repo_configuration.py
@@ -28,6 +28,7 @@
create_customer_daily_profile_feature_view,
create_driver_hourly_stats_feature_view,
create_global_stats_feature_view,
+ create_order_feature_view,
)
@@ -94,17 +95,19 @@ def construct_universal_datasets(
orders_df = driver_test_data.create_orders_df(
customers=entities["customer"],
drivers=entities["driver"],
- start_date=end_time - timedelta(days=3),
- end_date=end_time + timedelta(days=3),
+ start_date=start_time,
+ end_date=end_time,
order_count=20,
)
global_df = driver_test_data.create_global_daily_stats_df(start_time, end_time)
+ entity_df = orders_df[["customer_id", "driver_id", "order_id", "event_timestamp"]]
return {
"customer": customer_df,
"driver": driver_df,
"orders": orders_df,
"global": global_df,
+ "entity": entity_df,
}
@@ -127,7 +130,7 @@ def construct_universal_data_sources(
datasets["orders"],
destination_name="orders",
event_timestamp_column="event_timestamp",
- created_timestamp_column="created",
+ created_timestamp_column=None,
)
global_ds = data_source_creator.create_data_source(
datasets["global"],
@@ -161,6 +164,7 @@ def construct_universal_feature_views(
"input_request": create_conv_rate_request_data_source(),
}
),
+ "order": create_order_feature_view(data_sources["orders"]),
}
diff --git a/sdk/python/tests/integration/feature_repos/universal/feature_views.py b/sdk/python/tests/integration/feature_repos/universal/feature_views.py
--- a/sdk/python/tests/integration/feature_repos/universal/feature_views.py
+++ b/sdk/python/tests/integration/feature_repos/universal/feature_views.py
@@ -117,3 +117,15 @@ def create_global_stats_feature_view(source, infer_features: bool = False):
ttl=timedelta(days=2),
)
return global_stats_feature_view
+
+
+def create_order_feature_view(source, infer_features: bool = False):
+ return FeatureView(
+ name="order",
+ entities=["driver", "customer_id"],
+ features=None
+ if infer_features
+ else [Feature(name="order_is_success", dtype=ValueType.INT32)],
+ batch_source=source,
+ ttl=timedelta(days=2),
+ )
diff --git a/sdk/python/tests/integration/offline_store/test_universal_historical_retrieval.py b/sdk/python/tests/integration/offline_store/test_universal_historical_retrieval.py
--- a/sdk/python/tests/integration/offline_store/test_universal_historical_retrieval.py
+++ b/sdk/python/tests/integration/offline_store/test_universal_historical_retrieval.py
@@ -1,5 +1,5 @@
from datetime import datetime
-from typing import Any, Dict, List
+from typing import Any, Dict, List, Optional
import numpy as np
import pandas as pd
@@ -37,14 +37,23 @@ def find_asof_record(
ts_key: str,
ts_start: datetime,
ts_end: datetime,
- filter_key: str = "",
- filter_value: Any = None,
+ filter_keys: Optional[List[str]] = None,
+ filter_values: Optional[List[Any]] = None,
) -> Dict[str, Any]:
+ filter_keys = filter_keys or []
+ filter_values = filter_values or []
+ assert len(filter_keys) == len(filter_values)
found_record = {}
for record in records:
if (
- not filter_key or record[filter_key] == filter_value
- ) and ts_start <= record[ts_key] <= ts_end:
+ all(
+ [
+ record[filter_key] == filter_value
+ for filter_key, filter_value in zip(filter_keys, filter_values)
+ ]
+ )
+ and ts_start <= record[ts_key] <= ts_end
+ ):
if not found_record or found_record[ts_key] < record[ts_key]:
found_record = record
return found_record
@@ -55,43 +64,57 @@ def get_expected_training_df(
customer_fv: FeatureView,
driver_df: pd.DataFrame,
driver_fv: FeatureView,
+ orders_df: pd.DataFrame,
+ order_fv: FeatureView,
global_df: pd.DataFrame,
global_fv: FeatureView,
- orders_df: pd.DataFrame,
+ entity_df: pd.DataFrame,
event_timestamp: str,
full_feature_names: bool = False,
):
# Convert all pandas dataframes into records with UTC timestamps
- order_records = convert_timestamp_records_to_utc(
- orders_df.to_dict("records"), event_timestamp
+ customer_records = convert_timestamp_records_to_utc(
+ customer_df.to_dict("records"), customer_fv.batch_source.event_timestamp_column
)
driver_records = convert_timestamp_records_to_utc(
driver_df.to_dict("records"), driver_fv.batch_source.event_timestamp_column
)
- customer_records = convert_timestamp_records_to_utc(
- customer_df.to_dict("records"), customer_fv.batch_source.event_timestamp_column
+ order_records = convert_timestamp_records_to_utc(
+ orders_df.to_dict("records"), event_timestamp
)
global_records = convert_timestamp_records_to_utc(
global_df.to_dict("records"), global_fv.batch_source.event_timestamp_column
)
+ entity_rows = convert_timestamp_records_to_utc(
+ entity_df.to_dict("records"), event_timestamp
+ )
- # Manually do point-in-time join of orders to drivers and customers records
- for order_record in order_records:
+ # Manually do point-in-time join of driver, customer, and order records against
+ # the entity df
+ for entity_row in entity_rows:
+ customer_record = find_asof_record(
+ customer_records,
+ ts_key=customer_fv.batch_source.event_timestamp_column,
+ ts_start=entity_row[event_timestamp] - customer_fv.ttl,
+ ts_end=entity_row[event_timestamp],
+ filter_keys=["customer_id"],
+ filter_values=[entity_row["customer_id"]],
+ )
driver_record = find_asof_record(
driver_records,
ts_key=driver_fv.batch_source.event_timestamp_column,
- ts_start=order_record[event_timestamp] - driver_fv.ttl,
- ts_end=order_record[event_timestamp],
- filter_key="driver_id",
- filter_value=order_record["driver_id"],
+ ts_start=entity_row[event_timestamp] - driver_fv.ttl,
+ ts_end=entity_row[event_timestamp],
+ filter_keys=["driver_id"],
+ filter_values=[entity_row["driver_id"]],
)
- customer_record = find_asof_record(
- customer_records,
+ order_record = find_asof_record(
+ order_records,
ts_key=customer_fv.batch_source.event_timestamp_column,
- ts_start=order_record[event_timestamp] - customer_fv.ttl,
- ts_end=order_record[event_timestamp],
- filter_key="customer_id",
- filter_value=order_record["customer_id"],
+ ts_start=entity_row[event_timestamp] - order_fv.ttl,
+ ts_end=entity_row[event_timestamp],
+ filter_keys=["customer_id", "driver_id"],
+ filter_values=[entity_row["customer_id"], entity_row["driver_id"]],
)
global_record = find_asof_record(
global_records,
@@ -100,15 +123,7 @@ def get_expected_training_df(
ts_end=order_record[event_timestamp],
)
- order_record.update(
- {
- (f"driver_stats__{k}" if full_feature_names else k): driver_record.get(
- k, None
- )
- for k in ("conv_rate", "avg_daily_trips")
- }
- )
- order_record.update(
+ entity_row.update(
{
(
f"customer_profile__{k}" if full_feature_names else k
@@ -120,7 +135,21 @@ def get_expected_training_df(
)
}
)
- order_record.update(
+ entity_row.update(
+ {
+ (f"driver_stats__{k}" if full_feature_names else k): driver_record.get(
+ k, None
+ )
+ for k in ("conv_rate", "avg_daily_trips")
+ }
+ )
+ entity_row.update(
+ {
+ (f"order__{k}" if full_feature_names else k): order_record.get(k, None)
+ for k in ("order_is_success",)
+ }
+ )
+ entity_row.update(
{
(f"global_stats__{k}" if full_feature_names else k): global_record.get(
k, None
@@ -130,7 +159,7 @@ def get_expected_training_df(
)
# Convert records back to pandas dataframe
- expected_df = pd.DataFrame(order_records)
+ expected_df = pd.DataFrame(entity_rows)
# Move "event_timestamp" column to front
current_cols = expected_df.columns.tolist()
@@ -140,7 +169,7 @@ def get_expected_training_df(
# Cast some columns to expected types, since we lose information when converting pandas DFs into Python objects.
if full_feature_names:
expected_column_types = {
- "order_is_success": "int32",
+ "order__order_is_success": "int32",
"driver_stats__conv_rate": "float32",
"customer_profile__current_balance": "float32",
"customer_profile__avg_passenger_count": "float32",
@@ -175,20 +204,23 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
(entities, datasets, data_sources) = universal_data_sources
feature_views = construct_universal_feature_views(data_sources)
- customer_df, driver_df, orders_df, global_df = (
+ customer_df, driver_df, orders_df, global_df, entity_df = (
datasets["customer"],
datasets["driver"],
datasets["orders"],
datasets["global"],
+ datasets["entity"],
)
- orders_df_with_request_data = orders_df.copy(deep=True)
- orders_df_with_request_data["val_to_add"] = [
- i for i in range(len(orders_df_with_request_data))
+ entity_df_with_request_data = entity_df.copy(deep=True)
+ entity_df_with_request_data["val_to_add"] = [
+ i for i in range(len(entity_df_with_request_data))
]
- customer_fv, driver_fv, driver_odfv, global_fv = (
+
+ customer_fv, driver_fv, driver_odfv, order_fv, global_fv = (
feature_views["customer"],
feature_views["driver"],
feature_views["driver_odfv"],
+ feature_views["order"],
feature_views["global"],
)
@@ -203,6 +235,7 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
customer_fv,
driver_fv,
driver_odfv,
+ order_fv,
global_fv,
driver(),
customer(),
@@ -214,7 +247,7 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
entity_df_query = None
orders_table = table_name_from_data_source(data_sources["orders"])
if orders_table:
- entity_df_query = f"SELECT * FROM {orders_table}"
+ entity_df_query = f"SELECT customer_id, driver_id, order_id, event_timestamp FROM {orders_table}"
event_timestamp = (
DEFAULT_ENTITY_DF_EVENT_TIMESTAMP_COL
@@ -226,9 +259,11 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
customer_fv,
driver_df,
driver_fv,
+ orders_df,
+ order_fv,
global_df,
global_fv,
- orders_df_with_request_data,
+ entity_df_with_request_data,
event_timestamp,
full_feature_names,
)
@@ -242,6 +277,7 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
"customer_profile:current_balance",
"customer_profile:avg_passenger_count",
"customer_profile:lifetime_trip_count",
+ "order:order_is_success",
"global_stats:num_rides",
"global_stats:avg_ride_length",
],
@@ -297,7 +333,7 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
assert_frame_equal(expected_df_query, df_from_sql_entities)
job_from_df = store.get_historical_features(
- entity_df=orders_df_with_request_data,
+ entity_df=entity_df_with_request_data,
features=[
"driver_stats:conv_rate",
"driver_stats:avg_daily_trips",
@@ -306,6 +342,7 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
"customer_profile:lifetime_trip_count",
"conv_rate_plus_100:conv_rate_plus_100",
"conv_rate_plus_100:conv_rate_plus_val_to_add",
+ "order:order_is_success",
"global_stats:num_rides",
"global_stats:avg_ride_length",
],
@@ -341,7 +378,7 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
store,
feature_service,
full_feature_names,
- orders_df_with_request_data,
+ entity_df_with_request_data,
expected_df,
event_timestamp,
)
@@ -361,7 +398,7 @@ def test_historical_features(environment, universal_data_sources, full_feature_n
# If request data is missing that's needed for on demand transform, throw an error
with pytest.raises(RequestDataNotFoundInEntityDfException):
store.get_historical_features(
- entity_df=orders_df,
+ entity_df=entity_df,
features=[
"driver_stats:conv_rate",
"driver_stats:avg_daily_trips",
@@ -388,11 +425,11 @@ def response_feature_name(feature: str, full_feature_names: bool) -> str:
def assert_feature_service_correctness(
- store, feature_service, full_feature_names, orders_df, expected_df, event_timestamp
+ store, feature_service, full_feature_names, entity_df, expected_df, event_timestamp
):
job_from_df = store.get_historical_features(
- entity_df=orders_df,
+ entity_df=entity_df,
features=feature_service,
full_feature_names=full_feature_names,
)
diff --git a/sdk/python/tests/integration/online_store/test_universal_online.py b/sdk/python/tests/integration/online_store/test_universal_online.py
--- a/sdk/python/tests/integration/online_store/test_universal_online.py
+++ b/sdk/python/tests/integration/online_store/test_universal_online.py
@@ -1,5 +1,5 @@
-import random
import unittest
+from datetime import timedelta
import pandas as pd
import pytest
@@ -29,14 +29,27 @@ def test_online_retrieval(environment, universal_data_sources, full_feature_name
feast_objects.extend(feature_views.values())
feast_objects.extend([driver(), customer(), feature_service])
fs.apply(feast_objects)
- fs.materialize(environment.start_date, environment.end_date)
+ fs.materialize(
+ environment.start_date - timedelta(days=1),
+ environment.end_date + timedelta(days=1),
+ )
+
+ entity_sample = datasets["orders"].sample(10)[
+ ["customer_id", "driver_id", "order_id", "event_timestamp"]
+ ]
+ orders_df = datasets["orders"][
+ (
+ datasets["orders"]["customer_id"].isin(entity_sample["customer_id"])
+ & datasets["orders"]["driver_id"].isin(entity_sample["driver_id"])
+ )
+ ]
- sample_drivers = random.sample(entities["driver"], 10)
+ sample_drivers = entity_sample["driver_id"]
drivers_df = datasets["driver"][
datasets["driver"]["driver_id"].isin(sample_drivers)
]
- sample_customers = random.sample(entities["customer"], 10)
+ sample_customers = entity_sample["customer_id"]
customers_df = datasets["customer"][
datasets["customer"]["customer_id"].isin(sample_customers)
]
@@ -56,6 +69,7 @@ def test_online_retrieval(environment, universal_data_sources, full_feature_name
"customer_profile:lifetime_trip_count",
"conv_rate_plus_100:conv_rate_plus_100",
"conv_rate_plus_100:conv_rate_plus_val_to_add",
+ "order:order_is_success",
"global_stats:num_rides",
"global_stats:avg_ride_length",
]
@@ -84,13 +98,14 @@ def test_online_retrieval(environment, universal_data_sources, full_feature_name
assert (
"driver_stats" not in keys
and "customer_profile" not in keys
+ and "order" not in keys
and "global_stats" not in keys
)
tc = unittest.TestCase()
for i, entity_row in enumerate(entity_rows):
df_features = get_latest_feature_values_from_dataframes(
- drivers_df, customers_df, global_df, entity_row
+ drivers_df, customers_df, orders_df, global_df, entity_row
)
assert df_features["customer_id"] == online_features_dict["customer_id"][i]
@@ -145,6 +160,7 @@ def test_online_retrieval(environment, universal_data_sources, full_feature_name
full_feature_names,
drivers_df,
customers_df,
+ orders_df,
global_df,
)
@@ -165,13 +181,17 @@ def response_feature_name(feature: str, full_feature_names: bool) -> str:
):
return f"conv_rate_plus_100__{feature}"
+ if feature in {"order_is_success"} and full_feature_names:
+ return f"order__{feature}"
+
if feature in {"num_rides", "avg_ride_length"} and full_feature_names:
return f"global_stats__{feature}"
+
return feature
def get_latest_feature_values_from_dataframes(
- driver_df, customer_df, global_df, entity_row
+ driver_df, customer_df, orders_df, global_df, entity_row
):
driver_rows = driver_df[driver_df["driver_id"] == entity_row["driver"]]
latest_driver_row: pd.DataFrame = driver_rows.loc[
@@ -181,6 +201,20 @@ def get_latest_feature_values_from_dataframes(
latest_customer_row = customer_rows.loc[
customer_rows["event_timestamp"].idxmax()
].to_dict()
+
+ # Since the event timestamp columns may contain timestamps of different timezones,
+ # we must first convert the timestamps to UTC before we can compare them.
+ order_rows = orders_df[
+ (orders_df["driver_id"] == entity_row["driver"])
+ & (orders_df["customer_id"] == entity_row["customer_id"])
+ ]
+ timestamps = order_rows[["event_timestamp"]]
+ timestamps["event_timestamp"] = pd.to_datetime(
+ timestamps["event_timestamp"], utc=True
+ )
+ max_index = timestamps["event_timestamp"].idxmax()
+ latest_orders_row = order_rows.loc[max_index]
+
latest_global_row = global_df.loc[global_df["event_timestamp"].idxmax()].to_dict()
request_data_features = entity_row.copy()
request_data_features.pop("driver")
@@ -189,6 +223,7 @@ def get_latest_feature_values_from_dataframes(
**latest_customer_row,
**latest_driver_row,
**latest_global_row,
+ **latest_orders_row,
**request_data_features,
}
@@ -200,6 +235,7 @@ def assert_feature_service_correctness(
full_feature_names,
drivers_df,
customers_df,
+ orders_df,
global_df,
):
feature_service_response = fs.get_online_features(
@@ -220,7 +256,7 @@ def assert_feature_service_correctness(
for i, entity_row in enumerate(entity_rows):
df_features = get_latest_feature_values_from_dataframes(
- drivers_df, customers_df, global_df, entity_row
+ drivers_df, customers_df, orders_df, global_df, entity_row
)
assert (
feature_service_online_features_dict[
| Redshift does not support multiple entities
## Expected Behavior
Feast supports multiple entities with Redshift sources
## Current Behavior
When a feature requires more than one entity, Feast will throw an error upon retrieving features. It throws the following error:
```
ERROR: function concat(character varying, character varying, character varying) does not exist\n Hint: No function matches the given name and argument types. You may need to add explicit type casts.
```
The reason is that Redshift's CONCAT only supports 2 arguments. You might want to consider the concatenation operator (`||`) or nested CONCAT calls.
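As a rough illustration (helper names are mine, not Feast's implementation), either form scales to any number of entity key columns on Redshift:
```python
def concat_with_operator(columns):
    # Redshift's || operator chains over any number of operands: a || b || c
    return " || ".join(columns)


def concat_nested(columns):
    # Equivalent nesting of two-argument CONCAT calls: CONCAT(CONCAT(a, b), c)
    expression = columns[0]
    for column in columns[1:]:
        expression = f"CONCAT({expression}, {column})"
    return expression


print(concat_with_operator(["driver_id", "customer_id", "order_id"]))
# driver_id || customer_id || order_id
print(concat_nested(["driver_id", "customer_id", "order_id"]))
# CONCAT(CONCAT(driver_id, customer_id), order_id)
```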
| 2021-09-10T05:10:33 |
|
feast-dev/feast | 1,889 | feast-dev__feast-1889 | [
"1839"
] | 1dada6a2de66e33192e7f1d3f8ef2dd9589d0ed4 | diff --git a/sdk/python/feast/infra/offline_stores/bigquery.py b/sdk/python/feast/infra/offline_stores/bigquery.py
--- a/sdk/python/feast/infra/offline_stores/bigquery.py
+++ b/sdk/python/feast/infra/offline_stores/bigquery.py
@@ -5,6 +5,7 @@
import numpy as np
import pandas as pd
import pyarrow
+import pyarrow.parquet
from pydantic import StrictStr
from pydantic.typing import Literal
from tenacity import Retrying, retry_if_exception_type, stop_after_delay, wait_fixed
@@ -222,11 +223,8 @@ def to_bigquery(
job_config = bigquery.QueryJobConfig(destination=path)
if not job_config.dry_run and self.on_demand_feature_views is not None:
- transformed_df = self.to_df()
- job = self.client.load_table_from_dataframe(
- transformed_df,
- job_config.destination,
- job_config=bigquery.LoadJobConfig(),
+ job = _write_pyarrow_table_to_bq(
+ self.client, self.to_arrow(), job_config.destination
)
job.result()
print(f"Done writing to '{job_config.destination}'.")
@@ -331,12 +329,7 @@ def _upload_entity_df_and_get_entity_schema(
elif isinstance(entity_df, pd.DataFrame):
# Drop the index so that we dont have unnecessary columns
entity_df.reset_index(drop=True, inplace=True)
-
- # Upload the dataframe into BigQuery, creating a temporary table
- job_config = bigquery.LoadJobConfig()
- job = client.load_table_from_dataframe(
- entity_df, table_name, job_config=job_config
- )
+ job = _write_df_to_bq(client, entity_df, table_name)
block_until_done(client, job)
entity_schema = dict(zip(entity_df.columns, entity_df.dtypes))
@@ -371,6 +364,44 @@ def _get_bigquery_client(project: Optional[str] = None):
return client
+def _write_df_to_bq(
+ client: bigquery.Client, df: pd.DataFrame, table_name: str
+) -> bigquery.LoadJob:
+ # It is complicated to get BQ to understand that we want an ARRAY<value_type>
+ # https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#parquetoptions
+ # https://github.com/googleapis/python-bigquery/issues/19
+ writer = pyarrow.BufferOutputStream()
+ pyarrow.parquet.write_table(
+ pyarrow.Table.from_pandas(df), writer, use_compliant_nested_type=True
+ )
+ return _write_pyarrow_buffer_to_bq(client, writer.getvalue(), table_name,)
+
+
+def _write_pyarrow_table_to_bq(
+ client: bigquery.Client, table: pyarrow.Table, table_name: str
+) -> bigquery.LoadJob:
+ # It is complicated to get BQ to understand that we want an ARRAY<value_type>
+ # https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#parquetoptions
+ # https://github.com/googleapis/python-bigquery/issues/19
+ writer = pyarrow.BufferOutputStream()
+ pyarrow.parquet.write_table(table, writer, use_compliant_nested_type=True)
+ return _write_pyarrow_buffer_to_bq(client, writer.getvalue(), table_name,)
+
+
+def _write_pyarrow_buffer_to_bq(
+ client: bigquery.Client, buf: pyarrow.Buffer, table_name: str
+) -> bigquery.LoadJob:
+ reader = pyarrow.BufferReader(buf)
+
+ parquet_options = bigquery.format_options.ParquetOptions()
+ parquet_options.enable_list_inference = True
+ job_config = bigquery.LoadJobConfig()
+ job_config.source_format = bigquery.SourceFormat.PARQUET
+ job_config.parquet_options = parquet_options
+
+ return client.load_table_from_file(reader, table_name, job_config=job_config,)
+
+
# TODO: Optimizations
# * Use GENERATE_UUID() instead of ROW_NUMBER(), or join on entity columns directly
# * Precompute ROW_NUMBER() so that it doesn't have to be recomputed for every query on entity_dataframe
diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -277,11 +277,16 @@ def _python_value_to_proto_value(feast_value_type, value) -> ProtoValue:
def python_value_to_proto_value(
value: Any, feature_type: ValueType = None
) -> ProtoValue:
- value_type = (
- python_type_to_feast_value_type("", value)
- if value is not None
- else feature_type
- )
+ value_type = feature_type
+ if value is not None:
+ if isinstance(value, (list, np.ndarray)):
+ value_type = (
+ feature_type
+ if len(value) == 0
+ else python_type_to_feast_value_type("", value)
+ )
+ else:
+ value_type = python_type_to_feast_value_type("", value)
return _python_value_to_proto_value(value_type, value)
| diff --git a/sdk/python/tests/integration/feature_repos/universal/data_sources/bigquery.py b/sdk/python/tests/integration/feature_repos/universal/data_sources/bigquery.py
--- a/sdk/python/tests/integration/feature_repos/universal/data_sources/bigquery.py
+++ b/sdk/python/tests/integration/feature_repos/universal/data_sources/bigquery.py
@@ -6,7 +6,10 @@
from feast import BigQuerySource
from feast.data_source import DataSource
-from feast.infra.offline_stores.bigquery import BigQueryOfflineStoreConfig
+from feast.infra.offline_stores.bigquery import (
+ BigQueryOfflineStoreConfig,
+ _write_df_to_bq,
+)
from tests.integration.feature_repos.universal.data_source_creator import (
DataSourceCreator,
)
@@ -61,15 +64,12 @@ def create_data_source(
self.create_dataset()
- job_config = bigquery.LoadJobConfig()
if self.gcp_project not in destination_name:
destination_name = (
f"{self.gcp_project}.{self.project_name}.{destination_name}"
)
- job = self.client.load_table_from_dataframe(
- df, destination_name, job_config=job_config
- )
+ job = _write_df_to_bq(self.client, df, destination_name)
job.result()
self.tables.append(destination_name)
diff --git a/sdk/python/tests/integration/offline_store/test_historical_retrieval.py b/sdk/python/tests/integration/offline_store/test_historical_retrieval.py
--- a/sdk/python/tests/integration/offline_store/test_historical_retrieval.py
+++ b/sdk/python/tests/integration/offline_store/test_historical_retrieval.py
@@ -19,7 +19,10 @@
from feast.feature import Feature
from feast.feature_store import FeatureStore, _validate_feature_refs
from feast.feature_view import FeatureView
-from feast.infra.offline_stores.bigquery import BigQueryOfflineStoreConfig
+from feast.infra.offline_stores.bigquery import (
+ BigQueryOfflineStoreConfig,
+ _write_df_to_bq,
+)
from feast.infra.offline_stores.offline_utils import (
DEFAULT_ENTITY_DF_EVENT_TIMESTAMP_COL,
)
@@ -62,9 +65,8 @@ def stage_driver_hourly_stats_parquet_source(directory, df):
def stage_driver_hourly_stats_bigquery_source(df, table_id):
client = bigquery.Client()
- job_config = bigquery.LoadJobConfig()
df.reset_index(drop=True, inplace=True)
- job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
+ job = _write_df_to_bq(client, df, table_id)
job.result()
@@ -99,9 +101,8 @@ def feature_service(name: str, views) -> FeatureService:
def stage_customer_daily_profile_bigquery_source(df, table_id):
client = bigquery.Client()
- job_config = bigquery.LoadJobConfig()
df.reset_index(drop=True, inplace=True)
- job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
+ job = _write_df_to_bq(client, df, table_id)
job.result()
@@ -231,9 +232,8 @@ def get_expected_training_df(
def stage_orders_bigquery(df, table_id):
client = bigquery.Client()
- job_config = bigquery.LoadJobConfig()
df.reset_index(drop=True, inplace=True)
- job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
+ job = _write_df_to_bq(client, df, table_id)
job.result()
diff --git a/sdk/python/tests/integration/registration/test_universal_types.py b/sdk/python/tests/integration/registration/test_universal_types.py
--- a/sdk/python/tests/integration/registration/test_universal_types.py
+++ b/sdk/python/tests/integration/registration/test_universal_types.py
@@ -36,13 +36,6 @@ def populate_test_configs(offline: bool):
# For offline tests, don't need to vary for online store
if offline and test_repo_config.online_store == REDIS_CONFIG:
continue
- # TODO(https://github.com/feast-dev/feast/issues/1839): Fix BQ materialization of list features
- if (
- not offline
- and test_repo_config.provider == "gcp"
- and feature_is_list is True
- ):
- continue
configs.append(
TypeTestConfig(
entity_type=entity_type,
@@ -255,16 +248,10 @@ def assert_feature_list_types(
"bool": "bool",
}
assert str(historical_features_df.dtypes["value"]) == "object"
- if provider == "gcp":
- assert (
- feature_list_dtype_to_expected_historical_feature_list_dtype[feature_dtype]
- in type(historical_features_df.value[0]["list"][0]["item"]).__name__
- )
- else:
- assert (
- feature_list_dtype_to_expected_historical_feature_list_dtype[feature_dtype]
- in type(historical_features_df.value[0][0]).__name__
- )
+ assert (
+ feature_list_dtype_to_expected_historical_feature_list_dtype[feature_dtype]
+ in type(historical_features_df.value[0][0]).__name__
+ )
def assert_expected_arrow_types(
@@ -287,18 +274,10 @@ def assert_expected_arrow_types(
feature_dtype
]
if feature_is_list:
- if provider == "gcp":
- assert str(
- historical_features_arrow.schema.field_by_name("value").type
- ) in [
- f"struct<list: list<item: struct<item: {arrow_type}>> not null>",
- f"struct<list: list<item: struct<item: {arrow_type}>>>",
- ]
- else:
- assert (
- str(historical_features_arrow.schema.field_by_name("value").type)
- == f"list<item: {arrow_type}>"
- )
+ assert (
+ str(historical_features_arrow.schema.field_by_name("value").type)
+ == f"list<item: {arrow_type}>"
+ )
else:
assert (
str(historical_features_arrow.schema.field_by_name("value").type)
diff --git a/sdk/python/tests/utils/data_source_utils.py b/sdk/python/tests/utils/data_source_utils.py
--- a/sdk/python/tests/utils/data_source_utils.py
+++ b/sdk/python/tests/utils/data_source_utils.py
@@ -7,6 +7,7 @@
from feast import BigQuerySource, FileSource
from feast.data_format import ParquetFormat
+from feast.infra.offline_stores.bigquery import _write_df_to_bq
@contextlib.contextmanager
@@ -38,9 +39,7 @@ def simple_bq_source_using_table_ref_arg(
client.update_dataset(dataset, ["default_table_expiration_ms"])
table_ref = f"{gcp_project}.{bigquery_dataset}.table_{random.randrange(100, 999)}"
- job = client.load_table_from_dataframe(
- df, table_ref, job_config=bigquery.LoadJobConfig()
- )
+ job = _write_df_to_bq(client, df, table_ref)
job.result()
return BigQuerySource(
| Array/list feature materialization in BQ crashes in type conversion
Currently, if you try to use BQ and materialize a feature that is a list (of numbers, strings, etc.), Feast will crash because in BQ, the value type of the feature is a dictionary, such as
`{'list': [{'item': 3}, {'item': 3}]}`
In materialize, we convert the latest values retrieval job to a pyarrow table and then convert it to ValueProtos to write. This calls
`python_type_to_feast_value_type`, which doesn't support dict types.
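A minimal sketch of the shape mismatch (purely illustrative, not the eventual fix): the BQ result wraps arrays in a dict that would have to be unwrapped before the type conversion could handle it:
```python
def unwrap_bq_list(value):
    # {'list': [{'item': 3}, {'item': 3}]} -> [3, 3]; anything else passes through untouched.
    if isinstance(value, dict) and "list" in value:
        return [element["item"] for element in value["list"]]
    return value


assert unwrap_bq_list({"list": [{"item": 3}, {"item": 3}]}) == [3, 3]
assert unwrap_bq_list(7) == 7
```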
| 2021-09-20T17:25:24 |
|
feast-dev/feast | 1,906 | feast-dev__feast-1906 | [
"1904"
] | 0342aa4be4882d738ac69beb3a52564ca5cdb09e | diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -245,7 +245,13 @@ def _type_err(item, dtype):
ValueType, Tuple[str, Any, Optional[Set[Type]]]
] = {
ValueType.INT32: ("int32_val", lambda x: int(x), None),
- ValueType.INT64: ("int64_val", lambda x: int(x), None),
+ ValueType.INT64: (
+ "int64_val",
+ lambda x: int(x.timestamp())
+ if isinstance(x, pd._libs.tslibs.timestamps.Timestamp)
+ else int(x),
+ None,
+ ),
ValueType.FLOAT: ("float_val", lambda x: float(x), None),
ValueType.DOUBLE: ("double_val", lambda x: x, {float, np.float64}),
ValueType.STRING: ("string_val", lambda x: str(x), None),
@@ -317,7 +323,7 @@ def python_value_to_proto_value(
value: Any, feature_type: ValueType = ValueType.UNKNOWN
) -> ProtoValue:
value_type = feature_type
- if value is not None:
+ if value is not None and feature_type == ValueType.UNKNOWN:
if isinstance(value, (list, np.ndarray)):
value_type = (
feature_type
| diff --git a/sdk/python/tests/integration/online_store/test_e2e_local.py b/sdk/python/tests/integration/online_store/test_e2e_local.py
--- a/sdk/python/tests/integration/online_store/test_e2e_local.py
+++ b/sdk/python/tests/integration/online_store/test_e2e_local.py
@@ -27,7 +27,7 @@ def _assert_online_features(
):
"""Assert that features in online store are up to date with `max_date` date."""
# Read features back
- result = store.get_online_features(
+ response = store.get_online_features(
features=[
"driver_hourly_stats:conv_rate",
"driver_hourly_stats:avg_daily_trips",
@@ -36,8 +36,14 @@ def _assert_online_features(
],
entity_rows=[{"driver_id": 1001}],
full_feature_names=True,
- ).to_dict()
+ )
+
+ # Float features should still be floats from the online store...
+ assert (
+ response.field_values[0].fields["driver_hourly_stats__conv_rate"].float_val > 0
+ )
+ result = response.to_dict()
assert len(result) == 5
assert "driver_hourly_stats__avg_daily_trips" in result
assert "driver_hourly_stats__conv_rate" in result
diff --git a/sdk/python/tests/integration/online_store/test_universal_online.py b/sdk/python/tests/integration/online_store/test_universal_online.py
--- a/sdk/python/tests/integration/online_store/test_universal_online.py
+++ b/sdk/python/tests/integration/online_store/test_universal_online.py
@@ -110,24 +110,27 @@ def test_online_retrieval(environment, universal_data_sources, full_feature_name
assert df_features["customer_id"] == online_features_dict["customer_id"][i]
assert df_features["driver_id"] == online_features_dict["driver_id"][i]
- assert (
+ tc.assertAlmostEqual(
online_features_dict[
response_feature_name("conv_rate_plus_100", full_feature_names)
- ][i]
- == df_features["conv_rate"] + 100
+ ][i],
+ df_features["conv_rate"] + 100,
+ delta=0.0001,
)
- assert (
+ tc.assertAlmostEqual(
online_features_dict[
response_feature_name("conv_rate_plus_val_to_add", full_feature_names)
- ][i]
- == df_features["conv_rate"] + df_features["val_to_add"]
+ ][i],
+ df_features["conv_rate"] + df_features["val_to_add"],
+ delta=0.0001,
)
for unprefixed_feature_ref in unprefixed_feature_refs:
- tc.assertEqual(
+ tc.assertAlmostEqual(
df_features[unprefixed_feature_ref],
online_features_dict[
response_feature_name(unprefixed_feature_ref, full_feature_names)
][i],
+ delta=0.0001,
)
# Check what happens for missing values
@@ -254,13 +257,15 @@ def assert_feature_service_correctness(
+ 3
) # Add two for the driver id and the customer id entity keys and val_to_add request data
+ tc = unittest.TestCase()
for i, entity_row in enumerate(entity_rows):
df_features = get_latest_feature_values_from_dataframes(
drivers_df, customers_df, orders_df, global_df, entity_row
)
- assert (
+ tc.assertAlmostEqual(
feature_service_online_features_dict[
response_feature_name("conv_rate_plus_100", full_feature_names)
- ][i]
- == df_features["conv_rate"] + 100
+ ][i],
+ df_features["conv_rate"] + 100,
+ delta=0.0001,
)
| Float features are materialized into online stores as doubles
## Expected Behavior
For features defined as floats, online retrieval should NOT return double type.
## Current Behavior
Float features are materialized into online stores as double type.
## Steps to reproduce
Add the following assertion inside the `_assert_online_features()` function in `test_e2e_local.py`:
```python
assert response.field_values[0].fields[f"driver_hourly_stats__conv_rate"].float_val > 0
```
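The assertion fails because `float_val` and `double_val` are separate fields on the value proto, so a value written into `double_val` leaves `float_val` at its 0.0 default. A minimal sketch (the proto import path may differ between versions):
```python
from feast.protos.feast.types.Value_pb2 import Value

written_as_double = Value(double_val=0.67)
print(written_as_double.float_val)  # 0.0 -- what the failing assertion observes

written_as_float = Value(float_val=0.67)
print(written_as_float.float_val)  # ~0.67
```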
### Specifications
- Version: 0.13.0
- Platform: all
- Subsystem: Python SDK
## Possible Solution
In the function `python_value_to_proto_value()` of `type_map.py`,
`value_type` perhaps should be set to `feature_type` if `feature_type` is not `None`:
```python
def python_value_to_proto_value(
value: Any, feature_type: ValueType = None
) -> ProtoValue:
value_type = (
feature_type
if feature_type is not None
else python_type_to_feast_value_type("", value)
)
return _python_value_to_proto_value(value_type, value)
```
| Thanks for the bug report @Agent007 - I noticed the same thing yesterday when working on https://github.com/feast-dev/feast-java/pull/37.
| 2021-09-23T21:09:40 |
feast-dev/feast | 1,915 | feast-dev__feast-1915 | [
"1841"
] | 5d0f37b5096052ba8defc4c29cd355c1d9dbd901 | diff --git a/sdk/python/feast/infra/utils/aws_utils.py b/sdk/python/feast/infra/utils/aws_utils.py
--- a/sdk/python/feast/infra/utils/aws_utils.py
+++ b/sdk/python/feast/infra/utils/aws_utils.py
@@ -7,7 +7,13 @@
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
-from tenacity import retry, retry_if_exception_type, wait_exponential
+from tenacity import (
+ retry,
+ retry_if_exception_type,
+ stop_after_attempt,
+ stop_after_delay,
+ wait_exponential,
+)
from feast.errors import RedshiftCredentialsError, RedshiftQueryError
from feast.type_map import pa_to_redshift_value_type
@@ -53,6 +59,7 @@ def get_bucket_and_key(s3_path: str) -> Tuple[str, str]:
@retry(
wait=wait_exponential(multiplier=1, max=4),
retry=retry_if_exception_type(ConnectionClosedError),
+ stop=stop_after_attempt(5),
)
def execute_redshift_statement_async(
redshift_data_client, cluster_id: str, database: str, user: str, query: str
@@ -88,6 +95,7 @@ class RedshiftStatementNotFinishedError(Exception):
@retry(
wait=wait_exponential(multiplier=1, max=30),
retry=retry_if_exception_type(RedshiftStatementNotFinishedError),
+ stop=stop_after_delay(300), # 300 seconds
)
def wait_for_redshift_statement(redshift_data_client, statement: dict) -> None:
"""Waits for the Redshift statement to finish. Raises RedshiftQueryError if the statement didn't succeed.
| Redshift does not exit long running query
https://github.com/feast-dev/feast/blob/fd255cae7153cd44432f172b1e5c4738e4a7d583/sdk/python/feast/infra/utils/aws_utils.py#L88-L91
There is no timeout on this line of code, which means it is possible for queries to run indefinitely if they don't raise an exception. I've personally run into a query that has run for 25 minutes with no response.
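For reference, a hedged sketch of how a `tenacity` retry loop can be bounded so polling cannot run forever (the limits shown are illustrative, not a recommendation of specific values):
```python
from tenacity import retry, retry_if_exception_type, stop_after_delay, wait_exponential


class StatementNotFinishedError(Exception):
    pass


@retry(
    wait=wait_exponential(multiplier=1, max=30),
    retry=retry_if_exception_type(StatementNotFinishedError),
    stop=stop_after_delay(300),  # give up after ~5 minutes instead of retrying indefinitely
    reraise=True,
)
def wait_for_statement():
    ...  # poll the statement status and raise StatementNotFinishedError while it is still running
```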
| 2021-09-30T22:13:08 |
||
feast-dev/feast | 1,916 | feast-dev__feast-1916 | [
"1902"
] | 5d0f37b5096052ba8defc4c29cd355c1d9dbd901 | diff --git a/sdk/python/feast/infra/offline_stores/bigquery.py b/sdk/python/feast/infra/offline_stores/bigquery.py
--- a/sdk/python/feast/infra/offline_stores/bigquery.py
+++ b/sdk/python/feast/infra/offline_stores/bigquery.py
@@ -1,6 +1,7 @@
+import contextlib
import uuid
from datetime import date, datetime, timedelta
-from typing import Dict, List, Optional, Union
+from typing import Callable, ContextManager, Dict, Iterator, List, Optional, Union
import numpy as np
import pandas as pd
@@ -122,38 +123,47 @@ def get_historical_features(
client, client.project, config.offline_store.dataset
)
- entity_schema = _upload_entity_df_and_get_entity_schema(
- client=client, table_name=table_reference, entity_df=entity_df,
- )
+ @contextlib.contextmanager
+ def query_generator() -> Iterator[str]:
+ entity_schema = _upload_entity_df_and_get_entity_schema(
+ client=client, table_name=table_reference, entity_df=entity_df,
+ )
- entity_df_event_timestamp_col = offline_utils.infer_event_timestamp_from_entity_df(
- entity_schema
- )
+ entity_df_event_timestamp_col = offline_utils.infer_event_timestamp_from_entity_df(
+ entity_schema
+ )
- expected_join_keys = offline_utils.get_expected_join_keys(
- project, feature_views, registry
- )
+ expected_join_keys = offline_utils.get_expected_join_keys(
+ project, feature_views, registry
+ )
- offline_utils.assert_expected_columns_in_entity_df(
- entity_schema, expected_join_keys, entity_df_event_timestamp_col
- )
+ offline_utils.assert_expected_columns_in_entity_df(
+ entity_schema, expected_join_keys, entity_df_event_timestamp_col
+ )
- # Build a query context containing all information required to template the BigQuery SQL query
- query_context = offline_utils.get_feature_view_query_context(
- feature_refs, feature_views, registry, project,
- )
+ # Build a query context containing all information required to template the BigQuery SQL query
+ query_context = offline_utils.get_feature_view_query_context(
+ feature_refs, feature_views, registry, project,
+ )
- # Generate the BigQuery SQL query from the query context
- query = offline_utils.build_point_in_time_query(
- query_context,
- left_table_query_string=table_reference,
- entity_df_event_timestamp_col=entity_df_event_timestamp_col,
- query_template=MULTIPLE_FEATURE_VIEW_POINT_IN_TIME_JOIN,
- full_feature_names=full_feature_names,
- )
+ # Generate the BigQuery SQL query from the query context
+ query = offline_utils.build_point_in_time_query(
+ query_context,
+ left_table_query_string=table_reference,
+ entity_df_event_timestamp_col=entity_df_event_timestamp_col,
+ query_template=MULTIPLE_FEATURE_VIEW_POINT_IN_TIME_JOIN,
+ full_feature_names=full_feature_names,
+ )
+
+ try:
+ yield query
+ finally:
+ # Asynchronously clean up the uploaded Bigquery table, which will expire
+ # if cleanup fails
+ client.delete_table(table=table_reference, not_found_ok=True)
return BigQueryRetrievalJob(
- query=query,
+ query=query_generator,
client=client,
config=config,
full_feature_names=full_feature_names,
@@ -166,13 +176,22 @@ def get_historical_features(
class BigQueryRetrievalJob(RetrievalJob):
def __init__(
self,
- query: str,
+ query: Union[str, Callable[[], ContextManager[str]]],
client: bigquery.Client,
config: RepoConfig,
full_feature_names: bool,
on_demand_feature_views: Optional[List[OnDemandFeatureView]],
):
- self.query = query
+ if not isinstance(query, str):
+ self._query_generator = query
+ else:
+
+ @contextlib.contextmanager
+ def query_generator() -> Iterator[str]:
+ assert isinstance(query, str)
+ yield query
+
+ self._query_generator = query_generator
self.client = client
self.config = config
self._full_feature_names = full_feature_names
@@ -187,15 +206,16 @@ def on_demand_feature_views(self) -> Optional[List[OnDemandFeatureView]]:
return self._on_demand_feature_views
def _to_df_internal(self) -> pd.DataFrame:
- # TODO: Ideally only start this job when the user runs "get_historical_features", not when they run to_df()
- df = self.client.query(self.query).to_dataframe(create_bqstorage_client=True)
- return df
+ with self._query_generator() as query:
+ df = self.client.query(query).to_dataframe(create_bqstorage_client=True)
+ return df
def to_sql(self) -> str:
"""
Returns the SQL query that will be executed in BigQuery to build the historical feature table.
"""
- return self.query
+ with self._query_generator() as query:
+ return query
def to_bigquery(
self,
@@ -215,36 +235,39 @@ def to_bigquery(
Returns:
Returns the destination table name or returns None if job_config.dry_run is True.
"""
+ with self._query_generator() as query:
+ if not job_config:
+ today = date.today().strftime("%Y%m%d")
+ rand_id = str(uuid.uuid4())[:7]
+ path = f"{self.client.project}.{self.config.offline_store.dataset}.historical_{today}_{rand_id}"
+ job_config = bigquery.QueryJobConfig(destination=path)
+
+ if not job_config.dry_run and self.on_demand_feature_views is not None:
+ job = _write_pyarrow_table_to_bq(
+ self.client, self.to_arrow(), job_config.destination
+ )
+ job.result()
+ print(f"Done writing to '{job_config.destination}'.")
+ return str(job_config.destination)
+
+ bq_job = self.client.query(query, job_config=job_config)
+
+ if job_config.dry_run:
+ print(
+ "This query will process {} bytes.".format(
+ bq_job.total_bytes_processed
+ )
+ )
+ return None
+
+ block_until_done(client=self.client, bq_job=bq_job, timeout=timeout)
- if not job_config:
- today = date.today().strftime("%Y%m%d")
- rand_id = str(uuid.uuid4())[:7]
- path = f"{self.client.project}.{self.config.offline_store.dataset}.historical_{today}_{rand_id}"
- job_config = bigquery.QueryJobConfig(destination=path)
-
- if not job_config.dry_run and self.on_demand_feature_views is not None:
- job = _write_pyarrow_table_to_bq(
- self.client, self.to_arrow(), job_config.destination
- )
- job.result()
print(f"Done writing to '{job_config.destination}'.")
return str(job_config.destination)
- bq_job = self.client.query(self.query, job_config=job_config)
-
- if job_config.dry_run:
- print(
- "This query will process {} bytes.".format(bq_job.total_bytes_processed)
- )
- return None
-
- block_until_done(client=self.client, bq_job=bq_job, timeout=timeout)
-
- print(f"Done writing to '{job_config.destination}'.")
- return str(job_config.destination)
-
def _to_arrow_internal(self) -> pyarrow.Table:
- return self.client.query(self.query).to_arrow()
+ with self._query_generator() as query:
+ return self.client.query(query).to_arrow()
def block_until_done(
@@ -325,13 +348,13 @@ def _upload_entity_df_and_get_entity_schema(
limited_entity_df = (
client.query(f"SELECT * FROM {table_name} LIMIT 1").result().to_dataframe()
)
+
entity_schema = dict(zip(limited_entity_df.columns, limited_entity_df.dtypes))
elif isinstance(entity_df, pd.DataFrame):
# Drop the index so that we dont have unnecessary columns
entity_df.reset_index(drop=True, inplace=True)
job = _write_df_to_bq(client, entity_df, table_name)
block_until_done(client, job)
-
entity_schema = dict(zip(entity_df.columns, entity_df.dtypes))
else:
raise InvalidEntityType(type(entity_df))
diff --git a/sdk/python/feast/infra/offline_stores/redshift.py b/sdk/python/feast/infra/offline_stores/redshift.py
--- a/sdk/python/feast/infra/offline_stores/redshift.py
+++ b/sdk/python/feast/infra/offline_stores/redshift.py
@@ -153,16 +153,17 @@ def query_generator() -> Iterator[str]:
full_feature_names=full_feature_names,
)
- yield query
-
- # Clean up the uploaded Redshift table
- aws_utils.execute_redshift_statement(
- redshift_client,
- config.offline_store.cluster_id,
- config.offline_store.database,
- config.offline_store.user,
- f"DROP TABLE {table_name}",
- )
+ try:
+ yield query
+ finally:
+ # Always clean up the uploaded Redshift table
+ aws_utils.execute_redshift_statement(
+ redshift_client,
+ config.offline_store.cluster_id,
+ config.offline_store.database,
+ config.offline_store.user,
+ f"DROP TABLE IF EXISTS {table_name}",
+ )
return RedshiftRetrievalJob(
query=query_generator,
| Clean up entity dataframes or temporary tables by default
**Is your feature request related to a problem? Please describe.**
Currently when a user runs `get_historical_features()` Feast will upload or create an entity dataframe in the offline store. Feast will then build a training dataset and return that to the user. Feast does not clean up the temporary tables, but often sets an expiry. This means that a data warehouse constantly has a set of temporary tables waiting to expire.
**Describe the solution you'd like**
Delete entity dataframe tables after a training dataset has been created (as well as other temporary tables in the creation process).
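A minimal sketch (function names are illustrative) of one way to guarantee this: hand the temporary table to the retrieval job through a context manager whose `finally` block always drops it, even when building the dataset fails:
```python
import contextlib
from typing import Iterator


def upload_entity_df(table_name: str) -> None:
    print(f"-- create temporary table {table_name}")  # stand-in for the real upload


def drop_entity_table(table_name: str) -> None:
    print(f"DROP TABLE IF EXISTS {table_name}")  # stand-in for the real cleanup


@contextlib.contextmanager
def temporary_entity_table(table_name: str) -> Iterator[str]:
    upload_entity_df(table_name)
    try:
        yield table_name
    finally:
        drop_entity_table(table_name)  # runs whether or not the retrieval succeeded


with temporary_entity_table("feast_entity_df_1a2b3c") as table:
    print(f"-- join features against {table}")
```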
| 2021-09-30T22:27:31 |
||
feast-dev/feast | 1,921 | feast-dev__feast-1921 | [
"1728"
] | f31ea811e03efa285ace91990c2c7acf47363109 | diff --git a/sdk/python/feast/infra/offline_stores/bigquery.py b/sdk/python/feast/infra/offline_stores/bigquery.py
--- a/sdk/python/feast/infra/offline_stores/bigquery.py
+++ b/sdk/python/feast/infra/offline_stores/bigquery.py
@@ -52,6 +52,13 @@ class BigQueryOfflineStoreConfig(FeastConfigBaseModel):
project_id: Optional[StrictStr] = None
""" (optional) GCP project name used for the BigQuery offline store """
+ location: Optional[StrictStr] = None
+ """ (optional) GCP location name used for the BigQuery offline store.
+ Examples of location names include ``US``, ``EU``, ``us-central1``, ``us-west4``.
+ If a location is not specified, the location defaults to the ``US`` multi-regional location.
+ For more information on BigQuery data locations see: https://cloud.google.com/bigquery/docs/locations
+ """
+
class BigQueryOfflineStore(OfflineStore):
@staticmethod
@@ -79,7 +86,10 @@ def pull_latest_from_table_or_query(
timestamp_desc_string = " DESC, ".join(timestamps) + " DESC"
field_string = ", ".join(join_key_columns + feature_name_columns + timestamps)
- client = _get_bigquery_client(project=config.offline_store.project_id)
+ client = _get_bigquery_client(
+ project=config.offline_store.project_id,
+ location=config.offline_store.location,
+ )
query = f"""
SELECT
{field_string}
@@ -115,7 +125,10 @@ def get_historical_features(
# TODO: Add entity_df validation in order to fail before interacting with BigQuery
assert isinstance(config.offline_store, BigQueryOfflineStoreConfig)
- client = _get_bigquery_client(project=config.offline_store.project_id)
+ client = _get_bigquery_client(
+ project=config.offline_store.project_id,
+ location=config.offline_store.location,
+ )
assert isinstance(config.offline_store, BigQueryOfflineStoreConfig)
@@ -367,9 +380,9 @@ def _upload_entity_df_and_get_entity_schema(
return entity_schema
-def _get_bigquery_client(project: Optional[str] = None):
+def _get_bigquery_client(project: Optional[str] = None, location: Optional[str] = None):
try:
- client = bigquery.Client(project=project)
+ client = bigquery.Client(project=project, location=location)
except DefaultCredentialsError as e:
raise FeastProviderLoginError(
str(e)
| Support for BQ Datasets not in US
## Expected Behavior
Currently Feast assumes that the location of my datasets is always the US, as this is the current default location for the BQ client.
It would be good to have the possibility to specify the location somewhere.
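For example, a configuration along these lines would cover it (the `location` option matches what the patch above introduces; the other values are illustrative):
```yaml
project: my_project
registry: data/registry.db
provider: gcp
offline_store:
  type: bigquery
  dataset: feast_dataset
  location: EU  # e.g. US, EU, us-central1, us-west4
```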
## Current Behavior
```
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/feast/infra/offline_stores/bigquery.py", line 239, in to_df
df = self.client.query(self.query).to_dataframe(create_bqstorage_client=True)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/cloud/bigquery/job/query.py", line 1340, in to_dataframe
query_result = wait_for_query(self, progress_bar_type)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/cloud/bigquery/_tqdm_helpers.py", line 65, in wait_for_query
return query_job.result()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/cloud/bigquery/job/query.py", line 1172, in result
super(QueryJob, self).result(retry=retry, timeout=timeout)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/cloud/bigquery/job/base.py", line 679, in result
return super(_AsyncJob, self).result(timeout=timeout, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/api_core/future/polling.py", line 134, in result
raise self._exception
google.api_core.exceptions.NotFound: 404 Not found: Dataset project_id:test_feast was not found in location US
```
| You should be able to set the `dataset` name manually as part of your bigquery configuration (to a pre-existing dataset). Or would you like to **not** set the dataset name but only the region?
@woop thanks for your quick response! I am able to manually set my table reference as part of the BigQuery source configuration. When I do `feast apply` everything works well, but when I try to store some data in that table I receive a message saying that the dataset can't be found in the US. So yes, responding to your question, I would like to set the location for a certain table (and the corresponding dataset).
> @woop thanks for your quick response! I am able to manually set my table reference as part of the BigQuery source configuration. When I do `feast apply` everything works well, but when I try to store some data in that table I receive a message saying that the dataset can't be found in the US. So yes, responding to your question, I would like to set the location for a certain table (and the corresponding dataset).
@elia-secchi can you try to delete that dataset, recreate it manually in the right region, and then configure Feast to use your dataset? I am assuming you are still using the automatically generated one (which defaults to the US).
Separately, we can also add a configuration option for this, but I think the above is a workaround for the time being.
@woop thanks for the help, the workaround you suggested solved the issue I was having!
@woop I set the BQ dataset to "asia-northeast3" region, but it doesn't work.
Is there any way to manually set the region? | 2021-10-02T01:25:10 |
|
feast-dev/feast | 1,940 | feast-dev__feast-1940 | [
"1938"
] | e8a151941c2ea53ce72aadb254573cd6966e1a82 | diff --git a/sdk/python/feast/repo_config.py b/sdk/python/feast/repo_config.py
--- a/sdk/python/feast/repo_config.py
+++ b/sdk/python/feast/repo_config.py
@@ -277,14 +277,7 @@ def write_to_path(self, repo_path: Path):
config_path = repo_path / "feature_store.yaml"
with open(config_path, mode="w") as f:
yaml.dump(
- yaml.safe_load(
- self.json(
- exclude={"repo_path"},
- exclude_none=True,
- exclude_unset=True,
- exclude_defaults=True,
- )
- ),
+ yaml.safe_load(self.json(exclude={"repo_path"}, exclude_unset=True,)),
f,
sort_keys=False,
)
| feast alpha enable command messes up feature_store.yaml
## Expected Behavior
Calling `feast alpha enable python_feature_server` should add
```
flags:
alpha_features: true
python_feature_server: true
```
to `feature_store.yaml`.
## Current Behavior
Instead, it changes
```
project: sharing_sculpin
registry: data/registry.db
provider: local
online_store:
path: data/online_store.db
```
to
```
project: sharing_sculpin
provider: local
online_store:
path: data/online_store.db
offline_store: {}
flags:
alpha_features: true
python_feature_server: true
```
This still works due to the Pydantic defaults, but can be very confusing for users.
Edit: in some cases, it actually doesn't preserve the same effect. For example, the command changes
```
project: sharing_sculpin
provider: local
online_store:
type: redis
offline_store: {}
flags:
alpha_features: true
python_feature_server: true
```
to
```
project: sharing_sculpin
provider: local
online_store: {}
offline_store: {}
flags:
alpha_features: true
python_feature_server: true
```
which is incorrect since the `online_store` no longer has type `redis`.
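A minimal pydantic sketch (the models here are illustrative, not Feast's real config classes) of one way such serialization options lose information: `exclude_defaults` drops any field whose value happens to equal the default, even if the user set it explicitly, whereas `exclude_unset` keeps everything that was actually provided:
```python
from pydantic import BaseModel


class OnlineStoreConfig(BaseModel):
    type: str = "sqlite"
    path: str = "data/online_store.db"


class Config(BaseModel):
    project: str
    provider: str = "local"
    online_store: OnlineStoreConfig = OnlineStoreConfig()


cfg = Config(
    project="sharing_sculpin",
    provider="local",
    online_store=OnlineStoreConfig(type="redis"),
)
print(cfg.json(exclude_defaults=True))  # "provider" and the default online store path silently disappear
print(cfg.json(exclude_unset=True))     # keeps every field the user explicitly set
```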
## Steps to reproduce
Call `feast init` to create a new `feature_repo`. Then call `feast alpha enable python_feature_server`, and inspect `feature_store.yaml` before and after this command.
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
| 2021-10-14T05:31:45 |
||
feast-dev/feast | 1,946 | feast-dev__feast-1946 | [
"1892"
] | 0212728dc2251e8b9af3d0a94dc62797fe7c4567 | diff --git a/sdk/python/feast/constants.py b/sdk/python/feast/constants.py
--- a/sdk/python/feast/constants.py
+++ b/sdk/python/feast/constants.py
@@ -24,3 +24,6 @@
# Environment variable for toggling usage
FEAST_USAGE = "FEAST_USAGE"
+
+# Environment variable for the path for overwriting universal test configs
+FULL_REPO_CONFIGS_MODULE_ENV_NAME: str = "FULL_REPO_CONFIGS_MODULE"
| diff --git a/sdk/python/tests/conftest.py b/sdk/python/tests/conftest.py
--- a/sdk/python/tests/conftest.py
+++ b/sdk/python/tests/conftest.py
@@ -40,6 +40,9 @@ def pytest_configure(config):
"markers", "integration: mark test that has external dependencies"
)
config.addinivalue_line("markers", "benchmark: mark benchmarking tests")
+ config.addinivalue_line(
+ "markers", "universal: mark tests that use the universal feature repo"
+ )
def pytest_addoption(parser):
@@ -52,11 +55,15 @@ def pytest_addoption(parser):
parser.addoption(
"--benchmark", action="store_true", default=False, help="Run benchmark tests",
)
+ parser.addoption(
+ "--universal", action="store_true", default=False, help="Run universal tests",
+ )
def pytest_collection_modifyitems(config, items: List[Item]):
should_run_integration = config.getoption("--integration") is True
should_run_benchmark = config.getoption("--benchmark") is True
+ should_run_universal = config.getoption("--universal") is True
integration_tests = [t for t in items if "integration" in t.keywords]
if not should_run_integration:
@@ -76,6 +83,12 @@ def pytest_collection_modifyitems(config, items: List[Item]):
for t in benchmark_tests:
items.append(t)
+ universal_tests = [t for t in items if "universal" in t.keywords]
+ if should_run_universal:
+ items.clear()
+ for t in universal_tests:
+ items.append(t)
+
@pytest.fixture
def simple_dataset_1() -> pd.DataFrame:
diff --git a/sdk/python/tests/integration/e2e/test_universal_e2e.py b/sdk/python/tests/integration/e2e/test_universal_e2e.py
--- a/sdk/python/tests/integration/e2e/test_universal_e2e.py
+++ b/sdk/python/tests/integration/e2e/test_universal_e2e.py
@@ -12,6 +12,7 @@
@pytest.mark.integration
[email protected]
@pytest.mark.parametrize("infer_features", [True, False])
def test_e2e_consistency(environment, e2e_data_sources, infer_features):
fs = environment.feature_store
diff --git a/sdk/python/tests/integration/feature_repos/repo_configuration.py b/sdk/python/tests/integration/feature_repos/repo_configuration.py
--- a/sdk/python/tests/integration/feature_repos/repo_configuration.py
+++ b/sdk/python/tests/integration/feature_repos/repo_configuration.py
@@ -1,3 +1,5 @@
+import importlib
+import os
import tempfile
import uuid
from contextlib import contextmanager
@@ -9,6 +11,7 @@
import pandas as pd
from feast import FeatureStore, FeatureView, RepoConfig, driver_test_data
+from feast.constants import FULL_REPO_CONFIGS_MODULE_ENV_NAME
from feast.data_source import DataSource
from tests.integration.feature_repos.universal.data_source_creator import (
DataSourceCreator,
@@ -61,7 +64,15 @@ def __repr__(self) -> str:
DYNAMO_CONFIG = {"type": "dynamodb", "region": "us-west-2"}
REDIS_CONFIG = {"type": "redis", "connection_string": "localhost:6379,db=0"}
-FULL_REPO_CONFIGS: List[IntegrationTestRepoConfig] = [
+
+# FULL_REPO_CONFIGS contains the repo configurations (e.g. provider, offline store,
+# online store, test data, and more parameters) that most integration tests will test
+# against. By default, FULL_REPO_CONFIGS uses the three providers (local, GCP, and AWS)
+# with their default offline and online stores; it also tests the providers with the
+# Redis online store. It can be overwritten by specifying a Python module through the
+# FULL_REPO_CONFIGS_MODULE_ENV_NAME environment variable. In this case, that Python
+# module will be imported and FULL_REPO_CONFIGS will be extracted from the file.
+DEFAULT_FULL_REPO_CONFIGS: List[IntegrationTestRepoConfig] = [
# Local configurations
IntegrationTestRepoConfig(),
IntegrationTestRepoConfig(online_store=REDIS_CONFIG),
@@ -88,6 +99,17 @@ def __repr__(self) -> str:
online_store=REDIS_CONFIG,
),
]
+full_repo_configs_module = os.environ.get(FULL_REPO_CONFIGS_MODULE_ENV_NAME)
+if full_repo_configs_module is not None:
+ try:
+ module = importlib.import_module(full_repo_configs_module)
+ FULL_REPO_CONFIGS = getattr(module, "FULL_REPO_CONFIGS")
+ except Exception:
+ pass
+ finally:
+ FULL_REPO_CONFIGS = DEFAULT_FULL_REPO_CONFIGS
+else:
+ FULL_REPO_CONFIGS = DEFAULT_FULL_REPO_CONFIGS
def construct_universal_entities() -> Dict[str, List[Any]]:
diff --git a/sdk/python/tests/integration/offline_store/test_universal_historical_retrieval.py b/sdk/python/tests/integration/offline_store/test_universal_historical_retrieval.py
--- a/sdk/python/tests/integration/offline_store/test_universal_historical_retrieval.py
+++ b/sdk/python/tests/integration/offline_store/test_universal_historical_retrieval.py
@@ -231,6 +231,7 @@ def get_expected_training_df(
@pytest.mark.integration
[email protected]
@pytest.mark.parametrize("full_feature_names", [True, False], ids=lambda v: str(v))
def test_historical_features(environment, universal_data_sources, full_feature_names):
store = environment.feature_store
diff --git a/sdk/python/tests/integration/online_store/test_universal_online.py b/sdk/python/tests/integration/online_store/test_universal_online.py
--- a/sdk/python/tests/integration/online_store/test_universal_online.py
+++ b/sdk/python/tests/integration/online_store/test_universal_online.py
@@ -22,6 +22,7 @@
@pytest.mark.integration
[email protected]
@pytest.mark.parametrize("full_feature_names", [True, False], ids=lambda v: str(v))
def test_online_retrieval(environment, universal_data_sources, full_feature_names):
diff --git a/sdk/python/tests/integration/registration/test_universal_odfv_feature_inference.py b/sdk/python/tests/integration/registration/test_universal_odfv_feature_inference.py
--- a/sdk/python/tests/integration/registration/test_universal_odfv_feature_inference.py
+++ b/sdk/python/tests/integration/registration/test_universal_odfv_feature_inference.py
@@ -11,6 +11,7 @@
@pytest.mark.integration
[email protected]
@pytest.mark.parametrize("infer_features", [True, False], ids=lambda v: str(v))
def test_infer_odfv_features(environment, universal_data_sources, infer_features):
store = environment.feature_store
@@ -33,6 +34,7 @@ def test_infer_odfv_features(environment, universal_data_sources, infer_features
@pytest.mark.integration
[email protected]
def test_infer_odfv_features_with_error(environment, universal_data_sources):
store = environment.feature_store
diff --git a/sdk/python/tests/integration/registration/test_universal_types.py b/sdk/python/tests/integration/registration/test_universal_types.py
--- a/sdk/python/tests/integration/registration/test_universal_types.py
+++ b/sdk/python/tests/integration/registration/test_universal_types.py
@@ -116,6 +116,7 @@ def get_fixtures(request):
@pytest.mark.integration
[email protected]
def test_entity_inference_types_match(offline_types_test_fixtures):
environment, config, data_source, fv = offline_types_test_fixtures
fs = environment.feature_store
@@ -139,6 +140,7 @@ def test_entity_inference_types_match(offline_types_test_fixtures):
@pytest.mark.integration
[email protected]
def test_feature_get_historical_features_types_match(offline_types_test_fixtures):
environment, config, data_source, fv = offline_types_test_fixtures
fs = environment.feature_store
@@ -185,6 +187,7 @@ def test_feature_get_historical_features_types_match(offline_types_test_fixtures
@pytest.mark.integration
[email protected]
def test_feature_get_online_features_types_match(online_types_test_fixtures):
environment, config, data_source, fv = online_types_test_fixtures
fv = create_feature_view(
| Allow plugin repos to test against universal test suite
**Is your feature request related to a problem? Please describe.**
There are several plugin repos for custom connectors (Hive, Azure, Snowflake, etc.), and there is increasing interest from the community in contributing plugins. One blocker for many folks is that there is no easy way to test their custom connector against our universal test suite. Someone working on a plugin repo should be able to test their connector against the universal test suite with minimal changes in their repo.
**Describe the solution you'd like**
The Feast team has come up with two solutions.
The first solution is a temporary fix to unblock folks who wish to start testing immediately. We recommend that you add `feast` as a [git submodule](https://git-scm.com/book/en/v2/Git-Tools-Submodules) in your plugin repo, and then install `feast` in editable mode by navigating to `feast` and running `pip install -e sdk/python/[ci]` as detailed [here](https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md). This will allow you to `import feast`, and will also allow you to run our test suite with `pytest`. For example, in `feast` you should be able to run `make test`, and all unit tests should succeed. In order to run the full suite of integration tests with your custom connector, all you need to do is modify `FULL_REPO_CONFIGS` in `sdk/python/tests/integration/feature_repos/repo_configuration.py`. Most of our integration tests rely on pytest fixtures defined in `conftest.py`, most of which are parametrized based on `FULL_REPO_CONFIGS`. The main thing you will need to do in order to overwrite `FULL_REPO_CONFIGS` is to write a `DataSourceCreator`. We consider this solution a temporary fix because it still requires the user to modify the `feast` repo directly, even if it's in a git submodule.
The second solution, which extends the first solution to be more viable in the long-term, will be to allow users to overwrite `FULL_REPO_CONFIGS` through an environment variable. This means that after adding `feast` as a git submodule, users should be able to directly run integration tests without ever needing to modify the `feast` repo. We intend to build this functionality out eventually, but are currently working on several other higher-priority features. If anyone in the community wants to take this on, that would be great!
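As a sketch of how that could look from a plugin repo once the environment variable exists (the module path, creator class, and `offline_store_creator` parameter name are assumptions for illustration):
```python
# my_plugin/feast_tests.py -- hypothetical module the plugin repo exposes, selected via
# `export FULL_REPO_CONFIGS_MODULE=my_plugin.feast_tests` before running pytest.
from tests.integration.feature_repos.repo_configuration import IntegrationTestRepoConfig

from my_plugin.tests.data_source_creator import MyPluginDataSourceCreator  # hypothetical DataSourceCreator subclass

FULL_REPO_CONFIGS = [
    # Run the universal tests against the plugin's offline store with the default online store.
    IntegrationTestRepoConfig(offline_store_creator=MyPluginDataSourceCreator),
]
```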
**Describe alternatives you've considered**
N/A
**Additional context**
Add any other context or screenshots about the feature request here.
| 2021-10-14T19:48:44 |
|
feast-dev/feast | 1,955 | feast-dev__feast-1955 | [
"1912"
] | 2541c91c9238ef09ffd74e45e74116a31d7f2daa | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -69,7 +69,7 @@
"google-cloud-bigquery>=2.28.1",
"google-cloud-bigquery-storage >= 2.0.0",
"google-cloud-datastore>=2.1.*",
- "google-cloud-storage>=1.34.*",
+ "google-cloud-storage>=1.34.*,<1.41",
"google-cloud-core==1.4.*",
]
@@ -115,7 +115,7 @@
"google-cloud-bigquery>=2.28.1",
"google-cloud-bigquery-storage >= 2.0.0",
"google-cloud-datastore>=2.1.*",
- "google-cloud-storage>=1.20.*",
+ "google-cloud-storage>=1.20.*,<1.41",
"google-cloud-core==1.4.*",
"redis-py-cluster==2.1.2",
"boto3==1.17.*",
| Solve feast conflict dependencies for [gcp]
## Expected Behavior
`pip-compile` should run without error and result in a nice lock file of the libraries to install
## Current Behavior
`pip-compile` will find conflicts in the current feast[gcp] dependencies. I did not try [aws] or [redis].
<img width="1774" alt="image" src="https://user-images.githubusercontent.com/18557047/135331637-5c3d47ad-ebe0-4a27-b335-93617675027f.png">
## Steps to reproduce
```bash
echo "-e sdk/python[gcp]" > sdk/python/requirements.txt
pip-compile --dry-run sdk/python/requirements.txt
```
<img width="1244" alt="image" src="https://user-images.githubusercontent.com/18557047/135332916-c368ca80-3276-40ab-a3bd-42c48d52c2e9.png">
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
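One hedged sketch, consistent with the version bound applied in the patch above: constrain `google-cloud-storage` when compiling until the `[gcp]` and `[ci]` extras agree on a range.
```
# sdk/python/requirements.txt (illustrative)
-e sdk/python[gcp]
google-cloud-storage>=1.34.0,<1.41.0
```
With that pin in place, `pip-compile` may be able to resolve the set without the conflict shown above.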
| 2021-10-18T07:00:53 |
||
feast-dev/feast | 1,959 | feast-dev__feast-1959 | [
"1958",
"1958"
] | 019552e8b2780deb1cae256f3513006f43b70981 | diff --git a/sdk/python/feast/infra/offline_stores/bigquery.py b/sdk/python/feast/infra/offline_stores/bigquery.py
--- a/sdk/python/feast/infra/offline_stores/bigquery.py
+++ b/sdk/python/feast/infra/offline_stores/bigquery.py
@@ -136,7 +136,10 @@ def get_historical_features(
assert isinstance(config.offline_store, BigQueryOfflineStoreConfig)
table_reference = _get_table_reference_for_new_entity(
- client, client.project, config.offline_store.dataset
+ client,
+ client.project,
+ config.offline_store.dataset,
+ config.offline_store.location,
)
@contextlib.contextmanager
@@ -340,13 +343,16 @@ def _wait_until_done(bq_job):
def _get_table_reference_for_new_entity(
- client: Client, dataset_project: str, dataset_name: str
+ client: Client,
+ dataset_project: str,
+ dataset_name: str,
+ dataset_location: Optional[str],
) -> str:
"""Gets the table_id for the new entity to be uploaded."""
# First create the BigQuery dataset if it doesn't exist
dataset = bigquery.Dataset(f"{dataset_project}.{dataset_name}")
- dataset.location = "US"
+ dataset.location = dataset_location if dataset_location else "US"
try:
client.get_dataset(dataset)
| Dataset location hardcoded to US when using BigQueryOfflineStore.get_historical_features()
## Expected Behavior
Create the BQ dataset in the location specified in feature_store.yaml if the dataset doesn't exist
## Current Behavior
Always creates the BQ dataset in the US, raising a _dataset not found_ error since the dataset is not located in the same region as the one specified in the _feature_store.yaml_ config file
## Steps to reproduce
Use a BigQuery offline_store with location set to EU in feature_store.yaml
### Specifications
https://github.com/feast-dev/feast/blob/ec4c02bb149bd8462697644ea20e2660ddfba81a/sdk/python/feast/infra/offline_stores/bigquery.py#L340
- Version:
- Platform:
- Subsystem:
## Possible Solution
update feast.infra.offline_stores.bigquery._get_table_reference_for_new_entity() by adding a new _dataset_location_ arg
| I'll do a PR | 2021-10-21T08:56:47 |
|
feast-dev/feast | 1,968 | feast-dev__feast-1968 | [
"1967"
] | 9359441700e565e92bafa79ca37416e1c2b95b75 | diff --git a/sdk/python/feast/infra/online_stores/redis.py b/sdk/python/feast/infra/online_stores/redis.py
--- a/sdk/python/feast/infra/online_stores/redis.py
+++ b/sdk/python/feast/infra/online_stores/redis.py
@@ -142,7 +142,7 @@ def _get_client(self, online_store_config: RedisOnlineStoreConfig):
startup_nodes, kwargs = self._parse_connection_string(
online_store_config.connection_string
)
- if online_store_config.type == RedisType.redis_cluster:
+ if online_store_config.redis_type == RedisType.redis_cluster:
kwargs["startup_nodes"] = startup_nodes
self._client = RedisCluster(**kwargs)
else:
| Redis Cluster materialization error
## Expected Behavior
Materialization for Redis Cluster works
## Current Behavior
During materialization, Feast uses the Redis client instead of the RedisCluster client
## Steps to reproduce
configure redis cluster
```
project: my_feature_repo
registry: data/registry.db
provider: local
online_store:
type: redis
redis_type: redis_cluster
connection_string: "redis1:6379,redis2:6379,ssl=true,password=my_password"
```
try to materialize
### Specifications
- Version: 0.14.0
- Platform: python 3.7
- Subsystem: linux (anaconda3 docker image)
## Possible Solution
Correct the `_get_client` method in `redis.py`, l. 124
| 2021-10-23T23:22:12 |
||
feast-dev/feast | 1,990 | feast-dev__feast-1990 | [
"1795"
] | 600d38eb7f2e638929d2b0038c20c00df236b613 | diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -368,6 +368,7 @@ def apply(
OnDemandFeatureView,
RequestFeatureView,
FeatureService,
+ FeatureTable,
List[
Union[
FeatureView,
@@ -375,9 +376,21 @@ def apply(
RequestFeatureView,
Entity,
FeatureService,
+ FeatureTable,
]
],
],
+ objects_to_delete: List[
+ Union[
+ FeatureView,
+ OnDemandFeatureView,
+ RequestFeatureView,
+ Entity,
+ FeatureService,
+ FeatureTable,
+ ]
+ ] = [],
+ partial: bool = True,
commit: bool = True,
):
"""Register objects to metadata store and update related infrastructure.
@@ -389,6 +402,10 @@ def apply(
Args:
objects: A single object, or a list of objects that should be registered with the Feature Store.
+ objects_to_delete: A list of objects to be deleted from the registry and removed from the
+ provider's infrastructure. This deletion will only be performed if partial is set to False.
+ partial: If True, apply will only handle the specified objects; if False, apply will also delete
+ all the objects in objects_to_delete, and tear down any associated cloud resources.
commit: whether to commit changes to the registry
Raises:
@@ -421,11 +438,26 @@ def apply(
assert isinstance(objects, list)
+ # Separate all objects into entities, feature services, and different feature view types.
+ entities_to_update = [ob for ob in objects if isinstance(ob, Entity)]
views_to_update = [ob for ob in objects if isinstance(ob, FeatureView)]
request_views_to_update = [
ob for ob in objects if isinstance(ob, RequestFeatureView)
]
odfvs_to_update = [ob for ob in objects if isinstance(ob, OnDemandFeatureView)]
+ services_to_update = [ob for ob in objects if isinstance(ob, FeatureService)]
+ tables_to_update = [ob for ob in objects if isinstance(ob, FeatureTable)]
+
+ if len(entities_to_update) + len(views_to_update) + len(
+ request_views_to_update
+ ) + len(odfvs_to_update) + len(services_to_update) + len(
+ tables_to_update
+ ) != len(
+ objects
+ ):
+ raise ValueError("Unknown object type provided as part of apply() call")
+
+ # Validate all types of feature views.
if (
not flags_helper.enable_on_demand_feature_views(self.config)
and len(odfvs_to_update) > 0
@@ -438,8 +470,6 @@ def apply(
_validate_feature_views(
[*views_to_update, *odfvs_to_update, *request_views_to_update]
)
- entities_to_update = [ob for ob in objects if isinstance(ob, Entity)]
- services_to_update = [ob for ob in objects if isinstance(ob, FeatureService)]
# Make inferences
update_entities_with_inferred_types_from_feature_views(
@@ -456,12 +486,7 @@ def apply(
for odfv in odfvs_to_update:
odfv.infer_features()
- if len(views_to_update) + len(entities_to_update) + len(
- services_to_update
- ) + len(odfvs_to_update) + len(request_views_to_update) != len(objects):
- raise ValueError("Unknown object type provided as part of apply() call")
-
- # DUMMY_ENTITY is a placeholder entity used in entityless FeatureViews
+ # Handle all entityless feature views by using DUMMY_ENTITY as a placeholder entity.
DUMMY_ENTITY = Entity(
name=DUMMY_ENTITY_NAME,
join_key=DUMMY_ENTITY_ID,
@@ -469,6 +494,7 @@ def apply(
)
entities_to_update.append(DUMMY_ENTITY)
+ # Add all objects to the registry and update the provider's infrastructure.
for view in itertools.chain(
views_to_update, odfvs_to_update, request_views_to_update
):
@@ -477,14 +503,62 @@ def apply(
self._registry.apply_entity(ent, project=self.project, commit=False)
for feature_service in services_to_update:
self._registry.apply_feature_service(feature_service, project=self.project)
+ for table in tables_to_update:
+ self._registry.apply_feature_table(table, project=self.project)
+
+ if not partial:
+ # Delete all registry objects that should not exist.
+ entities_to_delete = [
+ ob for ob in objects_to_delete if isinstance(ob, Entity)
+ ]
+ views_to_delete = [
+ ob for ob in objects_to_delete if isinstance(ob, FeatureView)
+ ]
+ request_views_to_delete = [
+ ob for ob in objects_to_delete if isinstance(ob, RequestFeatureView)
+ ]
+ odfvs_to_delete = [
+ ob for ob in objects_to_delete if isinstance(ob, OnDemandFeatureView)
+ ]
+ services_to_delete = [
+ ob for ob in objects_to_delete if isinstance(ob, FeatureService)
+ ]
+ tables_to_delete = [
+ ob for ob in objects_to_delete if isinstance(ob, FeatureTable)
+ ]
+
+ for entity in entities_to_delete:
+ self._registry.delete_entity(
+ entity.name, project=self.project, commit=False
+ )
+ for view in views_to_delete:
+ self._registry.delete_feature_view(
+ view.name, project=self.project, commit=False
+ )
+ for request_view in request_views_to_delete:
+ self._registry.delete_feature_view(
+ request_view.name, project=self.project, commit=False
+ )
+ for odfv in odfvs_to_delete:
+ self._registry.delete_feature_view(
+ odfv.name, project=self.project, commit=False
+ )
+ for service in services_to_delete:
+ self._registry.delete_feature_service(
+ service.name, project=self.project, commit=False
+ )
+ for table in tables_to_delete:
+ self._registry.delete_feature_table(
+ table.name, project=self.project, commit=False
+ )
self._get_provider().update_infra(
project=self.project,
- tables_to_delete=[],
- tables_to_keep=views_to_update,
- entities_to_delete=[],
+ tables_to_delete=views_to_delete + tables_to_delete if not partial else [],
+ tables_to_keep=views_to_update + tables_to_update,
+ entities_to_delete=entities_to_delete if not partial else [],
entities_to_keep=entities_to_update,
- partial=True,
+ partial=partial,
)
if commit:
diff --git a/sdk/python/feast/registry.py b/sdk/python/feast/registry.py
--- a/sdk/python/feast/registry.py
+++ b/sdk/python/feast/registry.py
@@ -629,6 +629,32 @@ def delete_feature_view(self, name: str, project: str, commit: bool = True):
raise FeatureViewNotFoundException(name, project)
+ def delete_entity(self, name: str, project: str, commit: bool = True):
+ """
+ Deletes an entity or raises an exception if not found.
+
+ Args:
+ name: Name of entity
+ project: Feast project that this entity belongs to
+ commit: Whether the change should be persisted immediately
+ """
+ self._prepare_registry_for_changes()
+ assert self.cached_registry_proto
+
+ for idx, existing_entity_proto in enumerate(
+ self.cached_registry_proto.entities
+ ):
+ if (
+ existing_entity_proto.spec.name == name
+ and existing_entity_proto.spec.project == project
+ ):
+ del self.cached_registry_proto.entities[idx]
+ if commit:
+ self.commit()
+ return
+
+ raise EntityNotFoundException(name, project)
+
def commit(self):
"""Commits the state of the registry cache to the remote registry store."""
if self.cached_registry_proto:
diff --git a/sdk/python/feast/repo_operations.py b/sdk/python/feast/repo_operations.py
--- a/sdk/python/feast/repo_operations.py
+++ b/sdk/python/feast/repo_operations.py
@@ -13,9 +13,8 @@
from feast import Entity, FeatureTable
from feast.base_feature_view import BaseFeatureView
from feast.feature_service import FeatureService
-from feast.feature_store import FeatureStore, _validate_feature_views
+from feast.feature_store import FeatureStore
from feast.feature_view import FeatureView
-from feast.infra.provider import get_provider
from feast.names import adjectives, animals
from feast.on_demand_feature_view import OnDemandFeatureView
from feast.registry import Registry
@@ -142,13 +141,6 @@ def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation
registry._initialize_registry()
sys.dont_write_bytecode = True
repo = parse_repo(repo_path)
- _validate_feature_views(
- [
- *list(repo.feature_views),
- *list(repo.on_demand_feature_views),
- *list(repo.request_feature_views),
- ]
- )
if not skip_source_validation:
data_sources = [t.batch_source for t in repo.feature_views]
@@ -156,6 +148,7 @@ def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation
for data_source in data_sources:
data_source.validate(store.config)
+ # For each object in the registry, determine whether it should be kept or deleted.
entities_to_keep, entities_to_delete = _tag_registry_entities_for_keep_delete(
project, registry, repo
)
@@ -169,50 +162,60 @@ def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation
tables_to_keep, tables_to_delete = _tag_registry_tables_for_keep_delete(
project, registry, repo
)
- (services_to_keep, services_to_delete,) = _tag_registry_services_for_keep_delete(
+ services_to_keep, services_to_delete = _tag_registry_services_for_keep_delete(
project, registry, repo
)
sys.dont_write_bytecode = False
- # Delete views that should not exist
- for registry_view in views_to_delete:
- registry.delete_feature_view(registry_view.name, project=project, commit=False)
+ # Apply all changes to the registry and infrastructure.
+ all_to_apply: List[
+ Union[
+ Entity, BaseFeatureView, FeatureService, OnDemandFeatureView, FeatureTable
+ ]
+ ] = []
+ all_to_apply.extend(entities_to_keep)
+ all_to_apply.extend(views_to_keep)
+ all_to_apply.extend(services_to_keep)
+ all_to_apply.extend(odfvs_to_keep)
+ all_to_apply.extend(tables_to_keep)
+ all_to_delete: List[
+ Union[
+ Entity, BaseFeatureView, FeatureService, OnDemandFeatureView, FeatureTable
+ ]
+ ] = []
+ all_to_delete.extend(entities_to_delete)
+ all_to_delete.extend(views_to_delete)
+ all_to_delete.extend(services_to_delete)
+ all_to_delete.extend(odfvs_to_delete)
+ all_to_delete.extend(tables_to_delete)
+
+ store.apply(
+ all_to_apply, objects_to_delete=all_to_delete, partial=False, commit=False
+ )
+
+ for entity in entities_to_delete:
click.echo(
- f"Deleted feature view {Style.BRIGHT + Fore.GREEN}{registry_view.name}{Style.RESET_ALL} from registry"
+ f"Deleted entity {Style.BRIGHT + Fore.GREEN}{entity.name}{Style.RESET_ALL} from registry"
)
-
- # Delete feature services that should not exist
- for feature_service_to_delete in services_to_delete:
- registry.delete_feature_service(
- feature_service_to_delete.name, project=project, commit=False
+ for view in views_to_delete:
+ click.echo(
+ f"Deleted feature view {Style.BRIGHT + Fore.GREEN}{view.name}{Style.RESET_ALL} from registry"
)
+ for odfv in odfvs_to_delete:
click.echo(
- f"Deleted feature service {Style.BRIGHT + Fore.GREEN}{feature_service_to_delete.name}{Style.RESET_ALL} "
- f"from registry"
+ f"Deleted on demand feature view {Style.BRIGHT + Fore.GREEN}{odfv.name}{Style.RESET_ALL} from registry"
)
-
- # Delete tables that should not exist
- for registry_table in tables_to_delete:
- registry.delete_feature_table(
- registry_table.name, project=project, commit=False
+ for table in tables_to_delete:
+ click.echo(
+ f"Deleted feature table {Style.BRIGHT + Fore.GREEN}{table.name}{Style.RESET_ALL} from registry"
)
+ for feature_service in services_to_delete:
click.echo(
- f"Deleted feature table {Style.BRIGHT + Fore.GREEN}{registry_table.name}{Style.RESET_ALL} from registry"
+ f"Deleted feature service {Style.BRIGHT + Fore.GREEN}{feature_service.name}{Style.RESET_ALL} "
+ f"from registry"
)
- # TODO: delete entities from the registry too
-
- # Add / update views + entities + services
- all_to_apply: List[
- Union[Entity, BaseFeatureView, FeatureService, OnDemandFeatureView]
- ] = []
- all_to_apply.extend(entities_to_keep)
- all_to_apply.extend(views_to_keep)
- all_to_apply.extend(services_to_keep)
- all_to_apply.extend(odfvs_to_keep)
- # TODO: delete odfvs
- store.apply(all_to_apply, commit=False)
for entity in entities_to_keep:
click.echo(
f"Registered entity {Style.BRIGHT + Fore.GREEN}{entity.name}{Style.RESET_ALL}"
@@ -231,12 +234,10 @@ def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation
)
# Create tables that should exist
for table in tables_to_keep:
- registry.apply_feature_table(table, project, commit=False)
click.echo(
f"Registered feature table {Style.BRIGHT + Fore.GREEN}{table.name}{Style.RESET_ALL}"
)
- infra_provider = get_provider(repo_config, repo_path)
views_to_keep_in_infra = [
view for view in views_to_keep if isinstance(view, FeatureView)
]
@@ -257,21 +258,6 @@ def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation
)
# TODO: consider echoing also entities being deployed/removed
- all_to_delete: List[Union[FeatureTable, FeatureView]] = []
- all_to_delete.extend(tables_to_delete)
- all_to_delete.extend(views_to_delete_from_infra)
- all_to_keep: List[Union[FeatureTable, FeatureView]] = []
- all_to_keep.extend(tables_to_keep)
- all_to_keep.extend(views_to_keep_in_infra)
- infra_provider.update_infra(
- project,
- tables_to_delete=all_to_delete,
- tables_to_keep=all_to_keep,
- entities_to_delete=list(entities_to_delete),
- entities_to_keep=list(entities_to_keep),
- partial=False,
- )
-
# Commit the update to the registry only after successful infra update
registry.commit()
| diff --git a/sdk/python/tests/integration/registration/test_registry.py b/sdk/python/tests/integration/registration/test_registry.py
--- a/sdk/python/tests/integration/registration/test_registry.py
+++ b/sdk/python/tests/integration/registration/test_registry.py
@@ -101,6 +101,10 @@ def test_apply_entity_success(test_registry):
and entity.labels["team"] == "matchmaking"
)
+ test_registry.delete_entity("driver_car_id", project)
+ entities = test_registry.list_entities(project)
+ assert len(entities) == 0
+
test_registry.teardown()
# Will try to reload registry, which will fail because the file has been deleted
| Duplicate update_infra calls in `feast apply`
## Expected Behavior
Running `feast apply` only calls `update_infra()` on a provider once.
## Current Behavior
Running `feast apply` calls `update_infra()` on a provider twice.
## Steps to reproduce
There are two places where `update_infra()` is being called:
* https://github.com/feast-dev/feast/blob/745a1b43d20c0169b675b1f28039854205fb8180/sdk/python/feast/repo_operations.py#L229
* https://github.com/feast-dev/feast/blob/745a1b43d20c0169b675b1f28039854205fb8180/sdk/python/feast/repo_operations.py#L188
## Possible Solution
Remove one of the update_infra calls
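For reference, a minimal sketch of what the consolidated path could look like, assuming the `objects_to_delete` / `partial` parameters that the patch above adds to `FeatureStore.apply()`; the wrapper name `apply_total_once` is made up for illustration:
```python
from typing import List, Union

from feast import Entity, FeatureTable
from feast.base_feature_view import BaseFeatureView
from feast.feature_service import FeatureService
from feast.feature_store import FeatureStore
from feast.on_demand_feature_view import OnDemandFeatureView
from feast.registry import Registry

FeastObject = Union[
    Entity, BaseFeatureView, FeatureService, OnDemandFeatureView, FeatureTable
]


def apply_total_once(
    store: FeatureStore,
    registry: Registry,
    to_apply: List[FeastObject],
    to_delete: List[FeastObject],
) -> None:
    # Route everything through FeatureStore.apply(), which performs the single
    # provider.update_infra() call, instead of calling update_infra() a second
    # time from the CLI code path.
    store.apply(to_apply, objects_to_delete=to_delete, partial=False, commit=False)
    # Commit the registry only after the single successful infra update.
    registry.commit()
```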
| 2021-11-01T21:18:25 |
|
feast-dev/feast | 2,002 | feast-dev__feast-2002 | [
"1995"
] | 63680bad344f55e41a281d643bef5972f2ea28da | diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -90,6 +90,8 @@ def feast_value_type_to_pandas_type(value_type: ValueType) -> Any:
ValueType.BOOL: "bool",
ValueType.UNIX_TIMESTAMP: "datetime",
}
+ if value_type.name.endswith("_LIST"):
+ return "object"
if value_type in value_type_to_pandas_type:
return value_type_to_pandas_type[value_type]
raise TypeError(
@@ -451,6 +453,9 @@ def pa_to_redshift_value_type(pa_type: pyarrow.DataType) -> str:
# PyArrow decimal types (e.g. "decimal(38,37)") luckily directly map to the Redshift type.
return pa_type_as_str
+ if pa_type_as_str.startswith("list"):
+ return "super"
+
# We have to take into account how arrow types map to parquet types as well.
# For example, null type maps to int32 in parquet, so we have to use int4 in Redshift.
# Other mappings have also been adjusted accordingly.
| diff --git a/sdk/python/tests/integration/feature_repos/universal/entities.py b/sdk/python/tests/integration/feature_repos/universal/entities.py
--- a/sdk/python/tests/integration/feature_repos/universal/entities.py
+++ b/sdk/python/tests/integration/feature_repos/universal/entities.py
@@ -16,3 +16,7 @@ def customer():
def location():
return Entity(name="location_id", value_type=ValueType.INT64)
+
+
+def item():
+ return Entity(name="item_id", value_type=ValueType.INT64)
diff --git a/sdk/python/tests/integration/feature_repos/universal/feature_views.py b/sdk/python/tests/integration/feature_repos/universal/feature_views.py
--- a/sdk/python/tests/integration/feature_repos/universal/feature_views.py
+++ b/sdk/python/tests/integration/feature_repos/universal/feature_views.py
@@ -1,6 +1,7 @@
from datetime import timedelta
from typing import Dict, List, Optional, Union
+import numpy as np
import pandas as pd
from feast import Feature, FeatureView, OnDemandFeatureView, ValueType
@@ -68,6 +69,40 @@ def conv_rate_plus_100_feature_view(
)
+def similarity(features_df: pd.DataFrame) -> pd.DataFrame:
+ if features_df.size == 0:
+ # give hint to Feast about return type
+ df = pd.DataFrame({"cos_double": [0.0]})
+ df["cos_float"] = df["cos_double"].astype(np.float32)
+ return df
+ vectors_a = features_df["embedding_double"].apply(np.array)
+ vectors_b = features_df["vector_double"].apply(np.array)
+ dot_products = vectors_a.mul(vectors_b).apply(sum)
+ norms_q = vectors_a.apply(np.linalg.norm)
+ norms_doc = vectors_b.apply(np.linalg.norm)
+ df = pd.DataFrame()
+ df["cos_double"] = dot_products / (norms_q * norms_doc)
+ df["cos_float"] = df["cos_double"].astype(np.float32)
+ return df
+
+
+def similarity_feature_view(
+ inputs: Dict[str, Union[RequestDataSource, FeatureView]],
+ infer_features: bool = False,
+ features: Optional[List[Feature]] = None,
+) -> OnDemandFeatureView:
+ _features = features or [
+ Feature("cos_double", ValueType.DOUBLE),
+ Feature("cos_float", ValueType.FLOAT),
+ ]
+ return OnDemandFeatureView(
+ name=similarity.__name__,
+ inputs=inputs,
+ features=[] if infer_features else _features,
+ udf=similarity,
+ )
+
+
def create_driver_age_request_feature_view():
return RequestFeatureView(
name="driver_age",
@@ -83,6 +118,32 @@ def create_conv_rate_request_data_source():
)
+def create_similarity_request_data_source():
+ return RequestDataSource(
+ name="similarity_input",
+ schema={
+ "vector_double": ValueType.DOUBLE_LIST,
+ "vector_float": ValueType.FLOAT_LIST,
+ },
+ )
+
+
+def create_item_embeddings_feature_view(source, infer_features: bool = False):
+ item_embeddings_feature_view = FeatureView(
+ name="item_embeddings",
+ entities=["item"],
+ features=None
+ if infer_features
+ else [
+ Feature(name="embedding_double", dtype=ValueType.DOUBLE_LIST),
+ Feature(name="embedding_float", dtype=ValueType.FLOAT_LIST),
+ ],
+ batch_source=source,
+ ttl=timedelta(hours=2),
+ )
+ return item_embeddings_feature_view
+
+
def create_driver_hourly_stats_feature_view(source, infer_features: bool = False):
driver_stats_feature_view = FeatureView(
name="driver_stats",
diff --git a/sdk/python/tests/integration/registration/test_universal_odfv_feature_inference.py b/sdk/python/tests/integration/registration/test_universal_odfv_feature_inference.py
--- a/sdk/python/tests/integration/registration/test_universal_odfv_feature_inference.py
+++ b/sdk/python/tests/integration/registration/test_universal_odfv_feature_inference.py
@@ -1,12 +1,19 @@
+from datetime import datetime
+
+import pandas as pd
import pytest
from feast import Feature, ValueType
from feast.errors import SpecifiedFeaturesNotPresentError
-from tests.integration.feature_repos.universal.entities import customer, driver
+from feast.infra.offline_stores.file_source import FileSource
+from tests.integration.feature_repos.universal.entities import customer, driver, item
from tests.integration.feature_repos.universal.feature_views import (
conv_rate_plus_100_feature_view,
create_conv_rate_request_data_source,
create_driver_hourly_stats_feature_view,
+ create_item_embeddings_feature_view,
+ create_similarity_request_data_source,
+ similarity_feature_view,
)
@@ -33,6 +40,37 @@ def test_infer_odfv_features(environment, universal_data_sources, infer_features
assert len(odfv.features) == 3
[email protected]
[email protected]("infer_features", [True, False], ids=lambda v: str(v))
+def test_infer_odfv_list_features(environment, infer_features, tmp_path):
+ fake_embedding = [1.0, 1.0]
+ items_df = pd.DataFrame(
+ data={
+ "item_id": [0],
+ "embedding_float": [fake_embedding],
+ "embedding_double": [fake_embedding],
+ "event_timestamp": [pd.Timestamp(datetime.utcnow())],
+ "created": [pd.Timestamp(datetime.utcnow())],
+ }
+ )
+ output_path = f"{tmp_path}/items.parquet"
+ items_df.to_parquet(output_path)
+ fake_items_src = FileSource(
+ path=output_path,
+ event_timestamp_column="event_timestamp",
+ created_timestamp_column="created",
+ )
+ items = create_item_embeddings_feature_view(fake_items_src)
+ sim_odfv = similarity_feature_view(
+ {"items": items, "input_request": create_similarity_request_data_source()},
+ infer_features=infer_features,
+ )
+ store = environment.feature_store
+ store.apply([item(), items, sim_odfv])
+ odfv = store.get_on_demand_feature_view("similarity")
+ assert len(odfv.features) == 2
+
+
@pytest.mark.integration
@pytest.mark.universal
def test_infer_odfv_features_with_error(environment, universal_data_sources):
| Registering ODFV UDFs that operate on lists of numbers (e.g., cosine similarity of embeddings/vectors) throws an error
## Expected Behavior
Registering ODFV UDFs that operate on lists of numbers (e.g., cosine similarity of embeddings/vectors) should not
throw errors.
## Current Behavior
Defining an ODFV UDF such as cosine similarity and then running `feast apply` will result in the following error:
```
def feast_value_type_to_pandas_type(value_type: ValueType) -> Any:
value_type_to_pandas_type: Dict[ValueType, str] = {
ValueType.FLOAT: "float",
ValueType.INT32: "int",
ValueType.INT64: "int",
ValueType.STRING: "str",
ValueType.DOUBLE: "float",
ValueType.BYTES: "bytes",
ValueType.BOOL: "bool",
ValueType.UNIX_TIMESTAMP: "datetime",
}
if value_type in value_type_to_pandas_type:
return value_type_to_pandas_type[value_type]
raise TypeError(
> f"Casting to pandas type for type {value_type} failed. "
f"Type {value_type} not found"
)
E TypeError: Casting to pandas type for type ValueType.DOUBLE_LIST failed. Type ValueType.DOUBLE_LIST not found
```
## Steps to reproduce
Define an ODFV UDF for cosine similarity and try to register it:
```python
from feast import Entity, Feature, FeatureView, ValueType
from feast.data_source import RequestDataSource
from feast.infra.offline_stores.file_source import FileSource
from feast.on_demand_feature_view import on_demand_feature_view
from google.protobuf.duration_pb2 import Duration
import numpy as np
import pandas as pd
item = Entity(
name="item_id",
value_type=ValueType.INT64,
description="item ID",
)
items_fv = FeatureView(
name="items",
entities=["item"],
features=[
Feature(name="embedding", dtype=ValueType.DOUBLE_LIST),
],
batch_source=FileSource(
path="YOUR_PATH",
event_timestamp_column="event_timestamp",
created_timestamp_column="created",
),
online=True,
ttl=Duration(),
tags={},
)
similarity_req = RequestDataSource(
name="similarity_input",
schema={
"vector": ValueType.DOUBLE_LIST,
},
)
@on_demand_feature_view(
inputs={
"items": items_fv,
"similarity_req": similarity_req,
},
features=[
Feature(name="cos", dtype=ValueType.DOUBLE),
],
)
def similarity(features_df: pd.DataFrame) -> pd.DataFrame:
if features_df.size == 0:
return pd.DataFrame({"cos": [0.0]}) # give hint to Feast about return type
vectors_a = features_df["embedding"].apply(np.array)
vectors_b = features_df["vector"].apply(np.array)
dot_products = vectors_a.mul(vectors_b).apply(sum)
norms_q = vectors_a.apply(np.linalg.norm)
norms_doc = vectors_b.apply(np.linalg.norm)
df = pd.DataFrame()
df["cos"] = dot_products / (norms_q * norms_doc)
return df
```
### Specifications
- Version: 0.14.0
- Platform: all
- Subsystem: Python SDK
## Possible Solution
Add the following 2 lines to `feast_value_type_to_pandas_type()` in `type_map.py`:
```python
ValueType.FLOAT_LIST: "object",
ValueType.DOUBLE_LIST: "object",
```
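For context, a sketch of the mapping with the proposed entries in place (reconstructed from the traceback above; treat it as illustrative rather than the exact upstream code):
```python
from typing import Any, Dict

from feast import ValueType


def feast_value_type_to_pandas_type(value_type: ValueType) -> Any:
    value_type_to_pandas_type: Dict[ValueType, str] = {
        ValueType.FLOAT: "float",
        ValueType.INT32: "int",
        ValueType.INT64: "int",
        ValueType.STRING: "str",
        ValueType.DOUBLE: "float",
        ValueType.BYTES: "bytes",
        ValueType.BOOL: "bool",
        ValueType.UNIX_TIMESTAMP: "datetime",
        # Proposed additions: list-valued features become pandas "object" columns.
        ValueType.FLOAT_LIST: "object",
        ValueType.DOUBLE_LIST: "object",
    }
    if value_type in value_type_to_pandas_type:
        return value_type_to_pandas_type[value_type]
    raise TypeError(
        f"Casting to pandas type for type {value_type} failed. "
        f"Type {value_type} not found"
    )
```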
| thanks for filing this! we'll take a look at this
@adchia Thanks! I actually have the fix ready 😄 But feel free to submit it before I do.
Similar to #1640 . @judahrand @karlhigley , FYI, you may be interested in this as well.
Yeah, I ran into this today - it looks like an easy fix. I'll stick a PR in over the next few days if you don't get it in. There are plenty of other issues with ODFVs though. | 2021-11-05T00:35:33 |
feast-dev/feast | 2,016 | feast-dev__feast-2016 | [
"2009"
] | cbe45a087178fadc945f6573c1205079ec2eca57 | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -54,6 +54,7 @@
"pandas>=1.0.0",
"pandavro==1.5.*",
"protobuf>=3.10",
+ "proto-plus",
"pyarrow>=4.0.0",
"pydantic>=1.0.0",
"PyYAML>=5.4.*",
@@ -113,15 +114,10 @@
"firebase-admin==4.5.2",
"pre-commit",
"assertpy==1.1",
- "proto-plus<1.19.7",
- "google-cloud-bigquery>=2.28.1",
- "google-cloud-bigquery-storage >= 2.0.0",
- "google-cloud-datastore>=2.1.*",
- "google-cloud-storage>=1.20.*,<1.41",
- "google-cloud-core==1.4.*",
- "redis-py-cluster==2.1.2",
- "boto3==1.17.*",
-]
+ "pip-tools"
+] + GCP_REQUIRED + REDIS_REQUIRED + AWS_REQUIRED
+
+DEV_REQUIRED = ["mypy-protobuf==1.*", "grpcio-testing==1.*"] + CI_REQUIRED
# Get git repo root directory
repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)
@@ -215,7 +211,7 @@ def run(self):
# https://stackoverflow.com/questions/28509965/setuptools-development-requirements
# Install dev requirements with: pip install -e .[dev]
extras_require={
- "dev": ["mypy-protobuf==1.*", "grpcio-testing==1.*"],
+ "dev": DEV_REQUIRED,
"ci": CI_REQUIRED,
"gcp": GCP_REQUIRED,
"aws": AWS_REQUIRED,
| diff --git a/.github/workflows/unit_tests.yml b/.github/workflows/unit_tests.yml
--- a/.github/workflows/unit_tests.yml
+++ b/.github/workflows/unit_tests.yml
@@ -41,6 +41,8 @@ jobs:
run: make install-python-ci-dependencies
- name: Test Python
run: FEAST_USAGE=False pytest -n 8 --cov=./ --cov-report=xml --verbose --color=yes sdk/python/tests
+ - name: Ensure conflict-free dependencies
+ run: FEAST_USAGE=False pip-compile --dry-run sdk/python/setup.py --extra ci
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
| Feast CLI error - "No module named 'proto'"
## Expected Behavior
Create a feature repo as expected.
## Current Behavior
Feast dies with an error when running the following:
```
~ ❯ feast init feature_repo
Traceback (most recent call last):
  File "/home/richard/anaconda3/envs/feast/bin/feast", line 5, in <module>
    from feast.cli import cli
  File "/home/richard/anaconda3/envs/feast/lib/python3.7/site-packages/feast/__init__.py", line 12, in <module>
    from .feature_service import FeatureService
  File "/home/richard/anaconda3/envs/feast/lib/python3.7/site-packages/feast/feature_service.py", line 6, in <module>
    from feast.base_feature_view import BaseFeatureView
  File "/home/richard/anaconda3/envs/feast/lib/python3.7/site-packages/feast/base_feature_view.py", line 19, in <module>
    from proto import Message
ModuleNotFoundError: No module named 'proto'
```
## Environment
Python 3.7.11
Created a brand new conda environment and then ran `pip install feast`, which installed the following:
```
Successfully installed Click-7.1.2 PyYAML-6.0 anyio-3.3.4 asgiref-3.4.1 attrs-21.2.0 cachetools-4.2.4 charset-normalizer-2.0.7 colorama-0.4.4 dill-0.3.4 fastapi-0.70.0 fastavro-1.4.7 feast-0.15.0 google-api-core-2.2.2 google-auth-2.3.3 googleapis-common-protos-1.52.0 grpcio-1.42.0rc1 grpcio-reflection-1.42.0rc1 h11-0.12.0 httptools-0.2.0 idna-3.3 importlib-metadata-4.8.2 importlib-resources-5.4.0 jsonschema-4.2.1 mmh3-3.0.0 numpy-1.21.4 pandas-1.3.4 pandavro-1.5.2 protobuf-3.19.1 pyarrow-6.0.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pydantic-1.8.2 pyrsistent-0.18.0 python-dateutil-2.8.2 python-dotenv-0.19.1 pytz-2021.3 requests-2.26.0 rsa-4.7.2 six-1.16.0 sniffio-1.2.0 starlette-0.16.0 tabulate-0.8.9 tenacity-8.0.1 toml-0.10.2 tqdm-4.62.3 typing-extensions-3.10.0.2 urllib3-1.26.7 uvicorn-0.15.0 uvloop-0.16.0 watchgod-0.7 websockets-10.0 zipp-3.6.0
```
No other packages installed beforehand.
| Seems to be fixed after running `pip install proto-plus jinja2`. These were not installed during the initial installation via pip.
The main issue here is we have `proto-plus` as a requirement under `CI`, so if a user just does `pip install feast` instead of `pip install feast[ci]`, they'll run into this issue. I'll just move `proto-plus` to be a general requirement.
Yeah, the feast 0.15.0 release on PyPI is now hitting this issue; better to publish a fixed version soon.
> Seems to be fixed after running `pip install proto-plus jinja2`. These were not installed during the initial installation via pip.
This one worked for me.
| 2021-11-10T19:25:36 |
feast-dev/feast | 2,031 | feast-dev__feast-2031 | [
"2030"
] | b456c4612a0d1280d88ff3194789607b1bc8f0fc | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -47,14 +47,14 @@
"google-api-core>=1.23.0",
"googleapis-common-protos==1.52.*",
"grpcio>=1.34.0",
- "grpcio-reflection>=1.34.0"
+ "grpcio-reflection>=1.34.0",
"Jinja2>=2.0.0",
"jsonschema",
"mmh3",
"pandas>=1.0.0",
"pandavro==1.5.*",
"protobuf>=3.10",
- "proto-plus",
+ "proto-plus<1.19.7",
"pyarrow>=4.0.0",
"pydantic>=1.0.0",
"PyYAML>=5.4.*",
@@ -67,7 +67,6 @@
]
GCP_REQUIRED = [
- "proto-plus<1.19.7",
"google-cloud-bigquery>=2.28.1",
"google-cloud-bigquery-storage >= 2.0.0",
"google-cloud-datastore>=2.1.*",
| Error in requirement definition
After installing feast in a conda environment, exporting the environment to a .yml file fails. This is probably related to a missing comma at the end of this line:
https://github.com/feast-dev/feast/blob/63680bad344f55e41a281d643bef5972f2ea28da/sdk/python/setup.py#L50
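For context, the effect of that missing comma (reconstructed from the patch above): Python implicitly concatenates the two adjacent string literals into one malformed requirement, which can then trip up tools that parse the installed package's metadata, such as the environment export here.
```python
# Without the comma, adjacent string literals are concatenated at parse time:
broken = [
    "grpcio-reflection>=1.34.0"  # <-- missing trailing comma
    "Jinja2>=2.0.0",
]
assert broken == ["grpcio-reflection>=1.34.0Jinja2>=2.0.0"]

# With the comma restored, the two requirements stay separate:
fixed = [
    "grpcio-reflection>=1.34.0",
    "Jinja2>=2.0.0",
]
assert fixed == ["grpcio-reflection>=1.34.0", "Jinja2>=2.0.0"]
```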
| 2021-11-12T18:13:51 |
||
feast-dev/feast | 2,066 | feast-dev__feast-2066 | [
"2051"
] | f2f5dc6ec03cca8e234149198add8a81597fe6f0 | diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -888,7 +888,10 @@ def tqdm_builder(length):
@log_exceptions_and_usage
def write_to_online_store(
- self, feature_view_name: str, df: pd.DataFrame,
+ self,
+ feature_view_name: str,
+ df: pd.DataFrame,
+ allow_registry_cache: bool = True,
):
"""
ingests data directly into the Online store
@@ -899,7 +902,9 @@ def write_to_online_store(
)
# TODO: restrict this to work with online StreamFeatureViews and validate the FeatureView type
- feature_view = self._registry.get_feature_view(feature_view_name, self.project)
+ feature_view = self._registry.get_feature_view(
+ feature_view_name, self.project, allow_cache=allow_registry_cache
+ )
entities = []
for entity_name in feature_view.entities:
entities.append(self._registry.get_entity(entity_name, self.project))
diff --git a/sdk/python/feast/registry.py b/sdk/python/feast/registry.py
--- a/sdk/python/feast/registry.py
+++ b/sdk/python/feast/registry.py
@@ -519,19 +519,22 @@ def get_feature_table(self, name: str, project: str) -> FeatureTable:
return FeatureTable.from_proto(feature_table_proto)
raise FeatureTableNotFoundException(name, project)
- def get_feature_view(self, name: str, project: str) -> FeatureView:
+ def get_feature_view(
+ self, name: str, project: str, allow_cache: bool = False
+ ) -> FeatureView:
"""
Retrieves a feature view.
Args:
name: Name of feature view
project: Feast project that this feature view belongs to
+ allow_cache: Allow returning feature view from the cached registry
Returns:
Returns either the specified feature view, or raises an exception if
none is found
"""
- registry_proto = self._get_registry_proto()
+ registry_proto = self._get_registry_proto(allow_cache=allow_cache)
for feature_view_proto in registry_proto.feature_views:
if (
feature_view_proto.spec.name == name
| Make direct data ingestion into online store faster
We use the direct data ingestion capability so we can write inference data in a real-time flow and then use it immediately. We need to be able to ingest in milliseconds. Currently, the online ingestion takes a couple of seconds since it does a lookup to refresh the registry for every call to `write_to_online_store`. This blocks the new capability from being used in real-time.
It would be great to have an `allow_cache` argument added to `write_to_online_store` so that we can do the write without needing to refresh the registry.
For now, I have moved the registry to Postgres so the refresh is quicker, but it is still not quick enough.
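For illustration, a sketch of what the requested call could look like once the flag exists; the feature view name and dataframe below are made up, and the parameter ended up being called `allow_registry_cache` in the patch above:
```python
import pandas as pd

from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Reuse the cached registry instead of refreshing it on every call, so the
# write path stays in the low-millisecond range for real-time inference data.
store.write_to_online_store(
    feature_view_name="inference_features",  # hypothetical feature view
    df=pd.DataFrame({"driver_id": [1001], "pred_score": [0.87]}),
    allow_registry_cache=True,
)
```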
| @benny-hal thanks for the issue - this makes a lot of sense. | 2021-11-18T20:45:43 |
|
feast-dev/feast | 2,107 | feast-dev__feast-2107 | [
"2106"
] | 2d3cea1d1485d0af1a35a7f30abfed475b58973a | diff --git a/sdk/python/feast/infra/passthrough_provider.py b/sdk/python/feast/infra/passthrough_provider.py
--- a/sdk/python/feast/infra/passthrough_provider.py
+++ b/sdk/python/feast/infra/passthrough_provider.py
@@ -87,7 +87,9 @@ def online_read(
requested_features: List[str] = None,
) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:
set_usage_attribute("provider", self.__class__.__name__)
- result = self.online_store.online_read(config, table, entity_keys)
+ result = self.online_store.online_read(
+ config, table, entity_keys, requested_features
+ )
return result
| requested_features are not passed to online_read() from passthrough_provider
`OnlineStore.online_read()` has a parameter `requested_features`. Similarly, `PassthroughProvider.online_read()` also has a parameter `requested_features`.
`PassthroughProvider.online_read()` calls `OnlineStore.online_read()` but doesn't pass on the `requested_features` variable.
I am writing a new OnlineStore where I need the `requested_features` in `OnlineStore.online_read()`.
## Issue Detected
In [feast/infra/passthrough_provider.py#L90](https://github.com/feast-dev/feast/blob/2d3cea1d1485d0af1a35a7f30abfed475b58973a/sdk/python/feast/infra/passthrough_provider.py#L90)
```
def online_read(
self,
config: RepoConfig,
table: Union[FeatureTable, FeatureView],
entity_keys: List[EntityKeyProto],
requested_features: List[str] = None,
) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:
set_usage_attribute("provider", self.__class__.__name__)
result = self.online_store.online_read(config, table, entity_keys) # should also pass requested_features
return result
```
- Version: 0.15.1
- Platform: all
- Subsystem: all
## Possible Solution
In [feast/infra/passthrough_provider.py#L90](https://github.com/feast-dev/feast/blob/2d3cea1d1485d0af1a35a7f30abfed475b58973a/sdk/python/feast/infra/passthrough_provider.py#L90)
`result = self.online_store.online_read(config, table, entity_keys)`
change to
`result = self.online_store.online_read(config, table, entity_keys, requested_features)`
Note: This won't break any of the current tests, since none of the current OnlineStore implementations use `requested_features` in the `online_read()` method.
If this was by design, can someone explain why the variable `requested_features` was not passed on?
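For illustration, a sketch of how a custom online store could use the forwarded `requested_features`; the class and the `_read_row` helper are hypothetical:
```python
from datetime import datetime
from typing import Dict, List, Optional, Tuple, Union

from feast import FeatureTable, FeatureView, RepoConfig
from feast.infra.online_stores.online_store import OnlineStore
from feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto
from feast.protos.feast.types.Value_pb2 import Value as ValueProto


class MyOnlineStore(OnlineStore):
    # Other OnlineStore abstract methods omitted for brevity.
    def online_read(
        self,
        config: RepoConfig,
        table: Union[FeatureTable, FeatureView],
        entity_keys: List[EntityKeyProto],
        requested_features: List[str] = None,
    ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:
        result = []
        for entity_key in entity_keys:
            # _read_row is a hypothetical helper that fetches one stored row.
            event_ts, row = self._read_row(config, table, entity_key)
            if row is not None and requested_features:
                # Only return the columns the caller actually asked for; a real
                # store could instead issue a narrower query against its backend.
                row = {k: v for k, v in row.items() if k in requested_features}
            result.append((event_ts, row))
        return result
```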
| Good catch @aurobindoc , this is definitely a bug. Would you like to take a stab at fixing it?
Sure. I will raise a PR with the fix | 2021-12-05T17:07:50 |
|
feast-dev/feast | 2,111 | feast-dev__feast-2111 | [
"2110"
] | 5826a9d83009918d3f05ef777d154b1722f3bbdb | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -126,7 +126,7 @@
# README file from Feast repo root directory
README_FILE = os.path.join(repo_root, "README.md")
-with open(README_FILE, "r") as f:
+with open(README_FILE, "r", encoding="utf8") as f:
LONG_DESCRIPTION = f.read()
# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.
| Feast setup script returns "'charmap' codec can't decode byte" on Windows
## Expected Behavior
`pip install -e "sdk/python[ci]"` must succeed and install the feast package in editable mode.
## Current Behavior
Installation error:
```
Obtaining file:///C:/Users/<username>/Downloads/feast/sdk/python
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\<username>\Downloads\feast\venv\Scripts\python.exe' 'C:\Users\<username>\Downloads\feast\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\<folder_name>\AppData\Local\Temp\tmpwd6unq88'
cwd: C:\Users\<username>\Downloads\feast\sdk\python
Complete output (20 lines):
Traceback (most recent call last):
File "C:\Users\<username>\Downloads\feast\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 280, in <module>
main()
File "C:\Users\<username>\Downloads\feast\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\<username>\Downloads\feast\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 114, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\<username>\AppData\Local\Temp\pip-build-env-2u3vtd1l\overlay\Lib\site-packages\setuptools\build_meta.py", line 162, in get_requires_for_build_wheel
return self._get_build_requires(
File "C:\Users\<username>\AppData\Local\Temp\pip-build-env-2u3vtd1l\overlay\Lib\site-packages\setuptools\build_meta.py", line 143, in _get_build_requires
self.run_setup()
File "C:\Users\<username>\AppData\Local\Temp\pip-build-env-2u3vtd1l\overlay\Lib\site-packages\setuptools\build_meta.py", line 267, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\<username>\AppData\Local\Temp\pip-build-env-2u3vtd1l\overlay\Lib\site-packages\setuptools\build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 137, in <module>
LONG_DESCRIPTION = f.read()
File "c:\users\<username>\appdata\local\programs\python\python39\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 1599: character maps to <undefined>
```
## Steps to reproduce
Install feast for development:
```
pip install -e "sdk/python[ci]"
```
### Specifications
- Version: Master branch
- Platform: Windows 10
- Python version: 3.9.5
- pip version: 21.3.1
- virtualenv version: 20.10.0
## Possible Solution
`README.md` contains emojis, resulting in character encoding issues when reading the package long description. This PR sets the package long description encoding to UTF-8.
```python
# README file from Feast repo root directory
README_FILE = os.path.join(repo_root, "README.md")
with open(README_FILE, "r", encoding="utf8") as f:
LONG_DESCRIPTION = f.read()
```
Pull request #2111
| 2021-12-06T18:10:04 |
||
feast-dev/feast | 2,167 | feast-dev__feast-2167 | [
"2053"
] | b8daefa982dd4f2af50e9bf7d4e3a1a2d43fd9d7 | diff --git a/sdk/python/feast/on_demand_feature_view.py b/sdk/python/feast/on_demand_feature_view.py
--- a/sdk/python/feast/on_demand_feature_view.py
+++ b/sdk/python/feast/on_demand_feature_view.py
@@ -259,7 +259,9 @@ def get_requested_odfvs(feature_refs, project, registry):
return requested_on_demand_feature_views
-def on_demand_feature_view(features: List[Feature], inputs: Dict[str, FeatureView]):
+def on_demand_feature_view(
+ features: List[Feature], inputs: Dict[str, Union[FeatureView, RequestDataSource]]
+):
"""
Declare an on-demand feature view
diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -97,7 +97,7 @@ def feast_value_type_to_pandas_type(value_type: ValueType) -> Any:
ValueType.DOUBLE: "float",
ValueType.BYTES: "bytes",
ValueType.BOOL: "bool",
- ValueType.UNIX_TIMESTAMP: "datetime",
+ ValueType.UNIX_TIMESTAMP: "datetime64[ns]",
}
if value_type.name.endswith("_LIST"):
return "object"
| diff --git a/sdk/python/tests/integration/registration/test_inference.py b/sdk/python/tests/integration/registration/test_inference.py
--- a/sdk/python/tests/integration/registration/test_inference.py
+++ b/sdk/python/tests/integration/registration/test_inference.py
@@ -1,12 +1,15 @@
+import pandas as pd
import pytest
-from feast import Entity, RepoConfig, ValueType
+from feast import Entity, Feature, RepoConfig, ValueType
+from feast.data_source import RequestDataSource
from feast.errors import RegistryInferenceFailure
from feast.feature_view import FeatureView
from feast.inference import (
update_data_sources_with_inferred_event_timestamp_col,
update_entities_with_inferred_types_from_feature_views,
)
+from feast.on_demand_feature_view import on_demand_feature_view
from tests.utils.data_source_utils import (
prep_file_source,
simple_bq_source_using_query_arg,
@@ -81,3 +84,21 @@ def test_update_data_sources_with_inferred_event_timestamp_col(simple_dataset_1)
update_data_sources_with_inferred_event_timestamp_col(
[file_source], RepoConfig(provider="local", project="test")
)
+
+
+def test_modify_feature_views_success():
+ # Create Feature Views
+ date_request = RequestDataSource(
+ name="date_request", schema={"some_date": ValueType.UNIX_TIMESTAMP}
+ )
+
+ @on_demand_feature_view(
+ inputs={"date_request": date_request},
+ features=[Feature("output", ValueType.UNIX_TIMESTAMP)],
+ )
+ def test_view(features_df: pd.DataFrame) -> pd.DataFrame:
+ data = pd.DataFrame()
+ data["output"] = features_df["some_date"]
+ return data
+
+ test_view.infer_features()
| ODFV does not allow feast.ValueType.UNIX_TIMESTAMP in RequestDataSource schema
## Expected Behavior
When defining a field with ValueType.UNIX_TIMESTAMP in a RequestDataSource, the input will be a datetime64[ns] or datetime64[ns, tz] pd.Series.
## Current Behavior
ValueType.UNIX_TIMESTAMP is translated to "datetime", which is not understood by the pd.Series constructor. Thus, infer_features fails if any ValueType.UNIX_TIMESTAMP fields are present in the RequestDataSource schema.
## Steps to reproduce
While this will work:
``` python
import feast
import pandas as pd
date_request = feast.data_source.RequestDataSource(
name="date_request",
schema={"some_date": feast.ValueType.STRING},
)
@feast.on_demand_feature_view.on_demand_feature_view(
inputs={
"date_request": date_request,
},
features=[
feast.Feature("output", feast.ValueType.STRING),
],
)
def test_view(features_df: pd.DataFrame) -> pd.DataFrame:
data = pd.DataFrame()
data["output"] = features_df["some_date"].astype("category")
return data
test_view.infer_features()
```
This won't:
``` python
import feast
import pandas as pd
date_request = feast.data_source.RequestDataSource(
name="date_request",
schema={"some_date": feast.ValueType.UNIX_TIMESTAMP}, # <-- now a timestamp
)
@feast.on_demand_feature_view.on_demand_feature_view(
inputs={
"date_request": date_request,
},
features=[
feast.Feature("output", feast.ValueType.STRING),
],
)
def test_view(features_df: pd.DataFrame) -> pd.DataFrame:
data = pd.DataFrame()
data["output"] = features_df["some_date"].astype("category")
return data
test_view.infer_features()
```
### Specifications
- Version: 0.15.1
- Platform: macOS
- Subsystem: BigSur
## Possible Solution
Change the dtype mapping of ValueType.UNIX_TIMESTAMP from "datetime" to "datetime64[ns]" locally for OnDemandFeatureView.infer_features() or in feast_value_type_to_pandas_type().
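To see why the dtype string matters, a small check that is independent of Feast:
```python
import pandas as pd

# "datetime" is not a dtype string pandas/numpy understand, so building the
# dummy Series during infer_features() fails:
try:
    pd.Series([], dtype="datetime")
except TypeError as exc:
    print(exc)  # data type 'datetime' not understood

# "datetime64[ns]" is a valid dtype string, so the inferred frame can be built:
print(pd.Series([], dtype="datetime64[ns]").dtype)  # datetime64[ns]
```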
| 2021-12-21T17:48:03 |
|
feast-dev/feast | 2,181 | feast-dev__feast-2181 | [
"1633"
] | e435d927ecaa2037a6e96cbdf1584e4a15d17597 | diff --git a/sdk/python/feast/infra/online_stores/datastore.py b/sdk/python/feast/infra/online_stores/datastore.py
--- a/sdk/python/feast/infra/online_stores/datastore.py
+++ b/sdk/python/feast/infra/online_stores/datastore.py
@@ -196,18 +196,18 @@ def _write_minibatch(
key=key, exclude_from_indexes=("created_ts", "event_ts", "values")
)
- entity.update(
- dict(
- key=entity_key.SerializeToString(),
- values={k: v.SerializeToString() for k, v in features.items()},
- event_ts=utils.make_tzaware(timestamp),
- created_ts=(
- utils.make_tzaware(created_ts)
- if created_ts is not None
- else None
- ),
- )
+ content_entity = datastore.Entity(
+ exclude_from_indexes=tuple(features.keys())
)
+ for k, v in features.items():
+ content_entity[k] = v.SerializeToString()
+ entity["key"] = entity_key.SerializeToString()
+ entity["values"] = content_entity
+ entity["event_ts"] = utils.make_tzaware(timestamp)
+ entity["created_ts"] = (
+ utils.make_tzaware(created_ts) if created_ts is not None else None
+ )
+
entities.append(entity)
with client.transaction():
client.put_multi(entities)
| "The value of property is longer than 1500 bytes" error on BigQquery REPEATED STRING materialization
## Expected Behavior
When materializing REPEATED features from a BigQuery table into the GCP Feast online store, we should not get this error.
## Current Behavior
One column of the BigQuery table is a REPEATED STRING. The number of values in this column varies from a few to many hundreds of string IDs. It appears that the total size of the repeated strings causes the issue. The error is raised after 97% of the rows have been ingested:
```
97%|█████████████████████████████████████████████████████▍ | 49606/51056 [00:30<00:00, 1604.20it/s]
```
The actual stack trace is:
```
Traceback (most recent call last):
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable
return callable_(*args, **kwargs)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "The value of property "content_30d" is longer than 1500 bytes."
debug_error_string = "{"created":"@1623177301.342896000","description":"Error received from peer ipv6:[2a00:1450:4009:81e::200a]:443","file":"src/core/lib/surface/call.cc","file_line":1067,"grpc_message":"The value of property "content_30d" is longer than 1500 bytes.","grpc_status":3}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/bin/feast", line 8, in <module>
sys.exit(cli())
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/feast/cli.py", line 243, in materialize_command
store.materialize(
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/feast/telemetry.py", line 151, in exception_logging_wrapper
result = func(*args, **kwargs)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/feast/feature_store.py", line 444, in materialize
provider.materialize_single_feature_view(
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/feast/infra/gcp.py", line 192, in materialize_single_feature_view
self.online_write_batch(
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/feast/infra/gcp.py", line 121, in online_write_batch
pool.map(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/feast/infra/gcp.py", line 122, in <lambda>
lambda b: _write_minibatch(client, project, table, b, progress),
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/feast/infra/gcp.py", line 270, in _write_minibatch
client.put_multi(entities)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/google/cloud/datastore/batch.py", line 328, in __exit__
self.commit()
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/google/cloud/datastore/transaction.py", line 304, in commit
super(Transaction, self).commit(**kwargs)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/google/cloud/datastore/batch.py", line 300, in commit
self._commit(retry=retry, timeout=timeout)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/google/cloud/datastore/batch.py", line 257, in _commit
commit_response_pb = self._client._datastore_api.commit(
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/google/cloud/datastore_v1/services/datastore/client.py", line 627, in commit
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/Users/gaya/Library/Caches/pypoetry/virtualenvs/explore-feast-feature-store-ECdpd3VL-py3.8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 69, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 The value of property "content_30d" is longer than 1500 bytes.
```
The Feature View is:
```
propensity_model_data_view = FeatureView(
name="propensity_model_data_stats",
entities=["customer_id"],
ttl=Duration(seconds=86400 * 10),
features=[
Feature(name="avg_duration_30d", dtype=ValueType.FLOAT),
Feature(name="content_30d", dtype=ValueType.STRING),
Feature(name="common_genre", dtype=ValueType.STRING),
Feature(name="tenure", dtype=ValueType.FLOAT)
],
online=True,
input=propensity_model_data,
tags={},
)
```
The BQ schema is:
| Field name | Type | Mode |
| --- | --- | --- |
| tenure | INTEGER | |
| target | INTEGER | |
| sub_timestamp | TIMESTAMP | |
| customer_id | STRING | |
| created_timestamp | TIMESTAMP | |
| avg_duration_30d | FLOAT | |
| common_genre | STRING | |
| content_30d | STRING | REPEATED |
## Steps to reproduce
Materialize a BigQuery REPEATED STRING column whose total byte count is > 1500, then run materialization.
When the total size is reduced, materialization runs to completion.
### Specifications
- Version: 0.10.6
- Platform: GCP
- Subsystem: Firestore
## Possible Solution
I raised this on Slack originally and Willem Pienaar thinks he knows the cause of the problem. The Slack thread is here:
https://tectonfeast.slack.com/archives/C01MSKCMB37/p1623155984116000
| Thanks for raising this issue @adriangay. It's slipped under my radar up until now. It seems specific to Firestore/Datastore.
One solution would be an optional flag that we can enable for Datastore which compresses values for storage.
Do you think that would be effective for your data? You can try it out over here http://www.txtwizard.net/compression
@woop apologies for missing your reply. If you mean compressing the values for storage in Datastore and decompressing them on the way out transparently at the Feast API, then I guess thats OK. It should not increase latency much [for online features].
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
| 2022-01-04T01:19:02 |
|
feast-dev/feast | 2,185 | feast-dev__feast-2185 | [
"2184"
] | ecdf15e208a4736b6c111936079c61bfec9fd37a | diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -1144,6 +1144,9 @@ def get_online_features(
# Also create entity values to append to the result
result_rows.append(_entity_row_to_field_values(entity_row_proto))
+ # Keep track of what has been requested from the OnlineStore
+ # to avoid requesting the same thing twice for ODFVs.
+ retrieved_feature_refs: Set[str] = set()
for table, requested_features in grouped_refs:
table_join_keys = [
entity_name_to_join_key_map[entity_name]
@@ -1158,6 +1161,11 @@ def get_online_features(
table,
union_of_entity_keys,
)
+ table_feature_names = {feature.name for feature in table.features}
+ retrieved_feature_refs |= {
+ f"{table.name}:{feature}" if feature in table_feature_names else feature
+ for feature in requested_features
+ }
requested_result_row_names = self._get_requested_result_fields(
result_rows, needed_request_fv_features
@@ -1170,6 +1178,7 @@ def get_online_features(
request_data_features,
result_rows,
union_of_entity_keys,
+ retrieved_feature_refs,
)
self._augment_response_with_on_demand_transforms(
@@ -1205,6 +1214,7 @@ def _populate_odfv_dependencies(
request_data_features: Dict[str, List[Any]],
result_rows: List[GetOnlineFeaturesResponse.FieldValues],
union_of_entity_keys: List[EntityKeyProto],
+ retrieved_feature_refs: Set[str],
):
# Add more feature values to the existing result rows for the request data features
for feature_name, feature_values in request_data_features.items():
@@ -1223,19 +1233,32 @@ def _populate_odfv_dependencies(
if len(grouped_odfv_refs) > 0:
for odfv, _ in grouped_odfv_refs:
for fv in odfv.input_feature_views.values():
- table_join_keys = [
- entity_name_to_join_key_map[entity_name]
- for entity_name in fv.entities
- ]
- self._populate_result_rows_from_feature_view(
- table_join_keys,
- full_feature_names,
- provider,
- [feature.name for feature in fv.features],
- result_rows,
- fv,
- union_of_entity_keys,
- )
+ # Find the set of required Features which have not yet
+ # been retrieved.
+ not_yet_retrieved = {
+ feature.name
+ for feature in fv.projection.features
+ if f"{fv.name}:{feature.name}" not in retrieved_feature_refs
+ }
+ # If there are required Features which have not yet been retrieved
+ # retrieve them.
+ if not_yet_retrieved:
+ table_join_keys = [
+ entity_name_to_join_key_map[entity_name]
+ for entity_name in fv.entities
+ ]
+ self._populate_result_rows_from_feature_view(
+ table_join_keys,
+ full_feature_names,
+ provider,
+ list(not_yet_retrieved),
+ result_rows,
+ fv,
+ union_of_entity_keys,
+ )
+ # Update the set of retrieved Features with any newly retrieved
+ # Features.
+ retrieved_feature_refs |= not_yet_retrieved
def get_needed_request_data(
self,
| ODFV dependencies can be retrieved twice from the OnlineStore
## Expected Behavior
When calling `get_online_features` a given feature should only ever be retrieved from the OnlineStore once.
## Current Behavior
When a requested feature is also a dependency of a requested ODFV, the feature is retrieved twice from the OnlineStore.
Once in [`_populate_result_rows_from_feature_view`](https://github.com/feast-dev/feast/blob/e435d927ecaa2037a6e96cbdf1584e4a15d17597/sdk/python/feast/feature_store.py#L1152) and again in [`_populate_odfv_dependencies`](https://github.com/feast-dev/feast/blob/e435d927ecaa2037a6e96cbdf1584e4a15d17597/sdk/python/feast/feature_store.py#L1165).
## Steps to reproduce
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
Check if the feature has already been requested before retrieving it again.
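One way to express that check, as a small self-contained sketch of the bookkeeping (names are illustrative; the idea is to track `view:feature` references that have already been fetched):
```python
from typing import Dict, List, Set


def plan_odfv_lookups(
    retrieved_feature_refs: Set[str],
    odfv_dependencies: Dict[str, List[str]],  # input feature view -> dependency feature names
) -> Dict[str, List[str]]:
    """Return only the ODFV dependencies that still need an OnlineStore read."""
    remaining: Dict[str, List[str]] = {}
    for view_name, feature_names in odfv_dependencies.items():
        not_yet_retrieved = [
            name
            for name in feature_names
            if f"{view_name}:{name}" not in retrieved_feature_refs
        ]
        if not_yet_retrieved:
            remaining[view_name] = not_yet_retrieved
            # Record them so a later ODFV sharing the same inputs skips them too.
            retrieved_feature_refs.update(
                f"{view_name}:{name}" for name in not_yet_retrieved
            )
    return remaining


# "conv_rate" was already fetched as a directly requested feature,
# so only "acc_rate" needs a second read for the ODFV inputs.
already = {"driver_stats:conv_rate"}
print(plan_odfv_lookups(already, {"driver_stats": ["conv_rate", "acc_rate"]}))
# {'driver_stats': ['acc_rate']}
```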
| 2022-01-04T12:12:39 |
||
feast-dev/feast | 2,202 | feast-dev__feast-2202 | [
"2201"
] | 1b98ec94e3573991627d561d6d207126a40a21cf | diff --git a/sdk/python/feast/feature_server.py b/sdk/python/feast/feature_server.py
--- a/sdk/python/feast/feature_server.py
+++ b/sdk/python/feast/feature_server.py
@@ -8,7 +8,6 @@
import feast
from feast import proto_json
from feast.protos.feast.serving.ServingService_pb2 import GetOnlineFeaturesRequest
-from feast.type_map import feast_value_type_to_python_type
def get_app(store: "feast.FeatureStore"):
@@ -41,16 +40,11 @@ def get_online_features(body=Depends(get_body)):
if any(batch_size != num_entities for batch_size in batch_sizes):
raise HTTPException(status_code=500, detail="Uneven number of columns")
- entity_rows = [
- {
- k: feast_value_type_to_python_type(v.val[idx])
- for k, v in request_proto.entities.items()
- }
- for idx in range(num_entities)
- ]
-
- response_proto = store.get_online_features(
- features, entity_rows, full_feature_names=full_feature_names
+ response_proto = store._get_online_features(
+ features,
+ request_proto.entities,
+ full_feature_names=full_feature_names,
+ native_entity_values=False,
).proto
# Convert the Protobuf object to JSON and return it
diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -23,8 +23,10 @@
Dict,
Iterable,
List,
+ Mapping,
NamedTuple,
Optional,
+ Sequence,
Set,
Tuple,
Union,
@@ -72,7 +74,7 @@
GetOnlineFeaturesResponse,
)
from feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto
-from feast.protos.feast.types.Value_pb2 import Value
+from feast.protos.feast.types.Value_pb2 import RepeatedValue, Value
from feast.registry import Registry
from feast.repo_config import RepoConfig, load_repo_config
from feast.request_feature_view import RequestFeatureView
@@ -267,14 +269,18 @@ def _list_feature_views(
return feature_views
@log_exceptions_and_usage
- def list_on_demand_feature_views(self) -> List[OnDemandFeatureView]:
+ def list_on_demand_feature_views(
+ self, allow_cache: bool = False
+ ) -> List[OnDemandFeatureView]:
"""
Retrieves the list of on demand feature views from the registry.
Returns:
A list of on demand feature views.
"""
- return self._registry.list_on_demand_feature_views(self.project)
+ return self._registry.list_on_demand_feature_views(
+ self.project, allow_cache=allow_cache
+ )
@log_exceptions_and_usage
def get_entity(self, name: str) -> Entity:
@@ -1067,6 +1073,30 @@ def get_online_features(
... )
>>> online_response_dict = online_response.to_dict()
"""
+ columnar: Dict[str, List[Any]] = {k: [] for k in entity_rows[0].keys()}
+ for entity_row in entity_rows:
+ for key, value in entity_row.items():
+ try:
+ columnar[key].append(value)
+ except KeyError as e:
+ raise ValueError("All entity_rows must have the same keys.") from e
+
+ return self._get_online_features(
+ features=features,
+ entity_values=columnar,
+ full_feature_names=full_feature_names,
+ native_entity_values=True,
+ )
+
+ def _get_online_features(
+ self,
+ features: Union[List[str], FeatureService],
+ entity_values: Mapping[
+ str, Union[Sequence[Any], Sequence[Value], RepeatedValue]
+ ],
+ full_feature_names: bool = False,
+ native_entity_values: bool = True,
+ ):
_feature_refs = self._get_features(features, allow_cache=True)
(
requested_feature_views,
@@ -1076,6 +1106,29 @@ def get_online_features(
features=features, allow_cache=True, hide_dummy_entity=False
)
+ entity_name_to_join_key_map, entity_type_map = self._get_entity_maps(
+ requested_feature_views
+ )
+
+ # Extract Sequence from RepeatedValue Protobuf.
+ entity_value_lists: Dict[str, Union[List[Any], List[Value]]] = {
+ k: list(v) if isinstance(v, Sequence) else list(v.val)
+ for k, v in entity_values.items()
+ }
+
+ entity_proto_values: Dict[str, List[Value]]
+ if native_entity_values:
+ # Convert values to Protobuf once.
+ entity_proto_values = {
+ k: python_values_to_proto_values(
+ v, entity_type_map.get(k, ValueType.UNKNOWN)
+ )
+ for k, v in entity_value_lists.items()
+ }
+ else:
+ entity_proto_values = entity_value_lists
+
+ num_rows = _validate_entity_values(entity_proto_values)
_validate_feature_refs(_feature_refs, full_feature_names)
(
grouped_refs,
@@ -1101,111 +1154,72 @@ def get_online_features(
}
feature_views = list(view for view, _ in grouped_refs)
- entityless_case = DUMMY_ENTITY_NAME in [
- entity_name
- for feature_view in feature_views
- for entity_name in feature_view.entities
- ]
-
- provider = self._get_provider()
- entities = self._list_entities(allow_cache=True, hide_dummy_entity=False)
- entity_name_to_join_key_map: Dict[str, str] = {}
- join_key_to_entity_type_map: Dict[str, ValueType] = {}
- for entity in entities:
- entity_name_to_join_key_map[entity.name] = entity.join_key
- join_key_to_entity_type_map[entity.join_key] = entity.value_type
- for feature_view in requested_feature_views:
- for entity_name in feature_view.entities:
- entity = self._registry.get_entity(
- entity_name, self.project, allow_cache=True
- )
- # User directly uses join_key as the entity reference in the entity_rows for the
- # entity mapping case.
- entity_name = feature_view.projection.join_key_map.get(
- entity.join_key, entity.name
- )
- join_key = feature_view.projection.join_key_map.get(
- entity.join_key, entity.join_key
- )
- entity_name_to_join_key_map[entity_name] = join_key
- join_key_to_entity_type_map[join_key] = entity.value_type
needed_request_data, needed_request_fv_features = self.get_needed_request_data(
grouped_odfv_refs, grouped_request_fv_refs
)
- join_key_rows = []
- request_data_features: Dict[str, List[Any]] = defaultdict(list)
+ join_key_values: Dict[str, List[Value]] = {}
+ request_data_features: Dict[str, List[Value]] = {}
# Entity rows may be either entities or request data.
- for row in entity_rows:
- join_key_row = {}
- for entity_name, entity_value in row.items():
- # Found request data
- if (
- entity_name in needed_request_data
- or entity_name in needed_request_fv_features
- ):
- if entity_name in needed_request_fv_features:
- # If the data was requested as a feature then
- # make sure it appears in the result.
- requested_result_row_names.add(entity_name)
- request_data_features[entity_name].append(entity_value)
- else:
- try:
- join_key = entity_name_to_join_key_map[entity_name]
- except KeyError:
- raise EntityNotFoundException(entity_name, self.project)
- # All join keys should be returned in the result.
- requested_result_row_names.add(join_key)
- join_key_row[join_key] = entity_value
- if entityless_case:
- join_key_row[DUMMY_ENTITY_ID] = DUMMY_ENTITY_VAL
- if len(join_key_row) > 0:
- # May be empty if this entity row was request data
- join_key_rows.append(join_key_row)
+ for entity_name, values in entity_proto_values.items():
+ # Found request data
+ if (
+ entity_name in needed_request_data
+ or entity_name in needed_request_fv_features
+ ):
+ if entity_name in needed_request_fv_features:
+ # If the data was requested as a feature then
+ # make sure it appears in the result.
+ requested_result_row_names.add(entity_name)
+ request_data_features[entity_name] = values
+ else:
+ try:
+ join_key = entity_name_to_join_key_map[entity_name]
+ except KeyError:
+ raise EntityNotFoundException(entity_name, self.project)
+ # All join keys should be returned in the result.
+ requested_result_row_names.add(join_key)
+ join_key_values[join_key] = values
self.ensure_request_data_values_exist(
needed_request_data, needed_request_fv_features, request_data_features
)
- # Convert join_key_rows from rowise to columnar.
- join_key_python_values: Dict[str, List[Value]] = defaultdict(list)
- for join_key_row in join_key_rows:
- for join_key, value in join_key_row.items():
- join_key_python_values[join_key].append(value)
-
- # Convert all join key values to Protobuf Values
- join_key_proto_values = {
- k: python_values_to_proto_values(v, join_key_to_entity_type_map[k])
- for k, v in join_key_python_values.items()
- }
-
- # Populate online features response proto with join keys
+ # Populate online features response proto with join keys and request data features
online_features_response = GetOnlineFeaturesResponse(
- results=[
- GetOnlineFeaturesResponse.FeatureVector()
- for _ in range(len(entity_rows))
- ]
+ results=[GetOnlineFeaturesResponse.FeatureVector() for _ in range(num_rows)]
)
- for key, values in join_key_proto_values.items():
- online_features_response.metadata.feature_names.val.append(key)
- for row_idx, result_row in enumerate(online_features_response.results):
- result_row.values.append(values[row_idx])
- result_row.statuses.append(FieldStatus.PRESENT)
- result_row.event_timestamps.append(Timestamp())
+ self._populate_result_rows_from_columnar(
+ online_features_response=online_features_response,
+ data=dict(**join_key_values, **request_data_features),
+ )
+
+ # Add the Entityless case after populating result rows to avoid having to remove
+ # it later.
+ entityless_case = DUMMY_ENTITY_NAME in [
+ entity_name
+ for feature_view in feature_views
+ for entity_name in feature_view.entities
+ ]
+ if entityless_case:
+ join_key_values[DUMMY_ENTITY_ID] = python_values_to_proto_values(
+ [DUMMY_ENTITY_VAL] * num_rows, DUMMY_ENTITY.value_type
+ )
# Initialize the set of EntityKeyProtos once and reuse them for each FeatureView
# to avoid initialization overhead.
- entity_keys = [EntityKeyProto() for _ in range(len(join_key_rows))]
+ entity_keys = [EntityKeyProto() for _ in range(num_rows)]
+ provider = self._get_provider()
for table, requested_features in grouped_refs:
# Get the correct set of entity values with the correct join keys.
- entity_values = self._get_table_entity_values(
- table, entity_name_to_join_key_map, join_key_proto_values,
+ table_entity_values = self._get_table_entity_values(
+ table, entity_name_to_join_key_map, join_key_values,
)
# Set the EntityKeyProtos inplace.
self._set_table_entity_keys(
- entity_values, entity_keys,
+ table_entity_values, entity_keys,
)
# Populate the result_rows with the Features from the OnlineStore inplace.
@@ -1218,10 +1232,6 @@ def get_online_features(
table,
)
- self._populate_request_data_features(
- online_features_response, request_data_features
- )
-
if grouped_odfv_refs:
self._augment_response_with_on_demand_transforms(
online_features_response,
@@ -1235,6 +1245,50 @@ def get_online_features(
)
return OnlineResponse(online_features_response)
+ @staticmethod
+ def _get_columnar_entity_values(
+ rowise: Optional[List[Dict[str, Any]]], columnar: Optional[Dict[str, List[Any]]]
+ ) -> Dict[str, List[Any]]:
+ if (rowise is None and columnar is None) or (
+ rowise is not None and columnar is not None
+ ):
+ raise ValueError(
+ "Exactly one of `columnar_entity_values` and `rowise_entity_values` must be set."
+ )
+
+ if rowise is not None:
+ # Convert entity_rows from rowise to columnar.
+ res = defaultdict(list)
+ for entity_row in rowise:
+ for key, value in entity_row.items():
+ res[key].append(value)
+ return res
+ return cast(Dict[str, List[Any]], columnar)
+
+ def _get_entity_maps(self, feature_views):
+ entities = self._list_entities(allow_cache=True, hide_dummy_entity=False)
+ entity_name_to_join_key_map: Dict[str, str] = {}
+ entity_type_map: Dict[str, ValueType] = {}
+ for entity in entities:
+ entity_name_to_join_key_map[entity.name] = entity.join_key
+ entity_type_map[entity.name] = entity.value_type
+ for feature_view in feature_views:
+ for entity_name in feature_view.entities:
+ entity = self._registry.get_entity(
+ entity_name, self.project, allow_cache=True
+ )
+ # User directly uses join_key as the entity reference in the entity_rows for the
+ # entity mapping case.
+ entity_name = feature_view.projection.join_key_map.get(
+ entity.join_key, entity.name
+ )
+ join_key = feature_view.projection.join_key_map.get(
+ entity.join_key, entity.join_key
+ )
+ entity_name_to_join_key_map[entity_name] = join_key
+ entity_type_map[join_key] = entity.value_type
+ return entity_name_to_join_key_map, entity_type_map
+
@staticmethod
def _get_table_entity_values(
table: FeatureView,
@@ -1275,23 +1329,21 @@ def _set_table_entity_keys(
entity_key.entity_values.extend(next(rowise_values))
@staticmethod
- def _populate_request_data_features(
+ def _populate_result_rows_from_columnar(
online_features_response: GetOnlineFeaturesResponse,
- request_data_features: Dict[str, List[Any]],
+ data: Dict[str, List[Value]],
):
- # Add more feature values to the existing result rows for the request data features
- for feature_name, feature_values in request_data_features.items():
- proto_values = python_values_to_proto_values(
- feature_values, ValueType.UNKNOWN
- )
+ timestamp = Timestamp() # Only initialize this timestamp once.
+ # Add more values to the existing result rows
+ for feature_name, feature_values in data.items():
online_features_response.metadata.feature_names.val.append(feature_name)
- for row_idx, proto_value in enumerate(proto_values):
+ for row_idx, proto_value in enumerate(feature_values):
result_row = online_features_response.results[row_idx]
result_row.values.append(proto_value)
result_row.statuses.append(FieldStatus.PRESENT)
- result_row.event_timestamps.append(Timestamp())
+ result_row.event_timestamps.append(timestamp)
@staticmethod
def get_needed_request_data(
@@ -1567,6 +1619,13 @@ def serve_transformations(self, port: int) -> None:
transformation_server.start_server(self, port)
+def _validate_entity_values(join_key_values: Dict[str, List[Value]]):
+ set_of_row_lengths = {len(v) for v in join_key_values.values()}
+ if len(set_of_row_lengths) > 1:
+ raise ValueError("All entity rows must have the same columns.")
+ return set_of_row_lengths.pop()
+
+
def _validate_feature_refs(feature_refs: List[str], full_feature_names: bool = False):
collided_feature_refs = []
| Python FeatureServer inefficiently converts entity rows
Problem:
It seems silly for the Python FeatureServer to convert EntityValues to Python native types only to then immediately convert them back to Protobuf Values. This can only increase latency by requiring unnecessary memory allocation and copying.
https://github.com/feast-dev/feast/blob/14048fd535f982eed7a3ba3af7d93557335e325a/sdk/python/feast/feature_server.py#L44-L50
Solution:
One option would be to refactor `get_online_features()` to convert `entity_rows` to a `Dict[str, List[Value]]` and then call a new wrapper method, `_get_online_features()`, passing `entity_rows` as a `Dict[str, List[Value]]`. This would allow the FeatureServer to call `_get_online_features()` directly without converting the EntityValues to Python native types.
This solution would align nicely with the refactoring done in #2186.
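For illustration, a minimal sketch of that shape (the helper name is illustrative, not Feast's API): the public call converts rowise `entity_rows` to a columnar dict once, so a server that already holds Protobuf `Value`s could skip the native-type round trip entirely.
```python
from typing import Any, Dict, List


def rows_to_columnar(entity_rows: List[Dict[str, Any]]) -> Dict[str, List[Any]]:
    """Convert rowise entity rows into the columnar mapping the proposed
    _get_online_features() wrapper would accept directly."""
    columnar: Dict[str, List[Any]] = {k: [] for k in entity_rows[0]}
    for row in entity_rows:
        for key, value in row.items():
            try:
                columnar[key].append(value)
            except KeyError as e:
                raise ValueError("All entity_rows must have the same keys.") from e
    return columnar


# get_online_features() would do this conversion once and delegate with
# native_entity_values=True; the feature server could call the wrapper directly
# with already-built Protobuf Values instead of native Python values.
print(rows_to_columnar([{"driver_id": 1001}, {"driver_id": 1002}]))  # {'driver_id': [1001, 1002]}
```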
| A question around this is whether or not the FeatureServer is intended to support FeatureServices? Currently, it doesn't look like it does? | 2022-01-07T12:38:09 |
|
feast-dev/feast | 2,219 | feast-dev__feast-2219 | [
"2209"
] | 62fae057d8c36af5f0f302c398f6134fc3c0407d | diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -212,7 +212,7 @@ def _type_err(item, dtype):
ValueType.UNIX_TIMESTAMP_LIST: (
Int64List,
"int64_list_val",
- [np.int64, np.int32, int],
+ [np.datetime64, np.int64, np.int32, int, datetime, Timestamp],
),
ValueType.STRING_LIST: (StringList, "string_list_val", [np.str_, str]),
ValueType.BOOL_LIST: (BoolList, "bool_list_val", [np.bool_, bool]),
@@ -272,6 +272,24 @@ def _python_value_to_proto_value(
)
raise _type_err(first_invalid, valid_types[0])
+ if feast_value_type == ValueType.UNIX_TIMESTAMP_LIST:
+ converted_values = []
+ for value in values:
+ converted_sub_values = []
+ for sub_value in value:
+ if isinstance(sub_value, datetime):
+ converted_sub_values.append(int(sub_value.timestamp()))
+ elif isinstance(sub_value, Timestamp):
+ converted_sub_values.append(int(sub_value.ToSeconds()))
+ elif isinstance(sub_value, np.datetime64):
+ converted_sub_values.append(
+ sub_value.astype("datetime64[s]").astype("int")
+ )
+ else:
+ converted_sub_values.append(sub_value)
+ converted_values.append(converted_sub_values)
+ values = converted_values
+
return [
ProtoValue(**{field_name: proto_type(val=value)})
if value is not None
@@ -290,6 +308,11 @@ def _python_value_to_proto_value(
return [
ProtoValue(int64_val=int(value.ToSeconds())) for value in values
]
+ elif isinstance(sample, np.datetime64):
+ return [
+ ProtoValue(int64_val=value.astype("datetime64[s]").astype("int"))
+ for value in values
+ ]
return [ProtoValue(int64_val=int(value)) for value in values]
if feast_value_type in PYTHON_SCALAR_VALUE_TYPE_TO_PROTO_VALUE:
| diff --git a/sdk/python/tests/data/data_creator.py b/sdk/python/tests/data/data_creator.py
--- a/sdk/python/tests/data/data_creator.py
+++ b/sdk/python/tests/data/data_creator.py
@@ -60,6 +60,13 @@ def get_feature_values_for_dtype(
"float": [1.0, None, 3.0, 4.0, 5.0],
"string": ["1", None, "3", "4", "5"],
"bool": [True, None, False, True, False],
+ "datetime": [
+ datetime(1980, 1, 1),
+ None,
+ datetime(1981, 1, 1),
+ datetime(1982, 1, 1),
+ datetime(1982, 1, 1),
+ ],
}
non_list_val = dtype_map[dtype]
if is_list:
diff --git a/sdk/python/tests/integration/registration/test_universal_types.py b/sdk/python/tests/integration/registration/test_universal_types.py
--- a/sdk/python/tests/integration/registration/test_universal_types.py
+++ b/sdk/python/tests/integration/registration/test_universal_types.py
@@ -1,4 +1,5 @@
import logging
+import re
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List
@@ -28,6 +29,7 @@ def populate_test_configs(offline: bool):
(ValueType.INT64, "int64"),
(ValueType.STRING, "float"),
(ValueType.STRING, "bool"),
+ (ValueType.INT32, "datetime"),
]
configs: List[TypeTestConfig] = []
for test_repo_config in FULL_REPO_CONFIGS:
@@ -232,6 +234,7 @@ def test_feature_get_online_features_types_match(online_types_test_fixtures):
"float": float,
"string": str,
"bool": bool,
+ "datetime": int,
}
expected_dtype = feature_list_dtype_to_expected_online_response_value_type[
config.feature_dtype
@@ -258,6 +261,8 @@ def create_feature_view(
value_type = ValueType.FLOAT_LIST
elif feature_dtype == "bool":
value_type = ValueType.BOOL_LIST
+ elif feature_dtype == "datetime":
+ value_type = ValueType.UNIX_TIMESTAMP_LIST
else:
if feature_dtype == "int32":
value_type = ValueType.INT32
@@ -267,6 +272,8 @@ def create_feature_view(
value_type = ValueType.FLOAT
elif feature_dtype == "bool":
value_type = ValueType.BOOL
+ elif feature_dtype == "datetime":
+ value_type = ValueType.UNIX_TIMESTAMP
return driver_feature_view(data_source, name=name, value_type=value_type,)
@@ -281,6 +288,7 @@ def assert_expected_historical_feature_types(
"float": (pd.api.types.is_float_dtype,),
"string": (pd.api.types.is_string_dtype,),
"bool": (pd.api.types.is_bool_dtype, pd.api.types.is_object_dtype),
+ "datetime": (pd.api.types.is_datetime64_any_dtype,),
}
dtype_checkers = feature_dtype_to_expected_historical_feature_dtype[feature_dtype]
assert any(
@@ -307,6 +315,7 @@ def assert_feature_list_types(
bool,
np.bool_,
), # Can be `np.bool_` if from `np.array` rather that `list`
+ "datetime": np.datetime64,
}
expected_dtype = feature_list_dtype_to_expected_historical_feature_list_dtype[
feature_dtype
@@ -328,22 +337,23 @@ def assert_expected_arrow_types(
historical_features_arrow = historical_features.to_arrow()
print(historical_features_arrow)
feature_list_dtype_to_expected_historical_feature_arrow_type = {
- "int32": "int64",
- "int64": "int64",
- "float": "double",
- "string": "string",
- "bool": "bool",
+ "int32": r"int64",
+ "int64": r"int64",
+ "float": r"double",
+ "string": r"string",
+ "bool": r"bool",
+ "datetime": r"timestamp\[.+\]",
}
arrow_type = feature_list_dtype_to_expected_historical_feature_arrow_type[
feature_dtype
]
if feature_is_list:
- assert (
- str(historical_features_arrow.schema.field_by_name("value").type)
- == f"list<item: {arrow_type}>"
+ assert re.match(
+ f"list<item: {arrow_type}>",
+ str(historical_features_arrow.schema.field_by_name("value").type),
)
else:
- assert (
- str(historical_features_arrow.schema.field_by_name("value").type)
- == arrow_type
+ assert re.match(
+ arrow_type,
+ str(historical_features_arrow.schema.field_by_name("value").type),
)
| Serving not working with datetime64 numpy env.
## Expected Behavior
Serving should work when feature values arrive as numpy `datetime64` / `datetime` objects.
## Current Behavior
Serving fails with the following error when values are numpy `datetime64` / `datetime` objects:
```
File "/Users/zf/go/src/github.com/opendoor-labs/code/py/.venv/lib/python3.9/site-packages/feast/type_map.py", line 332, in python_values_to_proto_values
return _python_value_to_proto_value(value_type, values)
File "/Users/zf/go/src/github.com/opendoor-labs/code/py/.venv/lib/python3.9/site-packages/feast/type_map.py", line 293, in _python_value_to_proto_value
return [ProtoValue(int64_val=int(value)) for value in values]
File "/Users/zf/go/src/github.com/opendoor-labs/code/py/.venv/lib/python3.9/site-packages/feast/type_map.py", line 293, in <listcomp>
return [ProtoValue(int64_val=int(value)) for value in values]
TypeError: int() argument must be a string, a bytes-like object or a number, not 'datetime.datetime'
```
## Steps to reproduce
Serve a feature
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
Fix:
https://github.com/feast-dev/feast/blob/master/sdk/python/feast/type_map.py#L292
Add the following to the type map:
```
elif isinstance(sample, np.datetime64):
return [
ProtoValue(int64_val=int(pd.Timestamp(value).to_pydatetime().timestamp())) for value in values
]
```
https://github.com/feast-dev/feast/commit/72bae7d39e3f08df3f98743c412f408924ac3261
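As a rough illustration of the normalization the type map needs (the helper name is made up; it assumes numpy and protobuf are installed), a UNIX_TIMESTAMP value may arrive as a `datetime`, a protobuf `Timestamp`, or an `np.datetime64` and should be reduced to epoch seconds:
```python
from datetime import datetime, timezone

import numpy as np
from google.protobuf.timestamp_pb2 import Timestamp


def to_epoch_seconds(value) -> int:
    """Normalize the datetime-like values a UNIX_TIMESTAMP feature may receive."""
    if isinstance(value, Timestamp):
        return int(value.ToSeconds())
    if isinstance(value, np.datetime64):
        return int(value.astype("datetime64[s]").astype("int"))
    if isinstance(value, datetime):
        return int(value.timestamp())
    return int(value)  # already an int / numpy integer


assert to_epoch_seconds(np.datetime64("1970-01-01T00:01:00")) == 60
assert to_epoch_seconds(datetime(1970, 1, 1, 0, 1, tzinfo=timezone.utc)) == 60
```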
| Would you be able to open a Pull Request with this fix? | 2022-01-18T11:00:56 |
feast-dev/feast | 2,226 | feast-dev__feast-2226 | [
"2128"
] | 1f3a595ea3879e8800cf1c290db7cdbac196164d | diff --git a/sdk/python/feast/repo_operations.py b/sdk/python/feast/repo_operations.py
--- a/sdk/python/feast/repo_operations.py
+++ b/sdk/python/feast/repo_operations.py
@@ -237,12 +237,13 @@ def extract_objects_for_apply_delete(project, registry, repo):
return all_to_apply, all_to_delete, views_to_delete, views_to_keep
-@log_exceptions_and_usage
-def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation: bool):
-
- os.chdir(repo_path)
- project, registry, repo, store = _prepare_registry_and_repo(repo_config, repo_path)
-
+def apply_total_with_repo_instance(
+ store: FeatureStore,
+ project: str,
+ registry: Registry,
+ repo: RepoContents,
+ skip_source_validation: bool,
+):
if not skip_source_validation:
data_sources = [t.batch_source for t in repo.feature_views]
# Make sure the data source used by this feature view is supported by Feast
@@ -262,6 +263,16 @@ def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation
log_cli_output(diff, views_to_delete, views_to_keep)
+@log_exceptions_and_usage
+def apply_total(repo_config: RepoConfig, repo_path: Path, skip_source_validation: bool):
+
+ os.chdir(repo_path)
+ project, registry, repo, store = _prepare_registry_and_repo(repo_config, repo_path)
+ apply_total_with_repo_instance(
+ store, project, registry, repo, skip_source_validation
+ )
+
+
def log_cli_output(diff, views_to_delete, views_to_keep):
from colorama import Fore, Style
| Split up parse_repo and apply_total
**Is your feature request related to a problem? Please describe.**
I have a framework that handles the offline store. It creates the tables and indexes, reads data from different data sources, does some transformations, and then inserts into the offline store. As part of this, I can construct the entities, feature views, feature services, etc., as an instance of the `ParsedRepo` class for Feast. What I need is the ability to pass my instance into the `apply_total` method.
**Describe the solution you'd like**
Remove the call to `parse_repo` from `apply_total` ( [link](https://github.com/feast-dev/feast/blob/497809f216b7b8051ad6611df75795c25694e287/sdk/python/feast/repo_operations.py#L144) ), perform that call elsewhere instead, and pass the `ParsedRepo` object into the `apply_total` function.
**Describe alternatives you've considered**
Copy the `apply_total` method into my code and make the changes needed.
I don't like this approach because I would be maintaining a copy/paste of the method, while the proposed change in the Feast repo wouldn't impact existing functionality, since `feast apply` would still call `parse_repo` and then pass the object to `apply_total`.
```python
@cli.command("apply", cls=NoOptionDefaultFormat)
@click.option(
"--skip-source-validation",
is_flag=True,
help="Don't validate the data sources by checking for that the tables exist.",
)
@click.pass_context
def apply_total_command(ctx: click.Context, skip_source_validation: bool):
"""
Create or update a feature store deployment
"""
repo = ctx.obj["CHDIR"]
cli_check_repo(repo)
repo_config = load_repo_config(repo)
try:
parsed_repo = parse_repo(repo) # <- this
apply_total(parsed_repo, repo_config, repo, skip_source_validation)
except FeastProviderLoginError as e:
print(str(e))
```
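A sketch of how a framework could then call such a split-out function directly (this mirrors the `apply_total_with_repo_instance` shape introduced by the patch above, so it only works against a Feast version that includes that change; the surrounding function is hypothetical):
```python
from feast.repo_operations import apply_total_with_repo_instance  # added by the patch above


def sync_feature_store(store, project, registry, my_repo_contents):
    # my_repo_contents is a RepoContents instance the framework built itself,
    # so there is no need to call parse_repo() on a repo directory at all.
    apply_total_with_repo_instance(
        store=store,
        project=project,
        registry=registry,
        repo=my_repo_contents,
        skip_source_validation=True,
    )
```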
| Hi @nossrannug , I think this `apply_total` was not intended to be used directly and seems to be written specifically for CLI. That being said, it could be easily split into sub-functions. And those sub-functions could be reused directly or new interfaces can be created.
If you have time, please feel free to contribute. | 2022-01-19T19:05:54 |
|
feast-dev/feast | 2,229 | feast-dev__feast-2229 | [
"2220"
] | 428c14561bbea9730c9725417ca01b45d809c65c | diff --git a/sdk/python/feast/type_map.py b/sdk/python/feast/type_map.py
--- a/sdk/python/feast/type_map.py
+++ b/sdk/python/feast/type_map.py
@@ -97,6 +97,7 @@ def python_type_to_feast_value_type(
type_map = {
"int": ValueType.INT64,
"str": ValueType.STRING,
+ "string": ValueType.STRING, # pandas.StringDtype
"float": ValueType.DOUBLE,
"bytes": ValueType.BYTES,
"float64": ValueType.DOUBLE,
@@ -119,48 +120,50 @@ def python_type_to_feast_value_type(
if type_name in type_map:
return type_map[type_name]
- if type_name == "ndarray" or isinstance(value, list):
- if recurse:
-
- # Convert to list type
- list_items = pd.core.series.Series(value)
-
- # This is the final type which we infer from the list
- common_item_value_type = None
- for item in list_items:
- if isinstance(item, ProtoValue):
- current_item_value_type: ValueType = _proto_value_to_value_type(
- item
- )
- else:
- # Get the type from the current item, only one level deep
- current_item_value_type = python_type_to_feast_value_type(
- name=name, value=item, recurse=False
- )
- # Validate whether the type stays consistent
- if (
- common_item_value_type
- and not common_item_value_type == current_item_value_type
- ):
- raise ValueError(
- f"List value type for field {name} is inconsistent. "
- f"{common_item_value_type} different from "
- f"{current_item_value_type}."
- )
- common_item_value_type = current_item_value_type
- if common_item_value_type is None:
- return ValueType.UNKNOWN
- return ValueType[common_item_value_type.name + "_LIST"]
- else:
- assert value
+ if isinstance(value, np.ndarray) and str(value.dtype) in type_map:
+ item_type = type_map[str(value.dtype)]
+ return ValueType[item_type.name + "_LIST"]
+
+ if isinstance(value, (list, np.ndarray)):
+ # if the value's type is "ndarray" and we couldn't infer from "value.dtype"
+ # this is most probably array of "object",
+ # so we need to iterate over objects and try to infer type of each item
+ if not recurse:
raise ValueError(
- f"Value type for field {name} is {value.dtype.__str__()} but "
+ f"Value type for field {name} is {type(value)} but "
f"recursion is not allowed. Array types can only be one level "
f"deep."
)
- assert value
- return type_map[value.dtype.__str__()]
+ # This is the final type which we infer from the list
+ common_item_value_type = None
+ for item in value:
+ if isinstance(item, ProtoValue):
+ current_item_value_type: ValueType = _proto_value_to_value_type(item)
+ else:
+ # Get the type from the current item, only one level deep
+ current_item_value_type = python_type_to_feast_value_type(
+ name=name, value=item, recurse=False
+ )
+ # Validate whether the type stays consistent
+ if (
+ common_item_value_type
+ and not common_item_value_type == current_item_value_type
+ ):
+ raise ValueError(
+ f"List value type for field {name} is inconsistent. "
+ f"{common_item_value_type} different from "
+ f"{current_item_value_type}."
+ )
+ common_item_value_type = current_item_value_type
+ if common_item_value_type is None:
+ return ValueType.UNKNOWN
+ return ValueType[common_item_value_type.name + "_LIST"]
+
+ raise ValueError(
+ f"Value with native type {type_name} "
+ f"cannot be converted into Feast value type"
+ )
def python_values_to_feast_value_type(
| diff --git a/sdk/python/tests/integration/registration/test_inference.py b/sdk/python/tests/integration/registration/test_inference.py
--- a/sdk/python/tests/integration/registration/test_inference.py
+++ b/sdk/python/tests/integration/registration/test_inference.py
@@ -3,7 +3,7 @@
from feast import Entity, Feature, RepoConfig, ValueType
from feast.data_source import RequestDataSource
-from feast.errors import RegistryInferenceFailure
+from feast.errors import RegistryInferenceFailure, SpecifiedFeaturesNotPresentError
from feast.feature_view import FeatureView
from feast.inference import (
update_data_sources_with_inferred_event_timestamp_col,
@@ -86,7 +86,7 @@ def test_update_data_sources_with_inferred_event_timestamp_col(simple_dataset_1)
)
-def test_modify_feature_views_success():
+def test_on_demand_features_type_inference():
# Create Feature Views
date_request = RequestDataSource(
name="date_request", schema={"some_date": ValueType.UNIX_TIMESTAMP}
@@ -94,11 +94,46 @@ def test_modify_feature_views_success():
@on_demand_feature_view(
inputs={"date_request": date_request},
- features=[Feature("output", ValueType.UNIX_TIMESTAMP)],
+ features=[
+ Feature("output", ValueType.UNIX_TIMESTAMP),
+ Feature("string_output", ValueType.STRING),
+ ],
)
def test_view(features_df: pd.DataFrame) -> pd.DataFrame:
data = pd.DataFrame()
data["output"] = features_df["some_date"]
+ data["string_output"] = features_df["some_date"].astype(pd.StringDtype())
return data
test_view.infer_features()
+
+ @on_demand_feature_view(
+ inputs={"date_request": date_request},
+ features=[
+ Feature("output", ValueType.UNIX_TIMESTAMP),
+ Feature("object_output", ValueType.STRING),
+ ],
+ )
+ def invalid_test_view(features_df: pd.DataFrame) -> pd.DataFrame:
+ data = pd.DataFrame()
+ data["output"] = features_df["some_date"]
+ data["object_output"] = features_df["some_date"].astype(str)
+ return data
+
+ with pytest.raises(ValueError, match="Value with native type object"):
+ invalid_test_view.infer_features()
+
+ @on_demand_feature_view(
+ inputs={"date_request": date_request},
+ features=[
+ Feature("output", ValueType.UNIX_TIMESTAMP),
+ Feature("missing", ValueType.STRING),
+ ],
+ )
+ def test_view_with_missing_feature(features_df: pd.DataFrame) -> pd.DataFrame:
+ data = pd.DataFrame()
+ data["output"] = features_df["some_date"]
+ return data
+
+ with pytest.raises(SpecifiedFeaturesNotPresentError):
+ test_view_with_missing_feature.infer_features()
| On demand feature view not supporting objects
## Expected Behavior
```
@on_demand_feature_view(
inputs={
'driver_hourly_stats': driver_hourly_stats_view
},
features=[
Feature(name='conv_rate_plus_val1', dtype=ValueType.DOUBLE),
Feature(name='conv_rate_plus_val2', dtype=ValueType.DOUBLE),
Feature(name='conv_rate_plus_val3', dtype=ValueType.STRING)
]
)
def transformed_conv_rate(inputs: pd.DataFrame) -> pd.DataFrame:
df = pd.DataFrame()
df['conv_rate_plus_val1'] = (inputs['conv_rate'] + 1)
df['conv_rate_plus_val2'] = (inputs['conv_rate'] + 1)
df['conv_rate_plus_val3'] = str(inputs['conv_rate'])
return df
```
Should process fine when applying.
## Current Behavior
```
Traceback (most recent call last):
File "/Users/user/anaconda3/envs/feast/bin/feast", line 8, in <module>
sys.exit(cli())
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/feast/cli.py", line 390, in apply_total_command
apply_total(repo_config, repo, skip_source_validation)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/feast/repo_operations.py", line 260, in apply_total
diff = store.apply(all_to_apply, objects_to_delete=all_to_delete, partial=False)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/feast/usage.py", line 280, in wrapper
raise exc.with_traceback(traceback)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/feast/feature_store.py", line 579, in apply
odfv.infer_features()
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/feast/on_demand_feature_view.py", line 226, in infer_features
name=f, dtype=python_type_to_feast_value_type(f, type_name=str(dt))
File "/Users/user/anaconda3/envs/feast/lib/python3.10/site-packages/feast/type_map.py", line 162, in python_type_to_feast_value_type
assert value
AssertionError
```
## Steps to reproduce
Get the on demand feature view demo from: https://github.com/feast-dev/feast-demo/blob/main/feature_repo/features.py
Add the 2 lines as noted in the snippet above for the new STRING feature: `conv_rate_plus_val3`
### Specifications
- Version: 0.17.0
- Platform: Mac
- Subsystem: 12.0.1
## Possible Solution
I might be doing something wrong or expecting something which shouldn't be supported. Would love to hear feedback.
In case this turns out to be a bug, this might be a possible solution --
The type map defined at:
https://github.com/feast-dev/feast/blob/9dc9e60aa6a5d6a85f012307f1910f5233a251c6/sdk/python/feast/type_map.py#L94
Should possibly include an entry for the pandas `object` dtype?
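For reference, a quick way to see why the inference trips up here (assuming pandas >= 1.0): `str(...)` and `.astype(str)` produce numpy `object`-backed columns, while only the pandas extension dtype reports itself as `string`:
```python
import pandas as pd

s = pd.Series([0.5, 0.7])
print(s.astype(str).dtype)               # object -> not in the type map, inference fails
print(s.astype(pd.StringDtype()).dtype)  # string -> what a STRING feature needs to map to
```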
| Hi @roy651. Thanks for reporting this. I understand that exception information is very confusing. **We should add some details to the assertion or better raise proper exceptions.**
That being said, this line is not correct
```
df['conv_rate_plus_val3'] = str(inputs['conv_rate'])
```
If you need to convert a column to string, the correct way would be:
```
df['conv_rate_plus_val3'] = inputs['conv_rate'].astype(str)
```
Otherwise, you create a column with dtype object, which is intentionally not supported by Feast.
Thanks @pyalex for the guidance to use `astype` instead of the plain casting; however, in practice it yields the same result.
Using this code:
```
@on_demand_feature_view(
inputs={
'driver_hourly_stats': driver_hourly_stats_view
},
features=[
Feature(name='conv_rate_plus_val1', dtype=ValueType.DOUBLE),
Feature(name='conv_rate_plus_val2', dtype=ValueType.DOUBLE),
Feature(name='conv_rate_plus_val3', dtype=ValueType.STRING)
]
)
def transformed_conv_rate(inputs: pd.DataFrame) -> pd.DataFrame:
df = pd.DataFrame()
df['conv_rate_plus_val1'] = (inputs['conv_rate'] + 1)
df['conv_rate_plus_val2'] = (inputs['conv_rate'] + 1)
df['conv_rate_plus_val3'] = inputs['conv_rate'].astype("string")
print(f"###df.dtypes:\n{df.dtypes}")
return df
```
Crashes similarly with:
```
###df.dtypes:
conv_rate_plus_val1 float64
conv_rate_plus_val2 float64
conv_rate_plus_val3 object
dtype: object
>>>type_name:object
Traceback (most recent call last):
File "/Users/user/anaconda3/envs/feast_wo/bin/feast", line 8, in <module>
sys.exit(cli())
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/feast/cli.py", line 390, in apply_total_command
apply_total(repo_config, repo, skip_source_validation)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/feast/repo_operations.py", line 260, in apply_total
diff = store.apply(all_to_apply, objects_to_delete=all_to_delete, partial=False)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/feast/usage.py", line 280, in wrapper
raise exc.with_traceback(traceback)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/feast/feature_store.py", line 579, in apply
odfv.infer_features()
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/feast/on_demand_feature_view.py", line 226, in infer_features
name=f, dtype=python_type_to_feast_value_type(f, type_name=str(dt))
File "/Users/user/anaconda3/envs/feast_wo/lib/python3.10/site-packages/feast/type_map.py", line 163, in python_type_to_feast_value_type
assert value
AssertionError
```
Note that I've added a couple of debug prints, the last of which comes from within `type_map.py`.
A quick search reveals that Pandas relies upon Numpy for typing and, in short, because Numpy treats strings as arrays [they become objects](https://stackoverflow.com/a/21020411/3767429). There is an alternative approach, starting from Pandas 1.0.0, but it supports the use of `string` type and not `str` as explained [in the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/text.html).
I saw that the `type_map.py` already has a newer version (than 0.17.0) with a different flow for the type check but I believe it will fail in a similar manner as the type map still doesn't contain `string` or `obj`.
@pyalex Just to clarify the point - This is obviously just an example, but as it stands, I couldn't get any strings to work in the transformed ODFV.
Hence the (previous) title which I gave the issue.
@roy651, you're right. This is my mistake. It will always be dtype `object` unless we use the special `pandas.StringDtype`. But we are currently not converting either, because even `pandas.StringDtype` will have the name `string`, which is not in [the list](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/type_map.py#L97)
I will increase priority and return the bug label. We're gonna fix it ASAP. | 2022-01-20T11:03:59 |
feast-dev/feast | 2,240 | feast-dev__feast-2240 | [
"2150"
] | 396f7294425169d69445d31863c5d34d6f71389f | diff --git a/sdk/python/feast/infra/online_stores/redis.py b/sdk/python/feast/infra/online_stores/redis.py
--- a/sdk/python/feast/infra/online_stores/redis.py
+++ b/sdk/python/feast/infra/online_stores/redis.py
@@ -72,11 +72,11 @@ class RedisOnlineStoreConfig(FeastConfigBaseModel):
class RedisOnlineStore(OnlineStore):
_client: Optional[Union[Redis, RedisCluster]] = None
- def delete_table_values(self, config: RepoConfig, table: FeatureView):
+ def delete_entity_values(self, config: RepoConfig, join_keys: List[str]):
client = self._get_client(config.online_store)
deleted_count = 0
pipeline = client.pipeline()
- prefix = _redis_key_prefix(table.entities)
+ prefix = _redis_key_prefix(join_keys)
for _k in client.scan_iter(
b"".join([prefix, b"*", config.project.encode("utf8")])
@@ -85,7 +85,7 @@ def delete_table_values(self, config: RepoConfig, table: FeatureView):
deleted_count += 1
pipeline.execute()
- logger.debug(f"Deleted {deleted_count} keys for {table.name}")
+ logger.debug(f"Deleted {deleted_count} rows for entity {', '.join(join_keys)}")
@log_exceptions_and_usage(online_store="redis")
def update(
@@ -98,10 +98,16 @@ def update(
partial: bool,
):
"""
- We delete the keys in redis for tables/views being removed.
+ Look for join_keys (list of entities) that are not in use anymore
+ (usually this happens when the last feature view that was using specific compound key is deleted)
+ and remove all features attached to this "join_keys".
"""
- for table in tables_to_delete:
- self.delete_table_values(config, table)
+ join_keys_to_keep = set(tuple(table.entities) for table in tables_to_keep)
+
+ join_keys_to_delete = set(tuple(table.entities) for table in tables_to_delete)
+
+ for join_keys in join_keys_to_delete - join_keys_to_keep:
+ self.delete_entity_values(config, list(join_keys))
def teardown(
self,
@@ -112,8 +118,10 @@ def teardown(
"""
We delete the keys in redis for tables/views being removed.
"""
- for table in tables:
- self.delete_table_values(config, table)
+ join_keys_to_delete = set(tuple(table.entities) for table in tables)
+
+ for join_keys in join_keys_to_delete:
+ self.delete_entity_values(config, list(join_keys))
@staticmethod
def _parse_connection_string(connection_string: str):
| diff --git a/sdk/python/tests/integration/feature_repos/repo_configuration.py b/sdk/python/tests/integration/feature_repos/repo_configuration.py
--- a/sdk/python/tests/integration/feature_repos/repo_configuration.py
+++ b/sdk/python/tests/integration/feature_repos/repo_configuration.py
@@ -7,7 +7,7 @@
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from pathlib import Path
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, List, Optional, Union
import pandas as pd
import yaml
@@ -283,7 +283,9 @@ def construct_test_environment(
execution_role_name="arn:aws:iam::402087665549:role/lambda_execution_role",
)
- registry = f"s3://feast-integration-tests/registries/{project}/registry.db"
+ registry = (
+ f"s3://feast-integration-tests/registries/{project}/registry.db"
+ ) # type: Union[str, RegistryConfig]
else:
# Note: even if it's a local feature server, the repo config does not have this configured
feature_server = None
diff --git a/sdk/python/tests/integration/online_store/test_universal_online.py b/sdk/python/tests/integration/online_store/test_universal_online.py
--- a/sdk/python/tests/integration/online_store/test_universal_online.py
+++ b/sdk/python/tests/integration/online_store/test_universal_online.py
@@ -28,6 +28,7 @@
)
from tests.integration.feature_repos.universal.feature_views import (
create_driver_hourly_stats_feature_view,
+ driver_feature_view,
)
from tests.utils.data_source_utils import prep_file_source
@@ -503,6 +504,79 @@ def test_online_retrieval(environment, universal_data_sources, full_feature_name
)
[email protected]
[email protected]
+def test_online_store_cleanup(environment, universal_data_sources):
+ """
+ Some online store implementations (like Redis) keep features from different features views
+ but with common entities together.
+ This might end up with deletion of all features attached to the entity,
+ when only one feature view was deletion target (see https://github.com/feast-dev/feast/issues/2150).
+
+ Plan:
+ 1. Register two feature views with common entity "driver"
+ 2. Materialize data
+ 3. Check if features are available (via online retrieval)
+ 4. Delete one feature view
+ 5. Check that features for other are still available
+ 6. Delete another feature view (and create again)
+ 7. Verify that features for both feature view were deleted
+ """
+ fs = environment.feature_store
+ entities, datasets, data_sources = universal_data_sources
+ driver_stats_fv = construct_universal_feature_views(data_sources)["driver"]
+
+ df = pd.DataFrame(
+ {
+ "ts_1": [environment.end_date] * len(entities["driver"]),
+ "created_ts": [environment.end_date] * len(entities["driver"]),
+ "driver_id": entities["driver"],
+ "value": np.random.random(size=len(entities["driver"])),
+ }
+ )
+
+ ds = environment.data_source_creator.create_data_source(
+ df, destination_name="simple_driver_dataset"
+ )
+
+ simple_driver_fv = driver_feature_view(
+ data_source=ds, name="test_universal_online_simple_driver"
+ )
+
+ fs.apply([driver(), simple_driver_fv, driver_stats_fv])
+
+ fs.materialize(
+ environment.start_date - timedelta(days=1),
+ environment.end_date + timedelta(days=1),
+ )
+ expected_values = df.sort_values(by="driver_id")
+
+ features = [f"{simple_driver_fv.name}:value"]
+ entity_rows = [{"driver": driver_id} for driver_id in sorted(entities["driver"])]
+
+ online_features = fs.get_online_features(
+ features=features, entity_rows=entity_rows
+ ).to_dict()
+ assert np.allclose(expected_values["value"], online_features["value"])
+
+ fs.apply(
+ objects=[simple_driver_fv], objects_to_delete=[driver_stats_fv], partial=False
+ )
+
+ online_features = fs.get_online_features(
+ features=features, entity_rows=entity_rows
+ ).to_dict()
+ assert np.allclose(expected_values["value"], online_features["value"])
+
+ fs.apply(objects=[], objects_to_delete=[simple_driver_fv], partial=False)
+ fs.apply([simple_driver_fv])
+
+ online_features = fs.get_online_features(
+ features=features, entity_rows=entity_rows
+ ).to_dict()
+ assert all(v is None for v in online_features["value"])
+
+
def response_feature_name(feature: str, full_feature_names: bool) -> str:
if (
feature in {"current_balance", "avg_passenger_count", "lifetime_trip_count"}
| Redis can delete incorrect keys
I haven't tested this explicitly, but from the data model plus this function it looks as if, when two FeatureViews have the same Entities (but are different tables) and one view is deleted, the online keys for the other view will also be deleted... This seems like a flaw in either the Redis data model or the deletion mechanism.
https://github.com/feast-dev/feast/blob/ec4165396f70ab20b42246b093f777dfcc9f5277/sdk/python/feast/infra/online_stores/redis.py#L75-L88
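An illustrative stand-in (not Feast's actual `_redis_key_prefix` implementation) for why this looks dangerous: if the prefix is derived only from the entity join keys, two feature views that share an entity produce identical prefixes, so a prefix scan-and-delete for one view also hits the other's rows.
```python
def redis_key_prefix(join_keys):
    # Stand-in: derive the prefix from entity join keys only, as the linked code appears to do.
    return ",".join(sorted(join_keys)).encode("utf8")


prefix_hourly_stats = redis_key_prefix(["driver_id"])  # e.g. a driver_hourly_stats view
prefix_daily_stats = redis_key_prefix(["driver_id"])   # e.g. a driver_daily_stats view
assert prefix_hourly_stats == prefix_daily_stats
# Deleting by this prefix when only one of the two views is removed would also
# wipe the online rows the other view still needs.
```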
| good catch - briefly looking at the code, I'm not even convinced that the key we're using to delete matches the key for insertion (e.g. we might be deleting nothing at all) | 2022-01-26T09:29:27 |
feast-dev/feast | 2,255 | feast-dev__feast-2255 | [
"2223"
] | 592af75e63766a380673b2dc8a879e70d2df5818 | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -132,7 +132,7 @@
+ AWS_REQUIRED
)
-DEV_REQUIRED = ["mypy-protobuf>=1.*", "grpcio-testing==1.*"] + CI_REQUIRED
+DEV_REQUIRED = ["mypy-protobuf>=3.1.0", "grpcio-testing==1.*"] + CI_REQUIRED
# Get git repo root directory
repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)
@@ -244,7 +244,7 @@ def run(self):
],
entry_points={"console_scripts": ["feast=feast.cli:cli"]},
use_scm_version=use_scm_version,
- setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==1.*", "sphinx!=4.0.0"],
+ setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==3.1.0", "sphinx!=4.0.0"],
package_data={
"": [
"protos/feast/**/*.proto",
| Optimize `_populate_result_rows_from_feature_view`
Signed-off-by: Judah Rand <[email protected]>
**What this PR does / why we need it**:
This commit optimizes the fetching of features by only fetching
the features for each unique Entity once and then expands the result
to the shape of input EntityKeys.
Previously, if an Entity occurred twice the features would be fetched
from the OnlineStore twice. This can be hugely inefficient.
The only assumption that this makes is that the OnlineStore will return
the feature data in the same order as the EntityKeyProtos are provided.
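A rough sketch of that dedupe-then-expand idea (not the PR's actual code; `fetch` stands in for the online-store read and relies on the same ordering assumption):
```python
from typing import Any, Dict, List, Tuple


def dedupe_and_expand(entity_rows: List[Dict[str, Any]], join_keys: List[str], fetch):
    keys: List[Tuple[Any, ...]] = [tuple(row[k] for k in join_keys) for row in entity_rows]
    unique_keys = list(dict.fromkeys(keys))               # preserves order, drops duplicates
    fetched = dict(zip(unique_keys, fetch(unique_keys)))  # one online-store read per unique key
    return [fetched[k] for k in keys]                     # expand back to one result per input row


# Three rows for the same user now trigger a single lookup for user-keyed features.
rows = [{"user_id": 1, "item_id": 214}, {"user_id": 1, "item_id": 2354}, {"user_id": 1, "item_id": 4736}]
print(dedupe_and_expand(rows, ["user_id"], fetch=lambda ks: [{"age": 42} for _ in ks]))
```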
**Which issue(s) this PR fixes**:
Fixes #
**Does this PR introduce a user-facing change?**:
```release-note
Speed up `get_online_features` when duplicate Entities are present.
```
|
What's the use case for having duplicate entities? Why not e.g. dedup before sending?
It seems like we are adding a lot of complexity and a really unnecessary optimization for a use case that should be solved on the client side.
> Whats the usecase for having duplicate entities? Why not e.g. dedup before sending?
These would be example entity rows: `[{"user_id": 1, "item_id": 214}, {"user_id": 1, "item_id": 2354}, {"user_id": 1, "item_id": 4736}]`
The reason for not deduping before the request is that I'm requesting features that have `user_id` as an entity, features that have `item_id` as an entity, and also features that have `user_id, item_id` as an entity. For example feature refs that look like: `user:age, item:age, user_x_item:times_viewed`
In this case the features for the `user_id` are processed for every single row even though the `user:age` will be the same for every row. You also can't dedupe on the client side because `user_x_item:times_viewed` requires both join keys. This causes real issues in this case as the number of rows grows. You end up fetching large amounts of redundant data from the OnlineStore and processing it multiple times.
This change means that only the unique entities for each FeatureView are actually retrieved. For very large numbers of rows (~9k in my test case) this can reduce the time spent interfacing with RedisOnlineStore by almost half from about 600ms down to 300ms because for one of the two FeatureViews I'm retrieving features from only a single Entity is needed rather than 9k.
Does that make sense?
> Seems like we adding a lot of complexity and really unnecessary optimization for the use case that should be solved on client side.
Absolutely disagree. See example in reply above. There are cases where doing that would require multiple calls to Feast which would somewhat defeat the purpose.
To me this is an obvious optimization and the current implementation is actually rather naive.
I think the issue here is probably one of terminology rather than an actual disagreement, however? The terminology perhaps being the distinction between an `entity_row`, an `Entity`, and an `EntityKey`.
I suppose the case I'm handling here is actually duplicate `EntityKey`s rather than duplicate `Entity`. Where an `EntityKey` is specific to a given `FeatureView`.
There is still an additional optimization that could be done here. Imagine I'm getting features from 2 `FeatureViews` which have the same `EntityKey`. At the moment the `EntityKeyProto`s would be created twice.
> There are cases where doing that would require multiple calls to Feast which would somewhat defeat the purpose.
But is it so bad thing (I mean multiple calls)? It will be converted to multiple calls to the backend in any case.
Also, the user knows much better their data model and all additional logic proposed in this PR can be expressed on the user side simpler:
```
user_features = fs.get_online_features(user_features,
entity_rows=[{"user_id": user_id} for user_id in set(entity_df.user_ids)])
item_features = fs.get_online_features(item_features,
entity_rows=[{"item_id": item_id} for user_id in set(entity_df.item_ids)])
entity_df.join(user_feature.to_df(), ...).join(item_features.to_df(), ...)
```
I'm trying to say that we cannot push all optimizations for all possible use cases to the Feast side. The code in `FeatureStore.get_online_features` is already quite complicated, pretty hard to read, and just partially tested (and with each optimization test coverage only decreases).
IMO, we should move towards simplifying logic on `FeatureStore` level and split it between online store side (actual implementation) and just leave the rest to the user to optimize (maybe we can gather some cookbook with examples like I showed above).
*Complain MODE ON*
Regarding already existing complexity, let me give a few examples:
1. Redis implementation currently supports retrieving all features by entity, but on `FeatureStore` level, we force it to retrieve each feature view separately. We put a lot of effort and code to split those feature references by view and copy entities. `FeatureStore` class has too much responsibility, instead of just leaving optimizations to the specific implementation.
2. another example:
>There is still an additional optimization that could be done here. Imagine I'm getting features from 2 FeatureViews which have the same EntityKey. At the moment the EntityKeyProtos would be created twice.
EntityKeyProto is redundant and not even used as a key anymore, because proto serialization doesn't guarantee stability. And each implementation converts this proto into its own key. So we do a lot of not very useful job by massaging those input from user, packing it into different protos and then throw it away mostly.
*Complain MODE OFF*
Thanks for listening :)
> > There are cases where doing that would require multiple calls to Feast which would somewhat defeat the purpose.
>
> But is it so bad thing (I mean multiple calls)? It will be converted to multiple calls to the backend in any case.
I'd say yes. Feast is supposed to make the serving logic simpler not require the user to know what Feast is going to do internally and optimize around that. It is in no way intuitive that if I provide `[{"user_id" 1, "item_id: 214}, {"user_id" 1, "item_id: 2354}, {"user_id" 1, "item_id: 4736}]` as `entity_rows` that the features for the user will be retrieved and processed 3 times.
> Also, the user knows much better their data model
The data model is Feast's data model - Feast chooses how to store the data in the OnlineStore. Feast should be able to query its own data model efficiently, surely? The whole point of Feast / a FeatureStore is for the user to be able to query any features in the store for any set of given entities. Feast should be able to do this efficiently. The user should not have to understand the internal workings of the SDK! It is not intuitive that Feast will behave naively to duplicates. And in fact from the user's perspective they are not duplicate - a `user_id, item_id` set is unique!
> and all additional logic proposed in this PR can be expressed on the user side simpler:
>
> ```
> user_features = fs.get_online_features(user_features,
>     entity_rows=[{"user_id": user_id} for user_id in set(entity_df.user_ids)])
> item_features = fs.get_online_features(item_features,
>     entity_rows=[{"item_id": item_id} for item_id in set(entity_df.item_ids)])
>
> entity_df.join(user_features.to_df(), ...).join(item_features.to_df(), ...)
> ```
You've neglected the `user_x_item` features in this example so it would actually become:
```
user_features = fs.get_online_features(user_features,
    entity_rows=[{"user_id": user_id} for user_id in set(entity_df.user_ids)])
item_features = fs.get_online_features(item_features,
    entity_rows=[{"item_id": item_id} for item_id in set(entity_df.item_ids)])
user_x_item_features = fs.get_online_features(user_x_item_features,
    entity_rows=entity_df[['user_id', 'item_id']].drop_duplicates().to_dict(orient='records'))
entity_df.join(user_features.to_df(), ...).join(item_features.to_df(), ...).join(user_x_item_features.to_df(), ...)
```
Furthermore, without Pandas this becomes much more cumbersome and it is (in my experience) a bad idea to put Pandas in a latency sensitive user request path.
> I'm trying to say that we cannot push all optimizations for all possible use cases to the Feast side. The code in `FeatureStore.get_online_features` is already quite complicated, pretty hard to read, and just partially tested (and with each optimization test coverage only decreases).
I don't think this is 'an edge case'. This would be incredibly common for any recommender system.
> IMO, we should move towards simplifying logic on `FeatureStore` level and split it between online store side (actual implementation) and just leave the rest to the user to optimize (maybe we can gather some cookbook with examples like I showed above).
If you're going to go this far then why not limit the user to only querying one FeatureView at a time? Or better yet they can just directly query the OnlineStore? The user can use and understands Redis or Datastore, right?
> _Complain MODE ON_
>
> Regarding already existing complexity, let me give a few examples:
>
> 1. Redis implementation currently supports retrieving all features by entity, but on `FeatureStore` level, we force it to retrieve each feature view separately. We put a lot of effort and code to split those feature references by view and copy entities. `FeatureStore` class has too much responsibility, instead of just leaving optimizations to the specific implementation.
There are pros and cons here aren't there? It was my impression that one of the core features of Feast is the ease of implementing new Online and Offline Stores? If this is considered a core feature then the logic of these should be simple - so as to encourage the community to develop them? Therefore, I believe that they should not be expected to deduplicate. Whether or not they fetch by FeatureView or Entity is a more interesting question but isn't where we are right now.
I do, however, agree that the limitation of one call per FeatureView is problematic. But I don't think this is an argument against avoiding requests for the same data when we don't have to.
> 2. another example:
>
> > There is still an additional optimization that could be done here. Imagine I'm getting features from 2 FeatureViews which have the same EntityKey. At the moment the EntityKeyProtos would be created twice.
>
> EntityKeyProto is redundant and not even used as a key anymore, because proto serialization doesn't guarantee stability. And each implementation converts this proto into its own key. So we do a lot of not very useful work massaging the user's input, packing it into different protos, and then mostly throwing it away.
Okay, then that optimization doesn't need to be done. It can just naturally fall out when/if `EntityKeyProto` is removed.
>
> _Complain MODE OFF_
>
> Thanks for listening :)
I honestly don't think that the logic implemented in this PR is actually very complex:
1. Work out at which indexes each Entity occurs
2. Get each Entity once (rather than as many times as they occur)
3. Update the result with the correct data at each index
As long as the OnlineStore always returns the Feature data in the same order it is requested then this is really a very simple algorithm. Do the work once and copy it.
It's 9 lines of additional code.
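For illustration, a minimal standalone sketch of those three steps (hypothetical names, not the actual Feast code):

```python
from collections import defaultdict

def read_deduplicated(entity_keys, fetch_batch):
    """fetch_batch(unique_keys) must return one feature row per unique key, in order."""
    indexes = defaultdict(list)
    for idx, key in enumerate(entity_keys):   # 1. note every index at which each entity occurs
        indexes[key].append(idx)
    unique_keys = list(indexes)
    rows = fetch_batch(unique_keys)           # 2. hit the online store once per unique entity
    result = [None] * len(entity_keys)
    for key, row in zip(unique_keys, rows):   # 3. copy each row into all of its destination indexes
        for idx in indexes[key]:
            result[idx] = row
    return result

# e.g. read_deduplicated(["u1", "u1", "u2"], lambda keys: [{"key": k} for k in keys])
# returns three rows while only two are fetched.
```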
@judahrand, I see your point that Feast should make lives easier, and ideally, users should not think about internal implementation details. Totally agree. But I would argue that it's two different things: (a) provide comprehensive functionality, so
> the user to be able to query any features in the store for any set of given entities
in a reasonable time (and that time should be expected to grow with the complexity of the request), and (b) do some "magic" to run retrieval efficiently for 9K rows with features attached to different entities (with duplicated values in some keys) in the same request. And in the latter case, I'd say yes, it's fine to know some implementation details and do this optimization in user code (as I proposed above).
And I would still consider this an edge case, since most of our users, even with RecSys (and I worked with recommendation systems at Gojek a lot as well), don't retrieve more than a few hundred rows. Alternatively, rows could be clustered and features stored as arrays per cluster, hence many fewer rows to retrieve (this is what we did at Gojek).
That being said, I could be wrong and this optimization is indeed critical.
But my concern about code complexity still remains. The problem is that the proposed implementation is mixed in with other optimizations and the main retrieval (business) logic. Since there are no tests or documentation, the code becomes even harder to read & maintain. Moreover, after one refactoring this optimization could be gone, simply because there is no guarding test that would prevent it.
So my proposal is to rewrite it in a modular way, not coupling it with an already existing flow. If it is possible to implement on the user side, it should be possible to put the whole optimization logic into a single method and write a unit test for it.
Just an illustration for this idea (not necessarily best code):
```
def get_online_features(...):
    entities_groups = group_features_by_entities(features_ref)
    if len(entities_groups) > 1:
        return _get_online_features_with_split(entities_groups, entity_rows)

    ... usual flow _get_online_features() ...

def _get_online_features_with_split(entities_groups, entity_rows):
    all_responses = []
    for entities, features_refs in entities_groups:
        entity_rows_projected = project(entity_rows, entities)
        entity_rows_projected = set(entity_rows_projected)  # deduplication

        r = _get_online_features(features_refs, entity_rows_projected)  # use standard implementation
        all_responses.append(r)

    ... merge responses (without pandas) ...
```
`_get_online_features_with_split` in this example could be tested with mocks. Maybe, there's an even better way. And `_get_online_features` not changed.
> do some "magic" to run retrieval efficiently for 9K rows with features attached to different entities (with duplicated values in some keys) in the same request.
It really isn't magic. It's a really simple algorithm. It also doesn't only apply to the case of 9K rows, it is also more efficient with 2 rows.
> Moreover, after one refactoring this optimization could be gone simply because there is no guarding test that would prevent it.
This is a fair comment, perhaps I need to add some tests which test for the goal of this optimization. This should be completely doable using `unittest.mock.patch`.
> ```
> def get_online_features(...):
>     entities_groups = group_features_by_entities(features_ref)
>     if len(entities_groups) > 1:
>         return _get_online_features_with_split(entities_groups, entity_rows)
>
>     ... usual flow _get_online_features() ...
>
> def _get_online_features_with_split(entities_groups, entity_rows):
>     all_responses = []
>     for entities, features_refs in entities_groups:
>         entity_rows_projected = project(entity_rows, entities)
>         entity_rows_projected = set(entity_rows_projected)  # deduplication
>
>         r = _get_online_features(features_refs, entity_rows_projected)  # use standard implementation
>         all_responses.append(r)
>
>     ... merge responses (without pandas) ...
> ```
>
> `_get_online_features_with_split` in this example could be tested with mocks. Maybe, there's an even better way. And `_get_online_features` not changed.
Unfortunately, this approach cannot work with the way that `_get_online_features` works. The problem is not duplicate rows provided to `get_online_features`. The problem is duplicate subsets of the columns in the rows which correspond to the join keys of a subset of the Feature Views required.
I once again draw your attention to `[{"user_id": 1, "item_id": 214}, {"user_id": 1, "item_id": 2354}, {"user_id": 1, "item_id": 4736}]`. There are no duplicate rows. However, when features from a Feature View which has only `User` as an Entity are requested, there are duplicate users. Your code snippet has no way of determining this at all.
This information can currently only be gleaned inside `_get_online_features`. Furthermore, you still end up producing multiple OnlineResponse objects and calling `to_dict` multiple times. This has a number of issues: a) it cannot be used with the FeatureServer and b) it will be slow.
I'm all for making this more modular, tested, etc. I think the modularity will be an issue given how the rest of `get_online_features` is implemented. However, I think that adding a regression test is perfectly doable and reasonable. It should deal with your concern that this optimization could be refactored out.
Another option would be to completely overhaul how `get_online_features` works to identify all subsets of entities for all requested Feature Views earlier. This could then decouple the retrieval from the OnlineStore from the population of results rows and may make the logic less coupled and more clear.
> rows could be clustered and features can be stored as arrays in clusters, hence much fewer rows to retrieve
I'm not sure I entirely understand what you're getting at here, do you mind if I reach out on Slack to learn more as I'm not sure this is directly relevant to this PR?
> I once again draw your attention to [{"user_id": 1, "item_id": 214}, {"user_id": 1, "item_id": 2354}, {"user_id": 1, "item_id": 4736}]. There are no duplicate rows. However, when features from a Feature View which has only User as an Entity are requested there are duplicate users. Your code snippet has no way of determining this at all.
I got this. That's why there's projection before deduplication. Please pay attention to these lines:
```
for entities, features_refs in entities_groups:
    entity_rows_projected = project(entity_rows, entities)  # here we leave only {"user_id": 1},
                                                            # because at this point we know that
                                                            # user features need only this entity column
    entity_rows_projected = set(entity_rows_projected)  # deduplication
```
Yes, we would need to group feature_refs, so we will need to have another util function:
```
def group_features_by_entities(features_ref) -> List[Tuple[<list of join keys>, <list of related feature refs>]]
```
but that would be trivial to implement (plus we are already inside `FeatureStore` and have access to the registry)
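For illustration only, a rough sketch of such a helper (hypothetical code; it assumes feature refs of the form `view_name:feature_name` and that each view's join keys can be looked up, e.g. from the registry):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def group_features_by_entities(
    feature_refs: List[str],
    join_keys_by_view: Dict[str, Tuple[str, ...]],  # e.g. {"user_features": ("user_id",)}
) -> List[Tuple[Tuple[str, ...], List[str]]]:
    groups: Dict[Tuple[str, ...], List[str]] = defaultdict(list)
    for ref in feature_refs:                 # refs look like "view_name:feature_name"
        view_name = ref.split(":")[0]
        groups[join_keys_by_view[view_name]].append(ref)
    return list(groups.items())
```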
> Another option would be to completely overhaul how get_online_features works to identify all subsets of entities for all requested Feature Views earlier. This could then decouple the retrieval from the OnlineStore from the population of results rows and may make the logic less coupled and more clear.
Yeah, that seems similar to what I proposed.
> This information can currently only be gleaned inside _get_online_features. Furthermore, you still end up producing multiple OnlineResponse objects and call to_dict multiple times. This has a number of issues: a) it cannot be used with the FeatureServer b) it will be slow.
Yeah, this is a reasonable concern. And it's not the only one. The whole `_get_online_features` is a mess, and it definitely should not return `OnlineResponse`. But we may leave that out of this PR and fix this inefficiency later. I'd put modularity first, and then refactoring would be much easier.
> I'm not sure I entirely understand what you're getting at here, do you mind if I reach out on Slack to learn more as I'm not sure this is directly relevant to this PR?
Sure, this is not relevant to this PR.
> I got this. That's why there's projection before deduplication. Please pay attention to these lines:
Whoops! Yes, sorry, I did miss this.
> I'd put modularity first and then refactoring would be much easier.
I've done some refactoring to at least disentangle some of this functionality now. The only thing I think could perhaps be done to further improve this PR without completely re-writing `get_online_features`, which I think is out of scope here, is to add a test to verify that this does actually reduce the amount of data retrieved from the `OnlineStore`.
Would this be satisfactory? @pyalex
This is great, @judahrand. Just one request: can you please add a unit test for the `_get_unique_entities` method? Its signature is quite complex; a test would help us understand what exactly it does.
@pyalex I've added a simple unit test, and just as well too! I hadn't realized that `itertools.groupby` expects your `Iterator` to be pre-sorted. It wasn't breaking anything as such... it was just only grouping contiguous entities 🤦 I'd been playing with cases that were too simple! Easily fixed, however.
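For reference, a tiny example of the `itertools.groupby` pitfall described here - it only groups contiguous runs unless the input is sorted first:

```python
from itertools import groupby

data = ["a", "b", "a"]
print([(k, len(list(g))) for k, g in groupby(data)])
# [('a', 1), ('b', 1), ('a', 1)]  -- 'a' is split because groupby only groups adjacent items

print([(k, len(list(g))) for k, g in groupby(sorted(data))])
# [('a', 2), ('b', 1)]
```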
Thanks for all the feedback and working towards a maintainable solution with me.
/lgtm
[APPROVALNOTIFIER] This PR is **APPROVED**
Approval requirements bypassed by manually added approval.
This pull-request has been approved by: *[judahrand](https://github.com/feast-dev/feast/pull/2223#)* (author self-approved), *[pyalex](https://github.com/feast-dev/feast/pull/2223#pullrequestreview-863187911)* (approved)
| 2022-01-28T10:33:33 |
|
feast-dev/feast | 2,275 | feast-dev__feast-2275 | [
"2235"
] | fc37896f103fb31bba1180f0fb5d7f3872d5ebe5 | diff --git a/sdk/python/feast/constants.py b/sdk/python/feast/constants.py
--- a/sdk/python/feast/constants.py
+++ b/sdk/python/feast/constants.py
@@ -29,6 +29,9 @@
# Environment variable for toggling usage
FEAST_USAGE = "FEAST_USAGE"
+# Default value for FEAST_USAGE when environment variable is not set
+DEFAULT_FEAST_USAGE_VALUE = "True"
+
# Environment variable for the path for overwriting universal test configs
FULL_REPO_CONFIGS_MODULE_ENV_NAME: str = "FULL_REPO_CONFIGS_MODULE"
diff --git a/sdk/python/feast/usage.py b/sdk/python/feast/usage.py
--- a/sdk/python/feast/usage.py
+++ b/sdk/python/feast/usage.py
@@ -29,7 +29,7 @@
import requests
-from feast.constants import FEAST_USAGE
+from feast.constants import DEFAULT_FEAST_USAGE_VALUE, FEAST_USAGE
from feast.version import get_version
USAGE_ENDPOINT = "https://usage.feast.dev"
@@ -37,7 +37,7 @@
_logger = logging.getLogger(__name__)
_executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
-_is_enabled = os.getenv(FEAST_USAGE, default="True") == "True"
+_is_enabled = os.getenv(FEAST_USAGE, default=DEFAULT_FEAST_USAGE_VALUE) == "True"
_constant_attributes = {
"session_id": str(uuid.uuid4()),
| FEAST_USAGE should default to "False"
**Is your feature request related to a problem? Please describe.**
I'm currently upgrading feast for our organization, making quite a big jump from 0.8 to 0.14. During testing I discovered the addition of the Telemetry/Usage class, which sends anonymized metrics/error reports to the Feast devs.
While it is mentioned in the logging, and in the release log if you read it carefully, I'm surprised/annoyed that the decision was made to make `FEAST_USAGE="True"` the default. For my organization, metrics reporting like this will result in a big NO from our security team, regardless of anonymization. While other organizations might not be so strict, I'm not so sure they would appreciate it either.
A feature store is a complex system to deploy, and to make sure we disable usage I have to set `FEAST_USAGE="False"` everywhere. While not impossible, it is really easy to miss one or two corner cases, and concerns such as these basically block us from using feast directly.
I understand the need for metrics/feedback to steer future feast development work, but IMHO this cannot be the way to that goal.
For now, I will resort to forking feast inside our organization and changing the default to False myself.
**Describe the solution you'd like**
Make sure the default `FEAST_USAGE="False"` is used
**Describe alternatives you've considered**
Mentioned above
| Hi @jbvaningen!
The tricky thing is that making `FEAST_USAGE="False"` the default will result in virtually nobody enabling telemetry, meaning it's actually more an argument for removing telemetry completely. We've also tried to make it super clear to users that we are tracking metrics (by printing a warning during first startup), and we do provide a way to disable it. Feast is also not very unique in tracking metrics to help us inform our development (other projects like Ray, dbt, dagster, etc, are also tracking anonymized metrics).
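(For reference, the flag is read when `feast` is first imported - see the `usage.py` diff above - so a minimal per-process opt-out sketch looks like this:)

```python
import os

os.environ["FEAST_USAGE"] = "False"  # must be set before anything from feast is imported

import feast  # telemetry is now disabled for this process
```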
What about if we provided a build of Feast with telemetry removed, something like `feast-no-telemetry` on Pypi?
Hi @woop, thanks for the prompt reply!
I like your suggestion, if a telemetry-free feast is available on pypi then we would definitely make use of it! | 2022-02-03T04:04:25 |
|
feast-dev/feast | 2,311 | feast-dev__feast-2311 | [
"2297"
] | a7d4cb71af97156905e4567e3ccf484f8255ca88 | diff --git a/sdk/python/feast/infra/online_stores/redis.py b/sdk/python/feast/infra/online_stores/redis.py
--- a/sdk/python/feast/infra/online_stores/redis.py
+++ b/sdk/python/feast/infra/online_stores/redis.py
@@ -41,7 +41,7 @@
try:
from redis import Redis
- from redis.cluster import RedisCluster
+ from redis.cluster import ClusterNode, RedisCluster
except ImportError as e:
from feast.errors import FeastExtrasDependencyImportError
@@ -160,7 +160,9 @@ def _get_client(self, online_store_config: RedisOnlineStoreConfig):
online_store_config.connection_string
)
if online_store_config.redis_type == RedisType.redis_cluster:
- kwargs["startup_nodes"] = startup_nodes
+ kwargs["startup_nodes"] = [
+ ClusterNode(**node) for node in startup_nodes
+ ]
self._client = RedisCluster(**kwargs)
else:
kwargs["host"] = startup_nodes[0]["host"]
| diff --git a/.github/workflows/pr_integration_tests.yml b/.github/workflows/pr_integration_tests.yml
--- a/.github/workflows/pr_integration_tests.yml
+++ b/.github/workflows/pr_integration_tests.yml
@@ -145,6 +145,10 @@ jobs:
run: pip install pip-tools
- name: Install dependencies
run: make install-python-ci-dependencies
+ - name: Setup Redis Cluster
+ run: |
+ docker pull vishnunair/docker-redis-cluster:latest
+ docker run -d -p 6001:6379 -p 6002:6380 -p 6003:6381 -p 6004:6382 -p 6005:6383 -p 6006:6384 --name redis-cluster vishnunair/docker-redis-cluster
- name: Test python
if: ${{ always() }} # this will guarantee that step won't be canceled and resources won't leak
env:
diff --git a/sdk/python/tests/conftest.py b/sdk/python/tests/conftest.py
--- a/sdk/python/tests/conftest.py
+++ b/sdk/python/tests/conftest.py
@@ -30,6 +30,7 @@
)
from tests.integration.feature_repos.repo_configuration import (
FULL_REPO_CONFIGS,
+ REDIS_CLUSTER_CONFIG,
REDIS_CONFIG,
Environment,
construct_test_environment,
@@ -170,10 +171,14 @@ def cleanup():
return e
[email protected]()
[email protected](
+ scope="session",
+ params=[REDIS_CONFIG, REDIS_CLUSTER_CONFIG],
+ ids=[str(c) for c in [REDIS_CONFIG, REDIS_CLUSTER_CONFIG]],
+)
def local_redis_environment(request, worker_id):
e = construct_test_environment(
- IntegrationTestRepoConfig(online_store=REDIS_CONFIG), worker_id=worker_id
+ IntegrationTestRepoConfig(online_store=request.param), worker_id=worker_id
)
def cleanup():
diff --git a/sdk/python/tests/integration/feature_repos/repo_configuration.py b/sdk/python/tests/integration/feature_repos/repo_configuration.py
--- a/sdk/python/tests/integration/feature_repos/repo_configuration.py
+++ b/sdk/python/tests/integration/feature_repos/repo_configuration.py
@@ -46,6 +46,12 @@
DYNAMO_CONFIG = {"type": "dynamodb", "region": "us-west-2"}
REDIS_CONFIG = {"type": "redis", "connection_string": "localhost:6379,db=0"}
+REDIS_CLUSTER_CONFIG = {
+ "type": "redis",
+ "redis_type": "redis_cluster",
+ # Redis Cluster Port Forwarding is setup in "pr_integration_tests.yaml" under "Setup Redis Cluster".
+ "connection_string": "127.0.0.1:6001,127.0.0.1:6002,127.0.0.1:6003",
+}
# FULL_REPO_CONFIGS contains the repo configurations (e.g. provider, offline store,
# online store, test data, and more parameters) that most integration tests will test
@@ -62,7 +68,9 @@
if os.getenv("FEAST_IS_LOCAL_TEST", "False") != "True":
DEFAULT_FULL_REPO_CONFIGS.extend(
[
+ # Redis configurations
IntegrationTestRepoConfig(online_store=REDIS_CONFIG),
+ IntegrationTestRepoConfig(online_store=REDIS_CLUSTER_CONFIG),
# GCP configurations
IntegrationTestRepoConfig(
provider="gcp",
| Materialization and serving fail when using Redis cluster for the online store
## Expected Behavior
Using a Redis cluster as the online storage should work without errors.
## Current Behavior
Using a Redis cluster as the online storage no longer works after updating from 0.17.0 to 0.18.0. The issue happens whenever Feast tries to connect to the Redis cluster - e.g. during `feast materialize` or during feature serving. The exact error that is raised looks like this:
```
Traceback (most recent call last):
File "/usr/local/bin/feast", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/feast/cli.py", line 444, in materialize_command
end_date=utils.make_tzaware(datetime.fromisoformat(end_ts)),
File "/usr/local/lib/python3.7/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/feast/feature_store.py", line 1087, in materialize
tqdm_builder=tqdm_builder,
File "/usr/local/lib/python3.7/site-packages/feast/infra/passthrough_provider.py", line 168, in materialize_single_feature_view
lambda x: pbar.update(x),
File "/usr/local/lib/python3.7/site-packages/feast/infra/passthrough_provider.py", line 86, in online_write_batch
self.online_store.online_write_batch(config, table, data, progress)
File "/usr/local/lib/python3.7/site-packages/feast/usage.py", line 280, in wrapper
raise exc.with_traceback(traceback)
File "/usr/local/lib/python3.7/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/feast/infra/online_stores/redis.py", line 184, in online_write_batch
client = self._get_client(online_store_config)
File "/usr/local/lib/python3.7/site-packages/feast/infra/online_stores/redis.py", line 164, in _get_client
self._client = RedisCluster(**kwargs)
File "/usr/local/lib/python3.7/site-packages/redis/cluster.py", line 517, in __init__
**kwargs,
File "/usr/local/lib/python3.7/site-packages/redis/cluster.py", line 1125, in __init__
self.populate_startup_nodes(startup_nodes)
File "/usr/local/lib/python3.7/site-packages/redis/cluster.py", line 1251, in populate_startup_nodes
self.startup_nodes[n.name] = n
AttributeError: 'dict' object has no attribute 'name'
```
## Steps to reproduce
- Install Feast version 0.18.0
- Set up a Redis cluster as the online store:
```yaml
online_store:
type: redis
redis_type: redis_cluster
connection_string: "host1:port1,host2:port2,host3:port3"
```
- Run `feast materialize`
### Specifications
- Version: 0.18.0
- Platform: Linux
## Possible Solution
Some investigation revealed that versions 0.17.0 and 0.18.0 differ in the sense that 0.18.0 uses `RedisCluster` from `redis.cluster` instead of `rediscluster`. As a result, the code [here](https://github.com/feast-dev/feast/blob/144f25c6dae6dbc4181ffa13fba1714c0baddab3/sdk/python/feast/infra/online_stores/redis.py#L164) breaks because the new `RedisCluster` instance expects to receive a list of `ClusterNode` instead of a list of dicts. Temporarily we worked around this issue by creating a custom online store (by copying all of the code in the linked file 😁 ) and changing it like so:
```diff
+ from redis.cluster import ClusterNode
...
- kwargs["startup_nodes"] = startup_nodes
+ kwargs["startup_nodes"] = [ClusterNode(**node) for node in startup_nodes]
self._client = RedisCluster(**kwargs)
```
This worked for us, but I'm not sure if this is the best approach in general because I'm not too familiar with the codebase yet.
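For reference, a minimal standalone sketch of the conversion the workaround performs (redis-py >= 4.1; hostnames and ports are placeholders):

```python
from redis.cluster import ClusterNode, RedisCluster

startup_nodes = [{"host": "host1", "port": 6379}, {"host": "host2", "port": 6380}]

# redis-py's RedisCluster now expects ClusterNode objects rather than plain dicts
client = RedisCluster(startup_nodes=[ClusterNode(**node) for node in startup_nodes])
```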
| hey @vstrimaitis thanks for reporting this issue and for the detailed investigation! we'll take a look at this ASAP | 2022-02-15T17:53:22 |
feast-dev/feast | 2,325 | feast-dev__feast-2325 | [
"2324"
] | 876986ff9e48cf4038b9435ff254ada8643a0a33 | diff --git a/sdk/python/feast/repo_operations.py b/sdk/python/feast/repo_operations.py
--- a/sdk/python/feast/repo_operations.py
+++ b/sdk/python/feast/repo_operations.py
@@ -209,8 +209,8 @@ def apply_total_with_repo_instance(
(
all_to_apply,
all_to_delete,
- views_to_delete,
views_to_keep,
+ views_to_delete,
) = extract_objects_for_apply_delete(project, registry, repo)
click.echo(registry_diff.to_string())
@@ -224,7 +224,7 @@ def apply_total_with_repo_instance(
def log_infra_changes(
- views_to_keep: List[FeatureView], views_to_delete: List[FeatureView]
+ views_to_keep: Set[FeatureView], views_to_delete: Set[FeatureView]
):
from colorama import Fore, Style
| Apply is incorrectly reporting tables to add as tables to delete
## Expected Behavior
```
feast apply
Deploying infrastructure for ...
```
## Current Behavior
```
feast apply
Removing infrastructure for ...
```
## Steps to reproduce
Add a feature view and run feast apply.
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
Luckily this looks like it is only a logging bug. The tuple is just unpacked in the wrong order: `extract_objects_for_apply_delete` returns keep then delete, but `apply_total_with_repo_instance` unpacks the variables in reverse.
```
(
all_to_apply,
all_to_delete,
views_to_delete,
views_to_keep,
) = extract_objects_for_apply_delete(project, registry, repo)
```
```
def extract_objects_for_apply_delete(project, registry, repo):
...
return (
all_to_apply,
all_to_delete,
set(
objs_to_add[FeastObjectType.FEATURE_VIEW].union(
objs_to_update[FeastObjectType.FEATURE_VIEW]
)
),
objs_to_delete[FeastObjectType.FEATURE_VIEW],
)
```
| 2022-02-18T18:54:34 |
||
feast-dev/feast | 2,348 | feast-dev__feast-2348 | [
"2346"
] | 236a108c87aed106e0a46e48172d31dc94ed9c2b | diff --git a/sdk/python/feast/infra/online_stores/dynamodb.py b/sdk/python/feast/infra/online_stores/dynamodb.py
--- a/sdk/python/feast/infra/online_stores/dynamodb.py
+++ b/sdk/python/feast/infra/online_stores/dynamodb.py
@@ -56,6 +56,10 @@ class DynamoDBOnlineStoreConfig(FeastConfigBaseModel):
class DynamoDBOnlineStore(OnlineStore):
"""
Online feature store for AWS DynamoDB.
+
+ Attributes:
+ _dynamodb_client: Boto3 DynamoDB client.
+ _dynamodb_resource: Boto3 DynamoDB resource.
"""
_dynamodb_client = None
@@ -71,6 +75,14 @@ def update(
entities_to_keep: Sequence[Entity],
partial: bool,
):
+ """
+ Update tables from the DynamoDB Online Store.
+
+ Args:
+ config: The RepoConfig for the current FeatureStore.
+ tables_to_delete: Tables to delete from the DynamoDB Online Store.
+ tables_to_keep: Tables to keep in the DynamoDB Online Store.
+ """
online_config = config.online_store
assert isinstance(online_config, DynamoDBOnlineStoreConfig)
dynamodb_client = self._get_dynamodb_client(online_config.region)
@@ -109,6 +121,13 @@ def teardown(
tables: Sequence[FeatureView],
entities: Sequence[Entity],
):
+ """
+ Delete tables from the DynamoDB Online Store.
+
+ Args:
+ config: The RepoConfig for the current FeatureStore.
+ tables: Tables to delete from the feature repo.
+ """
online_config = config.online_store
assert isinstance(online_config, DynamoDBOnlineStoreConfig)
dynamodb_resource = self._get_dynamodb_resource(online_config.region)
@@ -126,6 +145,21 @@ def online_write_batch(
],
progress: Optional[Callable[[int], Any]],
) -> None:
+ """
+ Write a batch of feature rows to online DynamoDB store.
+
+ Note: This method applies a ``batch_writer`` to automatically handle any unprocessed items
+ and resend them as needed, this is useful if you're loading a lot of data at a time.
+
+ Args:
+ config: The RepoConfig for the current FeatureStore.
+ table: Feast FeatureView.
+ data: a list of quadruplets containing Feature data. Each quadruplet contains an Entity Key,
+ a dict containing feature values, an event timestamp for the row, and
+ the created timestamp for the row if it exists.
+ progress: Optional function to be called once every mini-batch of rows is written to
+ the online store. Can be used to display progress.
+ """
online_config = config.online_store
assert isinstance(online_config, DynamoDBOnlineStoreConfig)
dynamodb_resource = self._get_dynamodb_resource(online_config.region)
@@ -155,6 +189,17 @@ def online_read(
entity_keys: List[EntityKeyProto],
requested_features: Optional[List[str]] = None,
) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:
+ """
+ Retrieve feature values from the online DynamoDB store.
+
+ Note: This method is currently not optimized to retrieve a lot of data at a time
+ as it does sequential gets from the DynamoDB table.
+
+ Args:
+ config: The RepoConfig for the current FeatureStore.
+ table: Feast FeatureView.
+ entity_keys: a list of entity keys that should be read from the FeatureStore.
+ """
online_config = config.online_store
assert isinstance(online_config, DynamoDBOnlineStoreConfig)
dynamodb_resource = self._get_dynamodb_resource(online_config.region)
| [Docs] Explain DynamoDB online_write_batch uses a batch_writer
## Expected Behavior
`DynamoDBOnlineStore` method `online_write_batch` uses a [BatchWriter](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/dynamodb.html#batch-writing) under the hood.
User documentation should explain the advantages of using it, including
> automatically handle buffering and sending items in batches. In addition, the batch writer will also automatically handle any unprocessed items and resend them as needed.
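For illustration, a minimal sketch of the `batch_writer` pattern the documentation could reference (the table name here is hypothetical):

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-west-2")
table = dynamodb.Table("my_project.my_feature_view")  # hypothetical table name

# batch_writer() buffers the put_item calls, flushes them in batches,
# and automatically retries any unprocessed items
with table.batch_writer() as batch:
    for i in range(1000):
        batch.put_item(Item={"entity_id": str(i), "event_ts": "2022-03-01T00:00:00"})
```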
| Thanks for raising this issue @TremaMiguel!
Are you suggesting we need to make a change to our docs to explain how BatchWriter is used?
Yes @woop, the idea is that the user is aware that he/she can load a lot of data at a time without worrying about it. It could read something like
```
Write a batch of feature rows to the online DynamoDB store. This method implements a ``BatchWriter`` to handle
loading a lot of data at a time. Including automatically handling any unprocessed items and resending them if necessary.
```
Other public methods [online_read](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/online_stores/dynamodb.py#L151), [update](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/online_stores/dynamodb.py#L65) etc could have similar documentation.
I'm open to write a PR documenting the public methods from [DynamoDBOnlineStore](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/online_stores/dynamodb.py#L56).
Sounds good. We'd love a PR. The docs are all in this repo. | 2022-03-01T00:15:50 |
|
feast-dev/feast | 2,356 | feast-dev__feast-2356 | [
"2292"
] | d202d5170b7e6bf1e1b0f103aac247bfc04c2760 | diff --git a/sdk/python/feast/infra/utils/snowflake_utils.py b/sdk/python/feast/infra/utils/snowflake_utils.py
--- a/sdk/python/feast/infra/utils/snowflake_utils.py
+++ b/sdk/python/feast/infra/utils/snowflake_utils.py
@@ -43,29 +43,27 @@ def get_snowflake_conn(config, autocommit=True) -> SnowflakeConnection:
if config.type == "snowflake.offline":
config_header = "connections.feast_offline_store"
- config = dict(config)
+ config_dict = dict(config)
# read config file
config_reader = configparser.ConfigParser()
- config_reader.read([config["config_path"]])
+ config_reader.read([config_dict["config_path"]])
if config_reader.has_section(config_header):
kwargs = dict(config_reader[config_header])
else:
kwargs = {}
- kwargs.update((k, v) for k, v in config.items() if v is not None)
+ kwargs.update((k, v) for k, v in config_dict.items() if v is not None)
+ [
+ kwargs.update({k: '"' + v + '"'})
+ for k, v in kwargs.items()
+ if k in ["role", "warehouse", "database", "schema_"]
+ ]
+ kwargs["schema"] = kwargs.pop("schema_")
try:
conn = snowflake.connector.connect(
- account=kwargs["account"],
- user=kwargs["user"],
- password=kwargs["password"],
- role=f'''"{kwargs['role']}"''',
- warehouse=f'''"{kwargs['warehouse']}"''',
- database=f'''"{kwargs['database']}"''',
- schema=f'''"{kwargs['schema_']}"''',
- application="feast",
- autocommit=autocommit,
+ application="feast", autocommit=autocommit, **kwargs
)
return conn
| Snowflake login support private key or web browser authentication
The current Snowflake support seems to only allow password authentication, but we are using Azure AD login without a password for the account.
Can we add functionality to allow different mechanisms of authentication?
| @realLyans thanks for this feature request! cc @sfc-gh-madkins who's been working on Snowflake - do you think this could be supported easily?
From what I can see the backend change is not very hard - we just need to change https://github.com/feast-dev/feast/blob/f2bc4119b44fd35ea6739118273cfa48aa243cbe/sdk/python/feast/infra/utils/snowflake_utils.py to add different methods of authentication; the main question is the design of the frontend, which I am not very familiar with.
by frontend do you mean what the ideal API is in defining the SnowflakeSource? I think it'd probably be a new field and then we enforce with some validation check in the constructor that the user has only one form of authentication specified.
@adchia meaning the CLI interface that sets up the repo, which doesn't really allow configuration there; I am also not too sure how I should enter a private key given that it is a long binary-typed variable. I don't think SnowflakeSource needs a change (I might be wrong).
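(For illustration, a rough sketch of the kind of connector options being discussed - `authenticator` and `private_key` are snowflake-connector-python options; how Feast should surface them is exactly the open question here, and the values below are hypothetical:)

```python
import snowflake.connector

# Browser-based SSO (e.g. Azure AD federated login) instead of a password
conn = snowflake.connector.connect(
    account="my_account",             # hypothetical values
    user="[email protected]",
    authenticator="externalbrowser",  # or pass private_key=... for key-pair auth
    warehouse="my_wh",
    database="my_db",
)
```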
@felixwang9817 @adchia @realLyans -- the snowflake integration for feast relies on the snowflake-connector-python ... any authentication method available for that package is available to feast users. The current design is for your project .yaml file to override any values set in a config file that by default points at where your config lives if you download the snowflake CLI tool ... if you want to define your own config file ... simply provide the path in the project .yaml file
Where the enhancement needs to happen is that the snowflake connection method should accept kwargs ... which it currently doesn't | 2022-03-03T05:07:38 |
|
feast-dev/feast | 2,366 | feast-dev__feast-2366 | [
"2364"
] | d4a606ad68fedeb12839038a956043a66e6f518f | diff --git a/sdk/python/feast/infra/offline_stores/contrib/__init__.py b/sdk/python/feast/infra/offline_stores/contrib/__init__.py
new file mode 100644
diff --git a/sdk/python/feast/infra/offline_stores/contrib/contrib_repo_configuration.py b/sdk/python/feast/infra/offline_stores/contrib/contrib_repo_configuration.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/feast/infra/offline_stores/contrib/contrib_repo_configuration.py
@@ -0,0 +1,10 @@
+from tests.integration.feature_repos.integration_test_repo_config import (
+ IntegrationTestRepoConfig,
+)
+from tests.integration.feature_repos.universal.data_sources.spark_data_source_creator import (
+ SparkDataSourceCreator,
+)
+
+FULL_REPO_CONFIGS = [
+ IntegrationTestRepoConfig(offline_store_creator=SparkDataSourceCreator)
+]
| Makefile for Running Contrib offline store tests
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Currently the contrib folder in the infra/offline_stores folder contains offline store implementations(i.e spark) that cannot be easily run against the universal integration test suite.
We would like to add a Makefile that can run contrib against the integration test suite that
s separate from the normal integration test run because we don't want to break anything in the normal github workflow.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Make a makefile
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| 2022-03-05T00:12:51 |
||
feast-dev/feast | 2,369 | feast-dev__feast-2369 | [
"2368"
] | 21c1b6f65f85e839dbd3fc1b14306c243b329edd | diff --git a/sdk/python/feast/infra/offline_stores/redshift_source.py b/sdk/python/feast/infra/offline_stores/redshift_source.py
--- a/sdk/python/feast/infra/offline_stores/redshift_source.py
+++ b/sdk/python/feast/infra/offline_stores/redshift_source.py
@@ -90,6 +90,10 @@ def from_proto(data_source: DataSourceProto):
query=data_source.redshift_options.query,
)
+ # Note: Python requires redefining hash in child classes that override __eq__
+ def __hash__(self):
+ return super().__hash__()
+
def __eq__(self, other):
if not isinstance(other, RedshiftSource):
raise TypeError(
diff --git a/sdk/python/feast/infra/offline_stores/snowflake_source.py b/sdk/python/feast/infra/offline_stores/snowflake_source.py
--- a/sdk/python/feast/infra/offline_stores/snowflake_source.py
+++ b/sdk/python/feast/infra/offline_stores/snowflake_source.py
@@ -91,6 +91,10 @@ def from_proto(data_source: DataSourceProto):
query=data_source.snowflake_options.query,
)
+ # Note: Python requires redefining hash in child classes that override __eq__
+ def __hash__(self):
+ return super().__hash__()
+
def __eq__(self, other):
if not isinstance(other, SnowflakeSource):
raise TypeError(
| Feast apply no longer works for Snowflake Source
## Expected Behavior
## Current Behavior
(python3.9) milesadkins@milesadkins coaxium % feast apply
Traceback (most recent call last):
File "/Users/milesadkins/python3.9/bin/feast", line 33, in <module>
sys.exit(load_entry_point('feast', 'console_scripts', 'feast')())
File "/Users/milesadkins/python3.9/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/Users/milesadkins/python3.9/lib/python3.9/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/Users/milesadkins/python3.9/lib/python3.9/site-packages/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/milesadkins/python3.9/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/milesadkins/python3.9/lib/python3.9/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/Users/milesadkins/python3.9/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/milesadkins/Documents/feast/sdk/python/feast/cli.py", line 439, in apply_total_command
apply_total(repo_config, repo, skip_source_validation)
File "/Users/milesadkins/Documents/feast/sdk/python/feast/usage.py", line 280, in wrapper
raise exc.with_traceback(traceback)
File "/Users/milesadkins/Documents/feast/sdk/python/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/Users/milesadkins/Documents/feast/sdk/python/feast/repo_operations.py", line 249, in apply_total
project, registry, repo, store = _prepare_registry_and_repo(repo_config, repo_path)
File "/Users/milesadkins/Documents/feast/sdk/python/feast/repo_operations.py", line 155, in _prepare_registry_and_repo
repo = parse_repo(repo_path)
File "/Users/milesadkins/Documents/feast/sdk/python/feast/repo_operations.py", line 112, in parse_repo
res.data_sources.add(obj)
TypeError: unhashable type: 'SnowflakeSource'
## Steps to reproduce
feast init -t snowflake
...
feast apply
### Specifications
- Version: 0.19
- Platform: Mac/Intel
- Subsystem:
## Possible Solution
fs.apply([[])
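For context, a minimal standalone example of the underlying Python behavior - overriding `__eq__` without redefining `__hash__` makes instances unhashable - which is what the patch above restores for `SnowflakeSource` and `RedshiftSource`:

```python
class Source:
    def __eq__(self, other):
        return isinstance(other, Source)
    # defining __eq__ implicitly sets __hash__ to None, so instances
    # can no longer be placed in sets or used as dict keys

try:
    {Source()}
except TypeError as err:
    print(err)  # unhashable type: 'Source'

class FixedSource:
    def __eq__(self, other):
        return isinstance(other, FixedSource)

    def __hash__(self):  # redefining __hash__ restores hashability
        return hash(type(self))

print({FixedSource()})  # works
```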
| @milescadkins was this working on master?
It was 0.18
I know what's wrong. Give me a few minutes. | 2022-03-05T18:58:29 |
|
feast-dev/feast | 2,371 | feast-dev__feast-2371 | [
"2247"
] | 45db6dcdf064688c55a7a465aa6717887e6cae34 | diff --git a/sdk/python/feast/infra/online_stores/dynamodb.py b/sdk/python/feast/infra/online_stores/dynamodb.py
--- a/sdk/python/feast/infra/online_stores/dynamodb.py
+++ b/sdk/python/feast/infra/online_stores/dynamodb.py
@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+import itertools
import logging
from datetime import datetime
from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple
@@ -50,10 +51,16 @@ class DynamoDBOnlineStoreConfig(FeastConfigBaseModel):
"""Online store type selector"""
region: StrictStr
- """ AWS Region Name """
+ """AWS Region Name"""
table_name_template: StrictStr = "{project}.{table_name}"
- """ DynamoDB table name template """
+ """DynamoDB table name template"""
+
+ sort_response: bool = True
+ """Whether or not to sort BatchGetItem response."""
+
+ batch_size: int = 40
+ """Number of items to retrieve in a DynamoDB BatchGetItem call."""
class DynamoDBOnlineStore(OnlineStore):
@@ -211,26 +218,46 @@ def online_read(
online_config = config.online_store
assert isinstance(online_config, DynamoDBOnlineStoreConfig)
dynamodb_resource = self._get_dynamodb_resource(online_config.region)
+ table_instance = dynamodb_resource.Table(
+ _get_table_name(online_config, config, table)
+ )
result: List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]] = []
- for entity_key in entity_keys:
- table_instance = dynamodb_resource.Table(
- _get_table_name(online_config, config, table)
- )
- entity_id = compute_entity_id(entity_key)
+ entity_ids = [compute_entity_id(entity_key) for entity_key in entity_keys]
+ batch_size = online_config.batch_size
+ sort_response = online_config.sort_response
+ entity_ids_iter = iter(entity_ids)
+ while True:
+ batch = list(itertools.islice(entity_ids_iter, batch_size))
+ # No more items to insert
+ if len(batch) == 0:
+ break
+ batch_entity_ids = {
+ table_instance.name: {
+ "Keys": [{"entity_id": entity_id} for entity_id in batch]
+ }
+ }
with tracing_span(name="remote_call"):
- response = table_instance.get_item(Key={"entity_id": entity_id})
- value = response.get("Item")
-
- if value is not None:
- res = {}
- for feature_name, value_bin in value["values"].items():
- val = ValueProto()
- val.ParseFromString(value_bin.value)
- res[feature_name] = val
- result.append((datetime.fromisoformat(value["event_ts"]), res))
+ response = dynamodb_resource.batch_get_item(
+ RequestItems=batch_entity_ids
+ )
+ response = response.get("Responses")
+ table_responses = response.get(table_instance.name)
+ if table_responses:
+ if sort_response:
+ table_responses = self._sort_dynamodb_response(
+ table_responses, entity_ids
+ )
+ for tbl_res in table_responses:
+ res = {}
+ for feature_name, value_bin in tbl_res["values"].items():
+ val = ValueProto()
+ val.ParseFromString(value_bin.value)
+ res[feature_name] = val
+ result.append((datetime.fromisoformat(tbl_res["event_ts"]), res))
else:
- result.append((None, None))
+ batch_size_nones = ((None, None),) * len(batch)
+ result.extend(batch_size_nones)
return result
def _get_dynamodb_client(self, region: str):
@@ -243,6 +270,20 @@ def _get_dynamodb_resource(self, region: str):
self._dynamodb_resource = _initialize_dynamodb_resource(region)
return self._dynamodb_resource
+ def _sort_dynamodb_response(self, responses: list, order: list):
+ """DynamoDB Batch Get Item doesn't return items in a particular order."""
+ # Assign an index to order
+ order_with_index = {value: idx for idx, value in enumerate(order)}
+ # Sort table responses by index
+ table_responses_ordered = [
+ (order_with_index[tbl_res["entity_id"]], tbl_res) for tbl_res in responses
+ ]
+ table_responses_ordered = sorted(
+ table_responses_ordered, key=lambda tup: tup[0]
+ )
+ _, table_responses_ordered = zip(*table_responses_ordered)
+ return table_responses_ordered
+
def _initialize_dynamodb_client(region: str):
return boto3.client("dynamodb", region_name=region)
| diff --git a/sdk/python/tests/unit/online_store/test_dynamodb_online_store.py b/sdk/python/tests/unit/online_store/test_dynamodb_online_store.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/unit/online_store/test_dynamodb_online_store.py
@@ -0,0 +1,57 @@
+from dataclasses import dataclass
+
+import pytest
+from moto import mock_dynamodb2
+
+from feast.infra.offline_stores.file import FileOfflineStoreConfig
+from feast.infra.online_stores.dynamodb import (
+ DynamoDBOnlineStore,
+ DynamoDBOnlineStoreConfig,
+)
+from feast.repo_config import RepoConfig
+from tests.utils.online_store_utils import (
+ _create_n_customer_test_samples,
+ _create_test_table,
+ _insert_data_test_table,
+)
+
+REGISTRY = "s3://test_registry/registry.db"
+PROJECT = "test_aws"
+PROVIDER = "aws"
+TABLE_NAME = "dynamodb_online_store"
+REGION = "us-west-2"
+
+
+@dataclass
+class MockFeatureView:
+ name: str
+
+
[email protected]
+def repo_config():
+ return RepoConfig(
+ registry=REGISTRY,
+ project=PROJECT,
+ provider=PROVIDER,
+ online_store=DynamoDBOnlineStoreConfig(region=REGION),
+ offline_store=FileOfflineStoreConfig(),
+ )
+
+
+@mock_dynamodb2
[email protected]("n_samples", [5, 50, 100])
+def test_online_read(repo_config, n_samples):
+ """Test DynamoDBOnlineStore online_read method."""
+ _create_test_table(PROJECT, f"{TABLE_NAME}_{n_samples}", REGION)
+ data = _create_n_customer_test_samples(n=n_samples)
+ _insert_data_test_table(data, PROJECT, f"{TABLE_NAME}_{n_samples}", REGION)
+
+ entity_keys, features = zip(*data)
+ dynamodb_store = DynamoDBOnlineStore()
+ returned_items = dynamodb_store.online_read(
+ config=repo_config,
+ table=MockFeatureView(name=f"{TABLE_NAME}_{n_samples}"),
+ entity_keys=entity_keys,
+ )
+ assert len(returned_items) == len(data)
+ assert [item[1] for item in returned_items] == list(features)
diff --git a/sdk/python/tests/utils/online_store_utils.py b/sdk/python/tests/utils/online_store_utils.py
new file mode 100644
--- /dev/null
+++ b/sdk/python/tests/utils/online_store_utils.py
@@ -0,0 +1,54 @@
+from datetime import datetime
+
+import boto3
+
+from feast import utils
+from feast.infra.online_stores.helpers import compute_entity_id
+from feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto
+from feast.protos.feast.types.Value_pb2 import Value as ValueProto
+
+
+def _create_n_customer_test_samples(n=10):
+ return [
+ (
+ EntityKeyProto(
+ join_keys=["customer"], entity_values=[ValueProto(string_val=str(i))]
+ ),
+ {
+ "avg_orders_day": ValueProto(float_val=1.0),
+ "name": ValueProto(string_val="John"),
+ "age": ValueProto(int64_val=3),
+ },
+ )
+ for i in range(n)
+ ]
+
+
+def _create_test_table(project, tbl_name, region):
+ client = boto3.client("dynamodb", region_name=region)
+ client.create_table(
+ TableName=f"{project}.{tbl_name}",
+ KeySchema=[{"AttributeName": "entity_id", "KeyType": "HASH"}],
+ AttributeDefinitions=[{"AttributeName": "entity_id", "AttributeType": "S"}],
+ BillingMode="PAY_PER_REQUEST",
+ )
+
+
+def _delete_test_table(project, tbl_name, region):
+ client = boto3.client("dynamodb", region_name=region)
+ client.delete_table(TableName=f"{project}.{tbl_name}")
+
+
+def _insert_data_test_table(data, project, tbl_name, region):
+ dynamodb_resource = boto3.resource("dynamodb", region_name=region)
+ table_instance = dynamodb_resource.Table(f"{project}.{tbl_name}")
+ for entity_key, features in data:
+ entity_id = compute_entity_id(entity_key)
+ with table_instance.batch_writer() as batch:
+ batch.put_item(
+ Item={
+ "entity_id": entity_id,
+ "event_ts": str(utils.make_tzaware(datetime.utcnow())),
+ "values": {k: v.SerializeToString() for k, v in features.items()},
+ }
+ )
| DynamoDB batch retrieval does slow sequential gets (online_read)
The current DynamoDB implementation does sequential gets (https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/online_stores/dynamodb.py#L163)
## Possible Solution
A better approach is to do some multi-get operation or at least run these queries in parallel and collect the results.
| cc @vlin-lgtm
@adchia, May I know what is needed to be done. I would like to contribute to this issue.
> @adchia, May I know what is needed to be done. I would like to contribute to this issue.
It appears we have the list of primary keys (entity_ids), so we can switch to [BatchGetItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html), instead.
^^ @Vandinimodi1595
@adchia @vlin-lgtm @Vandinimodi1595 switching to `BatchGetItem` could throw a `ValidationException` [because](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html)
> A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded
One alternative could be to write a custom `BatchReader` to automatically handle batched reads from DynamoDB; boto3's [BatchWriter](https://github.com/boto/boto3/blob/develop/boto3/dynamodb/table.py#L63) could be taken as an example of how to do this, since it handles [unprocessed items](https://github.com/boto/boto3/blob/develop/boto3/dynamodb/table.py#L149).
Another alternative is using threads to read items in parallel. The SageMaker SDK [Ingestion Manager](https://github.com/aws/sagemaker-python-sdk/blob/26b984a2d693f95957e8555f19fa47aa36d5d5ea/src/sagemaker/feature_store/feature_group.py#L153) could be taken as an example of how to do it; it works similarly to the previous option, calling [put_record](https://github.com/aws/sagemaker-python-sdk/blob/26b984a2d693f95957e8555f19fa47aa36d5d5ea/src/sagemaker/feature_store/feature_group.py#L223) on a [set of indexes](https://github.com/aws/sagemaker-python-sdk/blob/26b984a2d693f95957e8555f19fa47aa36d5d5ea/src/sagemaker/feature_store/feature_group.py#L345).
If you're open, I'm happy to collaborate on this.
DynamoDB's record size limit is 400KB. Since we have a list of primary keys, we know the maximum number of records we would get back, and we can assume each record is at most 400KB. We can call `BatchGetItem` in micro batches so `ValidationException` wouldn't be thrown.
@vlin-lgtm I was also considering the case of a secondary index; this might be useful when you want to access attributes other than the pk. For example
```
user_group location value
100 usa value_1.com
100 mexico value_2.com
100 brazil value_3.com
```
@TremaMiguel, I am not sure if secondary indices are relevant. But I don't know Feast's codebase well enough to be certain.
The code snippet in the description is to get features given a set of entity ids. Entity ids are primary keys. I don't know where Feast uses secondary indices or if it even uses it at all.
We are not looking to create a generic wrapper for the BatchQuery API for DynamoDB.
A for-loop to `BatchGet` 40 entities at a time is sufficient for this.
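For illustration, a minimal sketch of that for-loop-of-BatchGet idea (roughly what the merged patch above implements; key names follow Feast's DynamoDB table layout):

```python
import itertools

import boto3

def batch_read(table_name, entity_ids, region, batch_size=40):
    dynamodb = boto3.resource("dynamodb", region_name=region)
    entity_ids_iter = iter(entity_ids)
    items = []
    while True:
        batch = list(itertools.islice(entity_ids_iter, batch_size))
        if not batch:  # no more ids to fetch
            break
        response = dynamodb.batch_get_item(
            RequestItems={table_name: {"Keys": [{"entity_id": eid} for eid in batch]}}
        )
        # a full implementation would also retry response.get("UnprocessedKeys")
        items.extend(response["Responses"].get(table_name, []))
    return items
```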
@vlin-lgtm @adchia apologies, I had misunderstood this issue as being related to the sequential calls in [online_write_batch](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/online_stores/dynamodb.py#L139); a similar issue could be raised for [online_read](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/online_stores/dynamodb.py#L185) as it iteratively processes the `entity_keys` #2351
I still believe the approach mentioned above could help
> Another alternative is using threads to read items in parallel. The SageMaker SDK [Ingestion Manager](https://github.com/aws/sagemaker-python-sdk/blob/26b984a2d693f95957e8555f19fa47aa36d5d5ea/src/sagemaker/feature_store/feature_group.py#L153) could be taken as an example of how to do it; it works similarly to the previous option, calling [put_record](https://github.com/aws/sagemaker-python-sdk/blob/26b984a2d693f95957e8555f19fa47aa36d5d5ea/src/sagemaker/feature_store/feature_group.py#L223) on a [set of indexes](https://github.com/aws/sagemaker-python-sdk/blob/26b984a2d693f95957e8555f19fa47aa36d5d5ea/src/sagemaker/feature_store/feature_group.py#L345).
what are your thoughts?
Thanks @TremaMiguel,
Sounds good. https://github.com/feast-dev/feast/issues/2351 appears to be a dup of this issue.
I believe doing a BatchGet or a for-loop of BatchGet is good enough. What is your use case? This is used for online; I don't think there will be a case where we will need to fetch features for a lot of entities online in bulk. Multi-threading and multi-processing can add unnecessary complexity.
In any case, thanks a lot for wanting to contribute. Please feel free to open a PR. 🙏
Agreed with @vlin-lgtm on all points! BatchGet is the largest win here and simplest.
Threads are fine too after you first use batch gets, though I think that'll have much less of an impact. You can see how we expose multithreading in https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/online_stores/datastore.py via write_concurrency.
Thanks @adchia and @vlin-lgtm, I'll start working on this | 2022-03-06T02:20:15 |
feast-dev/feast | 2,418 | feast-dev__feast-2418 | [
"2417"
] | 04dea732a495dfc8fa3dd006c89f73c6d08097c5 | diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -1347,9 +1347,7 @@ def _get_online_features(
)
# Populate online features response proto with join keys and request data features
- online_features_response = GetOnlineFeaturesResponse(
- results=[GetOnlineFeaturesResponse.FeatureVector() for _ in range(num_rows)]
- )
+ online_features_response = GetOnlineFeaturesResponse(results=[])
self._populate_result_rows_from_columnar(
online_features_response=online_features_response,
data=dict(**join_key_values, **request_data_features),
@@ -1477,14 +1475,14 @@ def _populate_result_rows_from_columnar(
timestamp = Timestamp() # Only initialize this timestamp once.
# Add more values to the existing result rows
for feature_name, feature_values in data.items():
-
online_features_response.metadata.feature_names.val.append(feature_name)
-
- for row_idx, proto_value in enumerate(feature_values):
- result_row = online_features_response.results[row_idx]
- result_row.values.append(proto_value)
- result_row.statuses.append(FieldStatus.PRESENT)
- result_row.event_timestamps.append(timestamp)
+ online_features_response.results.append(
+ GetOnlineFeaturesResponse.FeatureVector(
+ values=feature_values,
+ statuses=[FieldStatus.PRESENT] * len(feature_values),
+ event_timestamps=[timestamp] * len(feature_values),
+ )
+ )
@staticmethod
def get_needed_request_data(
@@ -1625,7 +1623,7 @@ def _populate_response_from_feature_data(
Iterable[Timestamp], Iterable["FieldStatus.ValueType"], Iterable[Value]
]
],
- indexes: Iterable[Iterable[int]],
+ indexes: Iterable[List[int]],
online_features_response: GetOnlineFeaturesResponse,
full_feature_names: bool,
requested_features: Iterable[str],
@@ -1660,15 +1658,21 @@ def _populate_response_from_feature_data(
requested_feature_refs
)
+ timestamps, statuses, values = zip(*feature_data)
+
# Populate the result with data fetched from the OnlineStore
- # which is guarenteed to be aligned with `requested_features`.
- for feature_row, dest_idxs in zip(feature_data, indexes):
- event_timestamps, statuses, values = feature_row
- for dest_idx in dest_idxs:
- result_row = online_features_response.results[dest_idx]
- result_row.event_timestamps.extend(event_timestamps)
- result_row.statuses.extend(statuses)
- result_row.values.extend(values)
+ # which is guaranteed to be aligned with `requested_features`.
+ for (
+ feature_idx,
+ (timestamp_vector, statuses_vector, values_vector),
+ ) in enumerate(zip(zip(*timestamps), zip(*statuses), zip(*values))):
+ online_features_response.results.append(
+ GetOnlineFeaturesResponse.FeatureVector(
+ values=apply_list_mapping(values_vector, indexes),
+ statuses=apply_list_mapping(statuses_vector, indexes),
+ event_timestamps=apply_list_mapping(timestamp_vector, indexes),
+ )
+ )
@staticmethod
def _augment_response_with_on_demand_transforms(
@@ -1731,13 +1735,14 @@ def _augment_response_with_on_demand_transforms(
odfv_result_names |= set(selected_subset)
online_features_response.metadata.feature_names.val.extend(selected_subset)
-
- for row_idx in range(len(online_features_response.results)):
- result_row = online_features_response.results[row_idx]
- for feature_idx, transformed_feature in enumerate(selected_subset):
- result_row.values.append(proto_values[feature_idx][row_idx])
- result_row.statuses.append(FieldStatus.PRESENT)
- result_row.event_timestamps.append(Timestamp())
+ for feature_idx in range(len(selected_subset)):
+ online_features_response.results.append(
+ GetOnlineFeaturesResponse.FeatureVector(
+ values=proto_values[feature_idx],
+ statuses=[FieldStatus.PRESENT] * len(proto_values[feature_idx]),
+ event_timestamps=[Timestamp()] * len(proto_values[feature_idx]),
+ )
+ )
@staticmethod
def _drop_unneeded_columns(
@@ -1764,13 +1769,7 @@ def _drop_unneeded_columns(
for idx in reversed(unneeded_feature_indices):
del online_features_response.metadata.feature_names.val[idx]
-
- for row_idx in range(len(online_features_response.results)):
- result_row = online_features_response.results[row_idx]
- for idx in reversed(unneeded_feature_indices):
- del result_row.values[idx]
- del result_row.statuses[idx]
- del result_row.event_timestamps[idx]
+ del online_features_response.results[idx]
def _get_feature_views_to_use(
self,
@@ -2016,3 +2015,15 @@ def _validate_data_sources(data_sources: List[DataSource]):
)
else:
ds_names.add(case_insensitive_ds_name)
+
+
+def apply_list_mapping(
+ lst: Iterable[Any], mapping_indexes: Iterable[List[int]]
+) -> Iterable[Any]:
+ output_len = sum(len(item) for item in mapping_indexes)
+ output = [None] * output_len
+ for elem, destinations in zip(lst, mapping_indexes):
+ for idx in destinations:
+ output[idx] = elem
+
+ return output
diff --git a/sdk/python/feast/online_response.py b/sdk/python/feast/online_response.py
--- a/sdk/python/feast/online_response.py
+++ b/sdk/python/feast/online_response.py
@@ -40,10 +40,8 @@ def __init__(self, online_response_proto: GetOnlineFeaturesResponse):
for idx, val in enumerate(self.proto.metadata.feature_names.val):
if val == DUMMY_ENTITY_ID:
del self.proto.metadata.feature_names.val[idx]
- for result in self.proto.results:
- del result.values[idx]
- del result.statuses[idx]
- del result.event_timestamps[idx]
+ del self.proto.results[idx]
+
break
def to_dict(self, include_event_timestamps: bool = False) -> Dict[str, Any]:
@@ -55,21 +53,18 @@ def to_dict(self, include_event_timestamps: bool = False) -> Dict[str, Any]:
"""
response: Dict[str, List[Any]] = {}
- for result in self.proto.results:
- for idx, feature_ref in enumerate(self.proto.metadata.feature_names.val):
- native_type_value = feast_value_type_to_python_type(result.values[idx])
- if feature_ref not in response:
- response[feature_ref] = [native_type_value]
- else:
- response[feature_ref].append(native_type_value)
-
- if include_event_timestamps:
- event_ts = result.event_timestamps[idx].seconds
- timestamp_ref = feature_ref + TIMESTAMP_POSTFIX
- if timestamp_ref not in response:
- response[timestamp_ref] = [event_ts]
- else:
- response[timestamp_ref].append(event_ts)
+ for feature_ref, feature_vector in zip(
+ self.proto.metadata.feature_names.val, self.proto.results
+ ):
+ response[feature_ref] = [
+ feast_value_type_to_python_type(v) for v in feature_vector.values
+ ]
+
+ if include_event_timestamps:
+ timestamp_ref = feature_ref + TIMESTAMP_POSTFIX
+ response[timestamp_ref] = [
+ ts.seconds for ts in feature_vector.event_timestamps
+ ]
return response
| diff --git a/sdk/python/tests/integration/online_store/test_e2e_local.py b/sdk/python/tests/integration/online_store/test_e2e_local.py
--- a/sdk/python/tests/integration/online_store/test_e2e_local.py
+++ b/sdk/python/tests/integration/online_store/test_e2e_local.py
@@ -40,12 +40,12 @@ def _assert_online_features(
# Float features should still be floats from the online store...
assert (
- response.proto.results[0]
- .values[
+ response.proto.results[
list(response.proto.metadata.feature_names.val).index(
"driver_hourly_stats__conv_rate"
)
]
+ .values[0]
.float_val
> 0
)
diff --git a/sdk/python/tests/integration/online_store/test_universal_online.py b/sdk/python/tests/integration/online_store/test_universal_online.py
--- a/sdk/python/tests/integration/online_store/test_universal_online.py
+++ b/sdk/python/tests/integration/online_store/test_universal_online.py
@@ -281,9 +281,9 @@ def _get_online_features_dict_remotely(
)
keys = response["metadata"]["feature_names"]
# Get rid of unnecessary structure in the response, leaving list of dicts
- response = [row["values"] for row in response["results"]]
+ values = [row["values"] for row in response["results"]]
# Convert list of dicts (response) into dict of lists which is the format of the return value
- return {key: [row[idx] for row in response] for idx, key in enumerate(keys)}
+ return {key: feature_vector for key, feature_vector in zip(keys, values)}
def get_online_features_dict(
@@ -715,6 +715,7 @@ def eventually_apply() -> Tuple[None, bool]:
assert all(v is None for v in online_features["value"])
[email protected]
@pytest.mark.integration
@pytest.mark.goserver
@pytest.mark.parametrize("full_feature_names", [True, False], ids=lambda v: str(v))
@@ -889,6 +890,7 @@ def test_online_retrieval_with_go_server(
)
[email protected]
@pytest.mark.integration
@pytest.mark.goserver
def test_online_store_cleanup_with_go_server(go_environment, go_data_sources):
@@ -937,6 +939,7 @@ def eventually_apply() -> Tuple[None, bool]:
assert all(v is None for v in online_features["value"])
[email protected]
@pytest.mark.integration
@pytest.mark.goserverlifecycle
def test_go_server_life_cycle(go_cycle_environment, go_data_sources):
| Java and Python implementations of feature servers have inconsistent responses
## Expected Behavior
Both implementations return the same response.
## Current Behavior
The Java implementation returns column-oriented feature vectors in [GetOnlineFeaturesResponse](https://github.com/feast-dev/feast/blob/master/protos/feast/serving/ServingService.proto#L96), which means that each feature vector contains all values for a single feature across all rows, while the Python implementation returns a feature vector with all features from a single row.
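To make the mismatch concrete, here is a toy sketch with plain Python lists (feature names and values are made up, and this is not the actual proto API) showing the two response layouts:

```
feature_names = ["conv_rate", "acc_rate"]

# Row-oriented (Python SDK behavior described above): one vector per entity row.
row_oriented = [
    [0.5, 10],   # row 0
    [0.7, 20],   # row 1
]

# Column-oriented (Java feature server): one vector per feature, across all rows.
column_oriented = [
    [0.5, 0.7],  # conv_rate for every row
    [10, 20],    # acc_rate for every row
]

# The two layouts are transposes of each other.
assert [list(col) for col in zip(*row_oriented)] == column_oriented
```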
## Steps to reproduce
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
| 2022-03-17T19:42:55 |
|
feast-dev/feast | 2,424 | feast-dev__feast-2424 | ["2409"] | 163cbb439bd4450d589a3cd5aa552b1c475a7feb | diff --git a/sdk/python/feast/data_source.py b/sdk/python/feast/data_source.py
--- a/sdk/python/feast/data_source.py
+++ b/sdk/python/feast/data_source.py
@@ -134,6 +134,18 @@ def to_proto(self) -> DataSourceProto.KinesisOptions:
return kinesis_options_proto
+_DATA_SOURCE_OPTIONS = {
+ DataSourceProto.SourceType.BATCH_FILE: "feast.infra.offline_stores.file_source.FileSource",
+ DataSourceProto.SourceType.BATCH_BIGQUERY: "feast.infra.offline_stores.bigquery_source.BigQuerySource",
+ DataSourceProto.SourceType.BATCH_REDSHIFT: "feast.infra.offline_stores.redshift_source.RedshiftSource",
+ DataSourceProto.SourceType.BATCH_SNOWFLAKE: "feast.infra.offline_stores.snowflake_source.SnowflakeSource",
+ DataSourceProto.SourceType.STREAM_KAFKA: "feast.data_source.KafkaSource",
+ DataSourceProto.SourceType.STREAM_KINESIS: "feast.data_source.KinesisSource",
+ DataSourceProto.SourceType.REQUEST_SOURCE: "feast.data_source.RequestDataSource",
+ DataSourceProto.SourceType.PUSH_SOURCE: "feast.data_source.PushSource",
+}
+
+
class DataSource(ABC):
"""
DataSource that can be used to source features.
@@ -210,48 +222,20 @@ def from_proto(data_source: DataSourceProto) -> Any:
Raises:
ValueError: The type of DataSource could not be identified.
"""
- if data_source.data_source_class_type:
- cls = get_data_source_class_from_type(data_source.data_source_class_type)
- return cls.from_proto(data_source)
-
- if data_source.request_data_options and data_source.request_data_options.schema:
- data_source_obj = RequestDataSource.from_proto(data_source)
- elif data_source.file_options.file_format and data_source.file_options.file_url:
- from feast.infra.offline_stores.file_source import FileSource
-
- data_source_obj = FileSource.from_proto(data_source)
- elif (
- data_source.bigquery_options.table_ref or data_source.bigquery_options.query
- ):
- from feast.infra.offline_stores.bigquery_source import BigQuerySource
-
- data_source_obj = BigQuerySource.from_proto(data_source)
- elif data_source.redshift_options.table or data_source.redshift_options.query:
- from feast.infra.offline_stores.redshift_source import RedshiftSource
-
- data_source_obj = RedshiftSource.from_proto(data_source)
-
- elif data_source.snowflake_options.table or data_source.snowflake_options.query:
- from feast.infra.offline_stores.snowflake_source import SnowflakeSource
-
- data_source_obj = SnowflakeSource.from_proto(data_source)
-
- elif (
- data_source.kafka_options.bootstrap_servers
- and data_source.kafka_options.topic
- and data_source.kafka_options.message_format
+ data_source_type = data_source.type
+ if not data_source_type or (
+ data_source_type
+ not in list(_DATA_SOURCE_OPTIONS.keys())
+ + [DataSourceProto.SourceType.CUSTOM_SOURCE]
):
- data_source_obj = KafkaSource.from_proto(data_source)
- elif (
- data_source.kinesis_options.record_format
- and data_source.kinesis_options.region
- and data_source.kinesis_options.stream_name
- ):
- data_source_obj = KinesisSource.from_proto(data_source)
- else:
raise ValueError("Could not identify the source type being added.")
- return data_source_obj
+ if data_source_type == DataSourceProto.SourceType.CUSTOM_SOURCE:
+ cls = get_data_source_class_from_type(data_source.data_source_class_type)
+ return cls.from_proto(data_source)
+
+ cls = get_data_source_class_from_type(_DATA_SOURCE_OPTIONS[data_source_type])
+ return cls.from_proto(data_source)
@abstractmethod
def to_proto(self) -> DataSourceProto:
diff --git a/sdk/python/feast/registry.py b/sdk/python/feast/registry.py
--- a/sdk/python/feast/registry.py
+++ b/sdk/python/feast/registry.py
@@ -320,6 +320,9 @@ def apply_data_source(
del registry.data_sources[idx]
data_source_proto = data_source.to_proto()
data_source_proto.project = project
+ data_source_proto.data_source_class_type = (
+ f"{data_source.__class__.__module__}.{data_source.__class__.__name__}"
+ )
registry.data_sources.append(data_source_proto)
if commit:
self.commit()
| Data source conversion from protobuf doesn't use type information
For some reason we are using field names https://github.com/feast-dev/feast/blob/b35e1e84720523cef70cba6d6306af8f193b469f/sdk/python/feast/data_source.py#L217 to infer data source types. We should be using the actual data source types in the protos.
Also, https://github.com/feast-dev/feast/blob/b35e1e84720523cef70cba6d6306af8f193b469f/sdk/python/feast/data_source.py#L213 doesn't exist.
| @woop what do you mean by doesn't exist? I see `data_source_class_type` in `DataSource.proto`:
```
// This is an internal field that is represents the python class for the data source object a proto object represents.
// This should be set by feast, and not by users.
string data_source_class_type = 17;
```
and I also see it being set correctly in the code:
```
batch_source_proto = self.batch_source.to_proto()
batch_source_proto.data_source_class_type = f"{self.batch_source.__class__.__module__}.{self.batch_source.__class__.__name__}"
```
Yep. I think there is a bug that might have been introduced when we started storing data sources separately: the feature view class adds that, but I don't think we add this class_type when applying data sources to the registry.
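For illustration, the patch above replaces the field-name guessing with a lookup keyed on the proto's `type` enum. A condensed sketch of that idea follows; only two source types are shown, the function name is made up, and the mapping values are the class paths used in the patch.

```
from feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto

# Condensed mapping from the proto type enum to importable class paths;
# the full mapping in the patch covers all built-in source types.
_SOURCE_CLASSES = {
    DataSourceProto.SourceType.BATCH_FILE: "feast.infra.offline_stores.file_source.FileSource",
    DataSourceProto.SourceType.BATCH_BIGQUERY: "feast.infra.offline_stores.bigquery_source.BigQuerySource",
}

def source_class_path(proto: DataSourceProto) -> str:
    """Resolve the Python class from the proto's type field instead of
    guessing from whichever options happen to be populated."""
    try:
        return _SOURCE_CLASSES[proto.type]
    except KeyError:
        raise ValueError(f"Could not identify the source type: {proto.type}")
```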
| 2022-03-21T17:12:11 |
|
feast-dev/feast | 2,430 | feast-dev__feast-2430 | ["2411"] | e7dd4b75ba0fbd86338aacf2ecd0cc8979dc803b | diff --git a/sdk/python/feast/infra/offline_stores/offline_store.py b/sdk/python/feast/infra/offline_stores/offline_store.py
--- a/sdk/python/feast/infra/offline_stores/offline_store.py
+++ b/sdk/python/feast/infra/offline_stores/offline_store.py
@@ -179,9 +179,24 @@ def pull_latest_from_table_or_query(
end_date: datetime,
) -> RetrievalJob:
"""
+ This method pulls data from the offline store, and the FeatureStore class is used to write
+ this data into the online store. This method is invoked when running materialization (using
+ the `feast materialize` or `feast materialize-incremental` commands, or the corresponding
+ FeatureStore.materialize() method. This method pulls data from the offline store, and the FeatureStore
+ class is used to write this data into the online store.
+
Note that join_key_columns, feature_name_columns, event_timestamp_column, and created_timestamp_column
have all already been mapped to column names of the source table and those column names are the values passed
into this function.
+
+ Args:
+ config: Repo configuration object
+ data_source: Data source to pull all of the columns from
+ join_key_columns: Columns of the join keys
+ feature_name_columns: Columns of the feature names needed
+ event_timestamp_column: Timestamp column
+ start_date: Starting date of query
+ end_date: Ending date of query
"""
pass
@@ -210,8 +225,19 @@ def pull_all_from_table_or_query(
end_date: datetime,
) -> RetrievalJob:
"""
+ Returns a Retrieval Job for all join key columns, feature name columns, and the event timestamp columns that occur between the start_date and end_date.
+
Note that join_key_columns, feature_name_columns, event_timestamp_column, and created_timestamp_column
have all already been mapped to column names of the source table and those column names are the values passed
into this function.
+
+ Args:
+ config: Repo configuration object
+ data_source: Data source to pull all of the columns from
+ join_key_columns: Columns of the join keys
+ feature_name_columns: Columns of the feature names needed
+ event_timestamp_column: Timestamp column
+ start_date: Starting date of query
+ end_date: Ending date of query
"""
pass
| Missing documentation for abstract `pull_all_from_table_or_query` offline store method
## Expected Behavior
Since https://github.com/feast-dev/feast/pull/2197, the offline store method `pull_all_from_table_or_query` must be overridden by custom offline stores. This is currently not documented.
Expectations:
- [Docstring](https://github.com/feast-dev/feast/blob/b35e1e84720523cef70cba6d6306af8f193b469f/sdk/python/feast/infra/offline_stores/offline_store.py#L203) of `pull_all_from_table_or_query` contains a meaningful description of the method
- [Web doc](https://docs.feast.dev/how-to-guides/adding-a-new-offline-store) mentions that `pull_all_from_table_or_query` must be overridden.
## Current Behavior
No documentation for `pull_all_from_table_or_query`.
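As a rough sketch of what overriding it involves (the class name and method body are placeholders, and the exact parameter list and whether the method is static should be checked against the installed Feast version):

```
from datetime import datetime
from typing import List

from feast.data_source import DataSource
from feast.infra.offline_stores.offline_store import OfflineStore, RetrievalJob
from feast.repo_config import RepoConfig


class MyOfflineStore(OfflineStore):
    # The other abstract methods (get_historical_features,
    # pull_latest_from_table_or_query, ...) must be implemented as well;
    # they are omitted here for brevity.

    @staticmethod
    def pull_all_from_table_or_query(
        config: RepoConfig,
        data_source: DataSource,
        join_key_columns: List[str],
        feature_name_columns: List[str],
        event_timestamp_column: str,
        start_date: datetime,
        end_date: datetime,
    ) -> RetrievalJob:
        # Select the join keys, feature columns, and timestamp column from
        # `data_source`, filtered to [start_date, end_date), and wrap the
        # result in a RetrievalJob implementation.
        raise NotImplementedError
```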
| 2022-03-21T23:06:29 |
||
feast-dev/feast | 2,456 | feast-dev__feast-2456 | ["2453"] | ca7d6160bb32594bd5ae3b8d5e26b860b9e0c638 | diff --git a/sdk/python/feast/feature_store.py b/sdk/python/feast/feature_store.py
--- a/sdk/python/feast/feature_store.py
+++ b/sdk/python/feast/feature_store.py
@@ -560,7 +560,7 @@ def _plan(
new_infra_proto = new_infra.to_proto()
infra_diff = diff_infra_protos(current_infra_proto, new_infra_proto)
- return (registry_diff, infra_diff, new_infra)
+ return registry_diff, infra_diff, new_infra
@log_exceptions_and_usage
def _apply_diffs(
@@ -648,16 +648,23 @@ def apply(
]
odfvs_to_update = [ob for ob in objects if isinstance(ob, OnDemandFeatureView)]
services_to_update = [ob for ob in objects if isinstance(ob, FeatureService)]
- data_sources_to_update = [ob for ob in objects if isinstance(ob, DataSource)]
-
- if len(entities_to_update) + len(views_to_update) + len(
- request_views_to_update
- ) + len(odfvs_to_update) + len(services_to_update) + len(
- data_sources_to_update
- ) != len(
- objects
- ):
- raise ValueError("Unknown object type provided as part of apply() call")
+ data_sources_set_to_update = {
+ ob for ob in objects if isinstance(ob, DataSource)
+ }
+
+ for fv in views_to_update:
+ data_sources_set_to_update.add(fv.batch_source)
+ if fv.stream_source:
+ data_sources_set_to_update.add(fv.stream_source)
+
+ for rfv in request_views_to_update:
+ data_sources_set_to_update.add(rfv.request_data_source)
+
+ for odfv in odfvs_to_update:
+ for v in odfv.input_request_data_sources.values():
+ data_sources_set_to_update.add(v)
+
+ data_sources_to_update = list(data_sources_set_to_update)
# Validate all feature views and make inferences.
self._validate_all_feature_views(
| Inline data sources not added to the registry fields
## Expected Behavior
If I create a feature view with an inline data source, it should be added to the registry and discoverable via `feature_store.get_data_source`.
## Current Behavior
Only data sources defined as top-level objects of the feature repo are stored and discoverable in the registry.
## Possible Solution
In registry.py, introspect all feature views and collect their inline data sources to be added to the registry.
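For reference, the patch above addresses this inside `FeatureStore.apply()` rather than in registry.py; a condensed sketch of the collection step it adds (the helper name here is made up):

```
def collect_inline_data_sources(feature_views, request_feature_views, on_demand_feature_views):
    """Gather data sources referenced inside feature views so they can be
    registered alongside explicitly declared DataSource objects."""
    sources = set()
    for fv in feature_views:
        sources.add(fv.batch_source)
        if fv.stream_source:
            sources.add(fv.stream_source)
    for rfv in request_feature_views:
        sources.add(rfv.request_data_source)
    for odfv in on_demand_feature_views:
        sources.update(odfv.input_request_data_sources.values())
    return list(sources)
```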
| 2022-03-25T21:37:12 |