Update README.md
README.md CHANGED
@@ -47,7 +47,7 @@ pinned: false
* [https://github.com/google/JetStream](https://github.com/google/JetStream) (and [https://github.com/google/jetstream-pytorch](https://github.com/google/jetstream-pytorch)) is a throughput- and memory-optimized engine for large language model (LLM) inference on XLA devices
* [https://github.com/google/flax](https://github.com/google/flax) is a neural network library for JAX that is designed for flexibility
* [https://github.com/kubernetes-sigs/lws](https://github.com/kubernetes-sigs/lws) facilitates Kubernetes deployment patterns for AI/ML inference workloads, especially multi-host inference workloads
- * [https://
+ * [https://gke-ai-labs.dev/](https://gke-ai-labs.dev/) is a collection of AI examples, best practices, and prebuilt solutions
* **Google Research Papers**: [https://research.google/](https://research.google/)

## On-device ML using [Google AI Edge](http://ai.google.dev/edge)
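To give a feel for why Flax (listed above) is described as a flexible neural network library for JAX, here is a minimal sketch of a `flax.linen` module and forward pass. It is an illustrative example only, assuming `jax` and `flax` are installed; it is not taken from any of the linked repositories.

```python
import jax
import jax.numpy as jnp
from flax import linen as nn


class MLP(nn.Module):
    """A tiny two-layer MLP defined with Flax's linen API."""
    features: int

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.features)(x))
        return nn.Dense(1)(x)


model = MLP(features=16)
x = jnp.ones((4, 8))                           # batch of 4 examples, 8 features each
params = model.init(jax.random.PRNGKey(0), x)  # parameters are created from the input shape
y = model.apply(params, x)                     # purely functional forward pass
print(y.shape)                                 # (4, 1)
```

Because parameters live outside the module and `apply` is a pure function, the same model composes directly with JAX transformations such as `jax.jit` and `jax.grad`.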