title,url,content,source Deep Lake as a Vector Store for LLM Applications,https://docs.activeloop.ai/#deep-lake-as-a-vector-store-for-llm-applications,"Store and search embeddings and their metadata including text, jsons, images, audio, video, and more. Save the data locally, in your cloud, or on Deep Lake storage.Build LLM Apps using or integrations with LangChain and LlamaIndexRun computations locally or on our Managed Tensor Database",activeloop Deep Lake as a Data Lake For Deep Learning,https://docs.activeloop.ai/#deep-lake-as-a-data-lake-for-deep-learning,"Store images, audios, videos, text and their metadata (i.e. annotations) in a data format optimized for Deep Learning. Save the data locally, in your cloud, or on Activeloop storage.Rapidly train PyTorch and TensorFlow models while streaming data with no boilerplate code.Run version control, dataset queries, and distributed workloads using a simple Python API.Deep Lake Architecture for Inference and Model Development Applications.",activeloop "To start using Deep Lake ASAP, check out our Vector Store Quickstart, Deep Learning Quickstart, Getting Started Guides, Tutorials, and Playbooks.",https://docs.activeloop.ai/,Please check out Deep Lake's GitHub repository and give us a ⭐ if you like the project. Join our Slack Community if you need help or have suggestions for improving documentation!,activeloop Deep Lake Docs Overview,https://docs.activeloop.ai/#deep-lake-docs-overview,Vector Store QuickstartDeep Learning QuickstartStorage & CredentialsGetting StartedTutorials (w Colab)PlaybooksDataset VisualizationBest PracticesLow-Level API Summary,activeloop Deep Lake API Reference,https://docs.deeplake.ai/en/latest/#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Vector Store Creating a Deep Lake Vector Store Vector Store Operations Vector Store Properties VectorStore.DeepMemory Creating a Deep Memory Deep Memory Operations Deep Memory Properties Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype 3D Bounding Box Htype Intrinsics Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Nifti Htype Point Cloud Htype Mesh Htype Embedding Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read MMDetection High-Performance Features Dataloader Sampler Tensor Query Language Random Split Deep Memory API Reference deeplake deeplake.VectorStore deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log deeplake.core.transform deeplake.core.vectorstore.deep_memory deeplake.random.seed,activeloop Indices and tables,https://docs.deeplake.ai/en/latest/#indices-and-tables,Index Module Index Search Page,activeloop How to Get Started with Vector Search in Deep Lake in Under 5 Minutes,https://docs.activeloop.ai/quickstart#how-to-get-started-with-vector-search-in-deep-lake-in-under-5-minutes,"If you prefer to use Deep Lake 
with LangChain, check out this tutorial. This quickstart focuses on vector storage and search, instead of end-2-end LLM apps, and it offers more customization and search options compared to the LangChain integration.",activeloop Installing Deep Lake,https://docs.activeloop.ai/quickstart#installing-deep-lake,"Deep Lake can be installed using pip. By default, Deep Lake does not install dependencies for the compute engine, google-cloud, and other features. Details on all installation options are available here. This quickstart also requires OpenAI.!pip3 install deeplake!pip3 install openai",activeloop Performing Vector Search,https://docs.activeloop.ai/quickstart#performing-vector-search,"Deep Lake offers highly-flexible vector search and hybrid search options discussed in detail in these tutorials. In this Quickstart, we show a simple example of vector search using default options, which performs cosine similarity search in Python on the client. prompt = 'What are the first programs he tried writing?'search_results = vector_store.search(embedding_data=prompt, embedding_function=embedding_function)The search_results is a dictionary with keys for the text, score, id, and metadata, with data ordered by score. If we examine the first returned text using search_results['text'][0], it appears to contain the answer to the prompt.What I Worked OnFebruary 2021Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.The first programs I tried writing were on the IBM 1401 that our school district used for what was then called 'data processing.' This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in",activeloop Creating Your First Vector Store,https://docs.activeloop.ai/quickstart#creating-your-first-vector-store,"Let's embed and store one of Paul Graham's essays in a Deep Lake Vector Store stored locally. First, we download the data:Next, let's import the required modules and set the OpenAI environmental variables for embeddings:from deeplake.core.vectorstore import VectorStoreimport openaiimport osos.environ['OPENAI_API_KEY'] = <OPENAI_API_KEY>Next, lets specify paths for the source text and the Deep Lake Vector Store. Though we store the Vector Store locally, Deep Lake Vectors Stores can also be created in memory, in the Deep Lake Managed Tensor Database, or in your cloud. Further details on storage options are available here. Let's also read and chunk the essay text based on a constant number of characters. source_text = 'paul_graham_essay.txt'vector_store_path = 'pg_essay_deeplake'with open(source_text, 'r') as f: text = f.read()CHUNK_SIZE = 1000chunked_text = [text[i:i+1000] for i in range(0,len(text), CHUNK_SIZE)]Next, let's define an embedding function using OpenAI. It must work for a single string and a list of strings, so that it can both be used to embed a prompt and a batch of texts. 
def embedding_function(texts, model='text-embedding-ada-002'): if isinstance(texts, str): texts = [texts] texts = [t.replace('\n', ' ') for t in texts] return [data.embedding for data in openai.embeddings.create(input = texts, model=model).data]Finally, let's create the Deep Lake Vector Store and populate it with data. We use a default tensor configuration, which creates tensors with text (str), metadata(json), id (str, auto-populated), embedding (float32). Learn more about tensor customizability here. vector_store = VectorStore( path = vector_store_path,)vector_store.add(text = chunked_text, embedding_function = embedding_function, embedding_data = chunked_text, metadata = [{'source': source_text}]*len(chunked_text))The path parameter is bi-directional:When a new path is specified, a new Vector Store is createdWhen an existing path is specified, the existing Vector Store is loadedThe Vector Store's data structure can be summarized using vector_store.summary(), which shows 4 tensors with 76 samples: tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (76, 1536) float32 None id text (76, 1) str None metadata json (76, 1) str None text text (76, 1) str None To create a vector store using pre-compute embeddings instead of the embedding_data and embedding_function, you may run# vector_store.add(text = chunked_text, # embedding = <list_of_embeddings>, # metadata = [{'source': source_text}]*len(chunked_text))",activeloop Authentication,https://docs.activeloop.ai/quickstart#authentication,"To use Deep Lake features that require authentication (Deep Lake storage, Tensor Database storage, connecting your cloud dataset to the Deep Lake UI, etc.) you should register in the Deep Lake App and authenticate on the client using the methods in the link below:User Authentication",activeloop Creating Vector Stores in the Deep Lake Managed Tensor Database,https://docs.activeloop.ai/quickstart#creating-vector-stores-in-the-deep-lake-managed-tensor-database,"Deep Lake provides Managed Tensor Database that stores and runs queries on Deep Lake infrastructure, instead of the client. To use this service, specify runtime = {'tensor_db': True} when creating the Vector Store.# vector_store = VectorStore(# path = vector_store_path,# runtime = {'tensor_db': True}# )# vector_store.add(text = chunked_text, # embedding_function = embedding_function, # embedding_data = chunked_text, # metadata = [{'source': source_text}]*len(chunked_text)) # search_results = vector_store.search(embedding_data = prompt, # embedding_function = embedding_function)",activeloop Next Steps,https://docs.activeloop.ai/quickstart#next-steps,"Check out our Getting Started Guide for a comprehensive walk-through of Deep Lake Vector Stores. For scaling Deep Lake to production-level applications, check out our Managed Tensor Database and Support for Concurrent Writes.Congratulations, you've created a Vector Store and performed vector search using Deep Lake🤓",activeloop Visualizing your Vector Store,https://docs.activeloop.ai/quickstart#visualizing-your-vector-store,"Visualization is available for Vector Stores stored in or connected to Deep Lake. The vector store above is stored locally, so it cannot be visualized, but here's an example of visualization for a representative Vector Store.",activeloop Installing Deep Lake,https://docs.activeloop.ai/quickstart-dl#installing-deep-lake,"Deep Lake can be installed using pip. 
By default, Deep Lake does not install dependencies for video, google-cloud, compute engine, and other features. Details on all installation options are available here. !pip3 install deeplake",activeloop Reading Samples From a Deep Lake Dataset,https://docs.activeloop.ai/quickstart-dl#reading-samples-from-a-deep-lake-dataset,Data is not immediately read into memory because Deep Lake operates lazily. You can fetch data by calling the .numpy() or .data() methods:# Indexingimage = ds.images[0].numpy() # Fetch the first image and return a numpy arraylabels = ds.labels[0].data() # Fetch the labels in the first image# Slicingimg_list = ds.labels[0:100].numpy(aslist=True) # Fetch 100 labels and store # them as a list of numpy arraysOther metadata such as the mapping between numerical labels and their text counterparts can be accessed using:labels_list = ds.labels.info['class_names'],activeloop Fetching Your First Deep Lake Dataset,https://docs.activeloop.ai/quickstart-dl#fetching-your-first-deep-lake-dataset,"Let's load the Visdrone dataset, a rich dataset with many object detections per image. Datasets hosted by Activeloop are identified by the host organization id followed by the dataset name: activeloop/visdrone-det-train.import deeplakedataset_path = 'hub://activeloop/visdrone-det-train'ds = deeplake.load(dataset_path) # Returns a Deep Lake Dataset but does not download data locally",activeloop Visualizing a Deep Lake Dataset,https://docs.activeloop.ai/quickstart-dl#visualizing-a-deep-lake-dataset,"Deep Lake enables users to visualize and interpret large datasets. The tensor layout for a dataset can be inspected using:ds.summary()The dataset can be visualized in the Deep Lake UI, or using an iframe in a Jupyter notebook:ds.visualize()Visualizing datasets in the Deep Lake UI will unlock more features and faster performance compared to visualization in Jupyter notebooks.",activeloop Creating Your Own Deep Lake Datasets,https://docs.activeloop.ai/quickstart-dl#creating-your-own-deep-lake-datasets,"You can access all of the features above and more with your own datasets! If your source data conforms to one of the formats below, you can ingest them directly with 1 line of code. The ingestion functions support source data from the cloud, as well as creation of Deep Lake datasets in the cloud.YOLOCOCOClassificationsFor example, a COCO format dataset can be ingested using:dataset_path = 's3://bucket_name_deeplake/dataset_name' # Destination for the Deep Lake datasetimages_folder = 's3://bucket_name_source/images_folder'annotations_files = ['s3://bucket_name_source/annotations.json'] # Can be a list of COCO jsons.ds = deeplake.ingest_coco(images_folder, annotations_files, dataset_path, src_creds = {...}, dest_creds = {...})For creating datasets that do not conform to one of the formats above, you can use our methods for manually creating datasets, tensors, and populating them with data.",activeloop Authentication,https://docs.activeloop.ai/quickstart-dl#authentication,"To use Deep Lake features that require authentication (Activeloop storage, Tensor Database storage, connecting your cloud dataset to the Deep Lake UI, etc.) you should register in the Deep Lake App and authenticate on the client using the methods in the link below:User Authentication",activeloop Next Steps,https://docs.activeloop.ai/quickstart-dl#next-steps,"Check out our Getting Started Guide for a comprehensive walk-through of Deep Lake. 
Also check out tutorials on Running Queries, Training Models, and Creating Datasets, as well as Playbooks about powerful use-cases that are enabled by Deep Lake.Congratulations, you've got Deep Lake working on your local machine🤓",activeloop Storing Datasets in Your Own Cloud,https://docs.activeloop.ai/storage-and-credentials#storing-datasets-in-your-own-cloud,"Deep Lake can be used as a pure OSS package without any registration or relationship with Activeloop. However, registering with Activeloop offers several benefits:Storage provided by ActiveloopAccess to the Tensor Database for performant vector searchAccess to Deep Lake App, which provides dataset visualization, querying, version control UI, dataset analytics, and other powerful featuresManaged credentials for Deep Lake datasets stored outside of ActiveloopWhen connecting data from your cloud using Managed Credentials, the data is never stored or cached in Deep Lake. All Deep Lake user interfaces (browser, python, etc.) fetch data directly from long-term storage.Authentication Using Managed CredentialsStorage OptionsStoring Deep Lake Data in Your Own Cloud",activeloop "Compute Engine offers high-performance implementations of compute-heavy Deep Lake features, such as distributed dataloading, large queries, and indexing. The engine is built in C++ and the user-interface is in Python.",https://docs.activeloop.ai/performance-features/introduction#compute-engine-offers-high-performance-implementations-of-compute-heavy-deep-lake-features-such-as-d,"The Deep Lake Compute Engine is only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.",activeloop Features Optimized in the Compute Engine:,https://docs.activeloop.ai/performance-features/introduction#features-optimized-in-the-compute-engine,Performant DataloaderTensor Query Language (TQL)Index for ANN SearchManaged Tensor Database,activeloop How to use Deep Lake's performant Dataloader built and optimized in C++,https://docs.activeloop.ai/performance-features/performant-dataloader#how-to-use-deep-lakes-performant-dataloader-built-and-optimized-in-c++,"Deep Lake offers an optimized dataloader implementation built in C++, which is 1.5-3X faster than the pure-python implementation, and it supports distributed training. 
The C++ and Python dataloaders can be used interchangeably, and their syntax varies as shown below.",activeloop Pure-Python Dataloader,https://docs.activeloop.ai/performance-features/performant-dataloader#pure-python-dataloader,"train_loader = ds_train.pytorch(num_workers = 8, transform = transform, batch_size = 32, tensors=['images', 'labels'], shuffle = True)",activeloop C++ Dataloader,https://docs.activeloop.ai/performance-features/performant-dataloader#c++-dataloader,"The C++ dataloader is only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.The Deep Lake query engine is only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.",activeloop TensorFlow,https://docs.activeloop.ai/performance-features/performant-dataloader#tensorflow,"train_loader = ds.dataloader()\ .transform(transform)\ .batch(32)\ .shuffle(True)\ .offset(10000)\ .tensorflow(tensors=['images', 'labels'], num_workers = 8)",activeloop Further Information,https://docs.activeloop.ai/performance-features/performant-dataloader#further-information,Training ModelsTraining Reproducibility Using Deep Lake and Weights & Biases,activeloop PyTorch (returns PyTorch Dataloader),https://docs.activeloop.ai/performance-features/performant-dataloader#pytorch-returns-pytorch-dataloader,"train_loader = ds.dataloader()\ .transform(transform)\ .batch(32)\ .shuffle(True)\ .offset(10000)\ .pytorch(tensors=['images', 'labels'], num_workers = 8)",activeloop How to query datasets using the Deep Lake Tensor Query Language (TQL),https://docs.activeloop.ai/performance-features/querying-datasets#how-to-query-datasets-using-the-deep-lake-tensor-query-language-tql,"Querying datasets is a critical aspect of data science workflows that enables users to filter datasets and focus their work on the most relevant data. Deep Lake offers a highly-performant query engine built in C++ and optimized for the Deep Lake data format. The Deep Lake query engine is only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.",activeloop Querying in the Vector Store Python API,https://docs.activeloop.ai/performance-features/querying-datasets#querying-in-the-vector-store-python-api,"view = vector_store.search(query = <query_string>, exec_option = 'compute_engine')",activeloop Query Syntax,https://docs.activeloop.ai/performance-features/querying-datasets#query-syntax,TQL Syntax,activeloop Querying in the low-level Python API,https://docs.activeloop.ai/performance-features/querying-datasets#querying-in-the-low-level-python-api,Queries can also be performed in the Python API using:view = ds.query(<query_string>),activeloop Saving and utilizing dataset query results in the low-level Python API,https://docs.activeloop.ai/performance-features/querying-datasets#saving-and-utilizing-dataset-query-results-in-the-low-level-python-api,"The query results (Dataset Views) can be saved in the UI as shown above, or if the view is generated in Python, it can be saved using the Python API below. Full details are available here.ds_view.save_view(message = 'Samples with monarchs')In order to maintain data lineage, Dataset Views are immutable and are connected to specific commits. Therefore, views can only be saved if the dataset has a commit and there are no uncommitted changes in the HEAD. 
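For illustration, a minimal save-and-reload flow that respects these rules might look like the sketch below (the dataset path, query string, and commit message are placeholders, not values from the docs):
import deeplake
ds = deeplake.load('hub://<org_id>/<dataset_name>')  # placeholder path
# Commit any pending changes so there are no uncommitted changes in the HEAD
if ds.has_head_changes:
    ds.commit('Commit changes before saving a view')
# Run a query and save the result as a view tied to the current commit
view = ds.query("select * where contains(labels, 'monarch')")
view.save_view(message = 'Samples with monarchs')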
You can check for this using ds.has_head_changes. Saved Dataset Views can be loaded in the Python API and passed to ML frameworks just like regular datasets:ds_view = ds.load_view(view_id, optimize = True, num_workers = 2)for data in ds_view.pytorch(): # Training loop hereThe optimize parameter in ds.load_view(..., optimize = True) materializes the Dataset View into a new sub-dataset that is optimized for streaming. If the original dataset uses linked tensors, the data will be copied to Deep Lake format.Optimizing the Dataset View is critical for achieving rapid streaming.If the saved Dataset View is no longer needed, it can be deleted using:ds.delete_view(view_id)",activeloop How to Use Deep Memory to Improve Retrieval Accuracy in Your LLM Apps,https://docs.activeloop.ai/performance-features/deep-memory#how-to-use-deep-memory-to-improve-retrieval-accuracy-in-your-llm-apps,Deep Memory is a suite of tools that enables you to optimize your Vector Store for your use-case and achieve higher accuracy in your LLM apps.,activeloop Embedding Transformation,https://docs.activeloop.ai/performance-features/deep-memory#embedding-transformation,"Deep Memory computes a transformation that converts your embeddings into an embedding space that is tailored for your use case. This increases the accuracy of your Vector Search by up to 22%, which significantly impacts the user experience of your LLM applications. Furthermore, Deep Memory can also be used to decrease costs by reducing the amount of context (k) that must be injected into the LLM prompt to achieve a given accuracy, thereby reducing token usage.",activeloop How Deep Lake Implements an Index for ANN Search,https://docs.activeloop.ai/performance-features/index-for-ann-search#how-deep-lake-implements-an-index-for-ann-search,Deep Lake implements the Hierarchical Navigable Small World (HNSW) index for Approximate Nearest Neighbor (ANN) search. The index is based on the OSS Hnswlib package with added optimizations. The implementation enables users to run queries on >35M embeddings in less than 1 second.,activeloop Unique aspects of Deep Lake's HNSW implementation,https://docs.activeloop.ai/performance-features/index-for-ann-search#unique-aspects-of-deep-lakes-hsnw-implementation,Rapid index creation with multi-threading optimized for Deep LakeEfficient memory management that reduces RAM usage,activeloop Memory Management in Deep Lake,https://docs.activeloop.ai/performance-features/index-for-ann-search#memory-management-in-deep-lake,RAM Cost >> On-disk Cost >> Object Storage CostMinimizing RAM usage and maximizing use of object storage significantly reduces the cost of running a Vector Database. Deep Lake has a unique implementation of memory allocation that minimizes the RAM requirement without any performance penalty:Memory Architecture for the Deep Lake Vector Store,activeloop Limitations,https://docs.activeloop.ai/performance-features/index-for-ann-search#limitations,"The following limitations of the index are being addressed in upcoming releases:The index does not support incremental updates. If any update is made to the dataset, the index is re-created.If the search is performed using a combination of attribute and vector search, the index is not used and linear search is applied instead.",activeloop Using the Index,https://docs.activeloop.ai/performance-features/index-for-ann-search#using-the-index,"By default, the index is turned off in Deep Lake. 
To enable the index, during Vector Store initialization or loading, specify the Vector Store length threshold above which the index will be applied:vectorstore = VectorStore(path, index_params = {'threshold': <threshold_int>})",activeloop LangChain and LlamaIndex,https://docs.activeloop.ai/performance-features/managed-database#langchain-and-llamaindex,"To use the Managed Vector Database in LangChain or Llama Index, specify dataset_path = 'hub://org_id/dataset_name' and runtime = {'tensor_db': True} during Vector Store creation.",activeloop Overview of Deep Lake's Managed Tensor Database,https://docs.activeloop.ai/performance-features/managed-database#overview-of-deep-lakes-managed-tensor-database,"Deep Lake offers a serverless Managed Tensor Database that eliminates the complexity of self-hosting and substantially lowers costs. Currently, it only supports dataset queries, including vector search, but additional features for creating and modifying data are being added in December 2023.Comparison of Deep Lake as a Managed Database vs Embedded Database",activeloop REST API,https://docs.activeloop.ai/performance-features/managed-database#rest-api,A standalone REST API is available for interacting with the Managed Database:REST API,activeloop Further Information:,https://docs.activeloop.ai/performance-features/managed-database#further-information,Migrating Datasets to the Tensor Database,activeloop Vector Store use-cases are implemented using an API that balances simplicity and customizability,https://docs.activeloop.ai/getting-started#vector-store-use-cases-are-implemented-using-an-api-that-balances-simplicity-and-customizability,Vector Store,activeloop Deep Learning use-cases are implemented using the low-level API that offers maximum customizability,https://docs.activeloop.ai/getting-started#deep-learning-use-cases-are-implemented-using-the-low-level-api-that-offers-maximum-customizability,Deep Learning,activeloop Deep Lake Tutorials Based on Use-Case:,https://docs.activeloop.ai/tutorials#deep-lake-tutorials-based-on-use-case,Vector Store TutorialsDeep Learning Tutorials,activeloop Playbooks are comprehensive examples of end-to-end workflows using Activeloop products,https://docs.activeloop.ai/playbooks#playbooks-are-comprehensive-examples-of-end-to-end-workflows-using-activeloop-products,"Querying, Training and Editing Datasets with Data LineageEvaluating Model PerformanceTraining Reproducibility Using Deep Lake and Weights & BiasesWorking with Videos",activeloop Import and Installation,https://docs.activeloop.ai/api-basics#import-and-installation,"By default, Deep Lake does not install dependencies for audio, video, google-cloud, and other features. Details on installation options are available here. !pip3 install deeplakeimport deeplake",activeloop Loading Deep Lake Datasets,https://docs.activeloop.ai/api-basics#loading-deep-lake-datasets,"Deep Lake datasets can be stored at a variety of storage locations using the appropriate dataset_path parameter below. We support S3, GCS, Activeloop storage, and are constantly adding to the list.# Load a Deep Lake Datasetds = deeplake.load('dataset_path', creds = {'optional'}, token = 'optional')",activeloop Deleting Datasets,https://docs.activeloop.ai/api-basics#deleting-datasets,"ds.delete()deeplake.delete('dataset_path', creds = {'optional'}, token = 'optional')API deletions of Deep Lake Cloud datasets are immediate, whereas UI-initiated deletions are postponed by 5 minutes. 
Once deleted, dataset names can't be reused in the Deep Lake Cloud.",activeloop Creating Deep Lake Datasets,https://docs.activeloop.ai/api-basics#creating-deep-lake-datasets,"# Create an empty Deep Lake datasetds = deeplake.empty('dataset_path', creds = {'optional'}, token = 'optional')# Create a Deep Lake Dataset with the same tensors as another datasetds = deeplake.like(ds_object or 'dataset_path', creds = {'optional'}, token = 'optional')# Automatically create a Deep Lake Dataset from another data sourceds = deeplake.ingest(source_folder, deeplake_dataset_path, ... 'see API reference for details')ds = deeplake.ingest_coco(images_folder, 'annotations.json', deeplake_dataset_path, ... 'see API reference for details')ds = deeplake.ingest_yolo(data_directory, deeplake_dataset_path, class_names_file, ... 'see API reference for details')",activeloop Visualizing and Inspecting Datasets,https://docs.activeloop.ai/api-basics#visualizing-and-inspecting-datasets,ds.visualize()ds.summary(),activeloop Appending Data to Datasets,https://docs.activeloop.ai/api-basics#appending-data-to-datasets,"ds.append({'tensor_1': np.ones((1,4)), 'tensor_2': deeplake.read('image.jpg')})ds.my_group.append({'tensor_1': np.ones((1,4)), 'tensor_2': deeplake.read('image.jpg')})",activeloop Appending/Updating Data in Individual Tensors,https://docs.activeloop.ai/api-basics#appending-updating-data-in-individual-tensors,"# Append a single sampleds.my_tensor.append(np.ones((1,4)))ds.my_tensor.append(deeplake.read('image.jpg'))# Append multiple samples. The first axis in the # numpy array is assumed to be the sample axis for the tensords.my_tensor.extend(np.ones((5,1,4)))# Editing or adding data at a specific indexds.my_tensor[i] = deeplake.read('image.jpg')",activeloop Deleting data,https://docs.activeloop.ai/api-basics#deleting-data,# Removing samples by indexds.pop(i)# Delete all data in a tensords.<tensor_name>.clear()# Delete tensor and all of its datads.delete_tensor(<tensor_name>),activeloop Creating Tensors,https://docs.activeloop.ai/api-basics#creating-tensors,"# Specifying htype is recommended for maximizing performance.ds.create_tensor('my_tensor', htype = 'bbox')# Specifying the correct compression is critical for images, videos, audio and # other rich data types. 
ds.create_tensor('songs', htype = 'audio', sample_compression = 'mp3')",activeloop Appending Empty Samples or Skipping Samples,https://docs.activeloop.ai/api-basics#appending-empty-samples-or-skipping-samples,"# Data appended as None will be returned as an empty arrayds.append({'tensor_1': deeplake.read(...), 'tensor_2': None})ds.my_tensor.append(None)# Empty arrays can be explicitly appended if the length of the shape # of the empty array matches that of the other samplesds.boxes.append(np.zeros((0,4)))",activeloop Connecting Deep Lake Datasets to ML Frameworks,https://docs.activeloop.ai/api-basics#connecting-deep-lake-datasets-to-ml-frameworks,"# PyTorch Dataloaderdataloader = ds.pytorch(batch_size = 16, transform = {'images': torchvision_tform, 'labels': None}, num_workers = 2, scheduler = 'threaded')# TensorFlow Datasetds_tensorflow = ds.tensorflow()# Enterprise Dataloaderdataloader = ds.dataloader().batch(batch_size = 64).pytorch(num_workers = 8)",activeloop Accessing Tensor Data,https://docs.activeloop.ai/api-basics#accessing-tensor-data,# Read the i-th tensor samplenp_array = ds.my_tensor[i].numpy()text = ds.my_text_tensor[i].data() # More comprehensive view of the databytes = ds.my_tensor[i].tobytes() # Returns the raw bytes of the sample# Read the i-th dataset sample as a numpy arrayimage = ds[i].images.numpy()# Read the i-th labels as a numpy array or list of stringslabels_array = ds.labels[i].numpy()labels_array = ds.labels[i].data()['value'] # same as .numpy()labels_string_list = ds.labels[i].data()['text']# Read a tensor sample from a hierarchical groupnp_array = ds.my_group.my_tensor_1[i].numpy()np_array = ds.my_group.my_tensor_2[i].numpy()# Read multiple tensor samples into numpy arraynp_array = ds.my_tensor[0:10].numpy() # Read multiple tensor samples into a list of numpy arraysnp_array_list = ds.my_tensor[0:10].numpy(aslist=True),activeloop Creating Tensor Hierarchies,https://docs.activeloop.ai/api-basics#creating-tensor-hierarchies,ds.create_group('my_group')ds.my_group.create_tensor('my_tensor')ds.create_tensor('my_group/my_tensor') #Automatically creates the group 'my_group',activeloop Querying Datasets and Saving Dataset Views,https://docs.activeloop.ai/api-basics#querying-datasets-and-saving-dataset-views,"A full list of supported queries is shown here. 
view = ds.query("Select * where contains(labels, 'giraffe')")view.save_view(optimize = True)view = ds.load_view(id = 'query_id')# Return the original dataset indices that satisfied the query conditionindices = list(view.sample_indices)",activeloop Adding Tensor and Dataset-Level Metadata,https://docs.activeloop.ai/api-basics#adding-tensor-and-dataset-level-metadata,"# Add or update dataset metadatads.info.update(key1 = 'text', key2 = number)# Also can run ds.info.update({'key1': 'value1', 'key2': num_value})# Add or update tensor metadatads.my_tensor.info.update(key1 = 'text', key2 = number)# Delete metadatads.info.delete() #Delete all metadatads.info.delete('key1') #Delete 1 key in metadata",activeloop Copying datasets,https://docs.activeloop.ai/api-basics#copying-datasets,"# Fastest option - copies everything including version historyds = deeplake.deepcopy('src_dataset_path', 'dest_dataset_path', src_creds, dest_creds, token)# Slower option - copies only data on the last commitds = deeplake.copy('src_dataset_path', 'dest_dataset_path', src_creds, dest_creds, token)",activeloop Advanced,https://docs.activeloop.ai/api-basics#advanced,"# Load a Deep Lake Dataset if it already exists (same as deeplake.load), or initialize # a new Deep Lake Dataset if it does not already exist (same as deeplake.empty)ds = deeplake.dataset('dataset_path', creds = {'optional'}, token = 'optional')# Append multiple samples using a listds.my_tensor.extend([np.ones((1,4)), np.ones((3,4)), np.ones((2,4))])# Fetch adjacent data in the chunk -> Increases speed when loading # sequentially or if a tensor's data fits in the cache.numeric_label = ds.labels[i].numpy(fetch_chunks = True)",activeloop Versioning Datasets,https://docs.activeloop.ai/api-basics#versioning-datasets,"# Commit datacommit_id = ds.commit('Added 100 images of trucks')# Print the commit loglog = ds.log()# Checkout a branch or commit ds.checkout('branch_name' or commit_id)# Create a new branchds.checkout('new_branch', create = True)# Examine differences between commitsds.diff()# Delete all changes since the previous commitds.reset()# Delete a branch and its commits - Only allowed for branches that have not been mergedds.delete_branch('branch_name')",activeloop Maximizing performance,https://docs.activeloop.ai/api-basics#maximizing-performance,Make sure to use the with context when making any updates to datasets. with ds: ds.create_tensor('my_tensor') for i in range(10): ds.my_tensor.append(i),activeloop How to use Deep Lake at Scale with best practices,https://docs.activeloop.ai/technical-details/best-practices#how-to-use-deep-lake-at-scale-with-best-practices,,activeloop Tensors,https://docs.activeloop.ai/technical-details/data-layout#tensors,"Deep Lake uses a columnar storage architecture, and the columns in Deep Lake are referred to as tensors. Data in the tensors can be added or modified, and the data in different tensors are independent of each other.",activeloop Hidden Tensors,https://docs.activeloop.ai/technical-details/data-layout#hidden-tensors,"When data is appended to Deep Lake, certain important information is broken up and duplicated in a separate tensor, so that the information can be accessed and queried without loading all of the data. Examples include the shape of a sample (i.e. 
width, height, and number of channels for an image), or the metadata from file headers that were passed to deeplake.read('filename').",activeloop Indexing and Samples,https://docs.activeloop.ai/technical-details/data-layout#indexing-and-samples,"Deep Lake datasets and their tensors are indexed, and data at a given index that spans multiple tensors are referred to as samples. Data at the same index are assumed to be related. For example, data in a bbox tensor at index 100 is assumed to be related to data in the tensor image at index 100.",activeloop Chunking,https://docs.activeloop.ai/technical-details/data-layout#chunking,"Most data in Deep Lake format is stored in chunks, which are blobs of data of a pre-defined size. The purpose of chunking is to accelerate the streaming of data across networks by increasing the amount of data that is transferred per network request.Each tensor has its own chunks, and the default chunk size is 8MB. A single chunk consists of data from multiple indices when the individual data points (image, label, annotation, etc.) are smaller than the chunk size. Conversely, when individual data points are larger than the chunk size, the data is split among multiple chunks (tiling). The exception to this chunking logic is video data. Videos that are larger than the specified chunk size are not broken into smaller pieces, because Deep Lake uses efficient libraries to stream and access subsets of videos, thus making it unnecessary to split them apart.",activeloop Groups,https://docs.activeloop.ai/technical-details/data-layout#groups,"Multiple tensors can be combined into groups. Groups do not fundamentally change the way data is stored, but they are useful for helping Activeloop Platform understand how different tensors are related.",activeloop Length of a Dataset,https://docs.activeloop.ai/technical-details/data-layout#length-of-a-dataset,"Deep Lake allows for ragged tensors (tensors of different length), so it is important to understand the terminology around dataset length:length (ds.len or len(ds)) - The length of the shortest tensor, as determined by its last index.minimum length (ds.min_len) - Same as lengthmaximum length (ds.max_len) - The length of the longest tensor, as determined by its last index. By default, Deep Lake throws an error if a tensor is accessed at an index at which data (empty or non-empty) has not been added. In the example below, ds.bbox[3].numpy() would throw an error. To pad the unspecified data and create a virtual view where the missing samples are treated as empty data, use ds.max_view(). In the example below, the length of this virtual view would be 6.",activeloop "Understanding the Interaction Between Deep Lake's Versions, Queries, and Dataset Views.",https://docs.activeloop.ai/technical-details/version-control-and-querying#understanding-the-interaction-between-deep-lakes-versions-queries-and-dataset-views.,"Version control is the core of the Deep Lake data format, and it interacts with queries and views as follows:Datasets have commits and branches, and they can be traversed or merged using Deep Lake's Python API. Queries are applied on top of commits, and in order to save a query result as a view, the dataset cannot be in an uncommitted state (no changes were performed since the prior commit). Each saved view is associated with a particular commit, and the view itself contains information on which dataset indices satisfied the query condition.This logical approach was chosen in order to preserve data lineage. 
Otherwise, it would be possible to change data on which a query was executed, thereby potentially invalidating the saved view, since the indices that satisfied the query condition may no longer be correct after the dataset was changed. Please check out our Getting Started Guide to learn how to use the Python API to version your data, run queries, and save views. An example workflow using version control and queries is shown below.",activeloop Version Control HEAD Commit,https://docs.activeloop.ai/technical-details/version-control-and-querying#version-control-head-commit,"Unlike Git, Deep Lake's dataset version control does not have a local staging area because all dataset updates are immediately synced with the permanent storage location (cloud or local). Therefore, any changes to a dataset are automatically stored in a HEAD commit on the current branch. This means that the uncommitted changes do not appear on other branches, and uncommitted changes are visible to all users.",activeloop How to visualize machine learning datasets,https://docs.activeloop.ai/technical-details/dataset-visualization#how-to-visualize-machine-learning-datasets,"Deep Lake has a web interface for visualizing, versioning, and querying machine learning datasets. It utilizes the Deep Lake format under-the-hood, and it can be connected to datasets stored in all Deep Lake storage locations.",activeloop Visualization can be performed in 3 ways:,https://docs.activeloop.ai/technical-details/dataset-visualization#visualization-can-be-performed-in-3-ways,1.In the Deep Lake UI (most feature-rich and performant option)2.In the Python API using ds.visualize()3.In your own application using our integration options.,activeloop Requirements for correctly visualizing your own datasets,https://docs.activeloop.ai/technical-details/dataset-visualization#requirements-for-correctly-visualizing-your-own-datasets,Deep Lake makes assumptions about underlying data types and relationships between tensors in order to display the data correctly. Understanding the following concepts is necessary in order to use the visualizer: 1.Data Types (htypes)2.Relationships between tensors,activeloop Downsampling Data for Faster Visualization,https://docs.activeloop.ai/technical-details/dataset-visualization#downsampling-data-for-faster-visualization,"For faster visualization of images and masks, tensors can be downsampled during dataset creation. The downsampled data are stored in the dataset and are automatically rendered by the visualizer depending on the zoom level. To add downsampling to your tensors, specify the downsampling factor and the number of downsampling layers during tensor creation:# 3X downsampling per layer, 2X layersds.create_tensor('images', htype = 'image', downsampling = (3,2))Note: since downsampling requires decompression and recompression of data, it will slow down dataset ingestion.",activeloop Indexing,https://docs.activeloop.ai/technical-details/tensor-relationships#indexing,"Hub datasets and their tensors are indexed like ds[index] or ds.tensor_name[index], and data at the same index are assumed to be related. 
For example, a bounding_box at index 100 is assumed to apply to the image at index 100.",activeloop Relationships Between Tensors,https://docs.activeloop.ai/technical-details/tensor-relationships#relationships-between-tensors,"For datasets with multiple tensors, it is important to follow the conventions below in order for the visualizer to correctly infer how tensors are related.By default, in the absence of groups, the visualizer assumes that all tensors are related to each other. This works well for simple use cases. For example, it is correct to assume that the images, labels, and boxes tensors are related in the dataset below:ds-> images (htype = image)-> labels (htype = class_label)-> boxes (htype = bbox)However, if datasets are highly complex, assuming that all tensor are related may lead to visualization errors, because every tensor may not be related to every other tensor:ds-> images (htype = image)-> vehicle_labels (htype = class_label)-> vehicle_boxes (htype = bbox)-> people_labels (htype = class_label)-> people_masks (htype = binary_mask)In the example above, only some of the annotation tensors are related to each other: vehicle_labels -> vehicle_boxes: Boxes and labels describing cars, trucks, etc.people_labels -> people_masks: Binary masks and labels describing adults, toddlers, etc.The best method for disambiguating the relationships between tensors is to place them in groups, because the visualizer assumes that annotation tensors in different groups are not related.In the example above, the following groups could be used to disambiguate the annotations:ds-> images (htype = image)-> vehicles (group) -> vehicle_labels (htype = class_label) -> vehicle_boxes (htype = bbox)-> people (group) -> people_labels (htype = class_label) -> people_masks (htype = binary_mask)",activeloop HTML iframe (Alpha),https://docs.activeloop.ai/technical-details/visualizer-integration#html-iframe-alpha,"To embed into your html page, you can use our iframe integration:<iframe src='https://app.activeloop.ai/visualizer/iframe?url=hub://activeloop/imagenet-train' width='800px' height='600px'>iframe URL: Params:url - The url of the dataset vs - Visualizer state, which can be obtained from the platform url token - User token, for private datasets. If the value is ask then the UI will be populated for entering the token checkpoint - Dataset checkpoint query - Query string to apply on the dataset",activeloop How to embed the Activeloop visualizer into your own web applications,https://docs.activeloop.ai/technical-details/visualizer-integration#how-to-embed-the-activeloop-visualizer-into-your-own-web-applications,"Visualization engine allows the user to visualize, explore, and interact with Deep Lake datasets. 
In addition to using through the Activeloop UI or in Python, the Activeloop visualizer can also be embedded into your application.",activeloop Javascript API (Alpha),https://docs.activeloop.ai/technical-details/visualizer-integration#javascript-api-alpha,"To have more fine grained control, you can embed the visualizer using Javascript:<div id='container'></div><script src='https://app.activeloop.ai/visualizer/vis.js'></script><script> let container = document.getElementById('container') window.vis.visualize('hub://activeloop/imagenet-train', null, null, container, null)</script>or to visualize private datasets with authentication<div id='container'></div><script src='https://app.activeloop.ai/visualizer/vis.js'></script><script> let container = document.getElementById('container') window.vis.visualize('hub://org/private', null, null, container, { requireSignin: true })</script>InterfaceBelow you can find definitions of the arguments./// ds - Dataset url/// commit - optional commit id/// state - optional initial state of the visualizer/// container - HTML element serving as container for visualizer elements/// options - optional Visualization optionsstatic visualize( ds: string, commit: string | null = null, state: string | null = null, container: HTMLElement, options: VisOptions | null): Promise<Vis>;/// backlink - Show backlink to platform button/// singleSampleView - Enable single sample view through enter key/// requireSignin - Requires signin to get access token/// token - Token id/// gridMode - Canvas vs Grid/// queryString - Query to apply on the iframeexport type VisOptions = { backlink?: Boolean singleSampleView?: Boolean requireSignin?: Boolean token: string | null gridMode?: 'canvas' | 'grid' queryString?: string}This visualize returns Promise<Vis> which can be used to dynamically change the visualizer state. Vis supports only query functions for nowclass Vis{ /// Asynchronously runs a query and resolves the promise when query completed. /// In case of error in query, rejects the promise. query(queryString: string): Promise<void>}",activeloop How Shuffling Works in Deep Lake's PyTorch DataLoader,https://docs.activeloop.ai/technical-details/shuffling-in-dataloaders#how-shuffling-works-in-deep-lakes-pytorch-dataloader,"The Deep Lake shuffling algorithm is based upon a shuffle buffer that preloads a specified amount of data (in MB) determined by the buffer_size parameter in ds.pytorch(buffer_size = 2048). First, the dataloader randomly selects chunks from the applicable tensors until the shuffle buffer is full. Next, the indices in shuffle buffer are randomly sampled to construct the batches that are returned by the dataloader. As the data in the shuffle buffer is consumed, new chunks are randomly selected and added to the buffer.In the OSS dataloader, the shuffle buffer contains the decompressed, decoded, and transformed samples. When using the PyTorch dataloaders, this corresponds to torch tensors. In the Performant dataloader, the shuffle buffer contains the non-decompressed data in the format they are stored in. For images, this typically corresponds to compressed bytes in jpeg, png, or other compressions. Since compressed data is stored more efficiently than uncompressed data, there are typically more distinct samples of data in the Performant dataloader shuffle buffer compared to the OSS shuffle buffer. 
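For reference, the buffer size is set when the dataloader is created; a minimal sketch, assuming ds is an already-loaded dataset and the tensor names and parameter values are illustrative:
train_loader = ds.pytorch(num_workers = 8,
                          batch_size = 32,
                          tensors = ['images', 'labels'],
                          shuffle = True,
                          buffer_size = 2048)  # shuffle buffer size in MB
for batch in train_loader:
    pass  # training loop here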
If many chunks in the buffer contain data from the same class, which may occur if data was uploaded in non-random order, the shuffle buffer may contain fewer unique classes than if the samples were chosen fully randomly based on index. The most extreme case of reduced randomness occurs when datasets are much larger than the shuffle buffer, when they have many classes, and when those classes occur in sequence within the dataset indices. One example dataset is Unshuffled ImageNet, which has 1000 classes, 1.2M images, 140GB of data, and approximately 140 images per 16MB chunk. When the images are uploaded in sequence, the plot below shows how many unique classes are returned by the loader vs the number of images that have been returned in total. It is evident that fully random sampling returns more unique values than the Deep Lake dataloader. If reduced randomness has an impact on model performance in your workflows, the recommended countermeasures are:Store the dataset in a shuffled fashion such that the data does not appear in order by class. This completely mitigates the randomness concerns at the output of the data loader.Store the dataset with a smaller chunk size. This increases randomness because the shuffle buffer selects more discrete chunks before filling up. The current default chunk size is 8MB, and reducing the chunk size to 4MB significantly increases randomness (see plot above) with only a modest slowdown in data transfer speed.Increase the size of the shuffle buffer. This mitigates the randomness concerns but may not completely alleviate them.",activeloop Providing Feedback,https://docs.activeloop.ai/technical-details/how-to-contribute#providing-feedback,We love feedback! Please join our Slack Community or raise an issue on GitHub.,activeloop Getting Started With Development,https://docs.activeloop.ai/technical-details/how-to-contribute#getting-started-with-development,"Clone the repository:git clone https://github.com/activeloopai/deeplake cd deeplake If you are using Linux, install environment dependencies:apt-get -y updateapt-get -y install git wget build-essential python-setuptools python3-dev libjpeg-dev libpng-dev zlib1g-devapt install build-essentialIf you are planning to work on videos, install codecs:apt-get install -y ffmpeg libavcodec-dev libavformat-dev libswscale-devInstall the package locally with plugins and development dependencies:pip install -r deeplake/requirements/plugins.txtpip install -r deeplake/requirements/tests.txtpip install -e .Run local tests to ensure everything is correct:pytest -x --local .",activeloop Using Docker (optional),https://docs.activeloop.ai/technical-details/how-to-contribute#using-docker-optional,You can use docker-compose for running testsdocker-compose -f ./bin/docker-compose.yaml up --build localand even work inside the docker by building the image and bashing into it.docker build -t activeloop-deeplake:latest -f ./bin/Dockerfile.dev .docker run -it -v $(pwd):/app activeloop-deeplake:latest bash$ python3 -c 'import deeplake'Now changes done on your local files will be directly reflected into the package running inside the docker.,activeloop Linting,https://docs.activeloop.ai/technical-details/how-to-contribute#linting,"Deep Lake uses the black python linter. You can auto-format your code by running pip install black, and then running black . inside the directory you want to format.",activeloop Docstrings,https://docs.activeloop.ai/technical-details/how-to-contribute#docstrings,Deep Lake uses Google Docstrings. 
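As a rough illustration of the Google docstring style (the function below is made up for this example and is not part of the Deep Lake codebase):
def clip_values(values, max_value):
    """Clips a list of numbers to an upper bound.

    Args:
        values (list): Numbers to be clipped.
        max_value (float): Upper bound applied to each number.

    Returns:
        list: The clipped numbers.

    Raises:
        ValueError: If max_value is negative.
    """
    if max_value < 0:
        raise ValueError("max_value must be non-negative")
    return [min(v, max_value) for v in values]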
Please refer to this example to learn more.,activeloop Typing,https://docs.activeloop.ai/technical-details/how-to-contribute#typing,"Deep Lake uses static typing for function arguments/variables for better code readability. Deep Lake has a GitHub action that runs mypy ., which runs similar to pytest . to check for valid static typing. You can refer to mypy documentation for more information.",activeloop Prerequisites,https://docs.activeloop.ai/technical-details/how-to-contribute#prerequisites,Understand how to write pytest tests.Understand what a pytest fixture is.Understand what pytest parametrizations are.,activeloop Testing,https://docs.activeloop.ai/technical-details/how-to-contribute#testing,"Deep Lake uses pytest for tests. In order to make it easier to contribute, Deep Lake also has a set of custom options defined here.",activeloop Options,https://docs.activeloop.ai/technical-details/how-to-contribute#options,"To see a list of Deep Lake's custom pytest options, run this command: pytest -h | sed -En '/custom options:/,/\[pytest\] ini\-options/p'.",activeloop Fixtures,https://docs.activeloop.ai/technical-details/how-to-contribute#fixtures,"You can find more information on pytest fixtures here.memory_storage: If --memory-skip is provided, tests with this fixture will be skipped. Otherwise, the test will run with only a MemoryProvider.local_storage: If --local is not provided, tests with this fixture will be skipped. Otherwise, the test will run with only a LocalProvider.s3_storage: If --s3 is not provided, tests with this fixture will be skipped. Otherwise, the test will run with only an S3Provider.storage: All tests that use the storage fixture will be parametrized with the enabled StorageProviders (enabled via options defined below). If --cache-chains is provided, storage may also be a cache chain. Cache chains have the same interface as StorageProvider, but instead of just a single provider, it is multiple chained in a sequence, where the last provider in the chain is considered the actual storage.ds: The same as the storage fixture, but the storages that are parametrized are wrapped with a Dataset.Each StorageProvider/Dataset that is created for a test via a fixture will automatically have a root created, and it will be destroyed after the test. 
If you want to keep this data after the test run, you can use the --keep-storage option.",activeloop Fixture Examples,https://docs.activeloop.ai/technical-details/how-to-contribute#fixture-examples,"Single storage provider fixture:def test_memory(memory_storage): # Test will skip if `--memory-skip` is provided memory_storage['key'] = b'1234' # This data will only be stored in memorydef test_local(local_storage): # Test will skip if `--local` is not provided local_storage['key'] = b'1234' # This data will only be stored locallydef test_s3(s3_storage): # Test will skip if `--s3` is not provided # Test will fail if credentials are not provided s3_storage['key'] = b'1234' # This data will only be stored in s3Multiple storage providers/cache chains:from deeplake.core.tests.common import parametrize_all_storages, parametrize_all_caches, parametrize_all_storages_and_caches@parametrize_all_storagesdef test_storage(storage): # Storage will be parametrized with all enabled `StorageProvider`s pass@parametrize_all_cachesdef test_caches(storage): # Storage will be parametrized with all common caches containing enabled `StorageProvider`s pass@parametrize_all_storages_and_cachesdef test_storages_and_caches(storage): # Storage will be parametrized with all enabled `StorageProvider`s and common caches containing enabled `StorageProvider`s passDataset storage providers/cache chains:from deeplake.core.tests.common import parametrize_all_dataset_storages, parametrize_all_dataset_storages_and_caches@parametrize_all_dataset_storagesdef test_dataset(ds): # `ds` will be parametrized with 1 `Dataset` object per enabled `StorageProvider` pass@parametrize_all_dataset_storages_and_cachesdef test_dataset_and_caches(ds): # `ds` will be parametrized with 1 `Dataset` object per enabled `StorageProvider` and all cache chains containing enabled `StorageProvider`s pass",activeloop Benchmarks,https://docs.activeloop.ai/technical-details/how-to-contribute#benchmarks,"Deep Lake uses pytest-benchmark for benchmarking, which is a plugin for pytest.",activeloop Here's a list of people who are building the future of data!,https://docs.activeloop.ai/technical-details/how-to-contribute#heres-a-list-of-people-who-are-building-the-future-of-data,Deep Lake would not be possible without the work of our community.Activeloop Deep Lake open-source contributors,activeloop How to Use Deep Lake as a Vector Store in LangChain,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#how-to-use-deep-lake-as-a-vector-store-in-langchain,Deep Lake can be used as a VectorStore in LangChain for building Apps that require filtering and vector search. In this tutorial we will show how to create a Deep Lake Vector Store in LangChain and use it to build a Q&A App about the Twitter OSS recommendation algorithm. 
This tutorial requires installation of:!pip3 install langchain deeplake openai tiktoken,activeloop Downloading and Preprocessing the Data,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#downloading-and-preprocessing-the-data,"First, let's import necessary packages and make sure the Activeloop and OpenAI keys are in the environmental variables ACTIVELOOP_TOKEN, OPENAI_API_KEY.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import DeepLakefrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import RetrievalQA, ConversationalRetrievalChainimport osNext, let's clone the Twitter OSS recommendation algorithm:!git clone https://github.com/twitter/the-algorithmNext, let's load all the files from the repo into a list:repo_path = '/the-algorithm'docs = []for dirpath, dirnames, filenames in os.walk(repo_path): for file in filenames: try: loader = TextLoader(os.path.join(dirpath, file), encoding='utf-8') docs.extend(loader.load_and_split()) except Exception as e: print(e) pass",activeloop Use the Vector Store in a Q&A App,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#use-the-vector-store-in-a-q-and-a-app,"We can now use the VectorStore in Q&A app, where the embeddings will be used to filter relevant documents (texts) that are fed into an LLM in order to answer a question.If we were on another machine, we would load the existing Vector Store without recalculating the embeddings:db = DeepLake(dataset_path=dataset_path, read_only=True, embedding=embeddings)We have to create a retriever object and specify the search parameters.retriever = db.as_retriever()retriever.search_kwargs['distance_metric'] = 'cos'retriever.search_kwargs['k'] = 20Finally, let's create an RetrievalQA chain in LangChain and run it:model = ChatOpenAI(model='gpt-4') # 'gpt-3.5-turbo',qa = RetrievalQA.from_llm(model, retriever=retriever)qa.run('What programming language is most of the SimClusters written in?')This returns:We can tune k in the retriever depending on whether the prompt exceeds the model's token limit. Higher k increases the accuracy by including more data in the prompt.",activeloop Adding data to to an existing Vector Store,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#adding-data-to-to-an-existing-vector-store,"Data can be added to an existing Vector Store by loading it using its path and adding documents or texts. db = DeepLake(dataset_path=dataset_path, embedding=embeddings)# Don't run this here in order to avoid data duplication# db.add_documents(texts)",activeloop Creating the Deep Lake Vector Store,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#creating-the-deep-lake-vector-store,"First, we specify a path for storing the Deep Lake dataset containing the embeddings and their metadata. dataset_path = 'hub://<org-id>/twitter_algorithm'Next, we specify an OpenAI algorithm for creating the embeddings, and create the VectorStore. This process creates an embedding for each element in the texts lists and stores it in Deep Lake format at the specified path. embeddings = OpenAIEmbeddings()db = DeepLake.from_documents(texts, embeddings, dataset_path=dataset_path)The Deep Lake dataset serving as a VectorStore has 4 tensors including the embedding, its ids, metadata including the filename of the text, and the text itself. 
Adding Hybrid Search to the Vector Store,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#adding-hybrid-search-to-the-vector-store,"Since embeddings search can be computationally expensive, you can narrow the search by filtering out data with an explicit search on top of the embeddings search. Suppose we want to answer a question related to the trust and safety models. We can filter the filenames (source) in the metadata using a custom function that is added to the retriever:
def filter(deeplake_sample):
    return 'trust_and_safety_models' in deeplake_sample['metadata'].data()['value']['source']
retriever.search_kwargs['filter'] = filter
qa = RetrievalQA.from_llm(model, retriever=retriever)
qa.run('What do the trust and safety models do?')
This returns:
Filters can also be specified as a dictionary. For example, if the metadata tensor had a key year, we could filter based on that key using:
# retriever.search_kwargs['filter'] = {'metadata': {'year': 2020}}",activeloop A note on chunking text files:,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#a-note-on-chunking-text-files,"Text files are typically split into chunks before creating embeddings. In general, more chunks increase the relevancy of the data that is fed into the language model, since granular data can be selected with higher precision. However, since an embedding will be created for each chunk, more chunks also increase the computational complexity.
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
Chunks in the above context should not be confused with Deep Lake chunks!",activeloop Using Deep Lake in Applications that Require Concurrency,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#using-deep-lake-in-applications-that-require-concurrency,"For applications that write data concurrently, users should set up a lock system to queue the write operations and prevent multiple clients from writing to the Deep Lake Vector Store at the same time. This can be done with a few lines of code, as in the example below: Concurrency Using Zookeeper Locks",activeloop
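For illustration, here is a minimal sketch of such a lock using Apache ZooKeeper via the kazoo client (this assumes a running ZooKeeper ensemble; the host, lock path, and identifier are placeholders rather than part of the official example):
from kazoo.client import KazooClient

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()

# Every writer must acquire the same lock path before touching the Vector Store.
lock = zk.Lock('/locks/twitter_algorithm_vector_store', identifier='writer-1')

with lock:  # blocks until no other client holds the lock
    db = DeepLake(dataset_path=dataset_path, embedding=embeddings)
    db.add_documents(texts)

zk.stop()
Any distributed lock (ZooKeeper, Redis, etc.) serves the same purpose; the key point is that only one client writes to the Vector Store at a time.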
Accessing the Low Level Deep Lake API (Advanced),https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#accessing-the-low-level-deep-lake-api-advanced,"When using a Deep Lake Vector Store in LangChain, the underlying Vector Store and its low-level Deep Lake dataset can be accessed via:
# LangChain Vector Store
db = DeepLake(dataset_path=dataset_path)
# Deep Lake Vector Store object
ds = db.vectorstore
# Deep Lake Dataset object
ds = db.vectorstore.dataset",activeloop SelfQueryRetriever with Deep Lake,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#selfqueryretriever-with-deep-lake,"Deep Lake supports the SelfQueryRetriever implementation in LangChain, which translates a user prompt into metadata filters",activeloop SelfQueryRetriever with Deep Lake,https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain#selfqueryretriever-with-deep-lake,"This section of the tutorial requires installation of additional packages:
First, let's create a Deep Lake Vector Store with relevant data using the documents below.
from langchain.docstore.document import Document
docs = [
    Document(
        page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose',
        metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'},
    ),
    Document(
        page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...',
        metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2},
    ),
    Document(
        page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea',
        metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6},
    ),
    Document(
        page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them',
        metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3},
    ),
    Document(
        page_content='Toys come alive and have a blast doing so',
        metadata={'year': 1995, 'genre': 'animated'},
    ),
    Document(
        page_content='Three men walk into the Zone, three men walk out of the Zone',
        metadata={
            'year': 1979,
            'rating': 9.9,
            'director': 'Andrei Tarkovsky',
            'genre': 'science fiction',
        },
    ),
]
Since this feature uses Deep Lake's Tensor Query Language under the hood, the Vector Store must be stored in or connected to Deep Lake, which requires registration with Activeloop:
org_id = <YOUR_ORG_ID>  # By default, your username is an org_id
dataset_path = f'hub://{org_id}/self_query'
vectorstore = DeepLake.from_documents(
    docs, embeddings, dataset_path=dataset_path, overwrite=True,
)
Next, let's instantiate our retriever by providing information about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info = [
    AttributeInfo(
        name='genre',
        description='The genre of the movie',
        type='string or list[string]',
    ),
    AttributeInfo(
        name='year',
        description='The year the movie was released',
        type='integer',
    ),
    AttributeInfo(
        name='director',
        description='The name of the movie director',
        type='string',
    ),
    AttributeInfo(
        name='rating', description='A 1-10 rating for the movie', type='float'
    ),
]
document_content_description = 'Brief summary of a movie'
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents('What are some movies about dinosaurs')
Output:
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
 Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
Now we can run a query to find movies rated above a certain threshold:
# This example only specifies a filter
retriever.get_relevant_documents('I want to watch a movie rated higher than 8.5')
Output:
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
 Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
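The retriever can also combine a semantic query with a metadata filter in a single prompt (an illustrative example; the exact results depend on the filter the LLM generates):
# This example specifies both a query and a filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")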
Congrats! You just used the Deep Lake Vector Store in LangChain to create a Q&A App! 🎉 ",activeloop Installation,https://docs.deeplake.ai/en/latest/Installation.html#installation,"Deep Lake can be installed with pip:
pip install deeplake
Deep Lake has the following extras that you can choose to install according to your needs.
Install command | Description | Dependencies installed
pip install 'deeplake[av]' | Audio and video support via PyAV | av
pip install 'deeplake[visualizer]' | Visualize Deep Lake datasets within notebooks. This is required for Dataset.visualize to work. | IPython, flask
pip install 'deeplake[gcp]' | GCS support | google-cloud-storage, google-auth, google-auth-oauthlib
pip install 'deeplake[azure]' | Azure Blob Storage support | azure-storage-blob, azure-cli, azure-identity
pip install 'deeplake[medical]' | DICOM and NIfTI data support | pydicom, nibabel
pip install 'deeplake[gdrive]' | Google Drive support | google-api-python-client, oauth2client, google-auth, google-auth-oauthlib
pip install 'deeplake[point_cloud]' | Support for LiDAR point cloud data | laspy
pip install 'deeplake[all]' | Installs all of the above",activeloop
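Before the full Datasets reference below, here is a minimal sketch of the dataset-creation workflow (the local path is a placeholder):
import deeplake
import numpy as np

# Create an empty dataset at a local path, add an image tensor, and append one sample.
ds = deeplake.empty('./my_dataset', overwrite=True)
ds.create_tensor('images', htype='image', sample_compression='jpg')
ds.images.append(np.zeros((512, 512, 3), dtype=np.uint8))
ds.summary()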
Datasets,https://docs.deeplake.ai/en/latest/Datasets.html#datasets,"Creating Datasets deeplake.dataset Returns a Dataset object referencing either a new or existing dataset. deeplake.empty Creates an empty dataset deeplake.like Creates a new dataset by copying the source dataset's structure to a new location. deeplake.ingest_classification Ingest a dataset of images from a local folder to a Deep Lake Dataset. deeplake.ingest_coco Ingest images and annotations in COCO format to a Deep Lake Dataset. deeplake.ingest_yolo Ingest images and annotations (bounding boxes or polygons) in YOLO format to a Deep Lake Dataset. deeplake.ingest_kaggle Download and ingest a kaggle dataset and store it as a structured dataset to destination. deeplake.ingest_dataframe Convert pandas dataframe to a Deep Lake Dataset. deeplake.ingest_huggingface Converts Hugging Face datasets to Deep Lake format. Loading Datasets deeplake.load Loads an existing dataset Deleting and Renaming Datasets deeplake.delete Deletes a dataset at a given path. deeplake.rename Renames dataset at old_path to new_path. Copying Datasets deeplake.copy Copies dataset at src to dest. deeplake.deepcopy Copies dataset at src to dest including version control history. Dataset Operations Dataset.summary Prints a summary of the dataset. Dataset.append Append samples to multiple tensors at once. Dataset.extend Appends multiple rows of samples to multiple tensors at once. Dataset.update Update existing samples in the dataset with new values. Dataset.query Returns a sliced Dataset with given query results. Dataset.copy Copies this dataset or dataset view to dest. Dataset.delete Deletes the entire dataset from the cache layers (if any) and the underlying storage. Dataset.rename Renames the dataset to path. Dataset.connect Connect a Deep Lake cloud dataset through a deeplake path. Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset.pop Removes a sample from all the tensors of the dataset. Dataset.rechunk Rewrites the underlying chunks to make their sizes optimal. Dataset.flush Necessary operation after writes if caches are being used. Dataset.clear_cache Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. Dataset.size_approx Estimates the size in bytes of the dataset. Dataset.random_split Splits the dataset into non-overlapping Dataset objects of given lengths. Dataset Visualization Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset Credentials Dataset.add_creds_key Adds a new creds key to the dataset. Dataset.populate_creds Populates the creds key added in add_creds_key with the given creds. Dataset.update_creds_key Updates the name and/or management status of a creds key. Dataset.get_creds_keys Returns the set of creds keys added to the dataset. Dataset Properties Dataset.tensors All tensors belonging to this group, including those within sub groups. Dataset.groups All sub groups in this group Dataset.num_samples Returns the length of the smallest tensor. Dataset.read_only Returns True if dataset is in read-only mode and False otherwise. Dataset.info Returns the information about the dataset. Dataset.max_len Return the maximum length of the tensor. Dataset.min_len Return the minimum length of the tensor. Dataset Version Control Dataset.commit Stores a snapshot of the current state of the dataset",activeloop Datasets,https://docs.deeplake.ai/en/latest/Datasets.html#datasets,"Dataset.diff Returns/displays the differences between commits/branches. Dataset.checkout Checks out to a specific commit_id or branch. Dataset.merge Merges the target_id into the current dataset. Dataset.log Displays the details of all the past commits. Dataset.reset Resets the uncommitted changes present in the branch. Dataset.get_commit_details Get details of a particular commit. Dataset.commit_id The last committed commit id of the dataset. Dataset.branch The current branch of the dataset Dataset.pending_commit_id The commit_id of the next commit that will be made to the dataset. Dataset.has_head_changes Returns True if currently at head node and uncommitted changes are present. Dataset.commits Lists all the commits leading to the current dataset state. Dataset.branches Lists all the branches of the dataset.
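As a minimal sketch of how the version-control calls listed above fit together (building on the local dataset from the earlier sketch; the branch name and commit messages are illustrative):
import deeplake
import numpy as np

ds = deeplake.load('./my_dataset')
ds.commit('Initial snapshot')              # store a snapshot of the current state
ds.checkout('experiment_1', create=True)   # create and switch to a new branch
ds.images.append(np.zeros((512, 512, 3), dtype=np.uint8))
ds.commit('Append one more image')
ds.log()                                   # display the commit history
ds.checkout('main')                        # return to the default branch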
Dataset Views A dataset view is a subset of a dataset that points to specific samples (indices) in an existing dataset. Dataset views can be created by indexing a dataset, filtering a dataset with Dataset.filter(), querying a dataset with Dataset.query() or by sampling a dataset with Dataset.sample_by(). Filtering is done with user-defined functions or simplified expressions, whereas query can perform SQL-like queries with our Tensor Query Language. See the full TQL spec here. Dataset views can only be saved when a dataset has been committed and has no changes on the HEAD node, in order to preserve data lineage and prevent the underlying data from changing after the query or filter conditions have been evaluated. Example
>>> import deeplake
>>> # load dataset
>>> ds = deeplake.load('hub://activeloop/mnist-train')
>>> # filter dataset
>>> zeros = ds.filter('labels == 0')
>>> # save view
>>> zeros.save_view(id='zeros')
>>> # load_view
>>> zeros = ds.load_view(id='zeros')
>>> len(zeros)
5923
Dataset.query Returns a sliced Dataset with given query results. Dataset.sample_by Returns a sliced Dataset with given weighted sampler applied. Dataset.filter Filters the dataset in accordance with the filter function f(x: sample) -> bool Dataset.save_view Saves a dataset view as a virtual dataset (VDS) Dataset.get_view Returns the dataset view corresponding to id. Dataset.load_view Loads the view and returns the Dataset by id. Dataset.delete_view Deletes the view with given view id. Dataset.get_views Returns list of views stored in this Dataset. Dataset.is_view Returns True if this dataset is a view and False otherwise. Dataset.min_view Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Dataset.max_view Returns a view of the dataset in which shorter tensors are padded with None values to have the same length as the longest tensor.",activeloop Loading Datasets,https://docs.deeplake.ai/en/latest/Datasets.html#loading-datasets,deeplake.load Loads an existing dataset,activeloop Creating Datasets,https://docs.deeplake.ai/en/latest/Datasets.html#creating-datasets,deeplake.dataset Returns a Dataset object referencing either a new or existing dataset. deeplake.empty Creates an empty dataset deeplake.like Creates a new dataset by copying the source dataset's structure to a new location. deeplake.ingest_classification Ingest a dataset of images from a local folder to a Deep Lake Dataset. deeplake.ingest_coco Ingest images and annotations in COCO format to a Deep Lake Dataset. deeplake.ingest_yolo Ingest images and annotations (bounding boxes or polygons) in YOLO format to a Deep Lake Dataset. deeplake.ingest_kaggle Download and ingest a kaggle dataset and store it as a structured dataset to destination. deeplake.ingest_dataframe Convert pandas dataframe to a Deep Lake Dataset. deeplake.ingest_huggingface Converts Hugging Face datasets to Deep Lake format.,activeloop Dataset Visualization,https://docs.deeplake.ai/en/latest/Datasets.html#dataset-visualization,Dataset.visualize Visualizes the dataset in the Jupyter notebook.,activeloop Dataset Credentials,https://docs.deeplake.ai/en/latest/Datasets.html#dataset-credentials,Dataset.add_creds_key Adds a new creds key to the dataset. Dataset.populate_creds Populates the creds key added in add_creds_key with the given creds. Dataset.update_creds_key Updates the name and/or management status of a creds key. Dataset.get_creds_keys Returns the set of creds keys added to the dataset.,activeloop
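A compact sketch of the credentials workflow listed above, given an existing dataset object ds (the key names are placeholders; the creds dictionary must be filled with your own values, and managed=True requires a dataset connected to the Activeloop platform):
ds.add_creds_key('MY_S3_KEY')
ds.populate_creds('MY_S3_KEY', {})  # add your credentials here
# Credentials managed on the Activeloop platform can be referenced without populating them:
ds.add_creds_key('my_managed_s3_creds', managed=True)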
Dataset Operations,https://docs.deeplake.ai/en/latest/Datasets.html#dataset-operations,Dataset.summary Prints a summary of the dataset. Dataset.append Append samples to multiple tensors at once. Dataset.extend Appends multiple rows of samples to multiple tensors at once. Dataset.update Update existing samples in the dataset with new values. Dataset.query Returns a sliced Dataset with given query results. Dataset.copy Copies this dataset or dataset view to dest. Dataset.delete Deletes the entire dataset from the cache layers (if any) and the underlying storage. Dataset.rename Renames the dataset to path. Dataset.connect Connect a Deep Lake cloud dataset through a deeplake path. Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset.pop Removes a sample from all the tensors of the dataset. Dataset.rechunk Rewrites the underlying chunks to make their sizes optimal. Dataset.flush Necessary operation after writes if caches are being used. Dataset.clear_cache Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. Dataset.size_approx Estimates the size in bytes of the dataset. Dataset.random_split Splits the dataset into non-overlapping Dataset objects of given lengths.,activeloop Copying Datasets,https://docs.deeplake.ai/en/latest/Datasets.html#copying-datasets,deeplake.copy Copies dataset at src to dest. deeplake.deepcopy Copies dataset at src to dest including version control history.,activeloop Deleting and Renaming Datasets,https://docs.deeplake.ai/en/latest/Datasets.html#deleting-and-renaming-datasets,deeplake.delete Deletes a dataset at a given path. deeplake.rename Renames dataset at old_path to new_path.,activeloop Dataset Version Control,https://docs.deeplake.ai/en/latest/Datasets.html#dataset-version-control,Dataset.commit Stores a snapshot of the current state of the dataset. Dataset.diff Returns/displays the differences between commits/branches. Dataset.checkout Checks out to a specific commit_id or branch. Dataset.merge Merges the target_id into the current dataset. Dataset.log Displays the details of all the past commits. Dataset.reset Resets the uncommitted changes present in the branch. Dataset.get_commit_details Get details of a particular commit. Dataset.commit_id The last committed commit id of the dataset. Dataset.branch The current branch of the dataset Dataset.pending_commit_id The commit_id of the next commit that will be made to the dataset. Dataset.has_head_changes Returns True if currently at head node and uncommitted changes are present. Dataset.commits Lists all the commits leading to the current dataset state. Dataset.branches Lists all the branches of the dataset.,activeloop Dataset Views,https://docs.deeplake.ai/en/latest/Datasets.html#dataset-views,"A dataset view is a subset of a dataset that points to specific samples (indices) in an existing dataset. Dataset views can be created by indexing a dataset, filtering a dataset with Dataset.filter(), querying a dataset with Dataset.query() or by sampling a dataset with Dataset.sample_by(). Filtering is done with user-defined functions or simplified expressions, whereas query can perform SQL-like queries with our Tensor Query Language. See the full TQL spec here. Dataset views can only be saved when a dataset has been committed and has no changes on the HEAD node, in order to preserve data lineage and prevent the underlying data from changing after the query or filter conditions have been evaluated. Dataset.query Returns a sliced Dataset with given query results. Dataset.sample_by Returns a sliced Dataset with given weighted sampler applied.
Dataset.filter Filters the dataset in accordance with the filter function f(x: sample) -> bool Dataset.save_view Saves a dataset view as a virtual dataset (VDS) Dataset.get_view Returns the dataset view corresponding to id. Dataset.load_view Loads the view and returns the Dataset by id. Dataset.delete_view Deletes the view with given view id. Dataset.get_views Returns list of views stored in this Dataset. Dataset.is_view Returns True if this dataset is a view and False otherwise. Dataset.min_view Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Dataset.max_view Returns a view of the dataset in which shorter tensors are padded with None values to have the same length as the longest tensor.",activeloop Dataset Properties,https://docs.deeplake.ai/en/latest/Datasets.html#dataset-properties,"Dataset.tensors All tensors belonging to this group, including those within sub groups. Dataset.groups All sub groups in this group Dataset.num_samples Returns the length of the smallest tensor. Dataset.read_only Returns True if dataset is in read-only mode and False otherwise. Dataset.info Returns the information about the dataset. Dataset.max_len Return the maximum length of the tensor. Dataset.min_len Return the minimum length of the tensor.",activeloop Vector Store,https://docs.deeplake.ai/en/latest/Vector-Store.html#vector-store,"Creating a Deep Lake Vector Store VectorStore.__init__ Creates an empty VectorStore or loads an existing one if it exists at the specified path. Vector Store Operations VectorStore.add Adds elements to the Deep Lake Vector Store. VectorStore.search VectorStore search method that combines embedding search, metadata search, and custom TQL search. VectorStore.delete Delete the data in the Vector Store. VectorStore.delete_by_path Deletes the Vector Store at the specified path. VectorStore.update_embedding Recompute existing embeddings of the VectorStore that match either query, filter, ids or row_ids. Vector Store Properties VectorStore.summary Prints a summary of the dataset VectorStore.tensors Returns the list of tensors present in the dataset VectorStore.__len__ Length of the dataset",activeloop Creating a Deep Lake Vector Store,https://docs.deeplake.ai/en/latest/Vector-Store.html#creating-a-deep-lake-vector-store,VectorStore.__init__ Creates an empty VectorStore or loads an existing one if it exists at the specified path.,activeloop Vector Store Properties,https://docs.deeplake.ai/en/latest/Vector-Store.html#vector-store-properties,VectorStore.summary Prints a summary of the dataset VectorStore.tensors Returns the list of tensors present in the dataset VectorStore.__len__ Length of the dataset,activeloop Vector Store Operations,https://docs.deeplake.ai/en/latest/Vector-Store.html#vector-store-operations,"VectorStore.add Adds elements to the Deep Lake Vector Store. VectorStore.search VectorStore search method that combines embedding search, metadata search, and custom TQL search. VectorStore.delete Delete the data in the Vector Store. VectorStore.delete_by_path Deletes the Vector Store at the specified path. VectorStore.update_embedding Recompute existing embeddings of the VectorStore that match either query, filter, ids or row_ids.",activeloop
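A minimal sketch of the low-level VectorStore API summarized above, reusing the chunked_text and embedding_function defined in the quickstart (the path and metadata are placeholders):
from deeplake.core.vectorstore import VectorStore

# Create (or load) a local Vector Store, add the chunked text with embeddings, and search it.
vector_store = VectorStore(path='./pg_essay_vector_store')
vector_store.add(text=chunked_text,
                 embedding_function=embedding_function,
                 embedding_data=chunked_text,
                 metadata=[{'source': source_text}] * len(chunked_text))
results = vector_store.search(embedding_data='What did the author work on before college?',
                              embedding_function=embedding_function)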
VectorStore.DeepMemory,https://docs.deeplake.ai/en/latest/Vector-Store.html#vectorstore-deepmemory,"Creating a Deep Memory If Deep Memory is available on your plan, it will be automatically initialized when you create a Vector Store. DeepMemory.__init__ Base Deep Memory class to train and evaluate models on DeepMemory managed service. Deep Memory Operations DeepMemory.train Train a model on DeepMemory managed service. DeepMemory.cancel Cancel a training job on DeepMemory managed service. DeepMemory.delete Delete a training job on DeepMemory managed service. Deep Memory Properties DeepMemory.status Get the status of a training job on DeepMemory managed service. DeepMemory.list_jobs List all training jobs on DeepMemory managed service.",activeloop Creating a Deep Memory,https://docs.deeplake.ai/en/latest/Vector-Store.html#creating-a-deep-memory,"If Deep Memory is available on your plan, it will be automatically initialized when you create a Vector Store. DeepMemory.__init__ Base Deep Memory class to train and evaluate models on DeepMemory managed service.",activeloop Deep Memory Operations,https://docs.deeplake.ai/en/latest/Vector-Store.html#deep-memory-operations,DeepMemory.train Train a model on DeepMemory managed service. DeepMemory.cancel Cancel a training job on DeepMemory managed service. DeepMemory.delete Delete a training job on DeepMemory managed service.,activeloop Deep Memory Properties,https://docs.deeplake.ai/en/latest/Vector-Store.html#deep-memory-properties,DeepMemory.status Get the status of a training job on DeepMemory managed service. DeepMemory.list_jobs List all training jobs on DeepMemory managed service.,activeloop Tensors,https://docs.deeplake.ai/en/latest/Tensors.html#tensors,Creating Tensors Dataset.create_tensor Creates a new tensor in the dataset. Dataset.create_group Creates a tensor group. Dataset.create_tensor_like Copies the source tensor's meta information and creates a new tensor with it. Deleting and Renaming Tensors Dataset.delete_tensor Delete a tensor from the dataset. Dataset.delete_group Delete a tensor group from the dataset. Dataset.rename_tensor Renames tensor with name name to new_name Dataset.rename_group Renames group with name name to new_name Adding and deleting samples Tensor.append Appends a single sample to the end of the tensor. Tensor.extend Extends the end of the tensor by appending multiple elements from a sequence. Tensor.pop Removes an element at the given index. Tensor.clear Deletes all samples from the tensor Tensor.__setitem__ Update samples with new values. Retrieving samples Tensor.numpy Computes the contents of the tensor in numpy format. Tensor.data Returns data in the tensor in a format based on the tensor's base htype. Tensor.tobytes Returns the bytes of the tensor. Tensor.text Return text data. Tensor.dict Return json data. Tensor.list Return list data. Tensor._linked_sample Returns the linked sample at the given index. Tensor Properties Tensor.htype Htype of the tensor. Tensor.base_htype Base htype of the tensor. Tensor.dtype Dtype of the tensor. Tensor.shape Get the shape of this tensor. Tensor.shape_interval Returns a ShapeInterval object that describes this tensor's shape more accurately. Tensor.ndim Number of dimensions of the tensor. Tensor.num_samples Returns the length of the primary axis of the tensor. Tensor.__len__ Returns the length of the primary axis of the tensor. Tensor.is_dynamic Will return True if samples in this tensor have shapes that are unequal. Tensor.is_sequence Whether this tensor is a sequence tensor. Tensor.is_link Whether this tensor is a link tensor. Tensor.verify Whether linked data will be verified when samples are added. Info Tensor.info Returns the information about the tensor.
Tensor.sample_info Returns info about particular samples in a tensor. Video features Tensor.play Play video sample. Tensor.timestamps Returns timestamps (in seconds) for video sample as numpy array.,activeloop Deleting and Renaming Tensors,https://docs.deeplake.ai/en/latest/Tensors.html#deleting-and-renaming-tensors,Dataset.delete_tensor Delete a tensor from the dataset. Dataset.delete_group Delete a tensor group from the dataset. Dataset.rename_tensor Renames tensor with name name to new_name Dataset.rename_group Renames group with name name to new_name,activeloop Adding and deleting samples,https://docs.deeplake.ai/en/latest/Tensors.html#adding-and-deleting-samples,Tensor.append Appends a single sample to the end of the tensor. Tensor.extend Extends the end of the tensor by appending multiple elements from a sequence. Tensor.pop Removes an element at the given index. Tensor.clear Deletes all samples from the tensor Tensor.__setitem__ Update samples with new values.,activeloop Tensor Properties,https://docs.deeplake.ai/en/latest/Tensors.html#tensor-properties,Tensor.htype Htype of the tensor. Tensor.base_htype Base htype of the tensor. Tensor.dtype Dtype of the tensor. Tensor.shape Get the shape of this tensor. Tensor.shape_interval Returns a ShapeInterval object that describes this tensor's shape more accurately. Tensor.ndim Number of dimensions of the tensor. Tensor.num_samples Returns the length of the primary axis of the tensor. Tensor.__len__ Returns the length of the primary axis of the tensor. Tensor.is_dynamic Will return True if samples in this tensor have shapes that are unequal. Tensor.is_sequence Whether this tensor is a sequence tensor. Tensor.is_link Whether this tensor is a link tensor. Tensor.verify Whether linked data will be verified when samples are added.,activeloop Creating Tensors,https://docs.deeplake.ai/en/latest/Tensors.html#creating-tensors,Dataset.create_tensor Creates a new tensor in the dataset. Dataset.create_group Creates a tensor group. Dataset.create_tensor_like Copies the source tensor's meta information and creates a new tensor with it.,activeloop Retrieving samples,https://docs.deeplake.ai/en/latest/Tensors.html#retrieving-samples,Tensor.numpy Computes the contents of the tensor in numpy format. Tensor.data Returns data in the tensor in a format based on the tensor's base htype. Tensor.tobytes Returns the bytes of the tensor. Tensor.text Return text data. Tensor.dict Return json data. Tensor.list Return list data. Tensor._linked_sample Returns the linked sample at the given index.,activeloop Video features,https://docs.deeplake.ai/en/latest/Tensors.html#video-features,Tensor.play Play video sample. Tensor.timestamps Returns timestamps (in seconds) for video sample as numpy array.,activeloop Info,https://docs.deeplake.ai/en/latest/Tensors.html#info,Tensor.info Returns the information about the tensor. 
Tensor.sample_info Returns info about particular samples in a tensor.,activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"Htype is the class of a tensor: image, bounding box, generic tensor, etc. The htype of a tensor can be specified at its creation. If not specified, the tensor's htype defaults to 'generic'. Specifying an htype allows for strict settings and error handling, and it is critical for increasing the performance of Deep Lake datasets containing rich data such as images and videos. Supported htypes and their respective defaults are:
Htype configs
HTYPE           DTYPE    COMPRESSION
generic         None     None
image           uint8    Required arg
image.rgb       uint8    Required arg
image.gray      uint8    Required arg
video           uint8    Required arg
audio           float64  Required arg
class_label     uint32   None
bbox            float32  None
bbox.3d         float32  None
intrinsics      float32  None
segment_mask    uint32   None
binary_mask     bool     None
keypoints_coco  int32    None
point           int32    None
polygon         float32  None
text            str      None
json            Any      None
list            List     None
dicom           None     dcm
nifti           None     Required arg
point_cloud     None     las
mesh            None     ply
instance_label  uint32   None
embedding       None     None
link            str      None
sequence        None     None
Image Htype Sample dimensions: (height, width, # channels) or (height, width). Images can be stored in Deep Lake as compressed bytes or as raw arrays. Due to the high compression ratio for most image formats, it is highly recommended to store compressed images using the sample_compression input to the create_tensor method. Creating an image tensor An image tensor can be created using
>>> ds.create_tensor('images', htype='image', sample_compression='jpg')
OR
>>> ds.create_tensor('images', htype='image', chunk_compression='jpg')
Optional args: dtype: Defaults to uint8. Supported compressions:
>>> [None, 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff',
...  'webp', 'wmf', 'xbm', 'eps', 'fli', 'im', 'msp', 'mpo']
Appending image samples Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). Examples Appending pixel data with array
>>> ds.images.append(np.zeros((5, 5, 3), dtype=np.uint8))
Appending Deep Lake image sample
>>> ds.images.append(deeplake.read('images/0001.jpg'))
You can append multiple samples at the same time using extend().
>>> ds.images.extend([deeplake.read(f'images/000{i}.jpg') for i in range(10)])
Note If the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression. image.rgb and image.gray htypes image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type. 
i.e., if RGB images are appended to an image.gray tensor, Deep Lake will convert them to grayscale and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format",activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Copy to clipboard Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Copy to clipboard Appending text labels >>> ds.labels.append(['cars', 'airplanes']) Copy to clipboard Bounding Box Htype Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of conventions such as those used in YOLO, COCO, Pascal-VOC and others . In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. Creating a bbox tensor A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Copy to clipboard Optional args: coords: A dictionary with keys 'type' and 'mode'. type: Specifies the units of bounding box coordinates. 'pixel': is in unit of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.boxes.info.update(coords = {'type': 'pixel', 'mode': 'LTRB'}) Copy to clipboard Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH). Appending bounding boxes Bounding boxes can be appended as np.ndarrays or list or lists of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Copy to clipboard Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes) Copy to clipboard 3D Bounding Box Htype In order for 3D bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. Creating a 3d bbox tensor Note In order for 3D bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. In addition, for projecting 3D bounding boxes onto 2D data (such as an image), the intrinsics tensor must exist in the dataset, or the intrinsics matrix must be specified in the ds.img_tensor.info dictionary, where the key is 'intrinsics' and the value is the matrix. A 3d bbox tensor can be created using >>> ds.create_tensor('3d_boxes', htype='bbox.3d', coords={'mode': 'center'}) Copy to clipboard Optional args: coords: A dictionary with key 'mode'. 
mode: Specifies the convention for the bbox coordinates",activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"Creating a segment_mask tensor A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings . tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Copy to clipboard Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks Segmentation masks can be appended as np.ndarray. Examples >>> ds.masks.append(np.zeros((512, 512))) Copy to clipboard Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'. Binary Mask Htype Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object. A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise. The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Copy to clipboard Optional args: ref:sample_compression <sample_compression> or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Copy to clipboard Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird']) Copy to clipboard COCO Keypoints Htype Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0 keypoint not in image. 1 keypoint in image but not visible. 2 keypoint in image and visible. 
Creating a keypoints_coco tensor A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Copy to clipboard Optional args: keypoints: List of strings describing the i th keypoint",activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') Copy to clipboard >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...') Copy to clipboard Video Htype Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Copy to clipboard Optional args: dtype: Defaults to uint8 . Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Copy to clipboard Appending video samples Video samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw video frames. Therefore, array of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Copy to clipboard Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)]) Copy to clipboard Audio Htype Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Copy to clipboard Optional args: dtype: Defaults to float64. Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Copy to clipboard Appending audio samples Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Copy to clipboard Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'videos/00{i}.mp3') for i in range(10)]) Copy to clipboard Class Label Htype Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Copy to clipboard Note If specifying compression, since the number of labels in one sample will be too low, chunk_compression would be the better option to use. 
Appending class labels Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically",activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"tensor.info.keypoints will be set to this list . connections: List of strings describing which points should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Copy to clipboard Appending keypoints Keypoints can be appended as np.ndarray or list. Examples Appending keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Copy to clipboard Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of K keypoints in the same order. For keypoints that are not present in an image, they can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer. Point Htype Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Points does not contain a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. Creating a point tensor A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending point samples Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Copy to clipboard Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3))) Copy to clipboard Polygon Htype Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points. All points in a sample should have the same number of co-ordinates (eg., cannot mix 2-D points with 3-D points). Different samples can have different number of polygons. Different polygons can have different number of points. Creating a polygon tensor A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to float32. 
Supported compressions: >>> ['lz4'] Copy to clipboard Appending polygons Polygons can be appended as a list of list of tuples or np.ndarray",activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"'center': [center_x, center_y, center_z, size_x, size_y, size_z, rot_x, rot_y, rot_z] Sample dimensions: (# bounding boxes, 9) size_x - is the length of the bounding box along x direction size_y - is the width of the bounding box along y direction size_z - is the height of the bounding box along z direction rot_x - rotation angle along x axis, given in degrees rot_y - rotation angle along y axis, given in degrees rot_z - rotation angle along z axis, given in degrees 'vertex': 8 3D vertices - [[x0, y0, z0], [x1, y1, z1], [x2, y2, z2], ….., [x7, y7, z7]] Sample dimensions: (# bounding boxes, 8, 3) The vertex order is of the following form: 4_____________________ 5 /| /| / | / | / | / | /___|_________________/ | 0| | | 1 | | | | | | | | | | | | | | |_________________|___| | / 7 | / 6 | / | / | / | / |/_____________________|/ 3 2 Copy to clipboard dtype: Defaults to float32 . sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] Copy to clipboard Note rotation angles are specified in degrees, not radians Appending 3d bounding boxes Bounding boxes can be appended as np.ndarrays or list or lists of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98, 22, 36, 44, 18, 0, 36, 0]]) >>> ds.3d_boxes.append(box) Copy to clipboard Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77, 22, 36, 44, 18, 0, 28, 0], [462, 123, 238, 98, 26, 34, 24, 19, 0, -50, 0], [688, 108, 279, 116, 12, 32, 14, 38, 0, 30, 0]]) >>> boxes.shape (9, 4) >>> ds.3d_boxes.append(boxes) Copy to clipboard Intrinsics Htype Sample dimensions: (# intrinsics matrices, 3, 3) The intrinsic matrix represents a projective transformation from the 3-D camera's coordinates into the 2-D image coordinates. The intrinsic parameters include the focal length, the optical center, also known as the principal point. The camera intrinsic matrix, K, is defined as: [fx0cx0fycy001] [cx,cy] - Optical center (the principal point), in pixels. [fx,fy] - Focal length in pixels. fx=F/px fy=F/py F - Focal length in world units, typically expressed in millimeters. (px,py) - Size of the pixel in world units. Creating an intrinsics tensor An intrinsics tensor can be created using >>> ds.create_tensor('intrinsics', htype='intrinsics') Copy to clipboard Optional args: sample_compression or chunk_compression. dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending intrinsics matrices >>> intrinsic_params = np.zeros((3, 3)) >>> ds.intrinsics.append(intrinsic_params) Copy to clipboard Segmentation Mask Htype Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of same shape as the image. The numerical values are indices of the list tensor.info.class_names",activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"Different meshes can have different number of points . 
Creating a mesh tensor A mesh tensor can be created using >>> ds.create_tensor('mesh', htype='mesh', sample_compression='ply') Copy to clipboard Optional args: sample_compression Supported compressions: >>> ['ply'] Copy to clipboard Appending meshes Examples Appending a ply file containing a mesh data to tensor >>> import deeplake as dp >>> sample = dp.read('example.ply') # mesh with 100 points and 200 faces >>> ds.mesh.append(sample) Copy to clipboard >>> ds.mesh.shape >>> (1, 100, 3) Copy to clipboard Embedding Htype Sample dimensions: (# elements in the embedding,) Creating an embedding tensor An embedding tensor can be created using >>> ds.create_tensor('embedding', htype='embedding') Copy to clipboard Supported compressions: >>> ['lz4', None] Copy to clipboard Appending embedding samples Embedding samples can be of type np.ndarray. Examples Appending Deep Lake embedding sample >>> ds.embedding.append(np.random.uniform(low=-1, high=1, size=(1024))) Copy to clipboard Extending with Deep Lake embeddding samples >>> ds.embedding.extend([np.random.uniform(low=-1, high=1, size=(1024)) for i in range(10)]) Copy to clipboard Sequence htype A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype. It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc. Examples >>> ds.create_tensor('seq', htype='sequence') >>> ds.seq.append([1, 2, 3]) >>> ds.seq.append([4, 5, 6]) >>> ds.seq.numpy() array([[[1], [2], [3]], [[4], [5], [6]]]) Copy to clipboard >>> ds.create_tensor('image_seq', htype='sequence[image]', sample_compression='jpg') >>> ds.image_seq.append([deeplake.read('img01.jpg'), deeplake.read('img02.jpg')]) Copy to clipboard Link htype Link htype is a special meta htype that allows linking of external data (files) to the dataset, without storing the data in the dataset itself. Moreover, there can be variations in this htype, such as link[image], link[video], link[audio], etc. that would enable the activeloop visualizer to correctly display the data. No data is actually loaded until you try to read the sample from a dataset. There are a few exceptions to this:- If create_shape_tensor=True was specified during create_tensor of the tensor to which this is being added, the shape of the sample is read. This is True by default. If create_sample_info_tensor=True was specified during create_tensor of the tensor to which this is being added, the sample info is read. This is True by default. If verify=True was specified during create_tensor of the tensor to which this is being added, some metadata is read from them to verify the integrity of the link samples. This is True by default. 
If you do not want to verify your links, all three of verify, create_shape_tensor and create_sample_info_tensor have to be set to False",activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points >>> ds.polygons.append(sample) Copy to clipboard >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Nifti Htype Sample dimensions: (# height, # width, # slices) or (# height, # width, # slices, # time unit) in case of time-series data . Creating a nifti tensor A nifti tensor can be created using >>> ds.create_tensor('patients', htype='nifti', sample_compression='nii.gz') Copy to clipboard Supported compressions: >>> ['nii.gz', 'nii', None] Copy to clipboard Appending nifti data Nifti samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw nifti data. Therefore, array of raw frames can only be appended to tensors with None compression. Examples >>> ds.patients.append(deeplake.read('data/patient0.nii.gz')) Copy to clipboard >>> ds.patients.extend([deeplake.read(f'data/patient{i}.nii.gz') for i in range(10)]) Copy to clipboard Point Cloud Htype Sample dimensions: (# num_points, 3) Point cloud samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each point cloud is a list / array of points. All points in a sample should have the same number of co-ordinates. Different point clouds can have different number of points. Creating a point cloud tensor A point cloud tensor can be created using >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Copy to clipboard Optional args: sample_compression Supported compressions: >>> [None, 'las'] Copy to clipboard Appending point clouds Point clouds can be appended as a np.ndarray. Examples Appending point clouds with numpy arrays >>> import numpy as np >>> point_cloud1 = np.random.randint(0, 10, (5, 3)) >>> ds.point_clouds.append(point_cloud1) >>> point_cloud2 = np.random.randint(0, 10, (15, 3)) >>> ds.point_clouds.append(point_cloud2) >>> ds.point_clouds.shape >>> (2, None, 3) Copy to clipboard Or we can use deeplake.read() method to add samples >>> import deeplake as dp >>> sample = dp.read('example.las') # point cloud with 100 points >>> ds.point_cloud.append(sample) >>> ds.point_cloud.shape >>> (1, 100, 3) Copy to clipboard Mesh Htype Sample dimensions: (# num_points, 3) Mesh samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each sample in a tensor of mesh htype is a mesh array (3-D object data). 
Each mesh is a list / array of points",activeloop Htypes,https://docs.deeplake.ai/en/latest/Htypes.html#htypes,"Examples >>> ds = deeplake.dataset('......') Copy to clipboard Adding credentials to the dataset You can add the names of the credentials you want to use (not needed for http/local urls) >>> ds.add_creds_key('MY_S3_KEY') >>> ds.add_creds_key('GCS_KEY') Copy to clipboard and populate the added names with credentials dictionaries >>> ds.populate_creds('MY_S3_KEY', {}) # add creds here >>> ds.populate_creds('GCS_KEY', {}) # add creds here Copy to clipboard These creds are only present temporarily and will have to be repopulated on every reload . For datasets connected to Activeloop Platform, you can store your credentials on the platform as Managed Credentials and use them just by adding the keys to your dataset. For example if you have managed credentials with names 'my_s3_creds', 'my_gcs_creds', you can add them to your dataset using Dataset.add_creds_key without having to populate them. >>> ds.add_creds_key('my_s3_creds', managed=True) >>> ds.add_creds_key('my_gcs_creds', managed=True) Copy to clipboard Create a link tensor >>> ds.create_tensor('img', htype='link[image]', sample_compression='jpg') Copy to clipboard Populate the tensor with links >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='my_s3_key')) >>> ds.img.append(deeplake.link('gcs://ghi/jkl.png', creds_key='GCS_KEY')) >>> ds.img.append(deeplake.link('https://picsum.photos/200/300')) # http path doesn't need creds >>> ds.img.append(deeplake.link('./path/to/cat.jpeg')) # local path doesn't need creds >>> ds.img.append(deeplake.link('s3://abc/def.jpeg')) # this will throw an exception as cloud paths always need creds_key :bluebold:`Accessing the data` Copy to clipboard >>> for i in range(5): ... ds.img[i].numpy() ... Copy to clipboard Updating a sample >>> ds.img[0] = deeplake.link('./data/cat.jpeg') Copy to clipboard",activeloop Image Htype,https://docs.deeplake.ai/en/latest/Htypes.html#image-htype,"Sample dimensions: (height, width, # channels) or (height, width). Images can be stored in Deep Lake as compressed bytes or as raw arrays. Due to the high compression ratio for most image formats, it is highly recommended to store compressed images using the sample_compression input to the create_tensor method. Creating an image tensor An image tensor can be created using >>> ds.create_tensor('images', htype='image', sample_compression='jpg') Copy to clipboard OR >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', ... 'webp', 'wmf', 'xbm', 'eps', 'fli', 'im', 'msp', 'mpo'] Copy to clipboard Appending image samples Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). Examples Appending pixel data with array >>> ds.images.append(np.zeros((5, 5, 3), dtype=np.uint8)) Copy to clipboard Appening Deep Lake image sample >>> ds.images.append(deeplake.read('images/0001.jpg')) Copy to clipboard You can append multiple samples at the same time using extend(). >>> ds.images.extend([deeplake.read(f'images/000{i}.jpg') for i in range(10)]) Copy to clipboard Note If the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. 
The upload process is fastest when the image compression matches the sample_compression. image.rgb and image.gray htypes image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type. That is, if RGB images are appended to an image.gray tensor, Deep Lake will convert them to grayscale, and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format. image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...')",activeloop Creating an image tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-an-image-tensor,An image tensor can be created using create_tensor (see above). Optional args: dtype: Defaults to uint8. Supported compressions: see above.,activeloop Video Htype,https://docs.deeplake.ai/en/latest/Htypes.html#video-htype,"Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Appending video samples Video samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw video frames. Therefore, arrays of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)])",activeloop Appending image samples,https://docs.deeplake.ai/en/latest/Htypes.html#appending-image-samples,"Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). Examples (see above): Appending pixel data with array; Appending Deep Lake image sample. You can append multiple samples at the same time using extend(). Note If the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression.",activeloop image.rgb and image.gray htypes,https://docs.deeplake.ai/en/latest/Htypes.html#image-rgb-and-image-gray-htypes,"image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type. That is, if RGB images are appended to an image.gray tensor, Deep Lake will convert them to grayscale, and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format. image.rgb and image.gray tensors can be created using create_tensor (see above).",activeloop Appending video samples,https://docs.deeplake.ai/en/latest/Htypes.html#appending-video-samples,"Video samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw video frames. Therefore, arrays of raw frames can only be appended to tensors with None compression.
Recompression of samples read with deeplake.read is also not supported. Examples (see above): Appending Deep Lake video sample; Extending with multiple videos.",activeloop Audio Htype,https://docs.deeplake.ai/en/latest/Htypes.html#audio-htype,"Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Optional args: dtype: Defaults to float64. Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Appending audio samples Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'audios/00{i}.mp3') for i in range(10)])",activeloop Creating a video tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-video-tensor,A video tensor can be created using create_tensor (see above). Optional args: dtype: Defaults to uint8. Supported compressions: see above.,activeloop Appending audio samples,https://docs.deeplake.ai/en/latest/Htypes.html#appending-audio-samples,"Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples (see above): Appending Deep Lake audio sample; Extending with Deep Lake audio samples.",activeloop Class Label Htype,https://docs.deeplake.ai/en/latest/Htypes.html#class-label-htype,"Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Note If compression is specified, chunk_compression is the better option, since the number of labels in a single sample is too small for sample-wise compression to be effective. Appending class labels Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically.
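The automatic class-name bookkeeping described above can be illustrated with a small hedged sketch (the 'buses' label is illustrative and not part of the original example):
>>> ds.labels.append('buses')              # a string label that is not yet in class_names
>>> 'buses' in ds.labels.info.class_names  # the new name is registered automatically
True
>>> ds.labels.append(3)                    # appending by numerical index is also allowed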
Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Copy to clipboard Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Copy to clipboard Appending text labels >>> ds.labels.append(['cars', 'airplanes']) Copy to clipboard",activeloop Creating a class label tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-class-label-tensor,"A class label tensor can be created using Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: Copy to clipboard You can also choose to set the class names after tensor creation. Copy to clipboard NoteIf specifying compression, since the number of labels in one sample will be too low, chunk_compression would be the better option to use.",activeloop Creating an audio tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-an-audio-tensor,An audio tensor can be created using Copy to clipboard Optional args: dtype: Defaults to float64. Supported compressions: Copy to clipboard,activeloop Bounding Box Htype,https://docs.deeplake.ai/en/latest/Htypes.html#bounding-box-htype,"Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of conventions such as those used in YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. Creating a bbox tensor A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Copy to clipboard Optional args: coords: A dictionary with keys 'type' and 'mode'. type: Specifies the units of bounding box coordinates. 'pixel': is in unit of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.boxes.info.update(coords = {'type': 'pixel', 'mode': 'LTRB'}) Copy to clipboard Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH). Appending bounding boxes Bounding boxes can be appended as np.ndarrays or list or lists of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Copy to clipboard Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes) Copy to clipboard",activeloop Creating a bbox tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-bbox-tensor,"A bbox tensor can be created using Copy to clipboard Optional args: coords: A dictionary with keys 'type' and 'mode'. type: Specifies the units of bounding box coordinates. 'pixel': is in unit of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. 
mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: see above. You can also choose to set the coords after tensor creation. Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH).",activeloop Appending bounding boxes,https://docs.deeplake.ai/en/latest/Htypes.html#appending-bounding-boxes,Bounding boxes can be appended as np.ndarray or as a list / list of arrays. Examples (see above): Appending one bounding box; Appending sample with 3 bounding boxes.,activeloop Appending class labels,https://docs.deeplake.ai/en/latest/Htypes.html#appending-class-labels,"Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically. Examples (see above): Appending index; Extending with list of indices; Appending text labels.",activeloop 3D Bounding Box Htype,https://docs.deeplake.ai/en/latest/Htypes.html#d-bounding-box-htype,"In order for 3D bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. Creating a 3d bbox tensor Note In order for 3D bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. In addition, for projecting 3D bounding boxes onto 2D data (such as an image), the intrinsics tensor must exist in the dataset, or the intrinsics matrix must be specified in the ds.img_tensor.info dictionary, where the key is 'intrinsics' and the value is the matrix. A 3d bbox tensor can be created using >>> ds.create_tensor('3d_boxes', htype='bbox.3d', coords={'mode': 'center'}) Optional args: coords: A dictionary with key 'mode'. mode: Specifies the convention for the bbox coordinates. 'center': [center_x, center_y, center_z, size_x, size_y, size_z, rot_x, rot_y, rot_z] Sample dimensions: (# bounding boxes, 9) size_x - is the length of the bounding box along x direction size_y - is the width of the bounding box along y direction size_z - is the height of the bounding box along z direction rot_x - rotation angle along x axis, given in degrees rot_y - rotation angle along y axis, given in degrees rot_z - rotation angle along z axis, given in degrees 'vertex': 8 3D vertices - [[x0, y0, z0], [x1, y1, z1], [x2, y2, z2], ..., [x7, y7, z7]] Sample dimensions: (# bounding boxes, 8, 3) The 8 vertices must be given in a fixed order, with vertices 0-3 forming one face of the box and vertices 4-7 the opposite face. dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] Note Rotation angles are specified in degrees, not radians. Appending 3d bounding boxes Bounding boxes can be appended as np.ndarray or as a list / list of arrays.
Examples Appending one bounding box >>> box array([[462, 123, 238, 98, 22, 36, 44, 18, 0]]) >>> ds['3d_boxes'].append(box) Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77, 22, 36, 44, 18, 0], [462, 123, 238, 98, 26, 34, 24, 19, 0], [688, 108, 279, 116, 12, 32, 14, 38, 0]]) >>> boxes.shape (3, 9) >>> ds['3d_boxes'].append(boxes)",activeloop Creating a 3d bbox tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-3d-bbox-tensor,"Note In order for 3D bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. In addition, for projecting 3D bounding boxes onto 2D data (such as an image), the intrinsics tensor must exist in the dataset, or the intrinsics matrix must be specified in the ds.img_tensor.info dictionary, where the key is 'intrinsics' and the value is the matrix. A 3d bbox tensor can be created using create_tensor (see above). Optional args: coords: A dictionary with key 'mode'. mode: Specifies the convention for the bbox coordinates. 'center': [center_x, center_y, center_z, size_x, size_y, size_z, rot_x, rot_y, rot_z] Sample dimensions: (# bounding boxes, 9) size_x - is the length of the bounding box along x direction size_y - is the width of the bounding box along y direction size_z - is the height of the bounding box along z direction rot_x - rotation angle along x axis, given in degrees rot_y - rotation angle along y axis, given in degrees rot_z - rotation angle along z axis, given in degrees 'vertex': 8 3D vertices - [[x0, y0, z0], [x1, y1, z1], [x2, y2, z2], ..., [x7, y7, z7]] Sample dimensions: (# bounding boxes, 8, 3) The 8 vertices must be given in a fixed order, with vertices 0-3 forming one face of the box and vertices 4-7 the opposite face. dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: see above. Note Rotation angles are specified in degrees, not radians.",activeloop Creating an intrinsics tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-an-intrinsics-tensor,An intrinsics tensor can be created using create_tensor (see below). Optional args: sample_compression or chunk_compression. dtype: Defaults to float32. Supported compressions: see below.,activeloop Intrinsics Htype,https://docs.deeplake.ai/en/latest/Htypes.html#intrinsics-htype,"Sample dimensions: (# intrinsics matrices, 3, 3) The intrinsic matrix represents a projective transformation from the 3-D camera's coordinates into the 2-D image coordinates. The intrinsic parameters include the focal length and the optical center (also known as the principal point). The camera intrinsic matrix, K, is defined as: K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] [cx,cy] - Optical center (the principal point), in pixels. [fx,fy] - Focal length in pixels. fx=F/px fy=F/py F - Focal length in world units, typically expressed in millimeters. (px,py) - Size of the pixel in world units. Creating an intrinsics tensor An intrinsics tensor can be created using >>> ds.create_tensor('intrinsics', htype='intrinsics') Optional args: sample_compression or chunk_compression. dtype: Defaults to float32.
Supported compressions: >>> ['lz4'] Appending intrinsics matrices >>> intrinsic_params = np.zeros((3, 3)) >>> ds.intrinsics.append(intrinsic_params)",activeloop Segmentation Mask Htype,https://docs.deeplake.ai/en/latest/Htypes.html#segmentation-mask-htype,"Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of the same shape as the image. The numerical values are indices of the list tensor.info.class_names. Creating a segment_mask tensor A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] You can also choose to set the class names after tensor creation. >>> ds.masks.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks Segmentation masks can be appended as np.ndarray. Examples >>> ds.masks.append(np.zeros((512, 512))) Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'.",activeloop Creating a segment_mask tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-segment-mask-tensor,"A segment_mask tensor can be created using create_tensor (see above). Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: see above. You can also choose to set the class names after tensor creation. Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4.",activeloop Appending intrinsics matrices,https://docs.deeplake.ai/en/latest/Htypes.html#appending-intrinsics-matrices,">>> intrinsic_params = np.zeros((3, 3)) >>> ds.intrinsics.append(intrinsic_params)",activeloop Appending segmentation masks,https://docs.deeplake.ai/en/latest/Htypes.html#appending-segmentation-masks,"Segmentation masks can be appended as np.ndarray. Examples (see above). Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'.",activeloop Binary Mask Htype,https://docs.deeplake.ai/en/latest/Htypes.html#binary-mask-htype,"Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object. A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise.
The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Optional args: sample_compression or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Note Since binary masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird'])",activeloop Appending binary masks,https://docs.deeplake.ai/en/latest/Htypes.html#appending-binary-masks,Binary masks can be appended as np.ndarray. Examples (see above): Appending a binary mask with 5 objects.,activeloop Creating a binary_mask tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-binary-mask-tensor,"A binary_mask tensor can be created using create_tensor (see above). Optional args: sample_compression or chunk_compression dtype: Defaults to bool. Supported compressions: see above. Note Since binary masks often contain large amounts of data, it is recommended to compress them using lz4.",activeloop COCO Keypoints Htype,https://docs.deeplake.ai/en/latest/Htypes.html#coco-keypoints-htype,"Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, ..., xk, yk, vk]. The visibility v can be one of three values: 0 keypoint not in image. 1 keypoint in image but not visible. 2 keypoint in image and visible. Creating a keypoints_coco tensor A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Optional args: keypoints: List of strings describing the i-th keypoint. tensor.info.keypoints will be set to this list. connections: List of index pairs describing which keypoints should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Appending keypoints Keypoints can be appended as np.ndarray or list.
Examples Appending keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.info.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.info.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of keypoints (K) in the same order. Keypoints that are not present in an image can be stored with dummy coordinates x = 0, y = 0, and v = 0; the visibility value prevents them from being drawn in the visualizer.",activeloop Appending 3d bounding boxes,https://docs.deeplake.ai/en/latest/Htypes.html#appending-3d-bounding-boxes,Bounding boxes can be appended as np.ndarray or as a list / list of arrays. Examples (see above): Appending one bounding box; Appending sample with 3 bounding boxes.,activeloop Point Htype,https://docs.deeplake.ai/en/latest/Htypes.html#point-htype,"Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Points do not have a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. Creating a point tensor A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Appending point samples Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3)))",activeloop Creating a point tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-point-tensor,A point tensor can be created using create_tensor (see above). Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: see above.,activeloop Appending point samples,https://docs.deeplake.ai/en/latest/Htypes.html#appending-point-samples,Points can be appended as np.ndarray or list. Examples (see above): Appending 2 2-D points; Appending 2 3-D points.,activeloop Creating a keypoints_coco tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-keypoints-coco-tensor,A keypoints_coco tensor can be created using create_tensor (see above). Optional args: keypoints: List of strings describing the i-th keypoint. tensor.info.keypoints will be set to this list. connections: List of index pairs describing which keypoints should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: see above. You can also choose to set keypoints and / or connections after tensor creation.,activeloop Polygon Htype,https://docs.deeplake.ai/en/latest/Htypes.html#polygon-htype,"Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points.
All points in a sample should have the same number of co-ordinates (eg., cannot mix 2-D points with 3-D points). Different samples can have different number of polygons. Different polygons can have different number of points. Creating a polygon tensor A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending polygons Polygons can be appended as a list of list of tuples or np.ndarray. Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points >>> ds.polygons.append(sample) Copy to clipboard >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard",activeloop Appending polygons,https://docs.deeplake.ai/en/latest/Htypes.html#appending-polygons,Polygons can be appended as a list of list of tuples or np.ndarray. ExamplesAppending polygons with 2-D points Copy to clipboard Appending polygons with 3-D points Copy to clipboard Appending polygons with numpy arrays Copy to clipboard Copy to clipboard,activeloop Nifti Htype,https://docs.deeplake.ai/en/latest/Htypes.html#nifti-htype,"Sample dimensions: (# height, # width, # slices) or (# height, # width, # slices, # time unit) in case of time-series data. Creating a nifti tensor A nifti tensor can be created using >>> ds.create_tensor('patients', htype='nifti', sample_compression='nii.gz') Copy to clipboard Supported compressions: >>> ['nii.gz', 'nii', None] Copy to clipboard Appending nifti data Nifti samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw nifti data. Therefore, array of raw frames can only be appended to tensors with None compression. Examples >>> ds.patients.append(deeplake.read('data/patient0.nii.gz')) Copy to clipboard >>> ds.patients.extend([deeplake.read(f'data/patient{i}.nii.gz') for i in range(10)]) Copy to clipboard",activeloop Creating a nifti tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-nifti-tensor,A nifti tensor can be created using Copy to clipboard Supported compressions: Copy to clipboard,activeloop Appending nifti data,https://docs.deeplake.ai/en/latest/Htypes.html#appending-nifti-data,"Nifti samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw nifti data. Therefore, array of raw frames can only be appended to tensors with None compression. 
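To illustrate the restriction above (raw arrays are only allowed with None compression), here is a minimal hedged sketch; the tensor name and volume shape are illustrative, not from the original page:
>>> ds.create_tensor('volumes', htype='nifti', sample_compression=None)
>>> volume = np.zeros((240, 240, 155))   # hypothetical raw scan of shape (height, width, slices)
>>> ds.volumes.append(volume)            # raw arrays can only be appended to tensors with None compression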
Examples Copy to clipboard Copy to clipboard",activeloop Point Cloud Htype,https://docs.deeplake.ai/en/latest/Htypes.html#point-cloud-htype,"Sample dimensions: (# num_points, 3) Point cloud samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each point cloud is a list / array of points. All points in a sample should have the same number of co-ordinates. Different point clouds can have different number of points. Creating a point cloud tensor A point cloud tensor can be created using >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Copy to clipboard Optional args: sample_compression Supported compressions: >>> [None, 'las'] Copy to clipboard Appending point clouds Point clouds can be appended as a np.ndarray. Examples Appending point clouds with numpy arrays >>> import numpy as np >>> point_cloud1 = np.random.randint(0, 10, (5, 3)) >>> ds.point_clouds.append(point_cloud1) >>> point_cloud2 = np.random.randint(0, 10, (15, 3)) >>> ds.point_clouds.append(point_cloud2) >>> ds.point_clouds.shape >>> (2, None, 3) Copy to clipboard Or we can use deeplake.read() method to add samples >>> import deeplake as dp >>> sample = dp.read('example.las') # point cloud with 100 points >>> ds.point_cloud.append(sample) >>> ds.point_cloud.shape >>> (1, 100, 3) Copy to clipboard",activeloop Creating a point cloud tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-point-cloud-tensor,A point cloud tensor can be created using Copy to clipboard Optional args: sample_compression Supported compressions: Copy to clipboard,activeloop Appending point clouds,https://docs.deeplake.ai/en/latest/Htypes.html#appending-point-clouds,Point clouds can be appended as a np.ndarray. ExamplesAppending point clouds with numpy arrays Copy to clipboard Or we can use deeplake.read() method to add samples Copy to clipboard,activeloop Mesh Htype,https://docs.deeplake.ai/en/latest/Htypes.html#mesh-htype,"Sample dimensions: (# num_points, 3) Mesh samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each sample in a tensor of mesh htype is a mesh array (3-D object data). Each mesh is a list / array of points. Different meshes can have different number of points. Creating a mesh tensor A mesh tensor can be created using >>> ds.create_tensor('mesh', htype='mesh', sample_compression='ply') Copy to clipboard Optional args: sample_compression Supported compressions: >>> ['ply'] Copy to clipboard Appending meshes Examples Appending a ply file containing a mesh data to tensor >>> import deeplake as dp >>> sample = dp.read('example.ply') # mesh with 100 points and 200 faces >>> ds.mesh.append(sample) Copy to clipboard >>> ds.mesh.shape >>> (1, 100, 3) Copy to clipboard",activeloop Creating a mesh tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-mesh-tensor,A mesh tensor can be created using Copy to clipboard Optional args: sample_compression Supported compressions: Copy to clipboard,activeloop Creating a polygon tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-a-polygon-tensor,A polygon tensor can be created using Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: Copy to clipboard,activeloop Appending keypoints,https://docs.deeplake.ai/en/latest/Htypes.html#appending-keypoints,"Keypoints can be appended as np.ndarray or list. 
Examples (see above): Appending keypoints sample with 3 keypoints and 4 objects. Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of keypoints (K) in the same order. Keypoints that are not present in an image can be stored with dummy coordinates x = 0, y = 0, and v = 0; the visibility value prevents them from being drawn in the visualizer.",activeloop Embedding Htype,https://docs.deeplake.ai/en/latest/Htypes.html#embedding-htype,"Sample dimensions: (# elements in the embedding,) Creating an embedding tensor An embedding tensor can be created using >>> ds.create_tensor('embedding', htype='embedding') Supported compressions: >>> ['lz4', None] Appending embedding samples Embedding samples can be of type np.ndarray. Examples Appending Deep Lake embedding sample >>> ds.embedding.append(np.random.uniform(low=-1, high=1, size=(1024))) Extending with Deep Lake embedding samples >>> ds.embedding.extend([np.random.uniform(low=-1, high=1, size=(1024)) for i in range(10)])",activeloop Creating an embedding tensor,https://docs.deeplake.ai/en/latest/Htypes.html#creating-an-embedding-tensor,An embedding tensor can be created using create_tensor (see above). Supported compressions: see above.,activeloop Appending embedding samples,https://docs.deeplake.ai/en/latest/Htypes.html#appending-embedding-samples,Embedding samples can be of type np.ndarray. Examples (see above): Appending Deep Lake embedding sample; Extending with Deep Lake embedding samples.,activeloop Sequence htype,https://docs.deeplake.ai/en/latest/Htypes.html#sequence-htype,"A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype. It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc. Examples: see the sketch below.",activeloop Link htype,https://docs.deeplake.ai/en/latest/Htypes.html#link-htype,"Link htype is a special meta htype that allows linking of external data (files) to the dataset, without storing the data in the dataset itself. Moreover, there can be variations in this htype, such as link[image], link[video], link[audio], etc., which enable the Activeloop visualizer to correctly display the data. No data is actually loaded until you try to read the sample from a dataset. There are a few exceptions to this: If create_shape_tensor=True was specified during create_tensor of the tensor to which this is being added, the shape of the sample is read. This is True by default. If create_sample_info_tensor=True was specified during create_tensor of the tensor to which this is being added, the sample info is read. This is True by default. If verify=True was specified during create_tensor of the tensor to which this is being added, some metadata is read from them to verify the integrity of the link samples. This is True by default. If you do not want to verify your links, all three of verify, create_shape_tensor and create_sample_info_tensor have to be set to False.
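A minimal hedged sketch of the Sequence htype described above (the tensor name and file names are illustrative, not taken from the original page):
>>> ds.create_tensor('frames', htype='sequence[image]', sample_compression='jpg')
>>> # each appended sample is itself a sequence, here a list of images
>>> ds.frames.append([deeplake.read(f'videos/clip0/frame_{i}.jpg') for i in range(8)])
>>> ds.frames.append([deeplake.read(f'videos/clip1/frame_{i}.jpg') for i in range(12)])
>>> ds.frames[0].numpy(aslist=True)  # the first sequence, returned as a list of arrays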
Examples Copy to clipboard Adding credentials to the datasetYou can add the names of the credentials you want to use (not needed for http/local urls) Copy to clipboard and populate the added names with credentials dictionaries Copy to clipboard These creds are only present temporarily and will have to be repopulated on every reload.For datasets connected to Activeloop Platform, you can store your credentials on the platform as Managed Credentials and use them just by adding the keys to your dataset. For example if you have managed credentials with names 'my_s3_creds', 'my_gcs_creds', you can add them to your dataset using Dataset.add_creds_key without having to populate them. Copy to clipboard Create a link tensor Copy to clipboard Populate the tensor with links Copy to clipboard Copy to clipboard Updating a sample Copy to clipboard",activeloop Appending meshes,https://docs.deeplake.ai/en/latest/Htypes.html#appending-meshes,ExamplesAppending a ply file containing a mesh data to tensor Copy to clipboard Copy to clipboard,activeloop Compressions,https://docs.deeplake.ai/en/latest/Compressions.html#compressions,"Deep Lake can read, compress, decompress and recompress data to different formats. The supported htype-compression configurations are given below. Sample Type Htype Compressions Image image bmp, dib, gif, ico, jpeg, jpeg2000, pcx, png, ppm, sgi, tga, tiff, webp, wmf, xbm, eps, fli, im, msp, mpo, apng Video video mp4, mkv, avi Audio audio flac, mp3, wav Dicom dicom dcm Point Cloud point_cloud las Mesh mesh ply Other bbox, text, list, json, generic, etc. lz4 Sample Compression If sample compression is specified when creating tensors, samples will be compressed to the given format if possible. If given data is already compressed and matches the provided sample_compression, it will be stored as is. If left as None, given samples are uncompressed. Note For audio and video, we don't support compressing raw frames but only reading compressed audio and video data. Examples: >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Copy to clipboard Structure of sample-wise compressed tensor. Chunk Compression If chunk compression is specified when creating tensors, added samples will be clubbed together and compressed to the given format chunk-wise. If given data is already compressed, it will be uncompressed and then recompressed chunk-wise. Note Chunk-wise compression is not supported for audio, video and point_cloud htypes. Examples: >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') >>> ds.create_tensor('boxes', htype='bbox', chunk_compression='lz4') Copy to clipboard Structure of chunk-wise compressed tensor. Note See deeplake.read() to learn how to read data from files and populate these tensors.",activeloop Sample Compression,https://docs.deeplake.ai/en/latest/Compressions.html#sample-compression,"If sample compression is specified when creating tensors, samples will be compressed to the given format if possible. If given data is already compressed and matches the provided sample_compression, it will be stored as is. 
If left as None, given samples are uncompressed.NoteFor audio and video, we don't support compressing raw frames but only reading compressed audio and video data.Examples: Copy to clipboard Structure of sample-wise compressed tensor.",activeloop Chunk Compression,https://docs.deeplake.ai/en/latest/Compressions.html#chunk-compression,"If chunk compression is specified when creating tensors, added samples will be clubbed together and compressed to the given format chunk-wise. If given data is already compressed, it will be uncompressed and then recompressed chunk-wise.NoteChunk-wise compression is not supported for audio, video and point_cloud htypes.Examples: Copy to clipboard Structure of chunk-wise compressed tensor. NoteSee deeplake.read() to learn how to read data from files and populate these tensors.",activeloop PyTorch and Tensorflow Support,https://docs.deeplake.ai/en/latest/Pytorch-and-Tensorflow-Support.html#pytorch-and-tensorflow-support,Deep Lake datasets can be easily converted to Torch dataloaders or Tensorflow datasets for training. Dataset.pytorch Converts the dataset into a pytorch Dataloader. Dataset.tensorflow Converts the dataset into a tensorflow compatible format.,activeloop Utility Functions,https://docs.deeplake.ai/en/latest/Utility-Functions.html#utility-functions,General Functions exists Checks if a dataset exists at the given path. Making Deep Lake Samples read Utility that reads raw data from supported files into Deep Lake format. link Utility that stores a link to raw data. link_tiled Utility that stores links to multiple images that act as tiles and together form a big image. Parallelism compute Compute is a decorator for functions. compose Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Transform pipelines returned by compute() and compose() are evaluated using eval: eval Evaluates the pipeline on data_in to produce an output dataset ds_out.,activeloop Making Deep Lake Samples,https://docs.deeplake.ai/en/latest/Utility-Functions.html#making-deep-lake-samples,read Utility that reads raw data from supported files into Deep Lake format. link Utility that stores a link to raw data. link_tiled Utility that stores links to multiple images that act as tiles and together form a big image.,activeloop Parallelism,https://docs.deeplake.ai/en/latest/Utility-Functions.html#parallelism,compute Compute is a decorator for functions. compose Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Transform pipelines returned by compute() and compose() are evaluated using eval: eval Evaluates the pipeline on data_in to produce an output dataset ds_out.,activeloop General Functions,https://docs.deeplake.ai/en/latest/Utility-Functions.html#general-functions,exists Checks if a dataset exists at the given path.,activeloop Weights and Biases,https://docs.deeplake.ai/en/latest/Weights-and-Biases.html#module-deeplake.integrations.wandb.wandb,"Deep Lake's Weights and Biases integration allows you to track and improve reproducibility of your machine learning experiments. Deep Lake will automatically push all information required to reproduce the snapshot of the data like your dataset's URI, commit ID, and view IDs of any views that you have used in your training workflow.Learn more about Weights and Biases here. 
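As a hedged sketch of the Parallelism utilities listed above (compute and eval), with an illustrative file list, tensor name, and target dataset ds that are assumptions rather than part of the original page:
>>> import deeplake
>>> @deeplake.compute
... def file_to_image(file_name, sample_out):
...     # read one image file and append it to the 'images' tensor of the output dataset
...     sample_out.images.append(deeplake.read(file_name))
...     return sample_out
...
>>> files = ['images/0001.jpg', 'images/0002.jpg']   # illustrative input list
>>> file_to_image().eval(files, ds, num_workers=2)   # evaluates the pipeline in parallel and writes into ds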
Logging Dataset Creation If you create a Deep Lake dataset using any of the functions mentioned in Creating Datasets, just perform a commit on the dataset to log its creation on W&B. >>> run = wandb.init(project='deeplake_wandb', job_type='dataset_upload') >>> ds = deeplake.empty('hub://fayazrahman4u/my_dataset') # create dataset >>> ds.create_tensor('images', htype='image', sample_compression='jpg') # create a tensor >>> ds.images.append(deeplake.read('files/images/dog.jpg')) # add a sample >>> ds.commit('creation') # commit -> trigger logging >>> run.finish() Note If you created your dataset using deeplake.deepcopy(), perform the commit only if you have head changes. Note If you make changes to an existing dataset, commit the changes with an active Weights and Biases run to log its state. Logging Dataset Read A dataset read will be logged if you iterate over a dataset or call Dataset.pytorch() or Tensor.numpy() on its tensors. >>> run = wandb.init(project='deeplake_wandb', job_type='torch dataloader') >>> train_loader = ds.pytorch() >>> run.finish() >>> run = wandb.init(project='deeplake_wandb', job_type='iteration') >>> for sample in ds: ... print(sample['images'].shape) >>> run.finish()",activeloop Logging Dataset Creation,https://docs.deeplake.ai/en/latest/Weights-and-Biases.html#logging-dataset-creation,"If you create a Deep Lake dataset using any of the functions mentioned in Creating Datasets, just perform a commit on the dataset to log its creation on W&B. Note If you created your dataset using deeplake.deepcopy(), perform the commit only if you have head changes. Note If you make changes to an existing dataset, commit the changes with an active Weights and Biases run to log its state.",activeloop Logging Dataset Read,https://docs.deeplake.ai/en/latest/Weights-and-Biases.html#logging-dataset-read,A dataset read will be logged if you iterate over a dataset or call Dataset.pytorch() or Tensor.numpy() on its tensors. Examples: see above.,activeloop Dataloader,https://docs.deeplake.ai/en/latest/Dataloader.html#dataloader," Train your models using the new high-performance C++ dataloader. See the dataloader method on how to create dataloaders from your datasets: Dataset.dataloader Returns a DeepLakeDataLoader object. DeepLakeDataLoader class deeplake.enterprise.DeepLakeDataLoader batch(batch_size: int, drop_last: bool = False) Returns a batched DeepLakeDataLoader object. Parameters batch_size (int) – Number of samples in each batch. drop_last (bool) – If True, the last batch will be dropped if its size is less than batch_size. Defaults to False. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .batch() has already been called. close() Shuts down the workers and releases the resources. numpy(num_workers: int = 0, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, decode_method: Optional[Dict[str, str]] = None, persistent_workers: bool = False) Returns a DeepLakeDataLoader object. Parameters num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None.
prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. persistent_workers (bool) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy' Default behaviour. Returns samples as numpy arrays. 'tobytes' Returns raw bytes of the samples. 'pil' Returns samples as PIL images. Especially useful when transformations use torchvision transforms that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .pytorch() or .tensorflow() or .numpy() has already been called. offset(off: int = 0) Returns a shifted DeepLakeDataLoader object. Parameters off (int) – index at which the dataloader will start iterating. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .offset() has already been called. pytorch(num_workers: int = 0, collate_fn: Optional[Callable] = None, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, distributed: bool = False, return_index: bool = True, decode_method: Optional[Dict[str, str]] = None, persistent_workers: bool = False) Returns a DeepLakeDataLoader object. Parameters num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data",activeloop Dataloader,https://docs.deeplake.ai/en/latest/Dataloader.html#dataloader,"If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. distributed (bool) – Used for DDP training. Distributes different sections of the dataset to different ranks. Defaults to False. return_index (bool) – Used to identify whether the loader needs to return the sample index or not. Defaults to True. persistent_workers (bool) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy' Default behaviour. Returns samples as numpy arrays. 'tobytes' Returns raw bytes of the samples. 'pil' Returns samples as PIL images. Especially useful when transformations use torchvision transforms that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .pytorch() or .tensorflow() or .numpy() has already been called. Examples >>> import deeplake >>> from torchvision import transforms >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ...
>>> batch_size = 32 >>> # create a dataloader by chaining a transform function and batch size; returns batches of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch(decode_method={'images': 'pil'}) # return samples as PIL images for transforms ... >>> # iterate over dataloader >>> for i, sample in enumerate(train_loader): ... pass ... query(query_string: str) Returns a sliced DeepLakeDataLoader object with given query results. It allows you to run SQL-like queries on the dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters query_string (str) – An SQL string adjusted with new functionalities to run on the dataset object Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Examples >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds_train = ds_train.dataloader().query('select * where labels != 5') >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.dataloader().query('(select * where contains(categories, \'car\') limit 1000) union (select * where contains(categories, \'motorcycle\') limit 1000)') sample_by(weights: Union[str, list, tuple, ndarray], replace: Optional[bool] = True, size: Optional[int] = None) Returns a sliced DeepLakeDataLoader with the given weighted sampler applied. Parameters weights – (Union[str, list, tuple, np.ndarray]): If it is a string, TQL will be run to calculate the weights based on the expression. A list, tuple or ndarray will be treated as the list of weights per sample. replace – Optional[bool] If True, the samples can be repeated in the result view. (default: True)",activeloop Dataloader,https://docs.deeplake.ai/en/latest/Dataloader.html#dataloader,"size – Optional[int] The length of the result view. (default: len(dataset)) Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Examples Sample the dataloader with labels == 5 twice as often as labels == 6 >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Sample the dataloader treating the labels tensor as weights. >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('labels') Sample the dataloader with the given weights: >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(0, len(ds_train)): ... weights.append(i % 5) ... >>> sampled_ds = ds_train.dataloader().sample_by(weights, replace=False) shuffle(shuffle: bool = True, buffer_size: int = 2048) Returns a shuffled DeepLakeDataLoader object. Parameters shuffle (bool) – indicates whether the elements need to be shuffled or not. Defaults to True. buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .shuffle() has already been called.
ValueError – If the dataset is a view and shuffle is True tensorflow(num_workers: int = 0, collate_fn: Optional[Callable] = None, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, return_index: bool = True, decode_method: Optional[Dict[str, str]] = None, persistent_workers: bool = False) Returns a DeepLakeDataLoader object. Parameters num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. return_index (bool) – Used to identify whether the loader needs to return the sample index or not. Defaults to True. persistent_workers (bool) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy' Default behaviour. Returns samples as numpy arrays. 'tobytes' Returns raw bytes of the samples. 'pil' Returns samples as PIL images. Especially useful when transformations use torchvision transforms that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .pytorch() or .tensorflow() or .numpy() has already been called",activeloop Dataloader,https://docs.deeplake.ai/en/latest/Dataloader.html#dataloader,"Examples >>> import deeplake >>> from torchvision import transforms >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> batch_size = 32 >>> # create a dataloader by chaining batch size and shuffle, then convert for tensorflow >>> train_loader = ds_train.dataloader()\ ... .batch(batch_size)\ ... .shuffle()\ ... .tensorflow() # convert to a tensorflow-compatible loader ... >>> # iterate over dataloader >>> for i, sample in enumerate(train_loader): ... pass ... transform(transform: Union[Callable, Dict[str, Optional[Callable]]], **kwargs: Dict) Returns a transformed DeepLakeDataLoader object. Parameters transform (Callable or Dict[Callable]) – A function or dictionary of functions to apply to the data. kwargs – Additional arguments to be passed to transform. Only applicable if transform is a callable. Ignored if transform is a dictionary. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .transform() has already been called.",activeloop DeepLakeDataLoader,https://docs.deeplake.ai/en/latest/Dataloader.html#deeplakedataloader," class deeplake.enterprise.DeepLakeDataLoader batch(batch_size: int, drop_last: bool = False) Returns a batched DeepLakeDataLoader object. Parameters batch_size (int) – Number of samples in each batch. drop_last (bool) – If True, the last batch will be dropped if its size is less than batch_size. Defaults to False. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .batch() has already been called. close() Shuts down the workers and releases the resources.
numpy(num_workers: int = 0, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, decode_method: Optional[Dict[str, str]] = None, persistent_workers: bool = False) Returns a DeepLakeDataLoader object. Parameters num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. persistent_workers (bool) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy' Default behaviour. Returns samples as numpy arrays. 'tobytes' Returns raw bytes of the samples. 'pil' Returns samples as PIL images. Especially useful when transformations use torchvision transforms that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .pytorch() or .tensorflow() or .numpy() has already been called. offset(off: int = 0) Returns a shifted DeepLakeDataLoader object. Parameters off (int) – index from which the dataloader will start iterating. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .offset() has already been called. pytorch(num_workers: int = 0, collate_fn: Optional[Callable] = None, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, distributed: bool = False, return_index: bool = True, decode_method: Optional[Dict[str, str]] = None, persistent_workers: bool = False) Returns a DeepLakeDataLoader object. Parameters num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker",activeloop DeepLakeDataLoader,https://docs.deeplake.ai/en/latest/Dataloader.html#deeplakedataloader,"Defaults to 2. distributed (bool) – Used for DDP training. Distributes different sections of the dataset to different ranks. Defaults to False. return_index (bool) – Used to identify whether the loader needs to return the sample index or not. Defaults to True. persistent_workers (bool) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy' Default behaviour. Returns samples as numpy arrays. 'tobytes' Returns raw bytes of the samples. 'pil' Returns samples as PIL images.
Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .pytorch() or .tensorflow() or .numpy() has already been called. Examples >>> import deeplake >>> from torchvision import transforms >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size and returns batch of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch(decode_method={'images': 'pil'}) # return samples as PIL images for transforms ... >>> # iterate over dataloader >>> for i, sample in enumerate(train_loader): ... pass ... Copy to clipboard query(query_string: str) Returns a sliced DeepLakeDataLoader object with given query results. It allows to run SQL like queries on dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters query_string (str) – An SQL string adjusted with new functionalities to run on the dataset object Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Examples >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds_train = ds_train.dataloader().query('select * where labels != 5') Copy to clipboard >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.dataloader().query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)') Copy to clipboard sample_by(weights: Union[str, list, tuple, ndarray], replace: Optional[bool] = True, size: Optional[int] = None) Returns a sliced DeepLakeDataLoader with given weighted sampler applied Parameters weights – (Union[str, list, tuple, np.ndarray]): If it's string then tql will be run to calculate the weights based on the expression. list, tuple and ndarray will be treated as the list of the weights per sample replace – Optional[bool] If true the samples can be repeated in the result view. (default: True). size – Optional[int] The length of the result view. (default: len(dataset)) Returns A DeepLakeDataLoader object",activeloop DeepLakeDataLoader,https://docs.deeplake.ai/en/latest/Dataloader.html#deeplakedataloader,"Examples >>> import deeplake >>> from torchvision import transforms >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size and returns batch of pytorch tensors >>> train_loader = ds_train.dataloader()\ .. . .batch(batch_size)\ ... .shuffle()\ ... .tensorflow() # return samples as PIL images for transforms ... >>> # iterate over dataloader >>> for i, sample in enumerate(train_loader): ... pass ... Copy to clipboard transform(transform: Union[Callable, Dict[str, Optional[Callable]]], **kwargs: Dict) Returns a transformed DeepLakeDataLoader object. Parameters transform (Callable or Dict[Callable]) – A function or dictionary of functions to apply to the data. 
kwargs – Additional arguments to be passed to transform. Only applicable if transform is a callable. Ignored if transform is a dictionary. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .transform() has already been called.",activeloop DeepLakeDataLoader,https://docs.deeplake.ai/en/latest/Dataloader.html#deeplakedataloader,"Return type DeepLakeDataLoader Examples Sample the dataloader with labels == 5 twice more than labels == 6 >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Copy to clipboard Sample the dataloader treating labels tensor as weights . >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('labels') Copy to clipboard Sample the dataloader with the given weights; >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(0, len(ds_train)): ... weights.append(i % 5) ... >>> sampled_ds = ds.dataloader().sample_by(weights, replace=False) Copy to clipboard shuffle(shuffle: bool = True, buffer_size: int = 2048) Returns a shuffled DeepLakeDataLoader object. Parameters shuffle (bool) – shows wheter we need to shuffle elements or not. Defaults to True. buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. Returns A DeepLakeDataLoader object. Return type DeepLakeDataLoader Raises ValueError – If .shuffle() has already been called. ValueError – If dataset is view and shuffle is True tensorflow(num_workers: int = 0, collate_fn: Optional[Callable] = None, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, return_index: bool = True, decode_method: Optional[Dict[str, str]] = None, persistent_workers: bool = False) Returns a DeepLakeDataLoader object. Parameters num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. return_index (bool) – Used to idnetify where loader needs to retur sample index or not. Defaults to True. persistent_workers (bool) – If True, the data loader will not shutdown the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy' Default behaviour. Returns samples as numpy arrays. 'tobytes' Returns raw bytes of the samples. 'pil' Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns A DeepLakeDataLoader object. 
Return type DeepLakeDataLoader Raises ValueError – If .pytorch() or .tensorflow() or .numpy() has already been called",activeloop Sampler,https://docs.deeplake.ai/en/latest/Sampler.html#sampler,"The sampler applies weighted sampling on the dataset and returns the sampled view. It creates a discrete distribution with the given weights and randomly picks samples based on it. The resulting view is generated in such a way that when creating a dataloader from the view and training on it, the performance impact is minimal. See the sample_by method on how to use this feature: Dataset.sample_by Returns a sliced Dataset with the given weighted sampler applied.",activeloop Tensor Query Language,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#tensor-query-language,"For example, labels == 0 OR labels == 1 From SQL we also support the following two keywords: BETWEEN labels BETWEEN 0 and 5 IN labels in ARRAY[0, 2, 4, 6, 8] Functions There are predefined functions which can be used in WHERE expressions as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains the given value - CONTAINS(categories, 'person') RANDOM - returns a random number. May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns a single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference: ALL returns True on an empty array, while ALL_STRICT returns False ANY - takes an array of booleans and returns a single boolean, True if any of the elements in the input array is True LOGICAL_AND - takes two boolean arrays, does element-wise logical and, and returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element-wise logical or, and returns the result array. This will return False if the input arrays have different sizes. UNION, INTERSECT, EXCEPT A query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT. Examples Querying for images containing 0 in the MNIST Train Dataset with ds.query. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/mnist-train') >>> result = ds.query('select * where labels == 0') >>> len(result) 5923 Querying for samples with car or motorcycle in categories of the COCO Train Dataset. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/coco-train') >>> result = ds.query("(select * where contains(categories, 'car')) union (select * where contains(categories, 'motorcycle'))") >>> len(result) 14376",activeloop Tensor Query Language,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#tensor-query-language," This page describes the Tensor Query Language (TQL), an SQL-like language used for querying in the Activeloop Platform as well as in ds.query in our Python API. To use queries, install deeplake with pip install deeplake[enterprise]. Syntax SELECT TQL supports only the SELECT statement. Every TQL expression starts with SELECT *. TQL supports only *, which means select all tensors. The common syntax for the select statement is the following: SELECT * [FROM string] [WHERE expression] [LIMIT number [OFFSET number]] [ORDER BY expression [ASC/DESC]] Each part of the SELECT statement can be omitted.
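As a quick illustration of the clause template above (a sketch reusing the public mnist-train dataset from the examples; the result length simply reflects the LIMIT clause):
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/mnist-train')
>>> view = ds.query('select * where labels == 0 limit 100')
>>> len(view)
100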
FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context WHERE WHERE expression is used to filter the samples in the dataset by conditions. The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error. ORDER BY ORDER BY expression orders the output of the query by the given criteria. The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string. ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. It is ascending by default. LIMIT OFFSET LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL. Expressions TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value. The value can be numeric scalar or array as well as string value. String literal should be provided within single quotes (') and can be used on class_label, json and text tensors. For class labels it will get corresponding numeric value from the class_names list and do numeric comparison. For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array. Numeric multidimensional tensors can be indexed by numbers, e.g. categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]. This last expression returns array containing the third elements of the initial two dimensional array boxes. TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions",activeloop Syntax,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#syntax," SELECT TQL supports only SELECT statement. Every TQL expression starts with SELECT *. TQL supports only * which means to select all tensors. The common syntax for select statement is the following: SELECT * [FROM string] [WHERE expression] [LIMIT number [OFFSET number]] [ORDER BY expression [ASC/DESC]] Copy to clipboard Each part of the SELECT statement can be omitted. FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context WHERE WHERE expression is used to filter the samples in the dataset by conditions. The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error. ORDER BY ORDER BY expression orders the output of the query by the given criteria. The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string. 
ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. It is ascending by default. LIMIT OFFSET LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL. Expressions TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value. The value can be numeric scalar or array as well as string value. String literal should be provided within single quotes (') and can be used on class_label, json and text tensors. For class labels it will get corresponding numeric value from the class_names list and do numeric comparison. For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array. Numeric multidimensional tensors can be indexed by numbers, e.g. categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]. This last expression returns array containing the third elements of the initial two dimensional array boxes. TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions",activeloop SELECT,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#select,"TQL supports only SELECT statement. Every TQL expression starts with SELECT *. TQL supports only * which means to select all tensors. The common syntax for select statement is the following: Copy to clipboard Each part of the SELECT statement can be omitted.FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context",activeloop Syntax,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#syntax,"For example, labels == 0 OR labels == 1 Copy to clipboard From SQL we also support the following two keywords: BETWEEN labels BETWEEN 0 and 5 Copy to clipboard IN labels in ARRAY[0, 2, 4, 6, 8] Copy to clipboard Functions There are predefined functions which can be used in WHERE expression as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains given value - CONTAINS(categories, 'person') RANDOM - returns random number . May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference. ALL returns True on empty array, while ALL_STRICT return False ANY - takes an array of booleans and returns single boolean, True if any of the elements int the input array is True LOGICAL_AND - takes two boolean arrays, does element wise logical and, returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element wise logical or, returns the result array. This will return False if the input arrays have different sizes. UNION, INTERSECT, EXCEPT Query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT.",activeloop WHERE,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#where,"WHERE expression is used to filter the samples in the dataset by conditions. 
The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error.",activeloop Expressions,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#expressions,"TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value.The value can be numeric scalar or array as well as string value.String literal should be provided within single quotes (') and can be used on class_label, json and text tensors.For class labels it will get corresponding numeric value from the class_names list and do numeric comparison.For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array.Numeric multidimensional tensors can be indexed by numbers, e.g. categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]. This last expression returns array containing the third elements of the initial two dimensional array boxes.TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions. For example, Copy to clipboard From SQL we also support the following two keywords: BETWEEN Copy to clipboard IN Copy to clipboard",activeloop Functions,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#functions,"There are predefined functions which can be used in WHERE expression as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains given value - CONTAINS(categories, 'person') RANDOM - returns random number. May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference. ALL returns True on empty array, while ALL_STRICT return False ANY - takes an array of booleans and returns single boolean, True if any of the elements int the input array is True LOGICAL_AND - takes two boolean arrays, does element wise logical and, returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element wise logical or, returns the result array. This will return False if the input arrays have different sizes.",activeloop ORDER BY,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#order-by,"ORDER BY expression orders the output of the query by the given criteria. The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string.ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. 
It is ascending by default.",activeloop LIMIT OFFSET,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#limit-offset,"LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL.",activeloop Examples,https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#examples,Querying for images containing 0 in MNIST Train Dataset with ds.query. Copy to clipboard Querying for samples with car or motorcycle in categories of COCO Train Dataset. Copy to clipboard,activeloop "UNION, INTERSECT, EXCEPT",https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html#union-intersect-except,"Query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT.",activeloop Random Split,https://docs.deeplake.ai/en/latest/Random-Split.html#random-split,"Splits the dataset into non overlapping new datasets of given lengths. The resulting datasets are generated in such a way that when creating a dataloader from the view and training on it, the performance impact is minimal. Using the outputs of this function with .pytorch method of dataset (instead of .dataloader) may result in poor performance. See the random_split method on how to use this feature: Dataset.random_split Splits the dataset into non-overlapping Dataset objects of given lengths.",activeloop Deep Memory API,https://docs.deeplake.ai/en/latest/Deep-Memory.html#deep-memory-api," class deeplake.core.vectorstore.deep_memory.DeepMemory __init__(dataset: Dataset, client: DeepMemoryBackendClient, logger: Logger, embedding_function: Optional[Any] = None, token: Optional[str] = None, creds: Optional[Dict[str, Any]] = None) Based Deep Memory class to train and evaluate models on DeepMemory managed service. Parameters dataset (Dataset) – deeplake dataset object. client (DeepMemoryBackendClient) – Client to interact with the DeepMemory managed service. Defaults to None. logger (logging.Logger) – Logger object. embedding_function (Optional[Any], optional) – Embedding funtion class used to convert queries/documents to embeddings. Defaults to None. token (Optional[str], optional) – API token for the DeepMemory managed service. Defaults to None. creds (Optional[Dict[str, Any]], optional) – Credentials to access the dataset. Defaults to None. Raises ImportError – if indra is not installed cancel(job_id: str) Cancel a training job on DeepMemory managed service. Examples >>> cancelled: bool = vectorstore.deep_memory.cancel(job_id) Copy to clipboard Parameters job_id (str) – job_id of the training job. Returns True if job was cancelled successfully, False otherwise. Return type bool delete(job_id: str) Delete a training job on DeepMemory managed service. Examples >>> deleted: bool = vectorstore.deep_memory.delete(job_id) Copy to clipboard Parameters job_id (str) – job_id of the training job. Returns True if job was deleted successfully, False otherwise. Return type bool evaluate(relevance: List[List[Tuple[str, int]]], queries: List[str], embedding_function: Optional[Callable[[...], List[ndarray]]] = None, embedding: Optional[Union[List[ndarray], List[List[float]]]] = None, top_k: List[int] = [1, 3, 5, 10, 50, 100], qvs_params: Optional[Dict[str, Any]] = None) → Dict[str, Dict[str, float]] Evaluate a model using the DeepMemory managed service. Examples # 1. 
Evaluate a model using an embedding function: relevance = [[('doc_id_1', 1), ('doc_id_2', 1)], [('doc_id_3', 1)]] queries = ['What is the capital of India?', 'What is the capital of France?'] embedding_function = openai_embedding.embed_documents vectorstore.deep_memory.evaluate( relevance=relevance, queries=queries, embedding_function=embedding_function, ) # 2. Evaluate a model with precomputed embeddings: embeddings = [[-1.2, 12, …], …] vectorstore.deep_memory.evaluate( relevance=relevance, queries=queries, embedding=embeddings, ) # 3. Evaluate a model with precomputed embeddings and log queries: vectorstore.deep_memory.evaluate( relevance=relevance, queries=queries, embedding=embeddings, qvs_params={'log_queries': True}, ) # 4. Evaluate with precomputed embeddings, log queries, and a custom branch: vectorstore.deep_memory.evaluate( relevance=relevance, queries=queries, embedding=embeddings, qvs_params={ 'log_queries': True, 'branch': 'queries', } ) Parameters queries (List[str]) – Queries for model evaluation. relevance (List[List[Tuple[str, int]]]) – Relevant documents and scores for each query. - Outer list: matches the queries. - Inner list: pairs of doc_id and relevance score. - doc_id: Document ID from the corpus dataset, found in the id tensor. - relevance_score: Between 0 (not relevant) and 1 (relevant). embedding (Optional[np.ndarray], optional) – Query embeddings. Defaults to None. embedding_function (Optional[Callable[..., List[np.ndarray]]], optional) – Function to convert queries into embeddings. Defaults to None. top_k (List[int], optional) – Ranks for model evaluation. Defaults to [1, 3, 5, 10, 50, 100]. qvs_params (Optional[Dict], optional) – Parameters to initialize the queries vectorstore. When specified, creates a new vectorstore to track evaluation queries, the Deep Memory response, and the naive vector search results. Defaults to None",activeloop Deep Memory API,https://docs.deeplake.ai/en/latest/Deep-Memory.html#deep-memory-api,"Returns Recalls for each rank . Return type Dict[str, Dict[str, float]] Raises ImportError – If indra is not installed. ValueError – If no embedding_function is provided either during initialization or evaluation. list_jobs(debug=False) List all training jobs on DeepMemory managed service. status(job_id: str) Get the status of a training job on DeepMemory managed service. Examples >>> vectorstore.deep_memory.status(job_id) -------------------------------------------------------------- | 6508464cd80cab681bfcfff3 | -------------------------------------------------------------- | status | pending | -------------------------------------------------------------- | progress | None | -------------------------------------------------------------- | results | not available yet | -------------------------------------------------------------- Copy to clipboard Parameters job_id (str) – job_id of the training job. train(queries: List[str], relevance: List[List[Tuple[str, int]]], embedding_function: Optional[Callable[[str], ndarray]] = None, token: Optional[str] = None) → str Train a model on DeepMemory managed service. Examples >>> queries: List[str] = ['What is the capital of India?', 'What is the capital of France?'] >>> relevance: List[List[Tuple[str, int]]] = [[('doc_id_1', 1), ('doc_id_2', 1)], [('doc_id_3', 1)]] >>> # doc_id_1, doc_id_2, doc_id_3 are the ids of the documents in the corpus dataset that is relevant to the queries. It is stored in the `id` tensor of the corpus dataset. 
>>> job_id: str = vectorstore.deep_memory.train(queries, relevance) Parameters queries (List[str]) – List of queries to train the model on. relevance (List[List[Tuple[str, int]]]) – List of relevant documents for each query with their respective relevance score. The outer list corresponds to the queries and the inner list corresponds to the doc_id, relevance_score pair for each query. doc_id is the document id in the corpus dataset. It is stored in the id tensor of the corpus dataset. relevance_score is the relevance score of the document for the query. The range is between 0 and 1, where 0 stands for not relevant and 1 stands for relevant. embedding_function (Optional[Callable[[str], np.ndarray]], optional) – Embedding function used to convert queries to embeddings. Defaults to None. token (str, optional) – API token for the DeepMemory managed service. Defaults to None. Returns job_id of the training job. Return type str Raises ValueError – if embedding_function is not specified either during initialization or during training. Syntax This page describes ds.query. DeepMemory is a deep learning model that is trained on the dataset to improve the search results by aligning queries with the corpus dataset. It gives up to +22% recall improvement on an eval dataset. To use deep_memory, please subscribe to our waitlist. Training To start training you should first create a vectorstore object, and then preprocess the data and use deep memory with it: >>> from deeplake import VectorStore >>> db = VectorStore( ... path='hub://{$ORG_ID}/{$DATASET_ID}', ... token=token, # or you can be signed in with CLI ... runtime={'tensor_db': True}, ... embedding_function=embedding_function, # function that converts texts into embeddings; optional and can be provided later ..",activeloop Deep Memory API,https://docs.deeplake.ai/en/latest/Deep-Memory.html#deep-memory-api,") To train a deepmemory model you need to preprocess the dataset so that corpus becomes a list of lists of tuples, where the outer list corresponds to the queries and the inner list to the relevant documents. Each tuple should contain the document id (the id tensor from the corpus dataset) and the relevance score (range 0-1, where 0 represents an unrelated document and 1 a related one). queries should be a list of strings. >>> job_id = db.deep_memory.train( ... corpus: List[List[Tuple[str, float]]] = corpus, ... queries: List[str] = queries, ... embedding_function = embedding_function, # function that converts texts into embeddings; optional and can be skipped if provided during initialization ... ) Tracking the training progress job_id is a string which can be used to track the training progress. You can use db.deep_memory.status(job_id) to get the status of the job.
When the model is still in a pending state (not started yet) you will see the following: >>> db.deep_memory.status(job_id) -------------------------------------------------------------- | 6508464cd80cab681bfcfff3 | -------------------------------------------------------------- | status | pending | -------------------------------------------------------------- | progress | None | -------------------------------------------------------------- | results | not available yet | -------------------------------------------------------------- After some time the model will start training and you will see the following: >>> db.deep_memory.status(job_id) -------------------------------------------------------------- | 6508464cd80cab681bfcfff3 | -------------------------------------------------------------- | status | training | -------------------------------------------------------------- | progress | eta: 2.5 seconds | | | recall@10: 0.62% (+0.62%) | -------------------------------------------------------------- | results | not available yet | -------------------------------------------------------------- If you want to get all training jobs you can use db.deep_memory.list_jobs(), which will show all jobs that happened on this dataset. >>> db.deep_memory.list_jobs() ID STATUS RESULTS PROGRESS 65198efcd28df3238c49a849 completed recall@10: 0.62% (+0.62%) eta: 2.5 seconds recall@10: 0.62% (+0.62%) 651a4d41d05a21a5a6a15f67 completed recall@10: 0.62% (+0.62%) eta: 2.5 seconds recall@10: 0.62% (+0.62%) Deep Memory Evaluation Once the training is completed, you can use db.deep_memory.evaluate to evaluate the model performance on the custom dataset. Once again you would need to preprocess the dataset so that corpus becomes a list of lists of tuples, where the outer list corresponds to the queries and the inner list to the relevant documents. Each tuple should contain the document id (the id tensor from the corpus dataset) and the relevance score (range 0-1, where 0 represents an unrelated document and 1 a related one). queries should be a list of strings. >>> recalls = db.deep_memory.evaluate( ... corpus: List[List[Tuple[str, float]]] = corpus, ... queries: List[str] = queries, ... embedding_function = embedding_function, # function that converts texts into embeddings; optional and can be skipped if provided during initialization ... qvs_params = {'enabled': True} ..",activeloop Syntax,https://docs.deeplake.ai/en/latest/Deep-Memory.html#syntax," This page describes ds.query. DeepMemory is a deep learning model that is trained on the dataset to improve the search results by aligning queries with the corpus dataset. It gives up to +22% recall improvement on an eval dataset. To use deep_memory, please subscribe to our waitlist. Training To start training you should first create a vectorstore object, and then preprocess the data and use deep memory with it: >>> from deeplake import VectorStore >>> db = VectorStore( ... path='hub://{$ORG_ID}/{$DATASET_ID}', ... token=token, # or you can be signed in with CLI ... runtime={'tensor_db': True}, ... embedding_function=embedding_function, # function that converts texts into embeddings; optional and can be provided later ... ) To train a deepmemory model you need to preprocess the dataset so that corpus becomes a list of lists of tuples, where the outer list corresponds to the queries and the inner list to the relevant documents. Each tuple should contain the document id (the id tensor from the corpus dataset) and the relevance score (range 0-1, where 0 represents an unrelated document and 1 a related one). queries should be a list of strings. >>> job_id = db.deep_memory.train( ... corpus: List[List[Tuple[str, float]]] = corpus, ... queries: List[str] = queries, ... embedding_function = embedding_function, # function that converts texts into embeddings; optional and can be skipped if provided during initialization ... )
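The corpus and queries passed to train() above might be assembled as follows (purely illustrative; the query texts and document ids are made-up placeholders, and in practice the ids come from the id tensor of your corpus dataset):
>>> raw_relevance = {
...     'What is the capital of India?': ['doc_id_1', 'doc_id_2'],
...     'What is the capital of France?': ['doc_id_3'],
... }
>>> queries = list(raw_relevance.keys())
>>> corpus = [[(doc_id, 1) for doc_id in doc_ids] for doc_ids in raw_relevance.values()]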
Tracking the training progress job_id is a string which can be used to track the training progress. You can use db.deep_memory.status(job_id) to get the status of the job. When the model is still in a pending state (not started yet) you will see the following: >>> db.deep_memory.status(job_id) -------------------------------------------------------------- | 6508464cd80cab681bfcfff3 | -------------------------------------------------------------- | status | pending | -------------------------------------------------------------- | progress | None | -------------------------------------------------------------- | results | not available yet | -------------------------------------------------------------- After some time the model will start training and you will see the following: >>> db.deep_memory.status(job_id) -------------------------------------------------------------- | 6508464cd80cab681bfcfff3 | -------------------------------------------------------------- | status | training | -------------------------------------------------------------- | progress | eta: 2.5 seconds | | | recall@10: 0.62% (+0.62%) | -------------------------------------------------------------- | results | not available yet | -------------------------------------------------------------- If you want to get all training jobs you can use db.deep_memory.list_jobs(), which will show all jobs that happened on this dataset. >>> db.deep_memory.list_jobs() ID STATUS RESULTS PROGRESS 65198efcd28df3238c49a849 completed recall@10: 0.62% (+0.62%) eta: 2.5 seconds recall@10: 0.62% (+0.62%) 651a4d41d05a21a5a6a15f67 completed recall@10: 0.62% (+0.62%) eta: 2.5 seconds recall@10: 0.62% (+0.62%) Deep Memory Evaluation Once the training is completed, you can use db.deep_memory.evaluate to evaluate the model performance on the custom dataset. Once again you would need to preprocess the dataset so that corpus becomes a list of lists of tuples, where the outer list corresponds to the queries and the inner list to the relevant documents",activeloop Syntax,https://docs.deeplake.ai/en/latest/Deep-Memory.html#syntax,"Each tuple should contain the document id (the id tensor from the corpus dataset) and the relevance score (range 0-1, where 0 represents an unrelated document and 1 a related one). queries should be a list of strings. >>> recalls = db.deep_memory.evaluate( ... corpus: List[List[Tuple[str, float]]] = corpus, ... queries: List[str] = queries, ... 
embedding_function = embedding_function, # function that takes converts texts into embeddings, it is optional and can be skipped if provided during initialization ... k = 4, # number of results to return ... deep_memory = True, # use deep memory model ... ) Copy to clipboard",activeloop Deep Memory API,https://docs.deeplake.ai/en/latest/Deep-Memory.html#deep-memory-api,") Copy to clipboard recalls is a dictionary with the following keys: with_model contains a dictionary with recall metrics for the naive vector search on the custom dataset for different k values without_model contains a dictionary with recall metrics for the naive vector search on the custom dataset for different k values qvs_params when specified creates a separate vectorstore that tracks all evaluation queries and documents, so that you can use it to compare the performance of deep_memory to naive vector search . By default, it is turned off. If enabled the dataset will be created at hub://{$ORG_ID}/{$DATASET_ID}_eval_queries Deep Memory Search After the model is trained you also can search using it: >>> results = db.search( ... embedding_data: List[str] = queries, ... embedding_function = embedding_function, # function that takes converts texts into embeddings, it is optional and can be skipped if provided during initialization ... k = 4, # number of results to return ... deep_memory = True, # use deep memory model ... ) Copy to clipboard",activeloop Deep Memory Evaluation,https://docs.deeplake.ai/en/latest/Deep-Memory.html#deep-memory-evaluation,"Once the training is completed, you can use db.deep_memory.evaluate to evaluate the model performance on the custom dataset. Once again you would need to preprocess the dataset so that, corpus, will become a list of list of tuples, where outer list corresponds to the query and inner list to the relevant documents. Each tuple should contain the document id (id tensor from the corpus dataset) and the relevance score (range is 0-1, where 0 represents unrelated document and 1 related). queries should be a list of strings. Copy to clipboard recalls is a dictionary with the following keys: with_model contains a dictionary with recall metrics for the naive vector search on the custom dataset for different k values without_model contains a dictionary with recall metrics for the naive vector search on the custom dataset for different k values qvs_params when specified creates a separate vectorstore that tracks all evaluation queries and documents, so that you can use it to compare the performance of deep_memory to naive vector search. By default, it is turned off. If enabled the dataset will be created at hub://{$ORG_ID}/{$DATASET_ID}_eval_queries",activeloop Tracking the training progress,https://docs.deeplake.ai/en/latest/Deep-Memory.html#tracking-the-training-progress,"job_id is string, which can be used to track the training progress. You can use db.deep_memory.status(job_id) to get the status of the job.when the model is still in pending state (not started yet) you will see the following: >>> db.deep_memory.status(job_id) ————————————————————– | 6508464cd80cab681bfcfff3 | ————————————————————– | status | pending | ————————————————————– | progress | None | ————————————————————– | results | not available yet | ————————————————————–After some time the model will start training and you will see the following: Copy to clipboard If you want to get all training jobs you can use db.deep_memory.list_jobs() which will show all jobs that happened on this dataset. 
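Putting the pieces above together, a rough end-to-end sketch of the workflow (not taken verbatim from the reference: db, queries, relevance and embedding_function are placeholders you would define yourself; note that the API reference documents the training pair as queries/relevance, while this page calls the same structure corpus):
>>> job_id = db.deep_memory.train(queries=queries, relevance=relevance, embedding_function=embedding_function)
>>> db.deep_memory.status(job_id)   # check progress until the job reports completed
>>> db.deep_memory.list_jobs()      # history of all jobs on this dataset
>>> recalls = db.deep_memory.evaluate(relevance=relevance, queries=queries, embedding_function=embedding_function)
>>> results = db.search(embedding_data='What is the capital of India?', embedding_function=embedding_function, k=4, deep_memory=True)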
",activeloop Training,https://docs.deeplake.ai/en/latest/Deep-Memory.html#training,"To start training you should first create a vectorstore object, and then preprocess the data and use deep memory with it. To train a deepmemory model you need to preprocess the dataset so that corpus becomes a list of lists of tuples, where the outer list corresponds to the queries and the inner list to the relevant documents. Each tuple should contain the document id (the id tensor from the corpus dataset) and the relevance score (range 0-1, where 0 represents an unrelated document and 1 a related one). queries should be a list of strings.",activeloop Deep Memory Search,https://docs.deeplake.ai/en/latest/Deep-Memory.html#deep-memory-search,After the model is trained you can also search using it.,activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake," The deeplake package provides a database which stores data as compressed chunked arrays that can be stored anywhere and later streamed to deep learning models. deeplake.dataset(path: Union[str, Path], runtime: Optional[Dict] = None, read_only: Optional[bool] = None, overwrite: bool = False, public: bool = False, memory_cache_size: int = 2000, local_cache_size: int = 0, creds: Optional[Union[str, Dict]] = None, token: Optional[str] = None, org_id: Optional[str] = None, verbose: bool = True, access_method: str = 'stream', unlink: bool = False, reset: bool = False, check_integrity: bool = True, lock_enabled: Optional[bool] = True, lock_timeout: Optional[int] = 0, index_params: Optional[Dict[str, Union[int, str]]] = None) Returns a Dataset object referencing either a new or existing dataset. Examples >>> ds = deeplake.dataset('hub://username/dataset') >>> ds = deeplake.dataset('s3://mybucket/my_dataset') >>> ds = deeplake.dataset('./datasets/my_dataset', overwrite=True) Loading to a specific version: >>> ds = deeplake.dataset('hub://username/dataset@new_branch') >>> ds = deeplake.dataset('hub://username/dataset@3e49cded62b6b335c74ff07e97f8451a37aca7b2') >>> my_commit_id = '3e49cded62b6b335c74ff07e97f8451a37aca7b2' >>> ds = deeplake.dataset(f'hub://username/dataset@{my_commit_id}') Parameters path (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use 'activeloop login' from the command line). an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. Loading to a specific version: You can also specify a commit_id or branch to load the dataset to that version directly by using the @ symbol. The path will then be of the form hub://username/dataset@{branch} or hub://username/dataset@{commit_id}. See examples above. runtime (dict) – Parameters for Activeloop DB Engine. Only applicable for hub:// paths. read_only (bool, optional) – Opens dataset in read only mode if this is passed as True. Defaults to False. Datasets stored on Deep Lake cloud that your account does not have write access to will automatically open in read mode.
overwrite (bool) – If set to True this overwrites the dataset if it already exists. Defaults to False. public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to True. memory_cache_size (int) – The size of the memory cache to be used in MB. local_cache_size (int) – The size of the local filesystem cache to be used in MB. creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If 'aws_access_key_id', 'aws_secret_access_key', 'aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If 'aws_access_key_id', 'aws_secret_access_key', 'aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports 'aws_access_key_id', 'aws_secret_access_key', 'aws_session_token', 'endpoint_url', 'aws_region', 'profile_name' as keys. - If 'ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying 'ENV' will override the credentials fetched from Activeloop and use local ones. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. org_id (str, Optional) – Organization id to be used for enabling high-performance features. Only applicable for local datasets. verbose (bool) – If True, logs will be printed. Defaults to True. lock_timeout (int) – Number of seconds to wait before throwing a LockException. If None, wait indefinitely. lock_enabled (bool) – If true, the dataset manages a write lock. NOTE: Only set to False if you are managing concurrent access externally. index_params – Optional[Dict[str, Union[int, str]]]: Index parameters used while creating the vector store, passed down to the dataset. Returns Dataset created using the arguments provided. Return type Dataset Raises DatasetHandlerError – If a Dataset already exists at the given path and overwrite is False. UserNotLoggedInException – When user is not logged in InvalidTokenException – If the specified token is invalid TokenPermissionError – When there are permission or other errors related to token ValueError – If version is specified in the path Danger Setting overwrite to True will delete all of your data if it exists! Be very careful when setting this parameter. deeplake.like(dest: Union[str, Path], src: Union[str, Dataset, Path], runtime: Optional[Dict] = None, tensors: Optional[List[str]] = None, overwrite: bool = False, creds: Optional[Union[dict, str]] = None, token: Optional[str] = None, org_id: Optional[str] = None, public: bool = False, verbose: bool = True) → Dataset Creates a new dataset by copying the source dataset's structure to a new location. No samples are copied, only the meta/info for the dataset and its tensors. Parameters dest – Empty Dataset or Path where the new dataset will be created. src (Union[str, Dataset]) – Path or dataset object that will be used as the template for the new dataset. runtime (dict) – Parameters for Activeloop DB Engine.
Only applicable for hub:// paths. tensors (List[str], optional) – Names of tensors (and groups) to be replicated. If not specified all tensors in source dataset are considered. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"Currently only works with s3 paths . - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. org_id (str, Optional) – Organization id to be used for enabling high-performance features. Only applicable for local datasets. verbose (bool) – If True, logs will be printed. Defaults to True. access_method (str) – The access method to use for the dataset. Can be: 'stream' Streams the data from the dataset i.e. only fetches data when required. This is the default value. 'download' Downloads the data to the local filesystem to the path specified in environment variable DEEPLAKE_DOWNLOAD_PATH. This will overwrite DEEPLAKE_DOWNLOAD_PATH. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set or if the dataset does not exist. The ‘download' access method can be modified to specify num_workers and/or scheduler. For example: ‘download:2:processed' will use 2 workers and use processed scheduler, while ‘download:3' will use 3 workers and default scheduler (threaded), and ‘download:processed' will use a single worker and use processed scheduler. 'local' Downloads the dataset if it doesn't already exist, otherwise loads from local storage. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set. The ‘local' access method can be modified to specify num_workers and/or scheduler to be used in case dataset needs to be downloaded. If dataset needs to be downloaded, ‘local:2:processed' will use 2 workers and use processed scheduler, while ‘local:3' will use 3 workers and default scheduler (threaded), and ‘local:processed' will use a single worker and use processed scheduler. unlink (bool) – Downloads linked samples if set to True. Only applicable if access_method is download or local. Defaults to False. reset (bool) – If the specified dataset cannot be loaded due to a corrupted HEAD state of the branch being loaded, setting reset=True will reset HEAD changes and load the previous version. check_integrity (bool) – If the param is True it will do integrity check during dataset loading otherwise the check is not performed lock_timeout (int) – Number of seconds to wait before throwing a LockException. If None, wait indefinitely lock_enabled (bool) – If true, the dataset manages a write lock. NOTE: Only set to False if you are managing concurrent access externally index_params – Optional[Dict[str, Union[int, str]]] = None : The index parameters used while creating vector store is passed down to dataset. 
Returns Dataset created using the arguments provided",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"summary (bool) – If True, a summary of skipped files will be printed after completion . Defaults to True. num_workers (int) – The number of workers to use for ingestion. Set to 0 by default. shuffle (bool) – Shuffles the input data prior to ingestion. Since data arranged in folders by class is highly non-random, shuffling is important in order to produce optimal results when training. Defaults to True. token (Optional[str]) – The token to use for accessing the dataset. connect_kwargs (Optional[Dict]) – If specified, the dataset will be connected to Deep Lake, and connect_kwargs will be passed to Dataset.connect. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function see deeplake.empty(). Returns New dataset object with structured dataset. Return type Dataset Raises InvalidPathException – If the source directory does not exist. SamePathException – If the source and destination path are same. AutoCompressionError – If the source director is empty or does not contain a valid extension. InvalidFileExtension – If the most frequent file extension is found to be ‘None' during auto-compression. Note Currently only local source paths and image classification datasets / csv files are supported for automatic ingestion. Supported filetypes: png/jpeg/jpg/csv. All files and sub-directories with unsupported filetypes are ignored. Valid source directory structures for image classification look like: data/ img0.jpg img1.jpg ... Copy to clipboard or: data/ class0/ cat0.jpg ... class1/ dog0.jpg ... ... Copy to clipboard or: data/ train/ class0/ img0.jpg ... ... val/ class0/ img0.jpg ... ... ... Copy to clipboard Classes defined as sub-directories can be accessed at ds['test/labels'].info.class_names. Support for train and test sub directories is present under ds['train/images'], ds['train/labels'] and ds['test/images'], ds['test/labels']. Mapping filenames to classes from an external file is currently not supported. deeplake.ingest_coco(images_directory: Union[str, Path], annotation_files: Union[str, Path, List[str]], dest: Union[str, Path], key_to_tensor_mapping: Optional[Dict] = None, file_to_group_mapping: Optional[Dict] = None, ignore_one_group: bool = True, ignore_keys: Optional[List[str]] = None, image_params: Optional[Dict] = None, image_creds_key: Optional[str] = None, src_creds: Optional[Union[str, Dict]] = None, dest_creds: Optional[Union[str, Dict]] = None, inspect_limit: int = 1000000, progressbar: bool = True, shuffle: bool = False, num_workers: int = 0, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) → Dataset Ingest images and annotations in COCO format to a Deep Lake Dataset. The source data can be stored locally or in the cloud. Examples >>> # Ingest local data in COCO format to a Deep Lake dataset stored in Deep Lake storage. 
>>> ds = deeplake.ingest_coco( >>> '<path/to/images/directory>', >>> ['path/to/annotation/file1.json', 'path/to/annotation/file2.json'], >>> dest='hub://org_id/dataset', >>> key_to_tensor_mapping={'category_id': 'labels', 'bbox': 'boxes'}, >>> file_to_group_mapping={'file1.json': 'group1', 'file2.json': 'group2'}, >>> ignore_keys=['area', 'image_id', 'id'], >>> num_workers=4, >>> ) >>> # Ingest data from your cloud into another Deep Lake dataset in your cloud, and connect that dataset to the Deep Lake backend",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"Return type Dataset Raises IngestionError – If either key_to_tensor_mapping or file_to_group_mapping are not one-to-one . deeplake.ingest_yolo(data_directory: Union[str, Path], dest: Union[str, Path], class_names_file: Optional[Union[str, Path]] = None, annotations_directory: Optional[Union[str, Path]] = None, allow_no_annotation: bool = False, image_params: Optional[Dict] = None, label_params: Optional[Dict] = None, coordinates_params: Optional[Dict] = None, src_creds: Optional[Union[str, Dict]] = None, dest_creds: Optional[Union[str, Dict]] = None, image_creds_key: Optional[str] = None, inspect_limit: int = 1000, progressbar: bool = True, shuffle: bool = False, num_workers: int = 0, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) → Dataset Ingest images and annotations (bounding boxes or polygons) in YOLO format to a Deep Lake Dataset. The source data can be stored locally or in the cloud. Examples >>> # Ingest local data in YOLO format to a Deep Lake dataset stored in Deep Lake storage. >>> ds = deeplake.ingest_yolo( >>> 'path/to/data/directory', >>> dest='hub://org_id/dataset', >>> allow_no_annotation=True, >>> token='my_activeloop_token', >>> num_workers=4, >>> ) >>> # Ingest data from your cloud into another Deep Lake dataset in your cloud, and connect that dataset to the Deep Lake backend. >>> ds = deeplake.ingest_yolo( >>> 's3://bucket/data_directory', >>> dest='s3://bucket/dataset_name', >>> image_params={'name': 'image_links', 'htype': 'link[image]'}, >>> image_creds_key='my_s3_managed_credentials', >>> src_creds=aws_creds, # Can also be inferred from environment >>> dest_creds=aws_creds, # Can also be inferred from environment >>> connect_kwargs={'creds_key': 'my_s3_managed_credentials', 'org_id': 'org_id'}, >>> num_workers=4, >>> ) Copy to clipboard Parameters data_directory (str, pathlib.Path) – The path to the directory containing the data (images files and annotation files(see ‘annotations_directory' input for specifying annotations in a separate directory). dest (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://org_id/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line), or pass in a token using the ‘token' parameter. an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. class_names_file – Path to the file containing the class names on separate lines. This is typically a file titled classes.names. 
annotations_directory (Optional[Union[str, pathlib.Path]]) – Path to directory containing the annotations. If specified, the ‘data_directory' will not be examined for annotations. allow_no_annotation (bool) – Flag to determine whether missing annotation files corresponding to an image should be treated as empty annotations. Set to False by default. image_params (Optional[Dict]) – A dictionary containing parameters for the images tensor. label_params (Optional[Dict]) – A dictionary containing parameters for the labels tensor. coordinates_params (Optional[Dict]) – A dictionary containing parameters for the coordinates tensor. This tensor either contains bounding boxes or polygons. src_creds (Optional[Union[str, Dict]]) – Credentials to access the source data",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"Return type Dataset Raises AgreementError – When agreement is rejected UserNotLoggedInException – When user is not logged in InvalidTokenException – If the specified token is invalid TokenPermissionError – When there are permission or other errors related to token CheckoutError – If version address specified in the path cannot be found DatasetCorruptError – If loading the dataset failed due to corruption and reset is not True ValueError – If version is specified in the path when creating a dataset, or if the org id is provided but the dataset is not local ReadOnlyModeError – If reset is attempted in read-only mode LockedException – When attempting to open a dataset for writing when it is locked by another machine Exception – Re-raises caught exception if reset cannot fix the issue Danger Setting overwrite to True will delete all of your data if it exists! Be very careful when setting this parameter. Warning Setting access_method to download will overwrite the local copy of the dataset if it was previously downloaded. Note Any changes made to the dataset in download / local mode will only be made to the local copy and will not be reflected in the original dataset. deeplake.empty(path: Union[str, Path], runtime: Optional[dict] = None, overwrite: bool = False, public: bool = False, memory_cache_size: int = 2000, local_cache_size: int = 0, creds: Optional[Union[str, Dict]] = None, token: Optional[str] = None, org_id: Optional[str] = None, lock_enabled: Optional[bool] = True, lock_timeout: Optional[int] = 0, verbose: bool = True, index_params: Optional[Dict[str, Union[int, str]]] = None) → Dataset Creates an empty dataset. Parameters path (str, pathlib.Path) – The full path to the dataset. It can be: a Deep Lake cloud path of the form hub://org_id/dataset_name. Requires registration with Deep Lake. an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. runtime (dict) – Parameters for creating a dataset in the Deep Lake Tensor Database. Only applicable for paths of the form hub://org_id/dataset_name, and runtime must be {'tensor_db': True}. overwrite (bool) – If set to True this overwrites the dataset if it already exists. Defaults to False. public (bool) – Defines if the dataset will have public access.
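As a small sketch of the deeplake.empty options above (the org and dataset names are placeholders): creating an empty dataset in the Managed Tensor Database requires a hub:// path and runtime={'tensor_db': True}.

import deeplake

ds = deeplake.empty(
    'hub://my_org/my_dataset',       # placeholder hub:// path; registration with Deep Lake is required
    runtime={'tensor_db': True},     # only valid for hub:// paths, per the runtime description above
    overwrite=False,                 # overwrite=True would delete any existing dataset at this path
    public=False,
)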
Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. memory_cache_size (int) – The size of the memory cache to be used in MB. local_cache_size (int) – The size of the local filesystem cache to be used in MB",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,">>> ds = deeplake.ingest_coco( >>> 's3://bucket/images/directory', >>> 's3://bucket/annotation/file1.json', >>> dest='s3://bucket/dataset_name', >>> ignore_one_group=True, >>> ignore_keys=['area', 'image_id', 'id'], >>> image_settings={'name': 'images', 'htype': 'link[image]', 'sample_compression': 'jpeg'}, >>> image_creds_key='my_s3_managed_credentials', >>> src_creds=aws_creds, # Can also be inferred from environment >>> dest_creds=aws_creds, # Can also be inferred from environment >>> connect_kwargs={'creds_key': 'my_s3_managed_credentials', 'org_id': 'org_id'}, >>> num_workers=4, >>> ) Copy to clipboard Parameters images_directory (str, pathlib.Path) – The path to the directory containing images . annotation_files (str, pathlib.Path, List[str]) – Path to JSON annotation files in COCO format. dest (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://org_id/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line), or pass in a token using the ‘token' parameter. an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. key_to_tensor_mapping (Optional[Dict]) – A one-to-one mapping between COCO keys and Dataset tensor names. file_to_group_mapping (Optional[Dict]) – A one-to-one mapping between COCO annotation file names and Dataset group names. ignore_one_group (bool) – Skip creation of group in case of a single annotation file. Set to False by default. ignore_keys (List[str]) – A list of COCO keys to ignore. image_params (Optional[Dict]) – A dictionary containing parameters for the images tensor. image_creds_key (Optional[str]) – The name of the managed credentials to use for accessing the images in the linked tensor (is applicable). src_creds (Optional[Union[str, Dict]]) – Credentials to access the source data. If not provided, will be inferred from the environment. dest_creds (Optional[Union[str, Dict]]) – The string ENV or a dictionary containing credentials used to access the destination path of the dataset. inspect_limit (int) – The maximum number of samples to inspect in the annotations json, in order to generate the set of COCO annotation keys. Set to 1000000 by default. progressbar (bool) – Enables or disables ingestion progress bar. Set to True by default. shuffle (bool) – Shuffles the input data prior to ingestion. Set to False by default. num_workers (int) – The number of workers to use for ingestion. Set to 0 by default. token (Optional[str]) – The token to use for accessing the dataset and/or connecting it to Deep Lake. connect_kwargs (Optional[Dict]) – If specified, the dataset will be connected to Deep Lake, and connect_kwargs will be passed to Dataset.connect. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. 
See deeplake.empty(). Returns The Dataset created from images and COCO annotations",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"dest_creds (Optional[Union[str, Dict]]) – The string ENV or a dictionary containing credentials used to access the destination path of the dataset . kaggle_credentials (dict) – A dictionary containing kaggle credentials {'username':'YOUR_USERNAME', 'key': 'YOUR_KEY'}. If None, environment variables/the kaggle.json file will be used if available. progressbar (bool) – Enables or disables ingestion progress bar. Set to True by default. summary (bool) – Generates ingestion summary. Set to True by default. shuffle (bool) – Shuffles the input data prior to ingestion. Since data arranged in folders by class is highly non-random, shuffling is important in order to produce optimal results when training. Defaults to True. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. See deeplake.dataset(). Returns New dataset object with structured dataset. Return type Dataset Raises SamePathException – If the source and destination path are same. Note Currently only local source paths and image classification datasets are supported for automatic ingestion. deeplake.ingest_dataframe(src, dest: Union[str, Path], column_params: Optional[Dict] = None, src_creds: Optional[Union[str, Dict]] = None, dest_creds: Optional[Union[str, Dict]] = None, creds_key: Optional[Dict] = None, progressbar: bool = True, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) Convert pandas dataframe to a Deep Lake Dataset. The contents of the dataframe can be parsed literally, or can be treated as links to local or cloud files. Examples >>> # Ingest local data in COCO format to a Deep Lake dataset stored in Deep Lake storage. Copy to clipboard >>> ds = deeplake.ingest_coco( >>> '<path/to/images/directory>', >>> ['path/to/annotation/file1.json', 'path/to/annotation/file2.json'], >>> dest='hub://org_id/dataset', >>> key_to_tensor_mapping={'category_id': 'labels', 'bbox': 'boxes'}, >>> file_to_group_mapping={'file1.json': 'group1', 'file2.json': 'group2'}, >>> ignore_keys=['area', 'image_id', 'id'], >>> num_workers=4, >>> ) >>> # Ingest data from your cloud into another Deep Lake dataset in your cloud, and connect that dataset to the Deep Lake backend. Copy to clipboard >>> # Ingest data from a DataFrame into a Deep Lake dataset stored in Deep Lake storage. >>> ds = deeplake.ingest_dataframe( >>> df, >>> dest='hub://org_id/dataset', >>> ) >>> # Ingest data from a DataFrame into a Deep Lake dataset stored in Deep Lake storage. The filenames in `df_column_with_cloud_paths` will be used as the filenames for loading data into the dataset. >>> ds = deeplake.ingest_dataframe( >>> df, >>> dest='hub://org_id/dataset', >>> column_params={'df_column_with_cloud_paths': {'name': 'images', 'htype': 'image'}}, >>> src_creds=aws_creds >>> ) >>> # Ingest data from a DataFrame into a Deep Lake dataset stored in Deep Lake storage. The filenames in `df_column_with_cloud_paths` will be used as the filenames for linked data in the dataset. >>> ds = deeplake.ingest_dataframe( >>> df, >>> dest='hub://org_id/dataset', >>> column_params={'df_column_with_cloud_paths': {'name': 'image_links', 'htype': 'link[image]'}}, >>> creds_key='my_s3_managed_credentials' >>> ) >>> # Ingest data from a DataFrame into a Deep Lake dataset stored in your cloud, and connect that dataset to the Deep Lake backend. 
The filenames in `df_column_with_cloud_paths` will be used as the filenames for linked data in the dataset",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"If not provided, will be inferred from the environment. dest_creds (Optional[Union[str, Dict]]) – The string ENV or a dictionary containing credentials used to access the destination path of the dataset. image_creds_key (Optional[str]) – creds_key for linked tensors, applicable if the htype for the images tensor is specified as ‘link[image]' in the ‘image_params' input. inspect_limit (int) – The maximum number of annotations to inspect, in order to infer whether they are bounding boxes or polygons. This input is ignored if the htype is specified in the ‘coordinates_params'. progressbar (bool) – Enables or disables ingestion progress bar. Set to True by default. shuffle (bool) – Shuffles the input data prior to ingestion. Set to False by default. num_workers (int) – The number of workers to use for ingestion. Set to 0 by default. token (Optional[str]) – The token to use for accessing the dataset and/or connecting it to Deep Lake. connect_kwargs (Optional[Dict]) – If specified, the dataset will be connected to Deep Lake, and connect_kwargs will be passed to Dataset.connect. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. See deeplake.empty(). Returns The Dataset created from the images and YOLO annotations. Return type Dataset Raises IngestionError – If annotations are not found for all the images and ‘allow_no_annotation' is False. deeplake.ingest_kaggle(tag: str, src: Union[str, Path], dest: Union[str, Path], exist_ok: bool = False, images_compression: str = 'auto', dest_creds: Optional[Union[str, Dict]] = None, kaggle_credentials: Optional[dict] = None, progressbar: bool = True, summary: bool = True, shuffle: bool = True, **dataset_kwargs) → Dataset Download and ingest a kaggle dataset and store it as a structured dataset to destination. Parameters tag (str) – Kaggle dataset tag. Example: 'coloradokb/dandelionimages' points to https://www.kaggle.com/coloradokb/dandelionimages src (str, pathlib.Path) – Local path to where the raw kaggle dataset will be downloaded to. dest (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. exist_ok (bool) – If the kaggle dataset was already downloaded and exist_ok is True, ingestion will proceed without error. images_compression (str) – For image classification datasets, this compression will be used for the images tensor. If images_compression is 'auto', compression will be automatically determined by the most common extension in the directory",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"}), ... validation: Dataset({ ... features: ['data'] ... }), ... test: Dataset({ ... features: ['data'] ... }), ...
} it will be converted to a Deep Lake Dataset with tensors ['train/data', 'validation/data', 'test/data']. Features of the type Sequence(feature=Value(dtype='string')) are not supported. Columns of such type are skipped. deeplake.load(path: Union[str, Path], read_only: Optional[bool] = None, memory_cache_size: int = 2000, local_cache_size: int = 0, creds: Optional[Union[dict, str]] = None, token: Optional[str] = None, org_id: Optional[str] = None, verbose: bool = True, access_method: str = 'stream', unlink: bool = False, reset: bool = False, check_integrity: bool = True, lock_timeout: Optional[int] = 0, lock_enabled: Optional[bool] = True, index_params: Optional[Dict[str, Union[int, str]]] = None) → Dataset Loads an existing dataset Examples >>> ds = deeplake.load('hub://username/dataset') >>> ds = deeplake.load('s3://mybucket/my_dataset') >>> ds = deeplake.load('./datasets/my_dataset') Loading to a specific version: >>> ds = deeplake.load('hub://username/dataset@new_branch') >>> ds = deeplake.load('hub://username/dataset@3e49cded62b6b335c74ff07e97f8451a37aca7b2') >>> my_commit_id = '3e49cded62b6b335c74ff07e97f8451a37aca7b2' >>> ds = deeplake.load(f'hub://username/dataset@{my_commit_id}') Parameters path (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. Loading to a specific version: You can also specify a commit_id or branch to load the dataset to that version directly by using the @ symbol. The path will then be of the form hub://username/dataset@{branch} or hub://username/dataset@{commit_id}. See examples above. read_only (bool, optional) – Opens dataset in read only mode if this is passed as True. Defaults to False. Datasets stored on Deep Lake cloud that your account does not have write access to will automatically open in read mode. memory_cache_size (int) – The size of the memory cache to be used in MB. local_cache_size (int) – The size of the local filesystem cache to be used in MB. creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables.
This is also the case when creds is not passed for cloud datasets",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,">>> ds = deeplake.ingest_dataframe( >>> df, >>> dest='s3://bucket/dataset_name', >>> column_params={'df_column_with_cloud_paths': {'name': 'image_links', 'htype': 'link[image]'}}, >>> creds_key='my_s3_managed_credentials' >>> connect_kwargs={'creds_key': 'my_s3_managed_credentials', 'org_id': 'org_id'}, >>> ) Copy to clipboard Parameters src (pd.DataFrame) – The pandas dataframe to be converted . dest (str, pathlib.Path) – A Dataset or The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. column_params (Optional[Dict]) – A dictionary containing parameters for the tensors corresponding to the dataframe columns. src_creds (Optional[Union[str, Dict]]) – Credentials to access the source data. If not provided, will be inferred from the environment. dest_creds (Optional[Union[str, Dict]]) – The string ENV or a dictionary containing credentials used to access the destination path of the dataset. creds_key (Optional[str]) – creds_key for linked tensors, applicable if the htype any tensor is specified as ‘link[…]' in the ‘column_params' input. progressbar (bool) – Enables or disables ingestion progress bar. Set to True by default. token (Optional[str]) – The token to use for accessing the dataset. connect_kwargs (Optional[Dict]) – A dictionary containing arguments to be passed to the dataset connect method. See Dataset.connect(). **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. See deeplake.empty(). Returns New dataset created from the dataframe. Return type Dataset Raises Exception – If src is not a valid pandas dataframe object. deeplake.ingest_huggingface(src, dest, use_progressbar=True, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) → Dataset Converts Hugging Face datasets to Deep Lake format. Parameters src (hfDataset, DatasetDict) – Hugging Face Dataset or DatasetDict to be converted. Data in different splits of a DatasetDict will be stored under respective tensor groups. dest (Dataset, str, pathlib.Path) – Destination dataset or path to it. use_progressbar (bool) – Defines if progress bar should be used to show conversion progress. token (Optional[str]) – The token to use for accessing the dataset and/or connecting it to Deep Lake. connect_kwargs (Optional[Dict]) – If specified, the dataset will be connected to Deep Lake, and connect_kwargs will be passed to Dataset.connect. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. See deeplake.empty(). Returns The destination Deep Lake dataset. Return type Dataset Raises ValueError – If dest is not a path or a Deep Lake Dataset. Note if DatasetDict looks like: >>> { ... train: Dataset({ ... 
features: ['data'] ..",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones . token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. org_id (str, Optional) – Organization id to be used for enabling high-performance features. Only applicable for local datasets. verbose (bool) – If True, logs will be printed. Defaults to True. access_method (str) – The access method to use for the dataset. Can be: 'stream' Streams the data from the dataset i.e. only fetches data when required. This is the default value. 'download' Downloads the data to the local filesystem to the path specified in environment variable DEEPLAKE_DOWNLOAD_PATH. This will overwrite DEEPLAKE_DOWNLOAD_PATH. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set or if the dataset does not exist. The ‘download' access method can be modified to specify num_workers and/or scheduler. For example: ‘download:2:processed' will use 2 workers and use processed scheduler, while ‘download:3' will use 3 workers and default scheduler (threaded), and ‘download:processed' will use a single worker and use processed scheduler. 'local' Downloads the dataset if it doesn't already exist, otherwise loads from local storage. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set. The ‘local' access method can be modified to specify num_workers and/or scheduler to be used in case dataset needs to be downloaded. If dataset needs to be downloaded, ‘local:2:processed' will use 2 workers and use processed scheduler, while ‘local:3' will use 3 workers and default scheduler (threaded), and ‘local:processed' will use a single worker and use processed scheduler. unlink (bool) – Downloads linked samples if set to True. Only applicable if access_method is download or local. Defaults to False. reset (bool) – If the specified dataset cannot be loaded due to a corrupted HEAD state of the branch being loaded, setting reset=True will reset HEAD changes and load the previous version. check_integrity (bool) – If the param is True it will do integrity check during dataset loading otherwise the check is not performed Returns Dataset loaded using the arguments provided. Return type Dataset Raises DatasetHandlerError – If a Dataset does not exist at the given path",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"Examples >>> deeplake.rename('hub://username/image_ds', 'hub://username/new_ds') >>> deeplake.rename('s3://mybucket/my_ds', 's3://mybucket/renamed_ds') Copy to clipboard Parameters old_path (str, pathlib.Path) – The path to the dataset to be renamed . new_path (str, pathlib.Path) – Path to the dataset after renaming. creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. 
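To complement the deeplake.ingest_huggingface description above, here is a minimal sketch. It assumes the Hugging Face datasets package is installed; the dataset name and destination path are placeholders:

from datasets import load_dataset
import deeplake

# A DatasetDict with 'train' and 'test' splits; splits are stored under tensor groups,
# e.g. 'train/text', 'train/label', 'test/text', 'test/label'.
hf_ds = load_dataset('imdb')

ds = deeplake.ingest_huggingface(hf_ds, dest='./imdb_deeplake', use_progressbar=True)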
This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. Returns The renamed Dataset. Return type Dataset Raises DatasetHandlerError – If a Dataset does not exist at the given path or if new path is to a different directory. deeplake.copy(src: Union[str, Path, Dataset], dest: Union[str, Path], runtime: Optional[dict] = None, tensors: Optional[List[str]] = None, overwrite: bool = False, src_creds=None, dest_creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, **kwargs) Copies dataset at src to dest. Version control history is not included. Parameters src (str, Dataset, pathlib.Path) – The Dataset or the path to the dataset to be copied. dest (str, pathlib.Path) – Destination path to copy to. runtime (dict) – Parameters for Activeloop DB Engine. Only applicable for hub:// paths. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at dest, it will be overwritten. Defaults to False. src_creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. dest_creds (dict, optional) – creds required to create / overwrite datasets at dest. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. num_workers (int) – The number of workers to use for copying",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"verbose (bool) – If True, logs will be printed . Defaults to True. **kwargs – Additional keyword arguments Returns New dataset object. Return type Dataset Raises DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. TypeError – If source is not a dataset. UnsupportedParameterException – If parameter that is no longer supported is beeing called. DatasetCorruptError – If loading source dataset fails with DatasetCorruptedError deeplake.connect(src_path: str, creds_key: str, dest_path: Optional[str] = None, org_id: Optional[str] = None, ds_name: Optional[str] = None, token: Optional[str] = None) → Dataset Connects dataset at src_path to Deep Lake via the provided path. 
Examples >>> # Connect an s3 dataset >>> ds = deeplake.connect(src_path='s3://bucket/dataset', dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token') >>> # or >>> ds = deeplake.connect(src_path='s3://bucket/dataset', org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Copy to clipboard Parameters src_path (str) – Cloud path to the source dataset. Can be: an s3 path like s3://bucket/path/to/dataset. a gcs path like gcs://bucket/path/to/dataset. an azure path like az://account_name/container/path/to/dataset. creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be infered from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Returns The connected Deep Lake dataset. Return type Dataset Raises InvalidSourcePathError – If the src_path is not a valid s3, gcs or azure path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. TokenPermissionError – If the user does not have permission to create a dataset in the specified organization. deeplake.exists(path: Union[str, Path], creds: Optional[Union[str, Dict]] = None, token: Optional[str] = None) → bool Checks if a dataset exists at the given path. Parameters path (str, pathlib.Path) – the path which needs to be checked. creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. 
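In practice, deeplake.exists is often used as a guard before creating or loading a dataset. A minimal sketch with a placeholder path:

import deeplake

path = 'hub://my_org/my_dataset'  # placeholder

if deeplake.exists(path):
    ds = deeplake.load(path)
else:
    ds = deeplake.empty(path)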
This is optional, tokens are normally autogenerated",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"AgreementError – When agreement is rejected UserNotLoggedInException – When user is not logged in InvalidTokenException – If the specified toke is invalid TokenPermissionError – When there are permission or other errors related to token CheckoutError – If version address specified in the path cannot be found DatasetCorruptError – If loading the dataset failed due to corruption and reset is not True ReadOnlyModeError – If reset is attempted in read-only mode LockedException – When attempting to open a dataset for writing when it is locked by another machine ValueError – If org_id is specified for a non-local dataset Exception – Re-raises caught exception if reset cannot fix the issue ValueError – If the org id is provided but the dataset is not local Warning Setting access_method to download will overwrite the local copy of the dataset if it was previously downloaded . Note Any changes made to the dataset in download / local mode will only be made to the local copy and will not be reflected in the original dataset. deeplake.delete(path: Union[str, Path], force: bool = False, large_ok: bool = False, creds: Optional[Union[dict, str]] = None, token: Optional[str] = None, verbose: bool = False) → None Deletes a dataset at a given path. Parameters path (str, pathlib.Path) – The path to the dataset to be deleted. force (bool) – Delete data regardless of whether it looks like a deeplake dataset. All data at the path will be removed if set to True. large_ok (bool) – Delete datasets larger than 1GB. Disabled by default. creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. verbose (bool) – If True, logs will be printed. Defaults to True. Raises DatasetHandlerError – If a Dataset does not exist at the given path and force = False. UserNotLoggedInException – When user is not logged in. NotImplementedError – When attempting to delete a managed view. ValueError – If version is specified in the path Warning This is an irreversible operation. Data once deleted cannot be recovered. deeplake.rename(old_path: Union[str, Path], new_path: Union[str, Path], creds: Optional[Union[dict, str]] = None, token: Optional[str] = None) → Dataset Renames dataset at old_path to new_path",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"Defaults to 0 . When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. 
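Tying together the deeplake.delete, deeplake.rename, and deeplake.copy descriptions above, a minimal housekeeping sketch (all paths are placeholders):

import deeplake

# Copy a dataset without version control history (use deeplake.deepcopy to include it).
deeplake.copy('hub://my_org/source_ds', 'hub://my_org/backup_ds', num_workers=4)

# Rename a dataset within the same storage location.
deeplake.rename('hub://my_org/source_ds', 'hub://my_org/renamed_ds')

# Irreversibly delete a dataset; large_ok=True is required for datasets larger than 1GB.
deeplake.delete('hub://my_org/old_ds', large_ok=True)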
progressbar (bool) – Displays a progress bar if True (default). **kwargs (dict) – Additional keyword arguments Returns New dataset object. Return type Dataset Raises DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. UnsupportedParameterException – If a parameter that is no longer supported is specified. DatasetCorruptError – If loading source dataset fails with DatasetCorruptedError. deeplake.deepcopy(src: Union[str, Path, Dataset], dest: Union[str, Path], runtime: Optional[Dict] = None, tensors: Optional[List[str]] = None, overwrite: bool = False, src_creds=None, dest_creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False, verbose: bool = True, **kwargs) Copies dataset at src to dest including version control history. Parameters src (str, pathlib.Path, Dataset) – The Dataset or the path to the dataset to be copied. dest (str, pathlib.Path) – Destination path to copy to. runtime (dict) – Parameters for Activeloop DB Engine. Only applicable for hub:// paths. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. src_creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. dest_creds (dict, optional) – creds required to create / overwrite datasets at dest. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"Returns A boolean confirming whether the dataset exists or not at the given path . Raises ValueError – If version is specified in the path deeplake.read(path: Union[str, Path], verify: bool = False, creds: Optional[Dict] = None, compression: Optional[str] = None, storage: Optional[StorageProvider] = None) → Sample Utility that reads raw data from supported files into Deep Lake format. Recompresses data into format required by the tensor if permitted by the tensor htype. Simply copies the data in the file if file format matches sample_compression of the tensor, thus maximizing upload speeds. 
Examples >>> ds.create_tensor('images', htype='image', sample_compression='jpeg') >>> ds.images.append(deeplake.read('path/to/cat.jpg')) >>> ds.images.shape (1, 399, 640, 3) Copy to clipboard >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.videos.append(deeplake.read('path/to/video.mp4')) >>> ds.videos.shape (1, 136, 720, 1080, 3) Copy to clipboard >>> ds.create_tensor('images', htype='image', sample_compression='jpeg') >>> ds.images.append(deeplake.read('https://picsum.photos/200/300')) >>> ds.images[0].shape (300, 200, 3) Copy to clipboard Supported file types: Image: 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', 'webp', 'wmf', 'xbm' Audio: 'flac', 'mp3', 'wav' Video: 'mp4', 'mkv', 'avi' Dicom: 'dcm' Nifti: 'nii', 'nii.gz' Copy to clipboard Parameters path (str) – Path to a supported file. verify (bool) – If True, contents of the file are verified. creds (optional, Dict) – Credentials for s3, gcp and http urls. compression (optional, str) – Format of the file. Only required if path does not have an extension. storage (optional, StorageProvider) – Storage provider to use to retrieve remote files. Useful if multiple files are being read from same storage to minimize overhead of creating a new provider. Returns Sample object. Call sample.array to get the np.ndarray. Return type Sample Note No data is actually loaded until you try to get a property of the returned Sample. This is useful for passing along to Tensor.append and Tensor.extend. deeplake.link(path: str, creds_key: Optional[str] = None) → LinkedSample Utility that stores a link to raw data. Used to add data to a Deep Lake Dataset without copying it. See Link htype. Supported file types: Image: 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', 'webp', 'wmf', 'xbm' Audio: 'flac', 'mp3', 'wav' Video: 'mp4', 'mkv', 'avi' Dicom: 'dcm' Nifti: 'nii', 'nii.gz' Copy to clipboard Parameters path (str) – Path to a supported file. creds_key (optional, str) – The credential key to use to read data for this sample. The actual credentials are fetched from the dataset. Returns LinkedSample object that stores path and creds. Return type LinkedSample Examples >>> ds = deeplake.dataset('test/test_ds') >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpeg') >>> ds.images.append(deeplake.link('https://picsum.photos/200/300')) Copy to clipboard See more examples here. deeplake.link_tiled(path_array: ndarray, creds_key: Optional[str] = None) → LinkedTiledSample Utility that stores links to multiple images that act as tiles and together form a big image. These images must all have the exact same dimensions. Used to add data to a Deep Lake Dataset without copying it. See Link htype",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"Supported file types: Image: 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', 'webp', 'wmf', 'xbm' Copy to clipboard Parameters path_array (np.ndarray) – N dimensional array of paths to the data, with paths corresponding to respective tiles . The array must have dtype=object and have string values. Each string must point to an image file with the same dimensions. creds_key (optional, str) – The credential key to use to read data for this sample. The actual credentials are fetched from the dataset. Returns LinkedTiledSample object that stores path_array and creds. 
Return type LinkedTiledSample Examples >>> ds = deeplake.dataset('test/test_ds') >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpeg') >>> arr = np.empty((10, 10), dtype=object) >>> for j, i in itertools.product(range(10), range(10)): ... arr[j, i] = f's3://my_bucket/my_image_{j}_{i}.jpeg' ... >>> ds.images.append(deeplake.link_tiled(arr, creds_key='my_s3_key')) >>> # If all images are 1000x1200x3, we now have a 10000x12000x3 image in our dataset. deeplake.tiled(sample_shape: Tuple[int, ...], tile_shape: Optional[Tuple[int, ...]] = None, dtype: Union[str, dtype] = dtype('uint8')) Allocates an empty sample of shape sample_shape, broken into tiles of shape tile_shape (except for edge tiles). Example >>> with ds: ... ds.create_tensor('image', htype='image', sample_compression='png') ... ds.image.append(deeplake.tiled(sample_shape=(1003, 1103, 3), tile_shape=(10, 10, 3))) ... ds.image[0][-217:, :212, 1:] = np.random.randint(0, 256, (217, 212, 2), dtype=np.uint8) Parameters sample_shape (Tuple[int, ...]) – Full shape of the sample. tile_shape (Optional, Tuple[int, ...]) – The sample will be stored as tiles where each tile will have this shape (except edge tiles). If not specified, it will be computed such that each tile is close to half of the tensor's max_chunk_size (after compression). dtype (Union[str, np.dtype]) – Dtype for the sample array. Default uint8. Returns A PartialSample instance which can be appended to a Tensor. Return type PartialSample deeplake.compute(fn, name: Optional[str] = None) → Callable[[...], ComputeFunction] Compute is a decorator for functions. The decorated function should have at least 2 arguments; the first two correspond to sample_in and samples_out. There can be as many other arguments as required. The output should be appended/extended to the second argument in a Deep Lake-like syntax. Any value returned by the fn will be ignored. Example: @deeplake.compute def my_fn(sample_in: Any, samples_out, my_arg0, my_arg1=0): samples_out.my_tensor.append(my_arg0 * my_arg1) # This transform can be used via the eval method in one of these 2 ways: # Directly evaluating the method # here arg0 and arg1 correspond to the 3rd and 4th argument in my_fn my_fn(arg0, arg1).eval(data_in, ds_out, scheduler='threaded', num_workers=5) # As a part of a Transform pipeline containing other functions pipeline = deeplake.compose([my_fn(a, b), another_function(x=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) The eval method evaluates the pipeline/transform function. It has the following arguments: data_in: Input passed to the transform to generate output dataset. It should support __getitem__ and __len__. This can be a Deep Lake dataset. ds_out (Dataset, optional): The dataset object to which the transform will get written
When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str): The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool): Displays a progress bar if True (default). skip_ok (bool): If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. ignore_errors (bool): If True, input samples that causes transform to fail will be skipped and the errors will be ignored if possible. It raises the following errors: InvalidInputDataError: If data_in passed to transform is invalid. It should support __getitem__ and __len__ operations. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as data_in will also raise this. InvalidOutputDatasetError: If all the tensors of ds_out passed to transform don't have the same length. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as ds_out will also raise this. TensorMismatchError: If one or more of the outputs generated during transform contain different tensors than the ones present in ‘ds_out' provided to transform. UnsupportedSchedulerError: If the scheduler passed is not recognized. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. TransformError: All other exceptions raised if there are problems while running the pipeline.",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"If this is not provided, data_in will be overwritten if it is a Deep Lake dataset, otherwise error will be raised . It should have all keys being generated in output already present as tensors. It's initial state should be either: Empty i.e. all tensors have no samples. In this case all samples are added to the dataset. All tensors are populated and have same length. In this case new samples are appended to the dataset. num_workers (int): The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str): The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool): Displays a progress bar if True (default). skip_ok (bool): If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. check_lengths (bool): If True, checks whether ds_out has tensors of same lengths initially. pad_data_in (bool): If True, pads tensors of data_in to match the length of the largest tensor in data_in. Defaults to False. ignore_errors (bool): If True, input samples that causes transform to fail will be skipped and the errors will be ignored if possible. Note pad_data_in is only applicable if data_in is a Deep Lake dataset. It raises the following errors: InvalidInputDataError: If data_in passed to transform is invalid. It should support __getitem__ and __len__ operations. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as data_in will also raise this. 
InvalidOutputDatasetError: If all the tensors of ds_out passed to transform don't have the same length. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as ds_out will also raise this. TensorMismatchError: If one or more of the outputs generated during transform contain different tensors than the ones present in ds_out provided to transform. UnsupportedSchedulerError: If the scheduler passed is not recognized. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. TransformError: All other exceptions raised if there are problems while running the pipeline. deeplake.compose(functions: List[ComputeFunction]) Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Example: pipeline = deeplake.compose([my_fn(a=3), another_function(b=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) Copy to clipboard The eval method evaluates the pipeline/transform function. It has the following arguments: data_in: Input passed to the transform to generate output dataset. It should support __getitem__ and __len__. This can be a Deep Lake dataset. ds_out (Dataset, optional): The dataset object to which the transform will get written",activeloop deeplake,https://docs.deeplake.ai/en/latest/deeplake.html#module-deeplake,"creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path . - If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. org_id (str, Optional) – Organization id to be used for enabling high-performance features. Only applicable for local datasets. public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. verbose (bool) – If True, logs will be printed. Defaults to True. Returns New dataset object. Return type Dataset Raises ValueError – If org_id is specified for a non-local dataset. deeplake.ingest_classification(src: Union[str, Path], dest: Union[str, Path], image_params: Optional[Dict] = None, label_params: Optional[Dict] = None, dest_creds: Optional[Union[str, Dict]] = None, progressbar: bool = True, summary: bool = True, num_workers: int = 0, shuffle: bool = True, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) → Dataset Ingest a dataset of images from a local folder to a Deep Lake Dataset. Images should be stored in subfolders by class name. Parameters src (str, pathlib.Path) – Local path to where the unstructured dataset of images is stored or path to csv file. dest (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://org_id/datasetname. 
To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. image_params (Optional[Dict]) – A dictionary containing parameters for the images tensor. label_params (Optional[Dict]) – A dictionary containing parameters for the labels tensor. dest_creds (Optional[Union[str, Dict]]) – The string ENV or a dictionary containing credentials used to access the destination path of the dataset. progressbar (bool) – Enables or disables ingestion progress bar. Defaults to True",activeloop deeplake.VectorStore,https://docs.deeplake.ai/en/latest/deeplake.VectorStore.html#deeplake-vectorstore," class deeplake.core.vectorstore.deeplake_vectorstore.VectorStore(path: ~typing.Union[str, ~pathlib.Path], tensor_params: ~typing.List[~typing.Dict[str, object]] = [{'name': 'text', 'htype': 'text', 'create_id_tensor': False, 'create_sample_info_tensor': False, 'create_shape_tensor': False}, {'name': 'metadata', 'htype': 'json', 'create_id_tensor': False, 'create_sample_info_tensor': False, 'create_shape_tensor': False}, {'name': 'embedding', 'htype': 'embedding', 'dtype': <class 'numpy.float32'>, 'create_id_tensor': False, 'create_sample_info_tensor': False, 'create_shape_tensor': True, 'max_chunk_size': 64000000}, {'name': 'id', 'htype': 'text', 'create_id_tensor': False, 'create_sample_info_tensor': False, 'create_shape_tensor': False}], embedding_function: ~typing.Optional[~typing.Any] = None, read_only: ~typing.Optional[bool] = None, ingestion_batch_size: int = 1000, index_params: ~typing.Optional[~typing.Dict[str, ~typing.Union[int, str]]] = None, num_workers: int = 0, exec_option: str = 'auto', token: ~typing.Optional[str] = None, overwrite: bool = False, verbose: bool = True, runtime: ~typing.Optional[~typing.Dict] = None, creds: ~typing.Optional[~typing.Union[str, ~typing.Dict]] = None, org_id: ~typing.Optional[str] = None, logger: ~logging.Logger = <Logger deeplake.core.vectorstore.deeplake_vectorstore (INFO)>, branch: str = 'main', **kwargs: ~typing.Any) Bases: object Base class for VectorStore __init__(path: ~typing.Union[str, ~pathlib.Path], tensor_params: ~typing.List[~typing.Dict[str, object]] = [{'name': 'text', 'htype': 'text', 'create_id_tensor': False, 'create_sample_info_tensor': False, 'create_shape_tensor': False}, {'name': 'metadata', 'htype': 'json', 'create_id_tensor': False, 'create_sample_info_tensor': False, 'create_shape_tensor': False}, {'name': 'embedding', 'htype': 'embedding', 'dtype': <class 'numpy.float32'>, 'create_id_tensor': False, 'create_sample_info_tensor': False, 'create_shape_tensor': True, 'max_chunk_size': 64000000}, {'name': 'id', 'htype': 'text', 'create_id_tensor': False, 'create_sample_info_tensor': False, 'create_shape_tensor': False}], embedding_function: ~typing.Optional[~typing.Any] = None, read_only: ~typing.Optional[bool] = None, ingestion_batch_size: int = 1000, index_params: ~typing.Optional[~typing.Dict[str, ~typing.Union[int, str]]] = None, num_workers: int = 0, exec_option: str = 'auto', token: ~typing.Optional[str] = None, overwrite: bool = False, verbose: bool = True, runtime: 
~typing.Optional[~typing.Dict] = None, creds: ~typing.Optional[~typing.Union[str, ~typing.Dict]] = None, org_id: ~typing.Optional[str] = None, logger: ~logging.Logger = <Logger deeplake.core.vectorstore.deeplake_vectorstore (INFO)>, branch: str = 'main', **kwargs: ~typing.Any) → None Creates an empty VectorStore or loads an existing one if it exists at the specified path. Examples >>> # Create a vector store with default tensors >>> data = VectorStore( ... path = './my_vector_store', ... ) >>> # Create a vector store in the Deep Lake Managed Tensor Database >>> data = VectorStore( ... path = 'hub://org_id/dataset_name', ... runtime = {'tensor_db': True}, ... ) >>> # Create a vector store with custom tensors >>> data = VectorStore( ... path = './my_vector_store', ... tensor_params = [{'name': 'text', 'htype': 'text'}, ... {'name': 'embedding_1', 'htype': 'embedding'}, ... {'name': 'embedding_2', 'htype': 'embedding'}, ... {'name': 'source', 'htype': 'text'}, ... {'name': 'metadata', 'htype': 'json'} ... ] ... ) Copy to clipboard Parameters path (str, pathlib.Path) – The full path for storing to the Deep Lake Vector Store. It can be: a Deep Lake cloud path of the form hub://org_id/dataset_name. Requires registration with Deep Lake. an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. tensor_params (List[Dict[str, dict]], optional) – List of dictionaries that contains information about tensors that user wants to create. See create_tensor in Deep Lake API docs for more information",activeloop deeplake.VectorStore,https://docs.deeplake.ai/en/latest/deeplake.VectorStore.html#deeplake-vectorstore,"- If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file . Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. runtime (Dict, optional) – Parameters for creating the Vector Store in Deep Lake's Managed Tensor Database. Not applicable when loading an existing Vector Store. To create a Vector Store in the Managed Tensor Database, set runtime = {'tensor_db': True}. branch (str) – Branch name to use for the Vector Store. Defaults to 'main'. **kwargs (Any) – Additional keyword arguments. Danger Setting overwrite to True will delete all of your data if the Vector Store exists! Be very careful when setting this parameter. add(embedding_function: Optional[Union[Callable, List[Callable]]] = None, embedding_data: Optional[Union[List, List[List]]] = None, embedding_tensor: Optional[Union[str, List[str]]] = None, return_ids: bool = False, rate_limiter: Dict = {'batch_byte_size': 10000, 'bytes_per_minute': 1800000.0, 'enabled': False}, **tensors) → Optional[List[str]] Adding elements to deeplake vector store. 
Tensor names are specified as parameters, and data for each tensor is specified as parameter values. All data must be of equal length. Examples >>> # Dummy data >>> texts = ['Hello', 'World'] >>> embeddings = [[1, 2, 3], [4, 5, 6]] >>> metadatas = [{'timestamp': '01:20'}, {'timestamp': '01:22'}] >>> embedding_fn = lambda x: [[1, 2, 3]] * len(x) >>> embedding_fn_2 = lambda x: [[4, 5]] * len(x) >>> # Directly upload embeddings >>> deeplake_vector_store.add( ... text = texts, ... embedding = embeddings, ... metadata = metadatas, ... ) >>> # Upload embedding via embedding function >>> deeplake_vector_store.add( ... text = texts, ... metadata = metadatas, ... embedding_function = embedding_fn, ... embedding_data = texts, ... ) >>> # Upload embedding via embedding function to a user-defined embedding tensor >>> deeplake_vector_store.add( ... text = texts, ... metadata = metadatas, ... embedding_function = embedding_fn, ... embedding_data = texts, ... embedding_tensor = 'embedding_1', ... ) >>> # Multiple embedding functions (user defined embedding tensors must be specified) >>> deeplake_vector_store.add( ... embedding_tensor = ['embedding_1', 'embedding_2'], ... embedding_function = [embedding_fn, embedding_fn_2], ... embedding_data = [texts, texts], ... ) >>> # Alternative syntax for multiple embedding functions >>> deeplake_vector_store.add( ... text = texts, ... metadata = metadatas, ... embedding_tensor_1 = (embedding_fn, texts), ... embedding_tensor_2 = (embedding_fn_2, texts), ... ) >>> # Add data to fully custom tensors >>> deeplake_vector_store.add( ... tensor_A = [1, 2], ... tensor_B = ['a', 'b'], ... tensor_C = ['some', 'data'], ... embedding_function = embedding_fn, ... embedding_data = texts, ... embedding_tensor = 'embedding_1', ..",activeloop deeplake.VectorStore,https://docs.deeplake.ai/en/latest/deeplake.VectorStore.html#deeplake-vectorstore,"Defaults to DEFAULT_VECTORSTORE_TENSORS. embedding_function (Optional[Any], optional) – Function or class that converts the embeddable data into embeddings. Input to embedding_function is a list of data and output is a list of embeddings. Defaults to None. read_only (bool, optional) – Opens dataset in read-only mode if True. Defaults to False. num_workers (int) – Number of workers to use for parallel ingestion. ingestion_batch_size (int) – Batch size to use for parallel ingestion. index_params (Dict[str, Union[int, str]]) – Dictionary containing information about the vector index that will be created. Defaults to None, which will utilize DEFAULT_VECTORSTORE_INDEX_PARAMS from deeplake.constants. The specified key-values override the default ones. - threshold: The threshold for the dataset size above which an index will be created for the embedding tensor. When the threshold value is set to -1, index creation is turned off. Defaults to -1, which turns off the index. distance_metric: This key specifies the method of calculating the distance between vectors when creating the vector database (VDB) index. It can either be a string that corresponds to a member of the DistanceType enumeration, or the string value itself. If no value is provided, it defaults to 'L2'. 'L2' corresponds to DistanceType.L2_NORM. 'COS' corresponds to DistanceType.COSINE_SIMILARITY. additional_params: Additional parameters for fine-tuning the index. exec_option (str) – Default method for search execution. It could be either 'auto', 'python', 'compute_engine' or 'tensor_db'. Defaults to 'auto'. If None, it's set to 'auto'. 
- auto - Selects the best execution method based on the storage location of the Vector Store. It is the default option. - python - Pure-python implementation that runs on the client and can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged because it can lead to memory issues. - compute_engine - Performant C++ implementation of the Deep Lake Compute Engine that runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. - tensor_db - Performant and fully-hosted Managed Tensor Database that is responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. Store datasets in this database by specifying runtime = {'tensor_db': True} during dataset creation. token (str, optional) – Activeloop token, used for fetching user credentials. This is optional, as tokens are normally autogenerated. Defaults to None. overwrite (bool) – If set to True, this overwrites the Vector Store if it already exists. Defaults to False. verbose (bool) – Whether to print a summary of the dataset created. Defaults to True. creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path",activeloop deeplake.VectorStore,https://docs.deeplake.ai/en/latest/deeplake.VectorStore.html#deeplake-vectorstore,") Parameters embedding_function (Optional[Callable]) – Embedding function used to convert embedding_data into embeddings. Input to embedding_function is a list of data and output is a list of embeddings. Overrides the embedding_function specified when initializing the Vector Store. embedding_data (Optional[List]) – Data to be converted into embeddings using the provided embedding_function. Defaults to None. embedding_tensor (Optional[str]) – Tensor where results from the embedding function will be stored. If None, the embedding tensor is automatically inferred (when possible). Defaults to None. return_ids (bool) – Whether to return added ids as an output of the method. Defaults to False. rate_limiter (Dict) – Rate limiter configuration. Defaults to {'enabled': False, 'bytes_per_minute': MAX_BYTES_PER_MINUTE, 'batch_byte_size': TARGET_BYTE_SIZE}. **tensors – Keyword arguments where the key is the tensor name, and the value is a list of samples that should be uploaded to that tensor. Returns List of ids if return_ids is set to True. Otherwise, None. Return type Optional[List[str]] checkout(branch: str = 'main') → None Checks out the Vector Store to a specific branch. Parameters branch (str) – Branch name to checkout. Defaults to 'main'. commit(allow_empty: bool = True) → None Commits the Vector Store. Parameters allow_empty (bool) – Whether to allow empty commits. Defaults to True. delete(row_ids: Optional[List[str]] = None, ids: Optional[List[str]] = None, filter: Optional[Union[Dict, Callable]] = None, query: Optional[str] = None, exec_option: Optional[str] = None, delete_all: Optional[bool] = None) → bool Deletes the data in the Vector Store. Does not delete the tensor definitions. To delete the vector store completely, first run VectorStore.delete_by_path(). Examples >>> # Delete using ids: >>> data = vector_store.delete(ids) >>> # Delete data using filter >>> data = vector_store.delete( ... filter = {'json_tensor_name': {'key: value'}, 'json_tensor_name_2': {'key_2: value_2'}}, ... ) >>> # Delete data using TQL >>> data = vector_store.delete( ... query = 'select * where .....
<add TQL syntax>', ... exec_option = 'compute_engine', ... ) Parameters ids (Optional[List[str]]) – List of unique ids. Defaults to None. row_ids (Optional[List[str]]) – List of absolute row indices from the dataset. Defaults to None. filter (Union[Dict, Callable], optional) – Filter for finding samples for deletion. - Dict - Key-value search on tensors of htype json, evaluated on an AND basis (a sample must satisfy all key-value filters to be True) Dict = {'tensor_name_1': {'key': value}, 'tensor_name_2': {'key': value}} - Function - Any function that is compatible with deeplake.filter. query (Optional[str]) – TQL Query string for direct evaluation for finding samples for deletion, without application of additional filters. exec_option (Optional[str]) – Method for search execution. It could be either 'python', 'compute_engine' or 'tensor_db'. Defaults to None, which inherits the option from the Vector Store initialization. - python - Pure-python implementation that runs on the client and can be used for data stored anywhere",activeloop deeplake.VectorStore,https://docs.deeplake.ai/en/latest/deeplake.VectorStore.html#deeplake-vectorstore,"WARNING: using this option with big datasets is discouraged because it can lead to memory issues. - compute_engine - Performant C++ implementation of the Deep Lake Compute Engine that runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. - tensor_db - Performant and fully-hosted Managed Tensor Database that is responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. Store datasets in this database by specifying runtime = {'tensor_db': True} during dataset creation. delete_all (Optional[bool]) – Whether to delete all the samples and version history of the dataset. Defaults to None. Returns Returns True if deletion was successful, otherwise it raises a ValueError. Return type bool Raises ValueError – If neither ids, filter, query, nor delete_all are specified, or if an invalid exec_option is provided. static delete_by_path(path: Union[str, Path], token: Optional[str] = None, force: bool = False, creds: Optional[Union[str, Dict]] = None) → None Deletes the Vector Store at the specified path. Parameters path (str, pathlib.Path) – The full path to the Deep Lake Vector Store. token (str, optional) – Activeloop token, used for fetching user credentials. This is optional, as tokens are normally autogenerated. Defaults to None. creds (dict, str, optional) – The string ENV or a dictionary containing credentials used to access the dataset at the path. - If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. - It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. - If ‘ENV' is passed, credentials are fetched from the environment variables. This is also the case when creds is not passed for cloud datasets. For datasets connected to hub cloud, specifying ‘ENV' will override the credentials fetched from Activeloop and use local ones. force (bool) – If True, the path is deleted in a forced manner without raising an exception. Defaults to False. Danger This method permanently deletes all of your data if the Vector Store exists! Be very careful when using this method. 
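The following is a minimal, hedged sketch (not part of the original reference) that ties together the VectorStore creation, add, and delete_by_path calls documented above. The local path './demo_vector_store' and the toy embedding function are placeholders introduced only for illustration; in practice you would substitute a real embedding model.

from deeplake.core.vectorstore import VectorStore

# Placeholder embedding function: maps each input string to a small fixed-size vector.
def embedding_function(texts):
    return [[float(len(t)), 1.0, 0.0] for t in texts]

# Create (or overwrite) a local Vector Store with the default tensors.
vector_store = VectorStore(path='./demo_vector_store', overwrite=True)

# Add two rows; embeddings are computed from embedding_data via embedding_function
# and written to the default 'embedding' tensor. return_ids=True returns the new ids.
ids = vector_store.add(
    text=['Hello', 'World'],
    metadata=[{'timestamp': '01:20'}, {'timestamp': '01:22'}],
    embedding_function=embedding_function,
    embedding_data=['Hello', 'World'],
    return_ids=True,
)

# Permanently remove the Vector Store created above (irreversible).
VectorStore.delete_by_path('./demo_vector_store')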
search(embedding_data: Optional[Union[str, List[str]]] = None, embedding_function: Optional[Callable] = None, embedding: Optional[Union[List[float], ndarray]] = None, k: int = 4, distance_metric: Optional[str] = None, query: Optional[str] = None, filter: Optional[Union[Dict, Callable]] = None, exec_option: Optional[str] = None, embedding_tensor: str = 'embedding', return_tensors: Optional[List[str]] = None, return_view: bool = False, deep_memory: bool = False) → Union[Dict, Dataset] VectorStore search method that combines embedding search, metadata search, and custom TQL search. Examples >>> # Search using an embedding >>> data = vector_store.search( ... embedding = [1, 2, 3], ... exec_option = 'python', ... ) >>> # Search using an embedding function and data for embedding >>> data = vector_store.search( ... embedding_data = 'What does this chatbot do?', ..",activeloop deeplake.VectorStore,https://docs.deeplake.ai/en/latest/deeplake.VectorStore.html#deeplake-vectorstore,"embedding_function = query_embedding_fn, ... exec_option = 'compute_engine', ... ) >>> # Add a filter to your search >>> data = vector_store.search( ... embedding = np.ones(3), ... exec_option = 'python', ... filter = {'json_tensor_name': {'key: value'}, 'json_tensor_name_2': {'key_2: value_2'},...}, # Only valid for exec_option = 'python' ... ) >>> # Search using TQL >>> data = vector_store.search( ... query = 'select * where ..... <add TQL syntax>', ... exec_option = 'tensor_db', # Only valid for exec_option = 'compute_engine' or 'tensor_db' ... ) Parameters embedding (Union[np.ndarray, List[float]], optional) – Embedding representation for performing the search. Defaults to None. The embedding_data and embedding cannot both be specified. embedding_data (List[str]) – Data against which the search will be performed by embedding it using the embedding_function. Defaults to None. The embedding_data and embedding cannot both be specified. embedding_function (Optional[Callable], optional) – Function for converting embedding_data into embedding. Only valid if embedding_data is specified. Input to embedding_function is a list of data and output is a list of embeddings. k (int) – Number of elements to return after running query. Defaults to 4. distance_metric (str) – Distance metric to use for sorting the data. Available options are: 'L1', 'L2', 'COS', 'MAX'. Defaults to None, which uses the same distance metric specified in index_params. If there is no index, it performs linear search using DEFAULT_VECTORSTORE_DISTANCE_METRIC. query (Optional[str]) – TQL Query string for direct evaluation, without application of additional filters or vector search. filter (Union[Dict, Callable], optional) – Additional filter evaluated prior to the embedding search. Dict - Key-value search on tensors of htype json, evaluated on an AND basis (a sample must satisfy all key-value filters to be True) Dict = {'tensor_name_1': {'key': value}, 'tensor_name_2': {'key': value}} Function - Any function that is compatible with Dataset.filter. exec_option (Optional[str]) – Method for search execution. It could be either 'python', 'compute_engine' or 'tensor_db'. Defaults to None, which inherits the option from the Vector Store initialization. python - Pure-python implementation that runs on the client and can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged because it can lead to memory issues. 
compute_engine - Performant C++ implementation of the Deep Lake Compute Engine that runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. tensor_db - Performant and fully-hosted Managed Tensor Database that is responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. Store datasets in this database by specifying runtime = {'tensor_db': True} during dataset creation. embedding_tensor (str) – Name of tensor with embeddings. Defaults to 'embedding'. return_tensors (Optional[List[str]]) – List of tensors to return data for",activeloop deeplake.VectorStore,https://docs.deeplake.ai/en/latest/deeplake.VectorStore.html#deeplake-vectorstore,"Defaults to None, which returns data for all tensors except the embedding tensor (in order to minimize payload). To return data for all tensors, specify return_tensors = '*'. return_view (bool) – Return a Deep Lake dataset view that satisfies the search parameters, instead of a dictionary with data. Defaults to False. If True, return_tensors is set to '*' because data is lazy-loaded and there is no cost to including all tensors in the view. deep_memory (bool) – Whether to use the Deep Memory model for improving search results. Defaults to False if deep_memory is not specified in the Vector Store initialization. If True, the distance metric is set to 'deepmemory_distance', which represents the metric with which the model was trained. The search is performed using the Deep Memory model. If False, the distance metric is set to 'COS' or whatever distance metric the user specifies. Raises ValueError – When invalid parameters are specified. ValueError – When deep_memory is True but unavailable; Deep Memory is only available for datasets stored in the Deep Lake Managed Database for paid accounts. DeepMemoryWaitingListError – If the user is not waitlisted to use deep_memory. Returns Dictionary where keys are tensor names and values are the results of the search. Return type Dict summary() Prints a summary of the dataset. tensors() Returns the list of tensors present in the dataset. update_embedding(row_ids: Optional[List[str]] = None, ids: Optional[List[str]] = None, filter: Optional[Union[Dict, Callable]] = None, query: Optional[str] = None, exec_option: Optional[str] = None, embedding_function: Optional[Union[Callable, List[Callable]]] = None, embedding_source_tensor: Union[str, List[str]] = 'text', embedding_tensor: Optional[Union[str, List[str]]] = None) Recomputes existing embeddings of the VectorStore that match either query, filter, ids or row_ids. Examples >>> # Update using ids: >>> data = vector_store.update( ... ids, ... embedding_source_tensor = 'text', ... embedding_tensor = 'embedding', ... embedding_function = embedding_function, ... ) >>> # Update data using filter and several embedding_tensors, several embedding_source_tensors >>> # and several embedding_functions: >>> data = vector_store.update( ... embedding_source_tensor = ['text', 'metadata'], ... embedding_function = ['text_embedding_function', 'metadata_embedding_function'], ... filter = {'json_tensor_name': {'key: value'}, 'json_tensor_name_2': {'key_2: value_2'}}, ... embedding_tensor = ['text_embedding', 'metadata_embedding'] ... ) >>> # Update data using TQL, if new embedding function is not specified the embedding_function used >>> # during initialization will be used >>> data = vector_store.update( ... embedding_source_tensor = 'text', ... query = 'select * where .....
<add TQL syntax>', ... exec_option = 'compute_engine', ... embedding_tensor = 'embedding_tensor', ... ) Parameters row_ids (Optional[List[str]], optional) – Row ids of the elements for replacement. Defaults to None. ids (Optional[List[str]], optional) – Hash ids of the elements for replacement. Defaults to None. filter (Optional[Union[Dict, Callable]], optional) – Filter for finding samples for replacement",activeloop deeplake.VectorStore,https://docs.deeplake.ai/en/latest/deeplake.VectorStore.html#deeplake-vectorstore,"- Dict - Key-value search on tensors of htype json, evaluated on an AND basis (a sample must satisfy all key-value filters to be True) Dict = {'tensor_name_1': {'key': value}, 'tensor_name_2': {'key': value}} - Function - Any function that is compatible with deeplake.filter. query (Optional[str], optional) – TQL Query string for direct evaluation for finding samples for replacement, without application of additional filters. Defaults to None. exec_option (Optional[str]) – Method for search execution. It could be either 'python', 'compute_engine' or 'tensor_db'. Defaults to None, which inherits the option from the Vector Store initialization. - python - Pure-python implementation that runs on the client and can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged because it can lead to memory issues. - compute_engine - Performant C++ implementation of the Deep Lake Compute Engine that runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. - tensor_db - Performant and fully-hosted Managed Tensor Database that is responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. Store datasets in this database by specifying runtime = {'tensor_db': True} during dataset creation. embedding_function (Optional[Union[Callable, List[Callable]]], optional) – Function for converting embedding_source_tensor into embedding. Only valid if embedding_source_tensor is specified. Defaults to None. embedding_source_tensor (Union[str, List[str]], optional) – Name of tensor with data that needs to be converted to embeddings. Defaults to 'text'. embedding_tensor (Optional[Union[str, List[str]]], optional) – Name of the tensor with embeddings. Defaults to None.",activeloop deeplake.core,https://docs.deeplake.ai/en/latest/deeplake.core.html#deeplake-core,deeplake.core.sample Sample deeplake.core.linked_sample LinkedSample deeplake.core.partial_sample PartialSample deeplake.core.link_tiled_sample LinkedTiledSample deeplake.core.storage Base Storage Provider LRU Cache S3 Storage Provider Google Cloud Storage Provider Google Drive Storage Provider Local Storage Provider Memory Provider deeplake.core.index IndexEntry Index merge_slices() slice_at_int() slice_length(),activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset," Dataset class deeplake.core.dataset.Dataset add_creds_key(creds_key: str, managed: bool = False) Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.empty('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Parameters creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from the Activeloop platform. Defaults to False. 
Raises ValueError – If the dataset is not connected to the Activeloop platform and managed is True. Note The managed parameter is applicable only for datasets that are connected to the Activeloop platform. append(sample: Dict[str, Any], skip_ok: bool = False, append_empty: bool = False) Appends samples to multiple tensors at once. This method expects all tensors being updated to be of the same length. Parameters sample (dict) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in sample if set to True. append_empty (bool) – Append empty samples to tensors not specified in sample if set to True. If True, skip_ok is ignored. Raises KeyError – If any tensor in the dataset is not a key in sample and skip_ok is False. TensorDoesNotExistError – If tensor in sample does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. SampleAppendingError – Error that occurs when someone tries to append a tensor value directly to the dataset without specifying a tensor name. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('data') Tensor(key='data') >>> ds.create_tensor('labels') Tensor(key='labels') >>> ds.append({'data': [1, 2, 3, 4], 'labels':[0, 1, 2, 3]}) property branch: str The current branch of the dataset property branches Lists all the branches of the dataset. Returns List of branches. checkout(address: str, create: bool = False, reset: bool = False) → Optional[str] Checks out to a specific commit_id or branch. If create = True, creates a new branch with name address. Parameters address (str) – The commit_id or branch to checkout to. create (bool) – If True, creates a new branch with name as address. reset (bool) – If checkout fails due to a corrupted HEAD state of the branch, setting reset=True will reset HEAD changes and attempt the checkout again. Returns The commit_id of the dataset after checkout. Return type Optional[str] Raises CheckoutError – If address could not be found. ReadOnlyModeError – If branch creation or reset is attempted in read-only mode. DatasetCorruptError – If checkout failed due to dataset corruption and reset is not True. Exception – If the dataset is a filtered view",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('abc') Tensor(key='abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit() >>> ds.checkout('alt', create=True) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.append([4, 5, 6]) >>> ds.abc.numpy() array([[1, 2, 3], [4, 5, 6]]) >>> ds.checkout(first_commit) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.numpy() array([[1, 2, 3]]) Note Checkout from a head node in any branch that contains uncommitted data will lead to an automatic commit before the checkout. clear_cache() Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. This is useful if you have multiple datasets with memory caches open, taking up too much RAM. Also useful when local cache is no longer needed for certain datasets and is taking up storage space. property client Returns the client of the dataset. 
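As a hedged end-to-end sketch of the append / commit / checkout behaviour shown in the examples above (the local path './branching_demo' is an assumed, throwaway location, not part of the reference):

import deeplake

# Create a fresh local dataset for the demonstration.
ds = deeplake.dataset('./branching_demo', overwrite=True)
ds.create_tensor('abc')

ds.abc.append([1, 2, 3])
first_commit = ds.commit()            # snapshot the current state on 'main'

ds.checkout('alt', create=True)       # create and switch to branch 'alt'
ds.abc.append([4, 5, 6])              # this sample exists only on 'alt'

ds.checkout(first_commit)             # return to the first snapshot
print(ds.abc.numpy())                 # only the sample from the first commit is visible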
commit(message: Optional[str] = None, allow_empty=False) → str Stores a snapshot of the current state of the dataset. Parameters message (str, Optional) – Used to describe the commit. allow_empty (bool) – If True, commit even if there are no changes. Returns the commit id of the saved commit that can be used to access the snapshot. Return type str Raises Exception – If dataset is a filtered view. EmptyCommitError – If there are no changes and the user has not forced a commit of unchanged data. Note Committing from a non-head node in any branch will lead to an automatic checkout to a new branch. This same behaviour will happen if new samples are added or existing samples are updated from a non-head node. property commit_id: Optional[str] The most recently committed commit id of the dataset. If there are no commits, this returns None. property commits: List[Dict] Lists all the commits leading to the current dataset state. Returns List of dictionaries containing commit information. connect(creds_key: str, dest_path: Optional[str] = None, org_id: Optional[str] = None, ds_name: Optional[str] = None, token: Optional[str] = None) Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Parameters creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to which the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be inferred from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Raises InvalidSourcePathError – If the dataset's path is not a valid s3, gcs or azure path",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If name is in dataset attributes. NotImplementedError – If trying to override chunk_compression. TensorMetaInvalidHtype – If invalid htype is specified. ValueError – If an illegal argument is specified. create_tensor_like(name: str, source: Tensor, unlink: bool = False) → Tensor Copies the source tensor's meta information and creates a new tensor with it. No samples are copied, only the meta/info for the tensor is. Examples >>> ds.create_tensor_like('cats', ds['images']) Parameters name (str) – Name for the new tensor. source (Tensor) – Tensor whose meta/info will be copied. May or may not be contained in the same dataset. unlink (bool) – Whether to unlink linked tensors. Returns New Tensor object. Return type Tensor dataloader(ignore_errors: bool = False, verbose: bool = False) Returns a DeepLakeDataLoader object. To use this, install deeplake with pip install deeplake[enterprise]. Parameters ignore_errors (bool) – If True, the data loader will ignore errors that appear during data iteration; otherwise it will collect the statistics and report the errors that appeared. 
Default value is False verbose (bool) – If True, the data loader will dump verbose logs of its steps. Default value is False Returns A deeplake.enterprise.DeepLakeDataLoader object. Return type DeepLakeDataLoader Examples Creating a simple dataloader object which returns a batch of numpy arrays >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> train_loader = ds_train.dataloader().numpy() >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Creating a dataloader with custom transformation and batch size >>> import deeplake >>> import torch >>> from torchvision import datasets, transforms, models >>> >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size, returning batches of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Creating a dataloader and chaining with a query >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> train_loader = ds_train.dataloader()\ ... .query("(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)")\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass delete(large_ok=False) Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted cannot be recovered",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. TokenPermissionError – If the user does not have permission to create a dataset in the specified organization. copy(dest: Union[str, Path], runtime: Optional[dict] = None, tensors: Optional[List[str]] = None, overwrite: bool = False, creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False) Copies this dataset or dataset view to dest. Version control history is not included. Parameters dest (str, pathlib.Path) – Destination dataset or path to copy to. If a Dataset instance is provided, it is expected to be empty. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. runtime (dict) – Parameters for Activeloop DB Engine. Only applicable for hub:// paths. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. creds (dict, Optional) – creds required to create / overwrite datasets at dest. token (str, Optional) – token used for fetching credentials to dest. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler.
scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. Returns New dataset object. Return type Dataset Raises DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. create_group(name: str, exist_ok=False) → Dataset Creates a tensor group. Intermediate groups in the path are also created. Parameters name – The name of the group to create. exist_ok – If True, no error is raised if the group already exists. If False, an error is raised if the group already exists. Defaults to False. Returns The created group. Raises TensorGroupAlreadyExistsError – If the group already exists and exist_ok is False. Examples >>> ds.create_group('images') >>> ds['images'].create_tensor('cats') >>> ds.create_group('images/jpg/cats') >>> ds['images'].create_tensor('png') >>> ds['images/jpg'].create_group('dogs') create_tensor(name: str, htype: str = 'unspecified', dtype: Union[str, dtype] = 'unspecified', sample_compression: str = 'unspecified', chunk_compression: str = 'unspecified', hidden: bool = False, create_sample_info_tensor: bool = True, create_shape_tensor: bool = True, create_id_tensor: bool = True, verify: bool = True, exist_ok: bool = False, verbose: bool = True, downsampling: Optional[Tuple[int, int]] = None, tiling_threshold: Optional[int] = None, **kwargs) Creates a new tensor in the dataset",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"Examples >>> # create dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # create tensors >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('data') >>> ds.create_tensor('point_clouds', htype='point_cloud') >>> # append data >>> ds.images.append(np.ones((400, 400, 3), dtype='uint8')) >>> ds.videos.append(deeplake.read('videos/sample_video.mp4')) >>> ds.data.append(np.zeros((100, 100, 2))) Parameters name (str) – The name of the tensor to be created. htype (str) – The class of data for the tensor. The defaults for other parameters are determined in terms of this value. For example, htype='image' would have dtype default to uint8. These defaults can be overridden by explicitly passing any of the other parameters to this function. May also modify the defaults for other parameters. dtype (str) – Optionally override this tensor's dtype. All subsequent samples are required to have this dtype. sample_compression (str) – All samples will be compressed in the provided format. If None, samples are uncompressed. For link[] tensors, sample_compression is used only for optimizing dataset views. chunk_compression (str) – All chunks will be compressed in the provided format. If None, chunks are uncompressed. For link[] tensors, chunk_compression is used only for optimizing dataset views. hidden (bool) – If True, the tensor will be hidden from ds.tensors but can still be accessed via ds[tensor_name]. create_sample_info_tensor (bool) – If True, meta data of individual samples will be saved in a hidden tensor. 
This data can be accessed via tensor[i].sample_info. create_shape_tensor (bool) – If True, an associated tensor containing shapes of each sample will be created. create_id_tensor (bool) – If True, an associated tensor containing unique ids for each sample will be created. This is useful for merge operations. verify (bool) – Valid only for link htypes. If True, all links will be verified before they are added to the tensor. If False, links will be added without verification but note that create_shape_tensor and create_sample_info_tensor will be set to False. exist_ok (bool) – If True, no error is raised if the tensor already exists. If False, an error is raised if the tensor already exists. verbose (bool) – Shows warnings if True. downsampling (tuple[int, int]) – If not None, the tensor will be downsampled by the provided factors. For example, (2, 5) will downsample the tensor by a factor of 2 in both dimensions and create 5 layers of downsampled tensors. Only supported for image and mask htypes. tiling_threshold (Optional, int) – In bytes. Tiles large images if their size exceeds this threshold. Set to -1 to disable tiling. **kwargs – htype defaults can be overridden by passing any of the compatible parameters. To see all htypes and their corresponding arguments, check out Htypes. Returns The new tensor, which can be accessed by dataset[name] or dataset.name. Return type Tensor Raises TensorAlreadyExistsError – If the tensor already exists and exist_ok is False",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"0 indicates in-place for-loop evaluation, multiprocessing is used otherwise. scheduler (str) – Scheduler to use for multiprocessing evaluation. 'threaded' is default. progressbar (bool) – Display progress bar while filtering. True is default. save_result (bool) – If True, result of the filter will be saved to a dataset asynchronously. result_path (Optional, str) – Path to save the filter result. Only applicable if save_result is True. result_ds_args (Optional, dict) – Additional args for result dataset. Only applicable if save_result is True. Returns View of Dataset with elements that satisfy filter function. Example The following filters are identical and return a dataset view where all the samples have a label equal to 2. >>> dataset.filter(lambda sample: sample.labels.numpy() == 2) >>> dataset.filter('labels == 2') fix_vc() Rebuilds version control info. To be used when the version control info is corrupted. flush() Necessary operation after writes if caches are being used. Writes all the dirty data from the cache layers (if any) to the underlying storage. Here dirty data corresponds to data that has been changed/assigned but hasn't yet been sent to the underlying storage. get_commit_details(commit_id) → Dict Get details of a particular commit. Parameters commit_id (str) – commit id of the commit. Returns Dictionary of details with keys - commit, author, time, message. Return type Dict Raises KeyError – If the given commit_id was not found in the dataset. get_creds_keys() → Set[str] Returns the set of creds keys added to the dataset. These are used to fetch external data in linked tensors. get_managed_creds_keys() → List[str] Returns the list of creds keys added to the dataset that are managed by the Activeloop platform. These are used to fetch external data in linked tensors. get_view(id: str) → ViewEntry Returns the dataset view corresponding to id.
Examples >>> # save view >>> ds[:100].save_view(id='first_100') >>> # load view >>> first_100 = ds.get_view('first_100').load() >>> # 100 >>> print(len(first_100)) Copy to clipboard See Dataset.save_view() to learn more about saving views. Parameters id (str) – id of required view. Returns ViewEntry Raises KeyError – If no such view exists. get_views(commit_id: Optional[str] = None) → List[ViewEntry] Returns list of views stored in this Dataset. Parameters commit_id (str, optional) – Commit from which views should be returned. If not specified, views from all commits are returned. Returns List of ViewEntry instances. Return type List[ViewEntry] property groups: Dict[str, Dataset] All sub groups in this group property has_head_changes Returns True if currently at head node and uncommitted changes are present. property info Returns the information about the dataset. property is_head_node Returns True if the current commit is the head node of the branch and False otherwise. property is_view: bool Returns True if this dataset is a view and False otherwise",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"Parameters large_ok (bool) – Delete datasets larger than 1 GB . Defaults to False. Raises DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. delete_branch(name: str) → None Deletes the branch and cleans up any unneeded data. Branches can only be deleted if there are no sub-branches and if it has never been merged into another branch. Parameters name (str) – The branch to delete. Raises CommitError – If branch could not be found. ReadOnlyModeError – If branch deletion is attempted in read-only mode. Exception – If you have the given branch currently checked out. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('abc') Tensor(key='abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit() >>> ds.checkout('alt', create=True) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.append([4, 5, 6]) >>> ds.abc.numpy() array([[1, 2, 3], [4, 5, 6]]) >>> ds.checkout(first_commit) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.delete_branch('alt') Copy to clipboard delete_group(name: str, large_ok: bool = False) Delete a tensor group from the dataset. Examples >>> ds.delete_group('images/dogs') Copy to clipboard Parameters name (str) – The name of tensor group to be deleted. large_ok (bool) – Delete tensor groups larger than 1 GB. Disabled by default. Returns None Raises TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. delete_tensor(name: str, large_ok: bool = False) Delete a tensor from the dataset. Examples >>> ds.delete_tensor('images/cats') Copy to clipboard Parameters name (str) – The name of tensor to be deleted. large_ok (bool) – Delete tensors larger than 1 GB. Disabled by default. Returns None Raises TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorTooLargeToDelete – If the tensor is larger than 1 GB and large_ok is False. delete_view(id: str) Deletes the view with given view id. Parameters id (str) – Id of the view to delete. Raises KeyError – if view with given id does not exist. diff(id_1: Optional[str] = None, id_2: Optional[str] = None, as_dict=False) → Optional[Dict] Returns/displays the differences between commits/branches. For each tensor this contains information about the sample indexes that were added/modified as well as whether the tensor was created. 
Parameters id_1 (str, Optional) – The first commit_id or branch name. id_2 (str, Optional) – The second commit_id or branch name. as_dict (bool, Optional) – If True, returns the diff as lists of commit-wise dictionaries. Returns Optional[Dict] Raises ValueError – If id_1 is None and id_2 is not None. Note If both id_1 and id_2 are None, the differences between the current state and the previous commit will be calculated. If you're at the head of the branch, this will show the uncommitted changes, if any. If only id_1 is provided, the differences between the current state and id_1 will be calculated. If you're at the head of the branch, this will take into account the uncommitted changes, if any",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, the differences between id_1 and id_2 will be calculated. Note A dictionary of the differences between the commits/branches is returned if as_dict is True. The dictionary will always have 2 keys, 'dataset' and 'tensors'. The values corresponding to these keys are detailed below: If id_1 and id_2 are None, both the keys will have a single list as their value. This list will contain a dictionary describing changes compared to the previous commit. If only id_1 is provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commit-wise differences between commits. The 2 lists will range from the current state and id_1 to the most recent common ancestor of the commits, respectively. If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commit-wise differences between commits. The 2 lists will range from id_1 and id_2 to the most recent common ancestor of the commits, respectively. None is returned if as_dict is False. extend(samples: Dict[str, Any], skip_ok: bool = False, append_empty: bool = False, ignore_errors: bool = False, progressbar: bool = False) Appends multiple rows of samples to multiple tensors at once. This method expects all tensors being updated to be of the same length. Parameters samples (Dict[str, Any]) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in samples if set to True. append_empty (bool) – Append empty samples to tensors not specified in sample if set to True. If True, skip_ok is ignored. ignore_errors (bool) – Skip samples that cause errors while extending, if set to True. progressbar (bool) – Displays a progress bar if set to True. Raises KeyError – If any tensor in the dataset is not a key in samples and skip_ok is False. TensorDoesNotExistError – If tensor in samples does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. SampleExtendError – If the extend failed while appending a sample. Exception – Error while attempting to rollback appends. 
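Since diff() has no inline example above, here is a short, hedged sketch combining it with extend(); the local path './diff_demo' is an assumed, throwaway location, not part of the reference:

import deeplake

ds = deeplake.dataset('./diff_demo', overwrite=True)
ds.create_tensor('data')
ds.create_tensor('labels')

# extend() appends multiple rows across several tensors at once.
ds.extend({'data': [[1, 2], [3, 4]], 'labels': [0, 1]})
ds.commit('two rows')

# Make an uncommitted change, then inspect the differences against the last commit.
ds.extend({'data': [[5, 6]], 'labels': [2]})
changes = ds.diff(as_dict=True)       # dictionary with 'dataset' and 'tensors' keys
print(changes['tensors'])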
filter(function: Union[Callable, str], num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, save_result: bool = False, result_path: Optional[str] = None, result_ds_args: Optional[dict] = None) Filters the dataset in accordance with the filter function f(x: sample) -> bool. Parameters function (Callable, str) – Filter function that takes a sample as argument and returns True / False depending on whether the sample should be included in the result. Also supports simplified expression evaluations. See deeplake.core.query.query.DatasetQuery for more details. num_workers (int) – Level of parallelization of filter evaluations",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"load_view(id: str, optimize: Optional[bool] = False, tensors: Optional[List[str]] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: Optional[bool] = True) Loads the view and returns the Dataset by id. Equivalent to ds.get_view(id).load(). Parameters id (str) – id of the view to be loaded. optimize (bool) – If True, the dataset view is optimized by copying and rechunking the required data before loading. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. tensors (Optional, List[str]) – Tensors to be copied if optimize is True. By default all tensors are copied. num_workers (int) – Number of workers to be used for the optimization process. Only applicable if optimize=True. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to use progressbar for optimization. Only applicable if optimize=True. Defaults to True. Returns The loaded view. Return type Dataset Raises KeyError – If view with given id does not exist. log() Displays the details of all the past commits. property max_len Returns the maximum length of the tensor. property max_view Returns a view of the dataset in which shorter tensors are padded with None values to have the same length as the longest tensor. Example Creating a dataset with 5 images and 4 labels. ds.max_view will return a view with the labels tensor padded to have 5 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.max_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] (20, 20, 3) [None] merge(target_id: str, conflict_resolution: Optional[str] = None, delete_removed_tensors: bool = False, force: bool = False) Merges the target_id into the current dataset. Parameters target_id (str) – The commit_id or branch to merge. conflict_resolution (str, Optional) – The strategy to use to resolve merge conflicts. Conflicts are scenarios where both the current dataset and the target id have made changes to the same sample/s since their common ancestor. Must be one of the following: None - this is the default value; an exception will be raised if there are conflicts. 
'ours' - during conflicts, values from the current dataset will be used. 'theirs' - during conflicts, values from target id will be used. delete_removed_tensors (bool) – If True, deleted tensors will be deleted from the dataset. force (bool) – Forces merge",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"return_index (bool) – If True, the returned dataloader will have a key 'index' that contains the index of the sample(s) in the original dataset. Default value is True. pad_tensors (bool) – If True, shorter tensors will be padded to the length of the longest tensor. Default value is False. transform_kwargs (optional, Dict[str, Any]) – Additional kwargs to be passed to transform. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy' Default behaviour. Returns samples as numpy arrays. 'tobytes' Returns raw bytes of the samples. 'pil' Returns samples as PIL images. Especially useful when transformations use torchvision transforms that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. cache_size (int) – The size of the cache per tensor in MBs. Defaults to max(maximum chunk size of tensor, 32 MB). Returns A torch.utils.data.DataLoader object. Raises EmptyTensorError – If one or more tensors being passed to pytorch are empty. Note Pytorch does not support uint16, uint32, uint64 dtypes. These are implicitly type cast to int32, int64 and int64 respectively. This spins up its own workers to fetch data. query(query_string: str, runtime: Optional[Dict] = None, return_data: bool = False) Returns a sliced Dataset with given query results. To use this, install deeplake with pip install deeplake[enterprise]. It allows you to run SQL-like queries on the dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters query_string (str) – An SQL string adjusted with new functionalities to run on the given Dataset object. runtime (Optional[Dict]) – Runtime parameters for query execution. Supported keys: {'tensor_db': True or False}. return_data (bool) – Defaults to False. Whether to return raw data along with the view. Raises ValueError – If return_data is True and runtime is not {'tensor_db': true} Returns A Dataset object. Return type Dataset Examples Query from the dataset all the samples with labels other than 5 >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds = ds.query('select * where labels != 5') Query from the dataset the first 1000 samples where the category is car and the first 1000 samples where the category is motorcycle >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.query("(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)") random_split(lengths: Sequence[Union[int, float]]) Splits the dataset into non-overlapping Dataset objects of given lengths. If a list of fractions that sum up to 1 is given, the lengths will be computed automatically as floor(frac * len(dataset)) for each fraction provided. The split generated is only performant with the enterprise dataloader, which can be installed with pip install deeplake[enterprise].
After computing the lengths, if there are any remainders, 1 count will be distributed in round-robin fashion to the lengths until there are no remainders left",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"Example >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('labels', htype='class_label') >>> ds.labels.extend([0, 1, 2, 1, 3]) >>> len(ds) 5 >>> train_ds, val_ds = ds.random_split([0.8, 0.2]) >>> len(train_ds) 4 >>> len(val_ds) 1 >>> train_ds, val_ds = ds.random_split([3, 2]) >>> len(train_ds) 3 >>> len(val_ds) 2 >>> train_loader = train_ds.pytorch(batch_size=2, shuffle=True) >>> val_loader = val_ds.pytorch(batch_size=2, shuffle=False) Parameters lengths (Sequence[Union[int, float]]) – Lengths or fractions of splits to be produced. Returns a tuple of datasets of the given lengths. Return type Tuple[Dataset, …] Raises ValueError – If the sum of the lengths is not equal to the length of the dataset. ValueError – If the dataset has variable length tensors. ValueError – If lengths are floats and one or more of them are not between 0 and 1. property read_only Returns True if dataset is in read-only mode and False otherwise. rechunk(tensors: Optional[Union[str, List[str]]] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True) Rewrites the underlying chunks to make their sizes optimal. This is usually needed in cases where a lot of updates have been made to the data. Parameters tensors (str, List[str], Optional) – Name/names of the tensors to rechunk. If None, all tensors in the dataset are rechunked. num_workers (int) – The number of workers to use for rechunking. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for rechunking. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). rename(path: Union[str, Path]) Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Parameters path (str, pathlib.Path) – New path to the dataset. Raises RenameError – If path points to a different directory. rename_group(name: str, new_name: str) → None Renames the group with name name to new_name. Parameters name (str) – Name of group to be renamed. new_name (str) – New name of group. Raises TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorGroupNameError – If name is in dataset attributes. RenameError – If new_name points to a group different from name. rename_tensor(name: str, new_name: str) → Tensor Renames the tensor with name name to new_name. Parameters name (str) – Name of tensor to be renamed. new_name (str) – New name of tensor. Returns Renamed tensor. Return type Tensor Raises TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If new_name is in dataset attributes. 
RenameError – If new_name points to a group different from name",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"The optimization process will take some time, depending on the size of the data . You can also choose to optimize the saved view later by calling its ViewEntry.optimize() method. tensors (List, optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. num_workers (int) – Number of workers to be used for optimization process. Applicable only if optimize=True. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. verbose (bool) – If True, logs will be printed. Defaults to True. ignore_errors (bool) – Skip samples that cause errors while saving views. Only applicable if optimize=True. Defaults to False. ds_args (dict) – Additional args for creating VDS when path is specified. (See documentation for deeplake.dataset()) Returns Path to the saved VDS. Return type str Raises ReadOnlyModeError – When attempting to save a view inplace and the user doesn't have write access. DatasetViewSavingError – If HEAD node has uncommitted changes. TypeError – If id is not of type str. Note Specifying path makes the view external. External views cannot be accessed using the parent dataset's Dataset.get_view(), Dataset.load_view(), Dataset.delete_view() methods. They have to be loaded using deeplake.load(). set_token(new_token: str) Method to set a new token size_approx() Estimates the size in bytes of the dataset. Includes only content, so will generally return an under-estimate. summary(force: bool = False) Prints a summary of the dataset. Parameters force (bool) – Dataset views with more than 10000 samples might take a long time to summarize. If force=True, the summary will be printed regardless. An error will be raised otherwise. Raises ValueError – If the dataset view might take a long time to summarize and force=False tensorflow(tensors: Optional[Sequence[str]] = None, tobytes: Union[bool, Sequence[str]] = False, fetch_chunks: bool = True) Converts the dataset into a tensorflow compatible format. See https://www.tensorflow.org/api_docs/python/tf/data/Dataset Parameters tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). tobytes (bool) – If True, samples will not be decompressed and their raw bytes will be returned instead of numpy arrays. Can also be a list of tensors, in which case those tensors alone will not be decompressed. fetch_chunks – See fetch_chunks argument in deeplake.core.tensor.Tensor.numpy() Returns tf.data.Dataset object that can be used for tensorflow training. property tensors: Dict[str, Tensor] All tensors belonging to this group, including those within sub groups. Always returns the sliced tensors. property token Get attached token of the dataset update(sample: Dict[str, Any]) Update existing samples in the dataset with new values",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"reset(force: bool = False) Resets the uncommitted changes present in the branch . 
Note The uncommitted data is deleted from the underlying storage; this is not a reversible operation. property root Returns the root dataset of a group. sample_by(weights: Union[str, list, tuple], replace: Optional[bool] = True, size: Optional[int] = None) Returns a sliced Dataset with given weighted sampler applied. To use this, install deeplake with pip install deeplake[enterprise]. Parameters weights (Union[str, list, tuple]) – If it is a string, a TQL expression is run to calculate the weight of each sample. A list or tuple is treated as a list of per-sample weights. replace (bool, Optional) – If True, samples can be repeated in the result view. Defaults to True. size (int, Optional) – The length of the result view. Defaults to the length of the dataset. Returns A deeplake.Dataset object. Return type Dataset Examples Sample the dataset with labels == 5 twice as often as labels == 6 >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Sample the dataset treating the labels tensor as weights. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('labels') Sample the dataset with the given weights: >>> ds = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(len(ds)): ... weights.append(i % 5) ... >>> sampled_ds = ds.sample_by(weights, replace=False) property sample_indices Returns all the indices pointed to by this dataset view. save_view(message: Optional[str] = None, path: Optional[Union[str, Path]] = None, id: Optional[str] = None, optimize: bool = False, tensors: Optional[List[str]] = None, num_workers: int = 0, scheduler: str = 'threaded', verbose: bool = True, ignore_errors: bool = False, **ds_args) → str Saves a dataset view as a virtual dataset (VDS). Examples >>> # Save to specified path >>> vds_path = ds[:10].save_view(path='views/first_10', id='first_10') >>> vds_path views/first_10 >>> # Path unspecified >>> vds_path = ds[:100].save_view(id='first_100', message='first 100 samples') >>> # vds_path = path/to/dataset >>> # Random id >>> vds_path = ds[:100].save_view() >>> # vds_path = path/to/dataset/.queries/92f41922ed0471ec2d27690b7351fc96bea060e6c5ee22b14f7ffa5f291aa068 See Dataset.get_view() to learn how to load views by id. These virtual datasets can also be loaded from their path like normal datasets. Parameters message (Optional, str) – Custom user message. path (Optional, str, pathlib.Path) – The VDS will be saved as a standalone dataset at the specified path. If not specified, the VDS is saved under the .queries subdirectory of the source dataset's storage. id (Optional, str) – Unique id for this view. A random id will be generated if not specified. optimize (bool) – If True, the dataset view will be optimized by copying and rechunking the required data.
This is necessary to achieve fast streaming speeds when training models using the dataset view",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # populate the creds >>> ds.populate_creds('my_s3_key', {'aws_access_key_id': 'my_access_key', 'aws_secret_access_key': 'my_secret_key'}) >>> # or >>> ds.populate_creds('my_s3_key', from_environment=True) Copy to clipboard pytorch(transform: Optional[Callable] = None, tensors: Optional[Sequence[str]] = None, num_workers: int = 1, batch_size: int = 1, drop_last: bool = False, collate_fn: Optional[Callable] = None, pin_memory: bool = False, shuffle: bool = False, buffer_size: int = 2048, use_local_cache: bool = False, progressbar: bool = False, return_index: bool = True, pad_tensors: bool = False, transform_kwargs: Optional[Dict[str, Any]] = None, decode_method: Optional[Dict[str, str]] = None, cache_size: int = 32000000, *args, **kwargs) Converts the dataset into a pytorch Dataloader . Parameters *args – Additional args to be passed to torch_dataset **kwargs – Additional kwargs to be passed to torch_dataset transform (Callable, Optional) – Transformation function to be applied to each sample. tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). num_workers (int) – The number of workers to use for fetching data in parallel. batch_size (int) – Number of samples per batch to load. Default value is 1. drop_last (bool) – Set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. if False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. Default value is False. Read torch.utils.data.DataLoader docs for more details. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. Read torch.utils.data.DataLoader docs for more details. pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. Default value is False. Read torch.utils.data.DataLoader docs for more details. shuffle (bool) – If True, the data loader will shuffle the data indices. Default value is False. Details about how Deep Lake shuffles data can be found at Shuffling in ds.pytorch() buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. use_local_cache (bool) – If True, the data loader will use a local cache to store data. The default cache location is ~/.activeloop/cache, but it can be changed by setting the LOCAL_CACHE_PREFIX environment variable. This is useful when the dataset can fit on the machine and we don't want to fetch the data multiple times for each iteration. Default value is False progressbar (bool) – If True, tqdm will be wrapped around the returned dataloader. Default value is True",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"Defaults to False . property client Returns the client of the dataset. 
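Putting the pytorch() options above together, the following is a minimal sketch, not an official recipe, of streaming a dataset into a training loop. It assumes a hypothetical local dataset with 'images' and 'labels' tensors; the path and tensor names are illustrative. >>> import deeplake >>> ds = deeplake.load('./my_dataset')  # hypothetical local path >>> # Listing tensors=['images', 'labels'] means each batch arrives as an (images, labels) tuple >>> train_loader = ds.pytorch( ...     tensors=['images', 'labels'], ...     batch_size=32, ...     shuffle=True, ...     num_workers=2, ...     decode_method={'images': 'numpy'},  # use 'pil' when feeding torchvision transforms ...     use_local_cache=True,  # caches fetched chunks under ~/.activeloop/cache ... ) >>> for images, labels in train_loader: ...     pass  # training step goes here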
connect(*args, **kwargs) Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Parameters creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be inferred from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Raises InvalidSourcePathError – If the dataset's path is not a valid s3, gcs or azure path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. TokenPermissionError – If the user does not have permission to create a dataset in the specified organization. delete(large_ok=False) Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted cannot be recovered. Parameters large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False. Raises DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. get_managed_creds_keys() → Set[str] Returns the set of creds keys added to the dataset that are managed by Activeloop platform. These are used to fetch external data in linked tensors. property is_actually_cloud: bool Datasets that are connected to Deep Lake cloud can still technically be stored anywhere. If a dataset is in Deep Lake cloud but stored without the hub:// prefix, it should only be used for testing. rename(path) Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Parameters path (str, pathlib.Path) – New path to the dataset. Raises RenameError – If path points to a different directory. property token Get attached token of the dataset. update_creds_key(creds_key: str, new_creds_key: Optional[str] = None, managed: Optional[bool] = None) Updates the name and/or management status of a creds key. Parameters creds_key (str) – The key whose management status is to be changed. new_creds_key (str, optional) – The new key to replace the old key. If not provided, the old key will be used. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises ValueError – If the dataset is not connected to activeloop platform. ValueError – If both new_creds_key and managed are None. KeyError – If the creds key is not present in the dataset",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"Examples >>> ds[0].update({'images': deeplake.read('new_image.png'), 'labels': 1}) >>> new_images = [deeplake.read(f'new_image_{i}.png') for i in range(3)] >>> ds[:3].update({'images': new_images, 'labels': [1, 2, 3]}) Parameters sample (dict) – Dictionary with tensor names as keys and samples as values.
Raises ValueError – If partial update of a sample is attempted. Exception – Error while attempting to rollback updates. update_creds_key(creds_key: str, new_creds_key: Optional[str] = None, managed: Optional[bool] = None) Updates the name and/or management status of a creds key. Parameters creds_key (str) – The key whose name and/or management status is to be changed. new_creds_key (str, optional) – The new key to replace the old key. If not provided, the old key will be used. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises ValueError – If the dataset is not connected to activeloop platform. ValueError – If both new_creds_key and managed are None. KeyError – If the creds key is not present in the dataset. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Rename the key and change the management status of the key to True. Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.update_creds_key('my_s3_key', 'my_managed_key', True) Copy to clipboard visualize(width: Optional[Union[int, str]] = None, height: Optional[Union[int, str]] = None) Visualizes the dataset in the Jupyter notebook. Parameters width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab. DeepLakeCloudDataset class deeplake.core.dataset.DeepLakeCloudDataset Bases: Dataset Subclass of Dataset. Deep Lake cloud datasets are those datasets which are stored in or connected to Activeloop servers, their paths look like: hub://username/dataset_name. add_creds_key(creds_key: str, managed: bool = False) Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Copy to clipboard Parameters creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from activeloop platform. Note, this is only applicable for datasets that are connected to activeloop platform",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"force=True will have these effects in the following cases of merge conflicts: If tensor is renamed on target but is missing from HEAD, renamed tensor will be registered as a new tensor on current branch . If tensor is renamed on both target and current branch, tensor on target will be registered as a new tensor on current branch. If tensor is renamed on target and a new tensor of the new name was created on the current branch, they will be merged. Raises Exception – if dataset is a filtered view. 
ValueError – if the conflict resolution strategy is not one of None, 'ours', or 'theirs'. property meta: DatasetMeta Returns the metadata of the dataset. property min_len Return the minimum length of the tensor. property min_view Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Example Creating a dataset with 5 images and 4 labels. ds.min_view will return a view in which tensors are sliced to have 4 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.min_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] property no_view_dataset Returns the same dataset without slicing. property num_samples: int Returns the length of the smallest tensor. Ignores any applied indexing and returns the total length. property parent Returns the parent of this group. Returns None if this is the root dataset. property pending_commit_id: str The commit_id of the next commit that will be made to the dataset. If you're not at the head of the current branch, this will be the same as the commit_id. pop(index: Optional[int] = None) Removes a sample from all the tensors of the dataset. For any tensor, if the index >= len(tensor), the sample won't be popped from it. Parameters index (int, Optional) – The index of the sample to be removed. If it is None, the index becomes the length of the longest tensor - 1. Raises IndexError – If the index is out of range. populate_creds(creds_key: str, creds: Optional[dict] = None, from_environment: bool = False) Populates the creds key added in add_creds_key with the given creds. These creds are used to fetch the external data. This needs to be done every time the dataset is reloaded for datasets that contain links to external data",activeloop deeplake.core.dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake-core-dataset,"Exception – All other errors such as during population of managed creds. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the key added with a creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Rename the key and change its management status to True. Before doing this, ensure that the creds have been created on the activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from the activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.update_creds_key('my_s3_key', 'my_managed_key', True) visualize(width: Optional[Union[int, str]] = None, height: Optional[Union[int, str]] = None) Visualizes the dataset in the Jupyter notebook. Parameters width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas.
Raises Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab. ViewEntry class deeplake.core.dataset.ViewEntry Represents a view saved inside a dataset. delete() Deletes the view. property id: str Returns id of the view. load(verbose=True) Loads the view and returns the Dataset. Parameters verbose (bool) – If True, logs will be printed. Defaults to True. Returns Loaded dataset view. Return type Dataset property message: str Returns the message with which the view was saved. optimize(tensors: Optional[List[str]] = None, unlink=True, num_workers=0, scheduler='threaded', progressbar=True) Optimizes the dataset view by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. Example >>> # save view >>> ds[:10].save_view(id='first_10') >>> # optimize view >>> ds.get_view('first_10').optimize() >>> # load optimized view >>> ds.load_view('first_10') Copy to clipboard Parameters tensors (List[str]) – Tensors required in the optimized view. By default all tensors are copied. unlink (bool) – If True, this unlinks linked tensors (if any) by copying data from the links to the view. This does not apply to linked videos. Set deeplake.constants._UNLINK_VIDEOS to True to change this behavior. num_workers (int) – Number of workers to be used for the optimization process. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to display a progressbar. Returns ViewEntry Raises Exception – When query view cannot be optimized.",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('abc') Tensor(key='abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit() >>> ds.checkout('alt', create=True) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.append([4, 5, 6]) >>> ds.abc.numpy() array([[1, 2, 3], [4, 5, 6]]) >>> ds.checkout(first_commit) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.numpy() array([[1, 2, 3]]) Copy to clipboard Note Checkout from a head node in any branch that contains uncommitted data will lead to an automatic commit before the checkout . clear_cache() Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. This is useful if you have multiple datasets with memory caches open, taking up too much RAM. Also useful when local cache is no longer needed for certain datasets and is taking up storage space. property client Returns the client of the dataset. commit(message: Optional[str] = None, allow_empty=False) → str Stores a snapshot of the current state of the dataset. Parameters message (str, Optional) – Used to describe the commit. allow_empty (bool) – If True, commit even if there are no changes. Returns the commit id of the saved commit that can be used to access the snapshot. Return type str Raises Exception – If dataset is a filtered view. EmptyCommitError – if there are no changes and user does not forced to commit unchanged data. Note Commiting from a non-head node in any branch, will lead to an automatic checkout to a new branch. 
This same behaviour will happen if new samples are added or existing samples are updated from a non-head node. property commit_id: Optional[str] The last committed commit id of the dataset. If there are no commits, this returns None. property commits: List[Dict] Lists all the commits leading to the current dataset state. Returns List of dictionaries containing commit information. connect(creds_key: str, dest_path: Optional[str] = None, org_id: Optional[str] = None, ds_name: Optional[str] = None, token: Optional[str] = None) Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Parameters creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be inferred from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Raises InvalidSourcePathError – If the dataset's path is not a valid s3, gcs or azure path",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset," class deeplake.core.dataset.Dataset add_creds_key(creds_key: str, managed: bool = False) Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.empty('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Parameters creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from Activeloop platform. Defaults to False. Raises ValueError – If the dataset is not connected to Activeloop platform and managed is True. Note managed parameter is applicable only for datasets that are connected to Activeloop platform. append(sample: Dict[str, Any], skip_ok: bool = False, append_empty: bool = False) Append samples to multiple tensors at once. This method expects all tensors being updated to be of the same length. Parameters sample (dict) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in sample if set to True. append_empty (bool) – Append empty samples to tensors not specified in sample if set to True. If True, skip_ok is ignored. Raises KeyError – If any tensor in the dataset is not a key in sample and skip_ok is False. TensorDoesNotExistError – If tensor in sample does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. SampleAppendingError – Error that occurs when someone tries to append a tensor value directly to the dataset without specifying tensor name.
Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('data') Tensor(key='data') >>> ds.create_tensor('labels') Tensor(key='labels') >>> ds.append({'data': [1, 2, 3, 4], 'labels':[0, 1, 2, 3]}) Copy to clipboard property branch: str The current branch of the dataset property branches Lists all the branches of the dataset. Returns List of branches. checkout(address: str, create: bool = False, reset: bool = False) → Optional[str] Checks out to a specific commit_id or branch. If create = True, creates a new branch with name address. Parameters address (str) – The commit_id or branch to checkout to. create (bool) – If True, creates a new branch with name as address. reset (bool) – If checkout fails due to a corrupted HEAD state of the branch, setting reset=True will reset HEAD changes and attempt the checkout again. Returns The commit_id of the dataset after checkout. Return type Optional[str] Raises CheckoutError – If address could not be found. ReadOnlyModeError – If branch creation or reset is attempted in read-only mode. DatasetCorruptError – If checkout failed due to dataset corruption and reset is not True. Exception – If the dataset is a filtered view",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path . TokenPermissionError – If the user does not have permission to create a dataset in the specified organization. copy(dest: Union[str, Path], runtime: Optional[dict] = None, tensors: Optional[List[str]] = None, overwrite: bool = False, creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False) Copies this dataset or dataset view to dest. Version control history is not included. Parameters dest (str, pathlib.Path) – Destination dataset or path to copy to. If a Dataset instance is provided, it is expected to be empty. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. runtime (dict) – Parameters for Activeloop DB Engine. Only applicable for hub:// paths. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. creds (dict, Optional) – creds required to create / overwrite datasets at dest. token (str, Optional) – token used to for fetching credentials to dest. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar If True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. Returns New dataset object. Return type Dataset Raises DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. create_group(name: str, exist_ok=False) → Dataset Creates a tensor group. Intermediate groups in the path are also created. Parameters name – The name of the group to create. exist_ok – If True, the group is created if it does not exist. If False, an error is raised if the group already exists. Defaults to False. Returns The created group. Raises TensorGroupAlreadyExistsError – If the group already exists and exist_ok is False. 
Examples >>> ds.create_group('images') >>> ds['images'].create_tensor('cats') Copy to clipboard >>> ds.create_groups('images/jpg/cats') >>> ds['images'].create_tensor('png') >>> ds['images/jpg'].create_group('dogs') Copy to clipboard create_tensor(name: str, htype: str = 'unspecified', dtype: Union[str, dtype] = 'unspecified', sample_compression: str = 'unspecified', chunk_compression: str = 'unspecified', hidden: bool = False, create_sample_info_tensor: bool = True, create_shape_tensor: bool = True, create_id_tensor: bool = True, verify: bool = True, exist_ok: bool = False, verbose: bool = True, downsampling: Optional[Tuple[int, int]] = None, tiling_threshold: Optional[int] = None, **kwargs) Creates a new tensor in the dataset",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"Parameters large_ok (bool) – Delete datasets larger than 1 GB . Defaults to False. Raises DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. delete_branch(name: str) → None Deletes the branch and cleans up any unneeded data. Branches can only be deleted if there are no sub-branches and if it has never been merged into another branch. Parameters name (str) – The branch to delete. Raises CommitError – If branch could not be found. ReadOnlyModeError – If branch deletion is attempted in read-only mode. Exception – If you have the given branch currently checked out. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('abc') Tensor(key='abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit() >>> ds.checkout('alt', create=True) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.append([4, 5, 6]) >>> ds.abc.numpy() array([[1, 2, 3], [4, 5, 6]]) >>> ds.checkout(first_commit) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.delete_branch('alt') Copy to clipboard delete_group(name: str, large_ok: bool = False) Delete a tensor group from the dataset. Examples >>> ds.delete_group('images/dogs') Copy to clipboard Parameters name (str) – The name of tensor group to be deleted. large_ok (bool) – Delete tensor groups larger than 1 GB. Disabled by default. Returns None Raises TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. delete_tensor(name: str, large_ok: bool = False) Delete a tensor from the dataset. Examples >>> ds.delete_tensor('images/cats') Copy to clipboard Parameters name (str) – The name of tensor to be deleted. large_ok (bool) – Delete tensors larger than 1 GB. Disabled by default. Returns None Raises TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorTooLargeToDelete – If the tensor is larger than 1 GB and large_ok is False. delete_view(id: str) Deletes the view with given view id. Parameters id (str) – Id of the view to delete. Raises KeyError – if view with given id does not exist. diff(id_1: Optional[str] = None, id_2: Optional[str] = None, as_dict=False) → Optional[Dict] Returns/displays the differences between commits/branches. For each tensor this contains information about the sample indexes that were added/modified as well as whether the tensor was created. Parameters id_1 (str, Optional) – The first commit_id or branch name. id_2 (str, Optional) – The second commit_id or branch name. as_dict (bool, Optional) – If True, returns the diff as lists of commit wise dictionaries. Returns Optional[Dict] Raises ValueError – If id_1 is None and id_2 is not None. 
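As a brief sketch of calling diff(), reusing the throwaway-dataset pattern from the checkout example (the local path and the 'abc' tensor name are illustrative): >>> import deeplake >>> ds = deeplake.empty('../test/diff_demo')  # hypothetical local path >>> ds.create_tensor('abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit('initial data') >>> ds.abc.append([4, 5, 6])  # uncommitted change at the head >>> ds.diff(first_commit)  # displays differences between the current state and first_commit >>> changes = ds.diff(first_commit, as_dict=True)  # dict with 'dataset' and 'tensors' keys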
Note If both id_1 and id_2 are None, the differences between the current state and the previous commit will be calculated. If you're at the head of the branch, this will show the uncommitted changes, if any. If only id_1 is provided, the differences between the current state and id_1 will be calculated. If you're at the head of the branch, this will take into account the uncommitted changes, if any",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"Examples >>> # create dataset >>> ds = deeplake.dataset('path/to/dataset') Copy to clipboard >>> # create tensors >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('data') >>> ds.create_tensor('point_clouds', htype='point_cloud') Copy to clipboard >>> # append data >>> ds.images.append(np.ones((400, 400, 3), dtype='uint8')) >>> ds.videos.append(deeplake.read('videos/sample_video.mp4')) >>> ds.data.append(np.zeros((100, 100, 2))) Copy to clipboard Parameters name (str) – The name of the tensor to be created . htype (str) – The class of data for the tensor. The defaults for other parameters are determined in terms of this value. For example, htype='image' would have dtype default to uint8. These defaults can be overridden by explicitly passing any of the other parameters to this function. May also modify the defaults for other parameters. dtype (str) – Optionally override this tensor's dtype. All subsequent samples are required to have this dtype. sample_compression (str) – All samples will be compressed in the provided format. If None, samples are uncompressed. For link[] tensors, sample_compression is used only for optimizing dataset views. chunk_compression (str) – All chunks will be compressed in the provided format. If None, chunks are uncompressed. For link[] tensors, chunk_compression is used only for optimizing dataset views. hidden (bool) – If True, the tensor will be hidden from ds.tensors but can still be accessed via ds[tensor_name]. create_sample_info_tensor (bool) – If True, meta data of individual samples will be saved in a hidden tensor. This data can be accessed via tensor[i].sample_info. create_shape_tensor (bool) – If True, an associated tensor containing shapes of each sample will be created. create_id_tensor (bool) – If True, an associated tensor containing unique ids for each sample will be created. This is useful for merge operations. verify (bool) – Valid only for link htypes. If True, all links will be verified before they are added to the tensor. If False, links will be added without verification but note that create_shape_tensor and create_sample_info_tensor will be set to False. exist_ok (bool) – If True, the group is created if it does not exist. if False, an error is raised if the group already exists. verbose (bool) – Shows warnings if True. downsampling (tuple[int, int]) – If not None, the tensor will be downsampled by the provided factors. For example, (2, 5) will downsample the tensor by a factor of 2 in both dimensions and create 5 layers of downsampled tensors. Only support for image and mask htypes. tiling_threshold (Optional, int) – In bytes. Tiles large images if their size exceeds this threshold. Set to -1 to disable tiling. **kwargs – htype defaults can be overridden by passing any of the compatible parameters. To see all htypes and their correspondent arguments, check out Htypes. 
Returns The new tensor, which can be accessed by dataset[name] or dataset.name. Return type Tensor Raises TensorAlreadyExistsError – If the tensor already exists and exist_ok is False",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed . InvalidTensorNameError – If name is in dataset attributes. NotImplementedError – If trying to override chunk_compression. TensorMetaInvalidHtype – If invalid htype is specified. ValueError – If an illegal argument is specified. create_tensor_like(name: str, source: Tensor, unlink: bool = False) → Tensor Copies the source tensor's meta information and creates a new tensor with it. No samples are copied, only the meta/info for the tensor is. Examples >>> ds.create_tensor_like('cats', ds['images']) Copy to clipboard Parameters name (str) – Name for the new tensor. source (Tensor) – Tensor who's meta/info will be copied. May or may not be contained in the same dataset. unlink (bool) – Whether to unlink linked tensors. Returns New Tensor object. Return type Tensor dataloader(ignore_errors: bool = False, verbose: bool = False) Returns a DeepLakeDataLoader object. To use this, install deeplake with pip install deeplake[enterprise]. Parameters ignore_errors (bool) – If True, the data loader will ignore errors appeared during data iteration otherwise it will collect the statistics and report appeared errors. Default value is False verbose (bool) – If True, the data loader will dump verbose logs of it's steps. Default value is False Returns A deeplake.enterprise.DeepLakeDataLoader object. Return type DeepLakeDataLoader Examples Creating a simple dataloader object which returns a batch of numpy arrays >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> train_loader = ds_train.dataloader().numpy() >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Creating dataloader with custom transformation and batch size >>> import deeplake >>> import torch >>> from torchvision import datasets, transforms, models >>> >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size and returns batch of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Creating dataloader and chaining with query >>> ds = deeplake.load('hub://activeloop/coco-train') >>> train_loader = ds_train.dataloader()\ ... .query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)')\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard delete(large_ok=False) Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. 
Data once deleted can not be recovered",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"0 indicates in-place for-loop evaluation, multiprocessing is used otherwise . scheduler (str) – Scheduler to use for multiprocessing evaluation. 'threaded' is default. progressbar (bool) – Display progress bar while filtering. True is default. save_result (bool) – If True, result of the filter will be saved to a dataset asynchronously. result_path (Optional, str) – Path to save the filter result. Only applicable if save_result is True. result_ds_args (Optional, dict) – Additional args for result dataset. Only applicable if save_result is True. Returns View of Dataset with elements that satisfy filter function. Example Following filters are identical and return dataset view where all the samples have label equals to 2. >>> dataset.filter(lambda sample: sample.labels.numpy() == 2) >>> dataset.filter('labels == 2') Copy to clipboard fix_vc() Rebuilds version control info. To be used when the version control info is corrupted. flush() Necessary operation after writes if caches are being used. Writes all the dirty data from the cache layers (if any) to the underlying storage. Here dirty data corresponds to data that has been changed/assigned and but hasn't yet been sent to the underlying storage. get_commit_details(commit_id) → Dict Get details of a particular commit. Parameters commit_id (str) – commit id of the commit. Returns Dictionary of details with keys - commit, author, time, message. Return type Dict Raises KeyError – If given commit_id is was not found in the dataset. get_creds_keys() → Set[str] Returns the set of creds keys added to the dataset. These are used to fetch external data in linked tensors get_managed_creds_keys() → List[str] Returns the list of creds keys added to the dataset that are managed by Activeloop platform. These are used to fetch external data in linked tensors. get_view(id: str) → ViewEntry Returns the dataset view corresponding to id. Examples >>> # save view >>> ds[:100].save_view(id='first_100') >>> # load view >>> first_100 = ds.get_view('first_100').load() >>> # 100 >>> print(len(first_100)) Copy to clipboard See Dataset.save_view() to learn more about saving views. Parameters id (str) – id of required view. Returns ViewEntry Raises KeyError – If no such view exists. get_views(commit_id: Optional[str] = None) → List[ViewEntry] Returns list of views stored in this Dataset. Parameters commit_id (str, optional) – Commit from which views should be returned. If not specified, views from all commits are returned. Returns List of ViewEntry instances. Return type List[ViewEntry] property groups: Dict[str, Dataset] All sub groups in this group property has_head_changes Returns True if currently at head node and uncommitted changes are present. property info Returns the information about the dataset. property is_head_node Returns True if the current commit is the head node of the branch and False otherwise. property is_view: bool Returns True if this dataset is a view and False otherwise",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"load_view(id: str, optimize: Optional[bool] = False, tensors: Optional[List[str]] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: Optional[bool] = True) Loads the view and returns the Dataset by id . Equivalent to ds.get_view(id).load(). Parameters id (str) – id of the view to be loaded. 
optimize (bool) – If True, the dataset view is optimized by copying and rechunking the required data before loading. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. tensors (Optional, List[str]) – Tensors to be copied if optimize is True. By default all tensors are copied. num_workers (int) – Number of workers to be used for the optimization process. Only applicable if optimize=True. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to use progressbar for optimization. Only applicable if optimize=True. Defaults to True. Returns The loaded view. Return type Dataset Raises KeyError – if view with given id does not exist. log() Displays the details of all the past commits. property max_len Return the maximum length of the tensor. property max_view Returns a view of the dataset in which shorter tensors are padded with None s to have the same length as the longest tensor. Example Creating a dataset with 5 images and 4 labels. ds.max_view will return a view with labels tensor padded to have 5 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.max_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] (20, 20, 3) [None] Copy to clipboard merge(target_id: str, conflict_resolution: Optional[str] = None, delete_removed_tensors: bool = False, force: bool = False) Merges the target_id into the current dataset. Parameters target_id (str) – The commit_id or branch to merge. conflict_resolution (str, Optional) – The strategy to use to resolve merge conflicts. Conflicts are scenarios where both the current dataset and the target id have made changes to the same sample/s since their common ancestor. Must be one of the following None - this is the default value, will raise an exception if there are conflicts. 'ours' - during conflicts, values from the current dataset will be used. 'theirs' - during conflicts, values from target id will be used. delete_removed_tensors (bool) – If True, deleted tensors will be deleted from the dataset. 
force (bool) – Forces merge",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # populate the creds >>> ds.populate_creds('my_s3_key', {'aws_access_key_id': 'my_access_key', 'aws_secret_access_key': 'my_secret_key'}) >>> # or >>> ds.populate_creds('my_s3_key', from_environment=True) Copy to clipboard pytorch(transform: Optional[Callable] = None, tensors: Optional[Sequence[str]] = None, num_workers: int = 1, batch_size: int = 1, drop_last: bool = False, collate_fn: Optional[Callable] = None, pin_memory: bool = False, shuffle: bool = False, buffer_size: int = 2048, use_local_cache: bool = False, progressbar: bool = False, return_index: bool = True, pad_tensors: bool = False, transform_kwargs: Optional[Dict[str, Any]] = None, decode_method: Optional[Dict[str, str]] = None, cache_size: int = 32000000, *args, **kwargs) Converts the dataset into a pytorch Dataloader . Parameters *args – Additional args to be passed to torch_dataset **kwargs – Additional kwargs to be passed to torch_dataset transform (Callable, Optional) – Transformation function to be applied to each sample. tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). num_workers (int) – The number of workers to use for fetching data in parallel. batch_size (int) – Number of samples per batch to load. Default value is 1. drop_last (bool) – Set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. if False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. Default value is False. Read torch.utils.data.DataLoader docs for more details. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. Read torch.utils.data.DataLoader docs for more details. pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. Default value is False. Read torch.utils.data.DataLoader docs for more details. shuffle (bool) – If True, the data loader will shuffle the data indices. Default value is False. Details about how Deep Lake shuffles data can be found at Shuffling in ds.pytorch() buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. use_local_cache (bool) – If True, the data loader will use a local cache to store data. The default cache location is ~/.activeloop/cache, but it can be changed by setting the LOCAL_CACHE_PREFIX environment variable. This is useful when the dataset can fit on the machine and we don't want to fetch the data multiple times for each iteration. Default value is False progressbar (bool) – If True, tqdm will be wrapped around the returned dataloader. 
Default value is True",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"Example >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('labels', htype='class_label') >>> ds.labels.extend([0, 1, 2, 1, 3]) >>> len(ds) 5 >>> train_ds, val_ds = ds.random_split([0.8, 0.2]) >>> len(train_ds) 4 >>> len(val_ds) 1 >>> train_ds, val_ds = ds.random_split([3, 2]) >>> len(train_ds) 3 >>> len(val_ds) 2 >> train_loader = train_ds.pytorch(batch_size=2, shuffle=True) >> val_loader = val_ds.pytorch(batch_size=2, shuffle=False) Copy to clipboard Parameters lengths (Sequence[Union[int, float]]) – lengths or fractions of splits to be produced . Returns a tuple of datasets of the given lengths. Return type Tuple[Dataset, …] Raises ValueError – If the sum of the lengths is not equal to the length of the dataset. ValueError – If the dataset has variable length tensors. ValueError – If lengths are floats and one or more of them are not between 0 and 1. property read_only Returns True if dataset is in read-only mode and False otherwise. rechunk(tensors: Optional[Union[str, List[str]]] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True) Rewrites the underlying chunks to make their sizes optimal. This is usually needed in cases where a lot of updates have been made to the data. Parameters tensors (str, List[str], Optional) – Name/names of the tensors to rechunk. If None, all tensors in the dataset are rechunked. num_workers (int) – The number of workers to use for rechunking. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for rechunking. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar If True (default). rename(path: Union[str, Path]) Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters path (str, pathlib.Path) – New path to the dataset. Raises RenameError – If path points to a different directory. rename_group(name: str, new_name: str) → None Renames group with name name to new_name Parameters name (str) – Name of group to be renamed. new_name (str) – New name of group. Raises TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorGroupNameError – If name is in dataset attributes. RenameError – If new_name points to a group different from name. rename_tensor(name: str, new_name: str) → Tensor Renames tensor with name name to new_name Parameters name (str) – Name of tensor to be renamed. new_name (str) – New name of tensor. Returns Renamed tensor. Return type Tensor Raises TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If new_name is in dataset attributes. 
RenameError – If new_name points to a group different from name",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"force=True will have these effects in the following cases of merge conflicts: If tensor is renamed on target but is missing from HEAD, renamed tensor will be registered as a new tensor on current branch . If tensor is renamed on both target and current branch, tensor on target will be registered as a new tensor on current branch. If tensor is renamed on target and a new tensor of the new name was created on the current branch, they will be merged. Raises Exception – if dataset is a filtered view. ValueError – if the conflict resolution strategy is not one of the None, 'ours', or 'theirs'. property meta: DatasetMeta Returns the metadata of the dataset. property min_len Return the minimum length of the tensor. property min_view Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Example Creating a dataset with 5 images and 4 labels. ds.min_view will return a view in which tensors are sliced to have 4 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.max_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] Copy to clipboard property no_view_dataset Returns the same dataset without slicing. property num_samples: int Returns the length of the smallest tensor. Ignores any applied indexing and returns the total length. property parent Returns the parent of this group. Returns None if this is the root dataset. property pending_commit_id: str The commit_id of the next commit that will be made to the dataset. If you're not at the head of the current branch, this will be the same as the commit_id. pop(index: Optional[int] = None) Removes a sample from all the tensors of the dataset. For any tensor if the index >= len(tensor), the sample won't be popped from it. Parameters index (int, Optional) – The index of the sample to be removed. If it is None, the index becomes the length of the longest tensor - 1. Raises IndexError – If the index is out of range. populate_creds(creds_key: str, creds: Optional[dict] = None, from_environment: bool = False) Populates the creds key added in add_creds_key with the given creds. These creds are used to fetch the external data. This needs to be done everytime the dataset is reloaded for datasets that contain links to external data",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"return_index (bool) – If True, the returned dataloader will have a key 'index' that contains the index of the sample(s) in the original dataset . Default value is True. pad_tensors (bool) – If True, shorter tensors will be padded to the length of the longest tensor. Default value is False. transform_kwargs (optional, Dict[str, Any]) – Additional kwargs to be passed to transform. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy' Default behaviour. Returns samples as numpy arrays. 
'tobytes' Returns raw bytes of the samples. 'pil' Returns samples as PIL images. Especially useful when the transformation uses torchvision transforms, which require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. cache_size (int) – The size of the cache per tensor in MBs. Defaults to max(maximum chunk size of tensor, 32 MB). Returns A torch.utils.data.DataLoader object. Raises EmptyTensorError – If one or more tensors being passed to pytorch are empty. Note PyTorch does not support uint16, uint32, uint64 dtypes. These are implicitly type cast to int32, int64 and int64 respectively. This spins up its own workers to fetch data. query(query_string: str, runtime: Optional[Dict] = None, return_data: bool = False) Returns a sliced Dataset with given query results. To use this, install deeplake with pip install deeplake[enterprise]. It allows you to run SQL-like queries on the dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters query_string (str) – An SQL string adjusted with new functionalities to run on the given Dataset object. runtime (Optional[Dict]) – Runtime parameters for query execution. Supported keys: {'tensor_db': True or False}. return_data (bool) – Defaults to False. Whether to return raw data along with the view. Raises ValueError – If return_data is True and runtime is not {'tensor_db': True}. Returns A Dataset object. Return type Dataset Examples Query all samples from the dataset with labels other than 5: >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds = ds.query('select * where labels != 5') Query the first 1000 samples where the category is car and the first 1000 samples where the category is motorcycle: >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.query("(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)") random_split(lengths: Sequence[Union[int, float]]) Splits the dataset into non-overlapping Dataset objects of given lengths. If a list of fractions that sum up to 1 is given, the lengths will be computed automatically as floor(frac * len(dataset)) for each fraction provided. The split generated is only performant with the enterprise dataloader, which can be installed with pip install deeplake[enterprise]. After computing the lengths, if there are any remainders, 1 count will be distributed in round-robin fashion to the lengths until there are no remainders left",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, the differences between id_1 and id_2 will be calculated. Note A dictionary of the differences between the commits/branches is returned if as_dict is True. The dictionary will always have 2 keys, 'dataset' and 'tensors'. The values corresponding to these keys are detailed below: If id_1 and id_2 are None, both the keys will have a single list as their value. This list will contain a dictionary describing changes compared to the previous commit. If only id_1 is provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from the current state and id_1 to the most recent common ancestor of the commits, respectively. 
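Returning to the query() method documented above, a short sketch of running a TQL query client-side is shown below; the Managed Tensor Database variant is noted in a comment, and the dataset is one of the public Activeloop datasets used in the examples.

import deeplake

ds = deeplake.load('hub://activeloop/fashion-mnist-train')

# Client-side TQL query returning a dataset view (requires deeplake[enterprise]).
view = ds.query('select * where labels != 5')
print(len(view))

# For datasets hosted in the Managed Tensor Database, the same query can run
# server-side by passing runtime={'tensor_db': True}; adding return_data=True
# also returns the raw data alongside the view.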
If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from id_1 and id_2 to most recent common ancestor the commits respectively. None is returned if as_dict is False. extend(samples: Dict[str, Any], skip_ok: bool = False, append_empty: bool = False, ignore_errors: bool = False, progressbar: bool = False) Appends multiple rows of samples to mutliple tensors at once. This method expects all tensors being updated to be of the same length. Parameters samples (Dict[str, Any]) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in samples if set to True. append_empty (bool) – Append empty samples to tensors not specified in sample if set to True. If True, skip_ok is ignored. ignore_errors (bool) – Skip samples that cause errors while extending, if set to True. progressbar (bool) – Displays a progress bar if set to True. Raises KeyError – If any tensor in the dataset is not a key in samples and skip_ok is False. TensorDoesNotExistError – If tensor in samples does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. SampleExtendError – If the extend failed while appending a sample. Exception – Error while attempting to rollback appends. filter(function: Union[Callable, str], num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, save_result: bool = False, result_path: Optional[str] = None, result_ds_args: Optional[dict] = None) Filters the dataset in accordance of filter function f(x: sample) -> bool Parameters function (Callable, str) – Filter function that takes sample as argument and returns True / False if sample should be included in result. Also supports simplified expression evaluations. See deeplake.core.query.query.DatasetQuery for more details. num_workers (int) – Level of parallelization of filter evaluations",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"reset(force: bool = False) Resets the uncommitted changes present in the branch . Note The uncommitted data is deleted from underlying storage, this is not a reversible operation. property root Returns the root dataset of a group. sample_by(weights: Union[str, list, tuple], replace: Optional[bool] = True, size: Optional[int] = None) Returns a sliced Dataset with given weighted sampler applied. To use this, install deeplake with pip install deeplake[enterprise]. Parameters weights – (Union[str, list, tuple]): If it's string then tql will be run to calculate the weights based on the expression. list and tuple will be treated as the list of the weights per sample. replace – Optional[bool] If true the samples can be repeated in the result view. Defaults to True size – Optional[int] The length of the result view. Defaults to length of the dataset. Returns A deeplake.Dataset object. Return type Dataset Examples Sample the dataset with labels == 5 twice more than labels == 6 >>> from deeplake.experimental import query >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Copy to clipboard Sample the dataset treating labels tensor as weights. 
>>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('labels') Sample the dataset with the given weights: >>> ds = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(len(ds)): ... weights.append(i % 5) ... >>> sampled_ds = ds.sample_by(weights, replace=False) property sample_indices Returns all the indices pointed to by this dataset view. save_view(message: Optional[str] = None, path: Optional[Union[str, Path]] = None, id: Optional[str] = None, optimize: bool = False, tensors: Optional[List[str]] = None, num_workers: int = 0, scheduler: str = 'threaded', verbose: bool = True, ignore_errors: bool = False, **ds_args) → str Saves a dataset view as a virtual dataset (VDS). Examples >>> # Save to specified path >>> vds_path = ds[:10].save_view(path='views/first_10', id='first_10') >>> vds_path views/first_10 >>> # Path unspecified >>> vds_path = ds[:100].save_view(id='first_100', message='first 100 samples') >>> # vds_path = path/to/dataset >>> # Random id >>> vds_path = ds[:100].save_view() >>> # vds_path = path/to/dataset/.queries/92f41922ed0471ec2d27690b7351fc96bea060e6c5ee22b14f7ffa5f291aa068 See Dataset.get_view() to learn how to load views by id. These virtual datasets can also be loaded from their path like normal datasets. Parameters message (Optional, str) – Custom user message. path (Optional, str, pathlib.Path) – The VDS will be saved as a standalone dataset at the specified path. If not specified, the VDS is saved under the .queries subdirectory of the source dataset's storage. id (Optional, str) – Unique id for this view. Random id will be generated if not specified. optimize (bool) – If True, the dataset view will be optimized by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view",activeloop DeepLakeCloudDataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplakeclouddataset," class deeplake.core.dataset.DeepLakeCloudDataset Bases: Dataset Subclass of Dataset. Deep Lake cloud datasets are datasets that are stored in or connected to Activeloop servers; their paths look like hub://username/dataset_name. add_creds_key(creds_key: str, managed: bool = False) Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Parameters creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from the activeloop platform. Note, this is only applicable for datasets that are connected to the activeloop platform. Defaults to False. property client Returns the client of the dataset. connect(*args, **kwargs) Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Parameters creds_key (str) – The managed credentials to be used for accessing the source path. 
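Building on save_view() and get_view() above, here is a hedged sketch of persisting a slice of a dataset as a named view and loading it back later; the dataset path assumes a dataset you have write access to.

import deeplake

ds = deeplake.load('hub://my_org/my_dataset')  # hypothetical dataset with write access

# Persist the first 100 samples as a named view stored under the dataset's .queries storage.
ds[:100].save_view(id='first_100', message='first 100 samples')

# Later, look the view up by id and materialize it as a Dataset.
view = ds.get_view('first_100').load()
print(len(view))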
dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be infered from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Raises InvalidSourcePathError – If the dataset's path is not a valid s3, gcs or azure path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. TokenPermissionError – If the user does not have permission to create a dataset in the specified organization. delete(large_ok=False) Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. Parameters large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False. Raises DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. get_managed_creds_keys() → Set[str] Returns the set of creds keys added to the dataset that are managed by Activeloop platform. These are used to fetch external data in linked tensors. property is_actually_cloud: bool Datasets that are connected to Deep Lake cloud can still technically be stored anywhere. If a dataset is in Deep Lake cloud but stored without hub:// prefix, it should only be used for testing. rename(path) Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters path (str, pathlib.Path) – New path to the dataset. Raises RenameError – If path points to a different directory",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"The optimization process will take some time, depending on the size of the data . You can also choose to optimize the saved view later by calling its ViewEntry.optimize() method. tensors (List, optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. num_workers (int) – Number of workers to be used for optimization process. Applicable only if optimize=True. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. verbose (bool) – If True, logs will be printed. Defaults to True. ignore_errors (bool) – Skip samples that cause errors while saving views. Only applicable if optimize=True. Defaults to False. ds_args (dict) – Additional args for creating VDS when path is specified. (See documentation for deeplake.dataset()) Returns Path to the saved VDS. Return type str Raises ReadOnlyModeError – When attempting to save a view inplace and the user doesn't have write access. DatasetViewSavingError – If HEAD node has uncommitted changes. TypeError – If id is not of type str. Note Specifying path makes the view external. External views cannot be accessed using the parent dataset's Dataset.get_view(), Dataset.load_view(), Dataset.delete_view() methods. They have to be loaded using deeplake.load(). set_token(new_token: str) Method to set a new token size_approx() Estimates the size in bytes of the dataset. Includes only content, so will generally return an under-estimate. summary(force: bool = False) Prints a summary of the dataset. 
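A brief, hedged sketch of managing a connected cloud dataset with the methods above (the hub:// paths are placeholders):

import deeplake

ds = deeplake.load('hub://my_org/old_name')  # hypothetical cloud dataset

# List the creds keys whose secrets are managed by the Activeloop platform.
print(ds.get_managed_creds_keys())

# The new path must stay within the same directory, otherwise RenameError is raised.
ds.rename('hub://my_org/new_name')

# Deletion is irreversible; datasets over 1 GB additionally require large_ok=True.
# ds.delete(large_ok=True)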
Parameters force (bool) – Dataset views with more than 10000 samples might take a long time to summarize. If force=True, the summary will be printed regardless. An error will be raised otherwise. Raises ValueError – If the dataset view might take a long time to summarize and force=False tensorflow(tensors: Optional[Sequence[str]] = None, tobytes: Union[bool, Sequence[str]] = False, fetch_chunks: bool = True) Converts the dataset into a tensorflow compatible format. See https://www.tensorflow.org/api_docs/python/tf/data/Dataset Parameters tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). tobytes (bool) – If True, samples will not be decompressed and their raw bytes will be returned instead of numpy arrays. Can also be a list of tensors, in which case those tensors alone will not be decompressed. fetch_chunks – See fetch_chunks argument in deeplake.core.tensor.Tensor.numpy() Returns tf.data.Dataset object that can be used for tensorflow training. property tensors: Dict[str, Tensor] All tensors belonging to this group, including those within sub groups. Always returns the sliced tensors. property token Get attached token of the dataset update(sample: Dict[str, Any]) Update existing samples in the dataset with new values",activeloop Dataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#dataset,"Examples >>> ds[0].update({'images': deeplake.read('new_image.png'), 'labels': 1}) Copy to clipboard >>> new_images = [deeplake.read(f'new_image_{i}.png') for i in range(3)] >>> ds[:3].update({'images': new_images, 'labels': [1, 2, 3]}) Copy to clipboard Parameters sample (dict) – Dictionary with tensor names as keys and samples as values . Raises ValueError – If partial update of a sample is attempted. Exception – Error while attempting to rollback updates. update_creds_key(creds_key: str, new_creds_key: Optional[str] = None, managed: Optional[bool] = None) Updates the name and/or management status of a creds key. Parameters creds_key (str) – The key whose name and/or management status is to be changed. new_creds_key (str, optional) – The new key to replace the old key. If not provided, the old key will be used. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises ValueError – If the dataset is not connected to activeloop platform. ValueError – If both new_creds_key and managed are None. KeyError – If the creds key is not present in the dataset. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Rename the key and change the management status of the key to True. 
Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.update_creds_key('my_s3_key', 'my_managed_key', True) Copy to clipboard visualize(width: Optional[Union[int, str]] = None, height: Optional[Union[int, str]] = None) Visualizes the dataset in the Jupyter notebook. Parameters width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab.",activeloop ViewEntry,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#viewentry,"class deeplake.core.dataset.ViewEntry Represents a view saved inside a dataset. delete() Deletes the view. property id: str Returns id of the view. load(verbose=True) Loads the view and returns the Dataset. Parameters verbose (bool) – If True, logs will be printed. Defaults to True. Returns Loaded dataset view. Return type Dataset property message: str Returns the message with which the view was saved. optimize(tensors: Optional[List[str]] = None, unlink=True, num_workers=0, scheduler='threaded', progressbar=True) Optimizes the dataset view by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. Example >>> # save view >>> ds[:10].save_view(id='first_10') >>> # optimize view >>> ds.get_view('first_10').optimize() >>> # load optimized view >>> ds.load_view('first_10') Copy to clipboard Parameters tensors (List[str]) – Tensors required in the optimized view. By default all tensors are copied. unlink (bool) – If True, this unlinks linked tensors (if any) by copying data from the links to the view. This does not apply to linked videos. Set deeplake.constants._UNLINK_VIDEOS to True to change this behavior. num_workers (int) – Number of workers to be used for the optimization process. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to display a progressbar. Returns ViewEntry Raises Exception – When query view cannot be optimized.",activeloop DeepLakeCloudDataset,https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplakeclouddataset,"property token Get attached token of the dataset update_creds_key(creds_key: str, new_creds_key: Optional[str] = None, managed: Optional[bool] = None) Updates the name and/or management status of a creds key . Parameters creds_key (str) – The key whose management status is to be changed. new_creds_key (str, optional) – The new key to replace the old key. If not provided, the old key will be used. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises ValueError – If the dataset is not connected to activeloop platform. ValueError – If both new_creds_key and managed are None. KeyError – If the creds key is not present in the dataset. Exception – All other errors such as during population of managed creds. 
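Combining save_view(), get_view(), and ViewEntry.optimize() described above, a typical workflow might look like the following sketch (the path is hypothetical, and write access to the dataset is assumed):

import deeplake

ds = deeplake.load('hub://my_org/my_dataset')  # hypothetical dataset with write access

# Save a slice as a view, then copy/rechunk its data for fast streaming.
ds[:10].save_view(id='first_10')
ds.get_view('first_10').optimize(num_workers=2)

# Load the optimized view for training.
optimized_view = ds.load_view('first_10')
print(len(optimized_view))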
Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Rename the key and change the management status of the key to True. Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.update_creds_key('my_s3_key', 'my_managed_key', True) Copy to clipboard visualize(width: Optional[Union[int, str]] = None, height: Optional[Union[int, str]] = None) Visualizes the dataset in the Jupyter notebook. Parameters width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab.",activeloop deeplake.core.tensor,https://docs.deeplake.ai/en/latest/deeplake.core.tensor.html#deeplake-core-tensor,"extend(samples: Union[ndarray, Sequence[Union[Sample, ndarray, int, float, bool, dict, list, str, integer, floating, bool_]], Tensor], progressbar: bool = False, ignore_errors: bool = False) Extends the end of the tensor by appending multiple elements from a sequence . Accepts a sequence, a single batched numpy array, or a sequence of deeplake.read() outputs, which can be used to load files. See examples down below. Example Numpy input: >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 28, 28, 1))) >>> len(tensor) 100 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.extend([ deeplake.read('path/to/image1'), deeplake.read('path/to/image2'), ]) >>> len(tensor) 2 Copy to clipboard Parameters samples (np.ndarray, Sequence, Sequence[Sample]) – The data to add to the tensor. The length should be equal to the number of samples to add. progressbar (bool) – Specifies whether a progressbar should be displayed while extending. ignore_errors (bool) – Skip samples that cause errors while extending, if set to True. Raises TensorDtypeMismatchError – Dtype for array must be equal to or castable to this tensor's dtype. property hidden: bool Whether this tensor is a hidden tensor. property htype Htype of the tensor. property info: Info Returns the information about the tensor. User can set info of tensor. Returns Information about the tensor. Return type Info Example >>> # update info >>> ds.images.info.update(large=True, gray=False) >>> # get info >>> ds.images.info {'large': True, 'gray': False} Copy to clipboard >>> ds.images.info = {'complete': True} >>> ds.images.info {'complete': True} Copy to clipboard invalidate_libdeeplake_dataset() Invalidates the libdeeplake dataset object. property is_dynamic: bool Will return True if samples in this tensor have shapes that are unequal. property is_link Whether this tensor is a link tensor. property is_sequence Whether this tensor is a sequence tensor. list(fetch_chunks: bool = False) Return list data. Only applicable for tensors with ‘list' base htype. property meta Metadata of the tensor. 
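As a small sketch of the tensor append/extend/info APIs above (the local path is a placeholder), creating an image tensor and adding samples looks like this:

import deeplake
import numpy as np

ds = deeplake.dataset('./tensor_demo_ds', overwrite=True)  # hypothetical local path
ds.create_tensor('images', htype='image', sample_compression='jpeg')

# append() adds one sample; extend() adds a batch (array or sequence).
ds.images.append(np.zeros((28, 28, 3), dtype=np.uint8))
ds.images.extend(np.zeros((10, 28, 28, 3), dtype=np.uint8))

# Files on disk can be attached lazily via deeplake.read():
# ds.images.extend([deeplake.read('path/to/image1.jpg'), deeplake.read('path/to/image2.jpg')])

# Arbitrary metadata lives on the tensor's info object.
ds.images.info.update(source='synthetic', gray=False)
print(len(ds.images), ds.images.info)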
modified_samples(target_id: Optional[str] = None, return_indexes: Optional[bool] = False) Returns a slice of the tensor with only those elements that were modified/added. By default the modifications are calculated relative to the previous commit made, but this can be changed by providing a target id. Parameters target_id (str, optional) – The commit id or branch name to calculate the modifications relative to. Defaults to None. return_indexes (bool, optional) – If True, returns the indexes of the modified elements. Defaults to False. Returns A new tensor with only the modified elements if return_indexes is False. Tuple[Tensor, List[int]]: A new tensor with only the modified elements and the indexes of the modified elements if return_indexes is True. Return type Tensor Raises TensorModifiedError – If a target id is passed which is not an ancestor of the current commit. property ndim: int Number of dimensions of the tensor. property num_samples: int Returns the length of the primary axis of the tensor. Ignores any applied indexing and returns the total length",activeloop deeplake.core.tensor,https://docs.deeplake.ai/en/latest/deeplake.core.tensor.html#deeplake-core-tensor," Tensor class deeplake.core.tensor.Tensor __len__() Returns the length of the primary axis of the tensor. Accounts for indexing into the tensor object. Examples >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 10, 10))) >>> len(tensor) 100 >>> len(tensor[5:10]) 5 Copy to clipboard Returns The current length of this tensor. Return type int __setitem__(item: Union[int, slice], value: Any) Update samples with new values. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.shape (1, 10, 10) >>> tensor[0] = np.zeros((3, 3)) >>> tensor.shape (1, 3, 3) Copy to clipboard _check_compatibility_with_htype(htype) Checks if the tensor is compatible with the given htype. Raises an error if not compatible. property _config Returns a summary of the configuration of the tensor. _linked_sample() Returns the linked sample at the given index. This is only applicable for tensors of link[] htype and can only be used for exactly one sample. >>> linked_sample = ds.abc[0]._linked_sample().path 'https://picsum.photos/200/300' Copy to clipboard append(sample: Union[Sample, ndarray, int, float, bool, dict, list, str, integer, floating, bool_]) Appends a single sample to the end of the tensor. Can be an array, scalar value, or the return value from deeplake.read(), which can be used to load files. See examples down below. Examples Numpy input: >>> len(tensor) 0 >>> tensor.append(np.zeros((28, 28, 1))) >>> len(tensor) 1 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.append(deeplake.read('path/to/file')) >>> len(tensor) 1 Copy to clipboard Parameters sample (InputSample) – The data to append to the tensor. Sample is generated by deeplake.read(). See the above examples. property base_htype Base htype of the tensor. Example >>> ds.create_tensor('video_seq', htype='sequence[video]', sample_compression='mp4') >>> ds.video_seq.htype sequence[video] >>> ds.video_seq.base_htype video Copy to clipboard clear() Deletes all samples from the tensor creds_key() Return path data. Only applicable for linked tensors data(aslist: bool = False, fetch_chunks: bool = False) → Any Returns data in the tensor in a format based on the tensor's base htype. 
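A hedged sketch of modified_samples() in context: commit, add more samples, then ask which samples changed relative to the previous commit (the local path is a placeholder).

import deeplake

ds = deeplake.dataset('./modified_demo_ds', overwrite=True)  # hypothetical local path
ds.create_tensor('labels', htype='class_label')
ds.labels.extend([0, 1, 2])
first_commit = ds.commit('initial labels')

ds.labels.extend([3, 4])

# Only the two samples added since the last commit are returned.
changed, indexes = ds.labels.modified_samples(return_indexes=True)
print(len(changed), indexes)

# A commit id or branch name can also serve as the reference point:
changed_since_first = ds.labels.modified_samples(target_id=first_commit)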
If tensor has text base htype Returns dict with dict['value'] = Tensor.text() If tensor has json base htype Returns dict with dict['value'] = Tensor.dict() If tensor has list base htype Returns dict with dict['value'] = Tensor.list() For video tensors, returns a dict with keys 'frames', 'timestamps' and 'sample_info': Value of dict['frames'] will be same as numpy(). Value of dict['timestamps'] will be same as timestamps corresponding to the frames. Value of dict['sample_info'] will be same as sample_info. For class_label tensors, returns a dict with keys 'value' and 'text'. Value of dict['value'] will be same as numpy(). Value of dict['text'] will be list of class labels as strings. For image or dicom tensors, returns dict with keys 'value' and 'sample_info'. Value of dict['value'] will be same as numpy(). Value of dict['sample_info'] will be same as sample_info. For all else, returns dict with key 'value' with value same as numpy(). dict(fetch_chunks: bool = False) Return json data. Only applicable for tensors with ‘json' base htype. property dtype: Optional[dtype] Dtype of the tensor",activeloop deeplake.core.tensor,https://docs.deeplake.ai/en/latest/deeplake.core.tensor.html#deeplake-core-tensor,"numpy(aslist=False, fetch_chunks=False) → Union[ndarray, List[ndarray]] Computes the contents of the tensor in numpy format . Parameters aslist (bool) – If True, a list of np.ndarrays will be returned. Helpful for dynamic tensors. If False, a single np.ndarray will be returned unless the samples are dynamically shaped, in which case an error is raised. fetch_chunks (bool) – If True, full chunks will be retrieved from the storage, otherwise only required bytes will be retrieved. This will always be True even if specified as False in the following cases: The tensor is ChunkCompressed. The chunk which is being accessed has more than 128 samples. Raises DynamicTensorNumpyError – If reading a dynamically-shaped array slice without aslist=True. ValueError – If the tensor is a link and the credentials are not populated. Returns A numpy array containing the data represented by this tensor. Note For tensors of htype polygon, aslist is always True. path(aslist: bool = True, fetch_chunks: bool = False) Return path data. Only applicable for linked tensors. Parameters aslist (bool) – Returns links in a list if True. fetch_chunks (bool) – If True, full chunks will be retrieved from the storage, otherwise only required bytes will be retrieved. Returns A list or numpy array of links. Return type Union[np.ndarray, List] Raises Exception – If the tensor is not a linked tensor. play() Play video sample. Plays video in Jupyter notebook or plays in web browser. Video is streamed directly from storage. This method will fail for incompatible htypes. Example >>> ds = deeplake.load('./test/my_video_ds') >>> # play second sample >>> ds.videos[2].play() Copy to clipboard Note Video streaming is not yet supported on colab. pop(index: Optional[int] = None) Removes an element at the given index. property sample_indices Returns all the indices pointed to by this tensor in the dataset view. property sample_info: Union[Dict, List[Dict]] Returns info about particular samples in a tensor. Returns dict in case of single sample, otherwise list of dicts. Data in returned dict would depend on the tensor's htype and the sample itself. 
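To make the htype-dependent return format of data() above concrete, here is a minimal sketch for a class_label tensor (the local path and class names are placeholders):

import deeplake

ds = deeplake.dataset('./data_demo_ds', overwrite=True)  # hypothetical local path
ds.create_tensor('labels', htype='class_label', class_names=['cat', 'dog'])
ds.labels.extend([0, 1, 1])

# For class_label tensors, data() returns the numeric values plus the class
# names as strings.
out = ds.labels.data()
print(out['value'])  # same as ds.labels.numpy()
print(out['text'])   # class labels as strings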
Example >>> ds.videos[0].sample_info {'duration': 400400, 'fps': 29.97002997002997, 'timebase': 3.3333333333333335e-05, 'shape': [400, 360, 640, 3], 'format': 'mp4', 'filename': '../deeplake/tests/dummy_data/video/samplemp4.mp4', 'modified': False} >>> ds.images[:2].sample_info [{'exif': {'Software': 'Google'}, 'shape': [900, 900, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/cat.jpeg', 'modified': False}, {'exif': {}, 'shape': [495, 750, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/car.jpg', 'modified': False}] Copy to clipboard property shape: Tuple[Optional[int], ...] Get the shape of this tensor. Length is included. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape (2, 10, None) Copy to clipboard Returns Tuple where each value is either None (if that axis is dynamic) or an int (if that axis is fixed). Return type tuple Note If you don't want None in the output shape or want the lower/upper bound shapes, use shape_interval instead. property shape_interval: ShapeInterval Returns a ShapeInterval object that describes this tensor's shape more accurately",activeloop Tensor,https://docs.deeplake.ai/en/latest/deeplake.core.tensor.html#tensor," class deeplake.core.tensor.Tensor __len__() Returns the length of the primary axis of the tensor. Accounts for indexing into the tensor object. Examples >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 10, 10))) >>> len(tensor) 100 >>> len(tensor[5:10]) 5 Copy to clipboard Returns The current length of this tensor. Return type int __setitem__(item: Union[int, slice], value: Any) Update samples with new values. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.shape (1, 10, 10) >>> tensor[0] = np.zeros((3, 3)) >>> tensor.shape (1, 3, 3) Copy to clipboard _check_compatibility_with_htype(htype) Checks if the tensor is compatible with the given htype. Raises an error if not compatible. property _config Returns a summary of the configuration of the tensor. _linked_sample() Returns the linked sample at the given index. This is only applicable for tensors of link[] htype and can only be used for exactly one sample. >>> linked_sample = ds.abc[0]._linked_sample().path 'https://picsum.photos/200/300' Copy to clipboard append(sample: Union[Sample, ndarray, int, float, bool, dict, list, str, integer, floating, bool_]) Appends a single sample to the end of the tensor. Can be an array, scalar value, or the return value from deeplake.read(), which can be used to load files. See examples down below. Examples Numpy input: >>> len(tensor) 0 >>> tensor.append(np.zeros((28, 28, 1))) >>> len(tensor) 1 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.append(deeplake.read('path/to/file')) >>> len(tensor) 1 Copy to clipboard Parameters sample (InputSample) – The data to append to the tensor. Sample is generated by deeplake.read(). See the above examples. property base_htype Base htype of the tensor. Example >>> ds.create_tensor('video_seq', htype='sequence[video]', sample_compression='mp4') >>> ds.video_seq.htype sequence[video] >>> ds.video_seq.base_htype video Copy to clipboard clear() Deletes all samples from the tensor creds_key() Return path data. Only applicable for linked tensors data(aslist: bool = False, fetch_chunks: bool = False) → Any Returns data in the tensor in a format based on the tensor's base htype. 
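Tying numpy(aslist=...), shape, and shape_interval above together for a dynamically shaped tensor, a short sketch (the local path and tensor name are placeholders):

import deeplake
import numpy as np

ds = deeplake.dataset('./dynamic_demo_ds', overwrite=True)  # hypothetical local path
ds.create_tensor('boxes')
ds.boxes.append(np.zeros((4, 4)))
ds.boxes.append(np.zeros((7, 4)))

# The samples have different shapes, so ask for a list of arrays instead of a
# single stacked array (which would raise DynamicTensorNumpyError).
arrays = ds.boxes.numpy(aslist=True)
print([a.shape for a in arrays])  # [(4, 4), (7, 4)]
print(ds.boxes.shape)             # (2, None, 4) - None marks the dynamic axis
print(ds.boxes.shape_interval)    # lower/upper bounds per axis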
If tensor has text base htype Returns dict with dict['value'] = Tensor.text() If tensor has json base htype Returns dict with dict['value'] = Tensor.dict() If tensor has list base htype Returns dict with dict['value'] = Tensor.list() For video tensors, returns a dict with keys 'frames', 'timestamps' and 'sample_info': Value of dict['frames'] will be same as numpy(). Value of dict['timestamps'] will be same as timestamps corresponding to the frames. Value of dict['sample_info'] will be same as sample_info. For class_label tensors, returns a dict with keys 'value' and 'text'. Value of dict['value'] will be same as numpy(). Value of dict['text'] will be list of class labels as strings. For image or dicom tensors, returns dict with keys 'value' and 'sample_info'. Value of dict['value'] will be same as numpy(). Value of dict['sample_info'] will be same as sample_info. For all else, returns dict with key 'value' with value same as numpy(). dict(fetch_chunks: bool = False) Return json data. Only applicable for tensors with ‘json' base htype. property dtype: Optional[dtype] Dtype of the tensor",activeloop Tensor,https://docs.deeplake.ai/en/latest/deeplake.core.tensor.html#tensor,"numpy(aslist=False, fetch_chunks=False) → Union[ndarray, List[ndarray]] Computes the contents of the tensor in numpy format . Parameters aslist (bool) – If True, a list of np.ndarrays will be returned. Helpful for dynamic tensors. If False, a single np.ndarray will be returned unless the samples are dynamically shaped, in which case an error is raised. fetch_chunks (bool) – If True, full chunks will be retrieved from the storage, otherwise only required bytes will be retrieved. This will always be True even if specified as False in the following cases: The tensor is ChunkCompressed. The chunk which is being accessed has more than 128 samples. Raises DynamicTensorNumpyError – If reading a dynamically-shaped array slice without aslist=True. ValueError – If the tensor is a link and the credentials are not populated. Returns A numpy array containing the data represented by this tensor. Note For tensors of htype polygon, aslist is always True. path(aslist: bool = True, fetch_chunks: bool = False) Return path data. Only applicable for linked tensors. Parameters aslist (bool) – Returns links in a list if True. fetch_chunks (bool) – If True, full chunks will be retrieved from the storage, otherwise only required bytes will be retrieved. Returns A list or numpy array of links. Return type Union[np.ndarray, List] Raises Exception – If the tensor is not a linked tensor. play() Play video sample. Plays video in Jupyter notebook or plays in web browser. Video is streamed directly from storage. This method will fail for incompatible htypes. Example >>> ds = deeplake.load('./test/my_video_ds') >>> # play second sample >>> ds.videos[2].play() Copy to clipboard Note Video streaming is not yet supported on colab. pop(index: Optional[int] = None) Removes an element at the given index. property sample_indices Returns all the indices pointed to by this tensor in the dataset view. property sample_info: Union[Dict, List[Dict]] Returns info about particular samples in a tensor. Returns dict in case of single sample, otherwise list of dicts. Data in returned dict would depend on the tensor's htype and the sample itself. 
Example >>> ds.videos[0].sample_info {'duration': 400400, 'fps': 29.97002997002997, 'timebase': 3.3333333333333335e-05, 'shape': [400, 360, 640, 3], 'format': 'mp4', 'filename': '../deeplake/tests/dummy_data/video/samplemp4.mp4', 'modified': False} >>> ds.images[:2].sample_info [{'exif': {'Software': 'Google'}, 'shape': [900, 900, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/cat.jpeg', 'modified': False}, {'exif': {}, 'shape': [495, 750, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/car.jpg', 'modified': False}] Copy to clipboard property shape: Tuple[Optional[int], ...] Get the shape of this tensor. Length is included. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape (2, 10, None) Copy to clipboard Returns Tuple where each value is either None (if that axis is dynamic) or an int (if that axis is fixed). Return type tuple Note If you don't want None in the output shape or want the lower/upper bound shapes, use shape_interval instead. property shape_interval: ShapeInterval Returns a ShapeInterval object that describes this tensor's shape more accurately",activeloop Tensor,https://docs.deeplake.ai/en/latest/deeplake.core.tensor.html#tensor,"extend(samples: Union[ndarray, Sequence[Union[Sample, ndarray, int, float, bool, dict, list, str, integer, floating, bool_]], Tensor], progressbar: bool = False, ignore_errors: bool = False) Extends the end of the tensor by appending multiple elements from a sequence . Accepts a sequence, a single batched numpy array, or a sequence of deeplake.read() outputs, which can be used to load files. See examples down below. Example Numpy input: >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 28, 28, 1))) >>> len(tensor) 100 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.extend([ deeplake.read('path/to/image1'), deeplake.read('path/to/image2'), ]) >>> len(tensor) 2 Copy to clipboard Parameters samples (np.ndarray, Sequence, Sequence[Sample]) – The data to add to the tensor. The length should be equal to the number of samples to add. progressbar (bool) – Specifies whether a progressbar should be displayed while extending. ignore_errors (bool) – Skip samples that cause errors while extending, if set to True. Raises TensorDtypeMismatchError – Dtype for array must be equal to or castable to this tensor's dtype. property hidden: bool Whether this tensor is a hidden tensor. property htype Htype of the tensor. property info: Info Returns the information about the tensor. User can set info of tensor. Returns Information about the tensor. Return type Info Example >>> # update info >>> ds.images.info.update(large=True, gray=False) >>> # get info >>> ds.images.info {'large': True, 'gray': False} Copy to clipboard >>> ds.images.info = {'complete': True} >>> ds.images.info {'complete': True} Copy to clipboard invalidate_libdeeplake_dataset() Invalidates the libdeeplake dataset object. property is_dynamic: bool Will return True if samples in this tensor have shapes that are unequal. property is_link Whether this tensor is a link tensor. property is_sequence Whether this tensor is a sequence tensor. list(fetch_chunks: bool = False) Return list data. Only applicable for tensors with ‘list' base htype. property meta Metadata of the tensor. modified_samples(target_id: Optional[str] = None, return_indexes: Optional[bool] = False) Returns a slice of the tensor with only those elements that were modified/added. 
By default the modifications are calculated relative to the previous commit made, but this can be changed by providing a target id. Parameters target_id (str, optional) – The commit id or branch name to calculate the modifications relative to. Defaults to None. return_indexes (bool, optional) – If True, returns the indexes of the modified elements. Defaults to False. Returns A new tensor with only the modified elements if return_indexes is False. Tuple[Tensor, List[int]]: A new tensor with only the modified elements and the indexes of the modified elements if return_indexes is True. Return type Tensor Raises TensorModifiedError – If a target id is passed which is not an ancestor of the current commit. property ndim: int Number of dimensions of the tensor. property num_samples: int Returns the length of the primary axis of the tensor. Ignores any applied indexing and returns the total length",activeloop Tensor,https://docs.deeplake.ai/en/latest/deeplake.core.tensor.html#tensor,"Length is included . Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape_interval ShapeInterval(lower=(2, 10, 10), upper=(2, 10, 15)) >>> str(tensor.shape_interval) (2, 10, 10:15) Copy to clipboard Returns Object containing lower and upper properties. Return type ShapeInterval Note If you are expecting a tuple, use shape instead. shapes() Get the shapes of all the samples in the tensor. Returns List of shapes of all the samples in the tensor. Return type np.ndarray summary() Prints a summary of the tensor. text(fetch_chunks: bool = False) Return text data. Only applicable for tensors with ‘text' base htype. property timestamps: ndarray Returns timestamps (in seconds) for video sample as numpy array. Example >>> # Return timestamps for all frames of first video sample >>> ds.videos[0].timestamps.shape (400,) >>> # Return timestamps for 5th to 10th frame of first video sample >>> ds.videos[0, 5:10].timestamps array([0.2002 , 0.23356667, 0.26693332, 0.33366665, 0.4004 ], dtype=float32) Copy to clipboard tobytes() → bytes Returns the bytes of the tensor. Only works for a single sample of tensor. If the tensor is uncompressed, this returns the bytes of the numpy array. If the tensor is sample compressed, this returns the compressed bytes of the sample. If the tensor is chunk compressed, this raises an error. Returns The bytes of the tensor. Return type bytes Raises ValueError – If the tensor has multiple samples. property verify Whether linked data will be verified when samples are added. Applicable only to tensors with htype link[htype].",activeloop deeplake.core.tensor,https://docs.deeplake.ai/en/latest/deeplake.core.tensor.html#deeplake-core-tensor,"Length is included . Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape_interval ShapeInterval(lower=(2, 10, 10), upper=(2, 10, 15)) >>> str(tensor.shape_interval) (2, 10, 10:15) Copy to clipboard Returns Object containing lower and upper properties. Return type ShapeInterval Note If you are expecting a tuple, use shape instead. shapes() Get the shapes of all the samples in the tensor. Returns List of shapes of all the samples in the tensor. Return type np.ndarray summary() Prints a summary of the tensor. text(fetch_chunks: bool = False) Return text data. Only applicable for tensors with ‘text' base htype. property timestamps: ndarray Returns timestamps (in seconds) for video sample as numpy array. 
Example >>> # Return timestamps for all frames of first video sample >>> ds.videos[0].timestamps.shape (400,) >>> # Return timestamps for 5th to 10th frame of first video sample >>> ds.videos[0, 5:10].timestamps array([0.2002 , 0.23356667, 0.26693332, 0.33366665, 0.4004 ], dtype=float32) Copy to clipboard tobytes() → bytes Returns the bytes of the tensor. Only works for a single sample of tensor. If the tensor is uncompressed, this returns the bytes of the numpy array. If the tensor is sample compressed, this returns the compressed bytes of the sample. If the tensor is chunk compressed, this raises an error. Returns The bytes of the tensor. Return type bytes Raises ValueError – If the tensor has multiple samples. property verify Whether linked data will be verified when samples are added. Applicable only to tensors with htype link[htype].",activeloop deeplake.api,https://docs.deeplake.ai/en/latest/deeplake.api.html#deeplake-api,deeplake.api.dataset dataset deeplake.api.info Info deeplake.api.link link() deeplake.api.read read() deeplake.api.tiled tiled() deeplake.api.link_tiled link_tiled(),activeloop deeplake.auto,https://docs.deeplake.ai/en/latest/deeplake.auto.html#deeplake-auto,deeplake.auto.structured deeplake.auto.structured.base deeplake.auto.structured.dataframe deeplake.auto.unstructured deeplake.auto.unstructured.base deeplake.auto.unstructured.image_classification deeplake.auto.unstructured.kaggle,activeloop deeplake.util,https://docs.deeplake.ai/en/latest/deeplake.util.html#deeplake-util,deeplake.util.shape_interval ShapeInterval deeplake.util.remove_cache remove_memory_cache() get_base_storage() get_dataset_with_zero_size_cache() create_read_copy_dataset() deeplake.util.notebook is_notebook() is_jupyter() is_colab() deeplake.util.exceptions ExternalCommandError KaggleError KaggleMissingCredentialsError KaggleDatasetAlreadyDownloadedError InvalidPathException AutoCompressionError InvalidFileExtension SamePathException TensorInvalidSampleShapeError TensorMetaMissingKey TensorDoesNotExistError TensorAlreadyExistsError TensorGroupDoesNotExistError TensorGroupAlreadyExistsError InvalidTensorNameError InvalidTensorGroupNameError DynamicTensorNumpyError InvalidShapeIntervalError InvalidKeyTypeError UnsupportedTensorTypeError InvalidBytesRequestedError ProviderListEmptyError DirectoryAtPathException FileAtPathException ProviderSizeListMismatch ModuleNotInstalledException LoginException UserNotLoggedInException InvalidHubPathException PathNotEmptyException AuthenticationException AuthorizationException InvalidPasswordException CouldNotCreateNewDatasetException ResourceNotFoundException BadRequestException OverLimitException ServerException BadGatewayException GatewayTimeoutException WaitTimeoutException LockedException UnexpectedStatusCodeException EmptyTokenException S3Error S3GetError S3SetError S3DeletionError S3ListError UnsupportedCompressionError SampleCompressionError SampleDecompressionError InvalidImageDimensions TensorUnsupportedSampleType MetaError MetaDoesNotExistError MetaAlreadyExistsError MetaInvalidKey MetaInvalidRequiredMetaKey TensorMetaInvalidHtype TensorMetaInvalidHtypeOverwriteValue TensorMetaMissingRequiredValue TensorMetaInvalidHtypeOverwriteKey TensorDtypeMismatchError InvalidTensorLinkError TensorMetaMutuallyExclusiveKeysError ReadOnlyModeError TransformError FilterError InvalidInputDataError UnsupportedSchedulerError TensorMismatchError InvalidOutputDatasetError InvalidTransformDataset HubComposeEmptyListError HubComposeIncompatibleFunction 
DatasetUnsupportedPytorch CorruptedMetaError ChunkEngineError FullChunkError ChunkIdEncoderError ChunkSizeTooSmallError DatasetHandlerError MemoryDatasetCanNotBePickledError CorruptedSampleError VersionControlError MergeError MergeNotSupportedError MergeMismatchError MergeConflictError CheckoutError CommitError EmptyCommitError TensorModifiedError GCSDefaultCredsNotFoundError InvalidOperationError AgreementError AgreementNotAcceptedError NotLoggedInError RenameError BufferError InfoError OutOfChunkCountError OutOfSampleCountError SampleHtypeMismatchError EmptyTensorError DatasetViewSavingError ManagedCredentialsNotFoundError UnableToReadFromUrlError InvalidTokenException TokenPermissionError,activeloop deeplake.client.log,https://docs.deeplake.ai/en/latest/deeplake.client.log.html#deeplake-client-log,Deep Lake does logging using the 'deeplake' logger. Logging level is logging.INFO by default. See example on how to change this. Copy to clipboard,activeloop deeplake.core.transform,https://docs.deeplake.ai/en/latest/deeplake.core.transform.html#deeplake-core-transform,"Raises InvalidInputDataError – If data_in passed to transform is invalid . It should support __getitem__ and __len__ operations. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as data_in will also raise this. InvalidOutputDatasetError – If all the tensors of ds_out passed to transform don't have the same length. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as ds_out will also raise this. TensorMismatchError – If one or more of the outputs generated during transform contain different tensors than the ones present in ‘ds_out' provided to transform. UnsupportedSchedulerError – If the scheduler passed is not recognized. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. TransformError – All other exceptions raised if there are problems while running the pipeline. ValueError – If num_workers > 0 and checkpoint_interval is not a multiple of num_workers or if checkpoint_interval > 0 and ds_out is None. # noqa: DAR401 Example: @deeplake.compute def my_fn(sample_in: Any, samples_out, my_arg0, my_arg1=0): samples_out.my_tensor.append(my_arg0 * my_arg1) # This transform can be used using the eval method in one of these 2 ways:- # Directly evaluating the method # here arg0 and arg1 correspond to the 3rd and 4th argument in my_fn my_fn(arg0, arg1).eval(data_in, ds_out, scheduler='threaded', num_workers=5) # As a part of a Transform pipeline containing other functions pipeline = deeplake.compose([my_fn(a, b), another_function(x=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) Copy to clipboard Note pad_data_in is only applicable if data_in is a Deep Lake dataset.",activeloop deeplake.core.transform,https://docs.deeplake.ai/en/latest/deeplake.core.transform.html#deeplake-core-transform," class deeplake.core.transform.Pipeline(functions: List[ComputeFunction]) eval(data_in, ds_out: Optional[Dataset] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, skip_ok: bool = False, check_lengths: bool = True, pad_data_in: bool = False, read_only_ok: bool = False, cache_size: int = 16, checkpoint_interval: int = 0, ignore_errors: bool = False, verbose: bool = True, **kwargs) Evaluates the pipeline on data_in to produce an output dataset ds_out. Parameters data_in – Input passed to the transform to generate output dataset. Should support __getitem__ and __len__. Can be a Deep Lake dataset. 
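The deeplake.client.log entry above states that Deep Lake logs through the 'deeplake' logger at logging.INFO by default, but the referenced example did not survive extraction; a minimal sketch using the standard logging module is:

import logging

# Deep Lake logs through the standard 'deeplake' logger (INFO by default).
logging.getLogger('deeplake').setLevel(logging.WARNING)  # e.g. silence INFO messages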
ds_out (Dataset, optional) – The dataset object to which the transform will get written. If this is not provided, data_in will be overwritten if it is a Deep Lake dataset, otherwise error will be raised. It should have all keys being generated in output already present as tensors. It's initial state should be either: Empty, i.e., all tensors have no samples. In this case all samples are added to the dataset. All tensors are populated and have same length. In this case new samples are appended to the dataset. num_workers (int) – The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). skip_ok (bool) – If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. check_lengths (bool) – If True, checks whether ds_out has tensors of same lengths initially. pad_data_in (bool) – If True, pads tensors of data_in to match the length of the largest tensor in data_in. Defaults to False. read_only_ok (bool) – If True and output dataset is same as input dataset, the read-only check is skipped. Defaults to False. cache_size (int) – Cache size to be used by transform per worker. checkpoint_interval (int) – If > 0, the transform will be checkpointed with a commit every checkpoint_interval input samples to avoid restarting full transform due to intermitten failures. If the transform is interrupted, the intermediate data is deleted and the dataset is reset to the last commit. If <= 0, no checkpointing is done. Checkpoint interval should be a multiple of num_workers if num_workers > 0. Defaults to 0. ignore_errors (bool) – If True, input samples that causes transform to fail will be skipped and the errors will be ignored if possible. verbose (bool) – If True, prints additional information about the transform. **kwargs – Additional arguments",activeloop deeplake.core.vectorstore.deep_memory,https://docs.deeplake.ai/en/latest/deeplake.core.vectorstore.deep_memory.html#deeplake-core-vectorstore-deep-memory," VectorStore class deeplake.core.vectorstore.deep_memory.DeepMemory __init__(dataset: Dataset, client: DeepMemoryBackendClient, logger: Logger, embedding_function: Optional[Any] = None, token: Optional[str] = None, creds: Optional[Dict[str, Any]] = None) Based Deep Memory class to train and evaluate models on DeepMemory managed service. Parameters dataset (Dataset) – deeplake dataset object. client (DeepMemoryBackendClient) – Client to interact with the DeepMemory managed service. Defaults to None. logger (logging.Logger) – Logger object. embedding_function (Optional[Any], optional) – Embedding funtion class used to convert queries/documents to embeddings. Defaults to None. token (Optional[str], optional) – API token for the DeepMemory managed service. Defaults to None. creds (Optional[Dict[str, Any]], optional) – Credentials to access the dataset. Defaults to None. Raises ImportError – if indra is not installed cancel(job_id: str) Cancel a training job on DeepMemory managed service. 
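To connect the eval() parameters listed above with the @deeplake.compute example shown earlier, here is a hedged sketch of running a compute function with multiple workers and periodic checkpointing; tensor names and paths are placeholders, and checkpoint_interval is kept a multiple of num_workers as required.

import deeplake
import numpy as np

@deeplake.compute
def square(sample_in, samples_out):
    # One output sample is appended per input sample.
    samples_out.squares.append(np.square(sample_in))

ds_out = deeplake.dataset('./transform_demo_out', overwrite=True)  # hypothetical path
ds_out.create_tensor('squares')

data_in = [np.ones((8, 8)) * i for i in range(100)]  # anything with __getitem__ and __len__

square().eval(
    data_in,
    ds_out,
    scheduler='threaded',
    num_workers=4,
    checkpoint_interval=20,  # commits every 20 input samples; a multiple of num_workers
)
print(len(ds_out.squares))  # 100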
Examples >>> cancelled: bool = vectorstore.deep_memory.cancel(job_id) Copy to clipboard Parameters job_id (str) – job_id of the training job. Returns True if job was cancelled successfully, False otherwise. Return type bool delete(job_id: str) Delete a training job on DeepMemory managed service. Examples >>> deleted: bool = vectorstore.deep_memory.delete(job_id) Copy to clipboard Parameters job_id (str) – job_id of the training job. Returns True if job was deleted successfully, False otherwise. Return type bool evaluate(relevance: List[List[Tuple[str, int]]], queries: List[str], embedding_function: Optional[Callable[[...], List[ndarray]]] = None, embedding: Optional[Union[List[ndarray], List[List[float]]]] = None, top_k: List[int] = [1, 3, 5, 10, 50, 100], qvs_params: Optional[Dict[str, Any]] = None) → Dict[str, Dict[str, float]] Evaluate a model using the DeepMemory managed service. Examples # 1. Evaluate a model using an embedding function: relevance = [[('doc_id_1', 1), ('doc_id_2', 1)], [('doc_id_3', 1)]] queries = ['What is the capital of India?', 'What is the capital of France?'] embedding_function = openai_embedding.embed_documents vectorstore.deep_memory.evaluate( relevance=relevance, queries=queries, embedding_function=embedding_function, ) # 2. Evaluate a model with precomputed embeddings: embeddings = [[-1.2, 12, …], …] vectorstore.deep_memory.evaluate( relevance=relevance, queries=queries, embedding=embeddings, ) # 3. Evaluate a model with precomputed embeddings and log queries: vectorstore.deep_memory.evaluate( relevance=relevance, queries=queries, embedding=embeddings, qvs_params={'log_queries': True}, ) # 4. Evaluate with precomputed embeddings, log queries, and a custom branch: vectorstore.deep_memory.evaluate( relevance=relevance, queries=queries, embedding=embeddings, qvs_params={ 'log_queries': True, 'branch': 'queries', } ) Parameters queries (List[str]) – Queries for model evaluation. relevance (List[List[Tuple[str, int]]]) – Relevant documents and scores for each query. - Outer list: matches the queries. - Inner list: pairs of doc_id and relevance score. - doc_id: Document ID from the corpus dataset, found in the id tensor. - relevance_score: Between 0 (not relevant) and 1 (relevant). embedding (Optional[np.ndarray], optional) – Query embeddings. Defaults to None. embedding_function (Optional[Callable[..., List[np.ndarray]]], optional) – Function to convert queries into embeddings. Defaults to None. top_k (List[int], optional) – Ranks for model evaluation. Defaults to [1, 3, 5, 10, 50, 100]. qvs_params (Optional[Dict], optional) – Parameters to initialize the queries vectorstore. When specified, creates a new vectorstore to track evaluation queries, the Deep Memory response, and the naive vector search results. Defaults to None",activeloop VectorStore,https://docs.deeplake.ai/en/latest/deeplake.core.vectorstore.deep_memory.html#vectorstore,"Returns Recalls for each rank . Return type Dict[str, Dict[str, float]] Raises ImportError – If indra is not installed. ValueError – If no embedding_function is provided either during initialization or evaluation. list_jobs(debug=False) List all training jobs on DeepMemory managed service. status(job_id: str) Get the status of a training job on DeepMemory managed service. 
Examples >>> vectorstore.deep_memory.status(job_id) -------------------------------------------------------------- | 6508464cd80cab681bfcfff3 | -------------------------------------------------------------- | status | pending | -------------------------------------------------------------- | progress | None | -------------------------------------------------------------- | results | not available yet | -------------------------------------------------------------- Parameters job_id (str) – job_id of the training job. train(queries: List[str], relevance: List[List[Tuple[str, int]]], embedding_function: Optional[Callable[[str], ndarray]] = None, token: Optional[str] = None) → str Train a model on the DeepMemory managed service. Examples >>> queries: List[str] = ['What is the capital of India?', 'What is the capital of France?'] >>> relevance: List[List[Tuple[str, int]]] = [[('doc_id_1', 1), ('doc_id_2', 1)], [('doc_id_3', 1)]] >>> # doc_id_1, doc_id_2, doc_id_3 are the ids of the documents in the corpus dataset that are relevant to the queries. They are stored in the `id` tensor of the corpus dataset. >>> job_id: str = vectorstore.deep_memory.train(queries, relevance) Parameters queries (List[str]) – List of queries to train the model on. relevance (List[List[Tuple[str, int]]]) – List of relevant documents for each query with their respective relevance scores. The outer list corresponds to the queries and the inner list corresponds to the doc_id, relevance_score pairs for each query. doc_id is the document id in the corpus dataset. It is stored in the id tensor of the corpus dataset. relevance_score is the relevance score of the document for the query. The range is between 0 and 1, where 0 stands for not relevant and 1 stands for relevant. embedding_function (Optional[Callable[[str], np.ndarray]], optional) – Embedding function used to convert queries to embeddings. Defaults to None. token (str, optional) – API token for the DeepMemory managed service. Defaults to None. Returns job_id of the training job. Return type str Raises ValueError – if embedding_function is not specified either during initialization or during training.",activeloop
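Taken together, train(), status(), list_jobs(), and evaluate() form a simple train, monitor, and evaluate loop. The snippet below is a minimal, illustrative sketch of that flow rather than documented reference code: it assumes an existing Deep Memory-enabled Vector Store named vectorstore, an embedding_function, and a small hand-labeled set of queries whose relevant document ids come from the corpus id tensor. Only the methods documented above are used, and all data values are placeholders.

# Hypothetical labeled data; doc ids must come from the corpus `id` tensor.
queries = ['What is the capital of India?', 'What is the capital of France?']
relevance = [[('doc_id_1', 1)], [('doc_id_2', 1)]]

# Launch a Deep Memory training job.
job_id = vectorstore.deep_memory.train(
    queries=queries,
    relevance=relevance,
    embedding_function=embedding_function,
)

# Monitor the job and list all jobs for this Vector Store.
vectorstore.deep_memory.status(job_id)
vectorstore.deep_memory.list_jobs()

# Once training completes, measure recall at several ranks.
recalls = vectorstore.deep_memory.evaluate(
    queries=queries,
    relevance=relevance,
    embedding_function=embedding_function,
    top_k=[1, 3, 10],
)

evaluate() returns recalls for each requested rank, which can be compared against naive vector search to judge whether the trained model actually improves retrieval for your queries.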
deeplake.random.seed,https://docs.deeplake.ai/en/latest/deeplake.random.html#deeplake-random-seed,"class deeplake.core.seed.DeeplakeRandom get_seed() → Optional[int] Returns the seed that is currently set for Deep Lake to control its random flows. seed(seed: Optional[int] = None) Sets the random seed for the Deep Lake engines. Parameters seed (int, optional) – Integer seed for initializing the computational engines, used for bringing reproducibility to random operations. Set to None to reset the seed. Defaults to None. Raises TypeError – If the provided value type is not supported. Background Specify a seed to train models and run randomized Deep Lake operations reproducibly. Features affected are: Dataloader shuffling Sampling and random operations in Tensor Query Language (TQL) Dataset.random_split The random seed can be specified using deeplake.random.seed: >>> import deeplake >>> deeplake.random.seed(0) Random number generators in other libraries The Deep Lake random seed does not affect random number generators in other libraries such as numpy. However, seeds in other libraries will affect code where Deep Lake uses those libraries, but it will not impact the methods above where Deep Lake uses its internal seed.",activeloop Background,https://docs.deeplake.ai/en/latest/deeplake.random.html#background,Specify a seed to train models and run randomized Deep Lake operations reproducibly.
Features affected are: Dataloader shuffling Sampling and random operations in Tensor Query Language (TQL) Dataset.random_split The random seed can be specified using deeplake.random.seed: >>> import deeplake >>> deeplake.random.seed(0),activeloop Random number generators in other libraries,https://docs.deeplake.ai/en/latest/deeplake.random.html#random-number-generators-in-other-libraries,"The Deep Lake random seed does not affect random number generators in other libraries such as numpy. However, seeds in other libraries will affect code where Deep Lake uses those libraries, but it will not impact the methods above where Deep Lake uses its internal seed.",activeloop Index,https://docs.deeplake.ai/en/latest/genindex.html,_ABCDEFGHIKLMNOPQRSTUVW,activeloop _,https://docs.deeplake.ai/en/latest/genindex.html,"__contains__() (deeplake.core.storage.GCSProvider method) __delitem__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __getitem__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __getstate__() (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) __init__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) (deeplake.core.sample.Sample method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.vectorstore.deep_memory.DeepMemory method), [1] (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) (deeplake.util.shape_interval.ShapeInterval method) __iter__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __len__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) __repr__() (deeplake.core.index.Index method) __setitem__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) __setstate__() (deeplake.core.storage.LRUCache method) __str__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) __weakref__ (deeplake.core.index.Index attribute) (deeplake.core.index.IndexEntry attribute)
(deeplake.core.storage.StorageProvider attribute) _all_keys() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) _check_compatibility_with_htype() (deeplake.core.tensor.Tensor method) _check_is_file() (deeplake.core.storage.LocalProvider method) _check_update_creds() (deeplake.core.storage.S3Provider method) _config (deeplake.core.tensor.Tensor property) _flush_if_not_read_only() (deeplake.core.storage.LRUCache method) _forward() (deeplake.core.storage.LRUCache method) _forward_value() (deeplake.core.storage.LRUCache method) _free_up_space() (deeplake.core.storage.LRUCache method) _insert_in_cache() (deeplake.core.storage.LRUCache method) _is_hub_path (deeplake.core.storage.StorageProvider attribute) _linked_sample() (deeplake.core.tensor.Tensor method) _pop_from_cache() (deeplake.core.storage.LRUCache method) _set_hub_creds_info() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.S3Provider method) _state_keys() (deeplake.core.storage.S3Provider method)",activeloop B,https://docs.deeplake.ai/en/latest/genindex.html,BadGatewayException (class in deeplake.util.exceptions) BadRequestException (class in deeplake.util.exceptions) base_htype (deeplake.core.tensor.Tensor property) batch() (deeplake.enterprise.DeepLakeDataLoader method) branch (deeplake.core.dataset.Dataset property) branches (deeplake.core.dataset.Dataset property) BufferError (class in deeplake.util.exceptions),activeloop D,https://docs.deeplake.ai/en/latest/genindex.html,"data() (deeplake.core.tensor.Tensor method) DataFrame (class in deeplake.auto.structured.dataframe) dataloader() (deeplake.core.dataset.Dataset method) dataset (class in deeplake.api.dataset) Dataset (class in deeplake.core.dataset) dataset() (in module deeplake) DatasetHandlerError (class in deeplake.util.exceptions) DatasetUnsupportedPytorch (class in deeplake.util.exceptions) DatasetViewSavingError (class in deeplake.util.exceptions) deepcopy() (deeplake.api.dataset.dataset static method) (in module deeplake) deeplake module deeplake.api.info module deeplake.integrations.wandb.wandb module DeepLakeCloudDataset (class in deeplake.core.dataset) DeepLakeDataLoader (class in deeplake.enterprise) DeeplakeRandom (class in deeplake.core.seed) DeepMemory (class in deeplake.core.vectorstore.deep_memory), [1] delete() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (deeplake.core.dataset.ViewEntry method) (deeplake.core.vectorstore.deep_memory.DeepMemory method), [1] (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) (in module deeplake) delete_branch() (deeplake.core.dataset.Dataset method) delete_by_path() (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore static method) delete_group() (deeplake.core.dataset.Dataset method) delete_tensor() (deeplake.core.dataset.Dataset method) delete_view() (deeplake.core.dataset.Dataset method) dict() (deeplake.core.tensor.Tensor method) diff() (deeplake.core.dataset.Dataset method) DirectoryAtPathException (class in deeplake.util.exceptions) disable_readonly() (deeplake.core.storage.StorageProvider method) download_kaggle_dataset() (in module deeplake.auto.unstructured.kaggle) downsample() 
(deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) dtype (deeplake.core.tensor.Tensor property) DynamicTensorNumpyError (class in deeplake.util.exceptions)",activeloop C,https://docs.deeplake.ai/en/latest/genindex.html,"cancel() (deeplake.core.vectorstore.deep_memory.DeepMemory method), [1] check_readonly() (deeplake.core.storage.StorageProvider method) checkout() (deeplake.core.dataset.Dataset method) (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) CheckoutError (class in deeplake.util.exceptions) ChunkEngineError (class in deeplake.util.exceptions) ChunkIdEncoderError (class in deeplake.util.exceptions) ChunkSizeTooSmallError (class in deeplake.util.exceptions) clear() (deeplake.api.info.Info method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) clear_cache() (deeplake.core.dataset.Dataset method) (deeplake.core.storage.LRUCache method) clear_deeplake_objects() (deeplake.core.storage.LRUCache method) client (deeplake.core.dataset.Dataset property) (deeplake.core.dataset.DeepLakeCloudDataset property) close() (deeplake.enterprise.DeepLakeDataLoader method) commit() (deeplake.core.dataset.Dataset method) (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) commit_id (deeplake.core.dataset.Dataset property) CommitError (class in deeplake.util.exceptions) commits (deeplake.core.dataset.Dataset property) compose() (in module deeplake) compose_at() (deeplake.core.index.Index method) compressed_bytes() (deeplake.core.sample.Sample method) compute() (in module deeplake) connect() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (in module deeplake) copy() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.storage.StorageProvider method) (in module deeplake) CorruptedMetaError (class in deeplake.util.exceptions) CorruptedSampleError (class in deeplake.util.exceptions) CouldNotCreateNewDatasetException (class in deeplake.util.exceptions) create_group() (deeplake.core.dataset.Dataset method) create_read_copy_dataset() (in module deeplake.util.remove_cache) create_tensor() (deeplake.core.dataset.Dataset method) create_tensor_like() (deeplake.core.dataset.Dataset method) creds_key() (deeplake.core.tensor.Tensor method)",activeloop E,https://docs.deeplake.ai/en/latest/genindex.html,"empty() (deeplake.api.dataset.dataset static method) (in module deeplake) EmptyCommitError (class in deeplake.util.exceptions) EmptyTensorError (class in deeplake.util.exceptions) EmptyTokenException (class in deeplake.util.exceptions) enable_readonly() (deeplake.core.storage.StorageProvider method) eval() (deeplake.core.transform.Pipeline method) evaluate() (deeplake.core.vectorstore.deep_memory.DeepMemory method), [1] exists() (deeplake.api.dataset.dataset static method) (in module deeplake) extend() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) ExternalCommandError (class in deeplake.util.exceptions)",activeloop G,https://docs.deeplake.ai/en/latest/genindex.html,GatewayTimeoutException (class in deeplake.util.exceptions) GCSDefaultCredsNotFoundError (class in deeplake.util.exceptions) GCSProvider (class in 
deeplake.core.storage) GDriveProvider (class in deeplake.core.storage) get() (deeplake.api.info.Info method) get_base_storage() (in module deeplake.util.remove_cache) get_bytes() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) get_commit_details() (deeplake.core.dataset.Dataset method) get_creds_keys() (deeplake.core.dataset.Dataset method) get_dataset_with_zero_size_cache() (in module deeplake.util.remove_cache) get_deeplake_object() (deeplake.core.storage.LRUCache method) get_managed_creds_keys() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) get_seed() (deeplake.core.seed.DeeplakeRandom method) get_view() (deeplake.core.dataset.Dataset method) get_views() (deeplake.core.dataset.Dataset method) groups (deeplake.core.dataset.Dataset property),activeloop F,https://docs.deeplake.ai/en/latest/genindex.html,FileAtPathException (class in deeplake.util.exceptions) fill_dataset() (deeplake.auto.structured.dataframe.DataFrame method) filter() (deeplake.core.dataset.Dataset method) FilterError (class in deeplake.util.exceptions) find_axis() (deeplake.core.index.Index method) fix_vc() (deeplake.core.dataset.Dataset method) flush() (deeplake.core.dataset.Dataset method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.StorageProvider method) FullChunkError (class in deeplake.util.exceptions),activeloop I,https://docs.deeplake.ai/en/latest/genindex.html,id (deeplake.core.dataset.ViewEntry property) ImageClassification (class in deeplake.auto.unstructured.image_classification) Index (class in deeplake.core.index) IndexEntry (class in deeplake.core.index) indices() (deeplake.core.index.IndexEntry method) Info (class in deeplake.api.info) info (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) InfoError (class in deeplake.util.exceptions) ingest_classification() (deeplake.api.dataset.dataset static method) (in module deeplake) ingest_coco() (deeplake.api.dataset.dataset static method) (in module deeplake) ingest_dataframe() (deeplake.api.dataset.dataset static method) (in module deeplake) ingest_huggingface() (in module deeplake) ingest_kaggle() (deeplake.api.dataset.dataset static method) (in module deeplake) ingest_yolo() (in module deeplake) invalidate_libdeeplake_dataset() (deeplake.core.tensor.Tensor method) InvalidBytesRequestedError (class in deeplake.util.exceptions) InvalidFileExtension (class in deeplake.util.exceptions) InvalidHubPathException (class in deeplake.util.exceptions) InvalidImageDimensions (class in deeplake.util.exceptions) InvalidInputDataError (class in deeplake.util.exceptions) InvalidKeyTypeError (class in deeplake.util.exceptions) InvalidOperationError (class in deeplake.util.exceptions) InvalidOutputDatasetError (class in deeplake.util.exceptions) InvalidPasswordException (class in deeplake.util.exceptions) InvalidPathException (class in deeplake.util.exceptions) InvalidShapeIntervalError (class in deeplake.util.exceptions) InvalidTensorGroupNameError (class in deeplake.util.exceptions) InvalidTensorLinkError (class in deeplake.util.exceptions) InvalidTensorNameError (class in deeplake.util.exceptions) InvalidTokenException (class in deeplake.util.exceptions) InvalidTransformDataset (class in deeplake.util.exceptions) is_actually_cloud (deeplake.core.dataset.DeepLakeCloudDataset property) is_colab() (in module 
deeplake.util.notebook) is_dynamic (deeplake.core.tensor.Tensor property) is_head_node (deeplake.core.dataset.Dataset property) is_jupyter() (in module deeplake.util.notebook) is_link (deeplake.core.tensor.Tensor property) is_notebook() (in module deeplake.util.notebook) is_sequence (deeplake.core.tensor.Tensor property) is_trivial() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) is_view (deeplake.core.dataset.Dataset property) items() (deeplake.api.info.Info method),activeloop K,https://docs.deeplake.ai/en/latest/genindex.html,KaggleDatasetAlreadyDownloadedError (class in deeplake.util.exceptions) KaggleError (class in deeplake.util.exceptions) KaggleMissingCredentialsError (class in deeplake.util.exceptions) keys() (deeplake.api.info.Info method),activeloop H,https://docs.deeplake.ai/en/latest/genindex.html,has_head_changes (deeplake.core.dataset.Dataset property) hidden (deeplake.core.tensor.Tensor property) htype (deeplake.core.tensor.Tensor property) HubComposeEmptyListError (class in deeplake.util.exceptions) HubComposeIncompatibleFunction (class in deeplake.util.exceptions),activeloop A,https://docs.deeplake.ai/en/latest/genindex.html,add() (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) add_creds_key() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) AgreementError (class in deeplake.util.exceptions) AgreementNotAcceptedError (class in deeplake.util.exceptions) append() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) apply() (deeplake.core.index.Index method) apply_squeeze() (deeplake.core.index.Index method) array (deeplake.core.sample.Sample property) AuthenticationException (class in deeplake.util.exceptions) AuthorizationException (class in deeplake.util.exceptions) AutoCompressionError (class in deeplake.util.exceptions),activeloop M,https://docs.deeplake.ai/en/latest/genindex.html,ManagedCredentialsNotFoundError (class in deeplake.util.exceptions) max_len (deeplake.core.dataset.Dataset property) max_view (deeplake.core.dataset.Dataset property) maybe_flush() (deeplake.core.storage.StorageProvider method) MemoryDatasetCanNotBePickledError (class in deeplake.util.exceptions) MemoryProvider (class in deeplake.core.storage) merge() (deeplake.core.dataset.Dataset method) merge_slices() (in module deeplake.core.index) MergeConflictError (class in deeplake.util.exceptions) MergeError (class in deeplake.util.exceptions) MergeMismatchError (class in deeplake.util.exceptions) MergeNotSupportedError (class in deeplake.util.exceptions) message (deeplake.core.dataset.ViewEntry property) meta (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) MetaAlreadyExistsError (class in deeplake.util.exceptions) MetaDoesNotExistError (class in deeplake.util.exceptions) MetaError (class in deeplake.util.exceptions) MetaInvalidKey (class in deeplake.util.exceptions) MetaInvalidRequiredMetaKey (class in deeplake.util.exceptions) min_len (deeplake.core.dataset.Dataset property) min_view (deeplake.core.dataset.Dataset property) modified_samples() (deeplake.core.tensor.Tensor method) module deeplake deeplake.api.info deeplake.integrations.wandb.wandb ModuleNotInstalledException (class in deeplake.util.exceptions),activeloop O,https://docs.deeplake.ai/en/latest/genindex.html,offset() (deeplake.enterprise.DeepLakeDataLoader method) optimize() (deeplake.core.dataset.ViewEntry method) OutOfChunkCountError (class in deeplake.util.exceptions) OutOfSampleCountError 
(class in deeplake.util.exceptions) OverLimitException (class in deeplake.util.exceptions),activeloop N,https://docs.deeplake.ai/en/latest/genindex.html,nbytes (deeplake.api.info.Info property) ndim (deeplake.core.tensor.Tensor property) need_to_reload_creds() (deeplake.core.storage.S3Provider method) no_view_dataset (deeplake.core.dataset.Dataset property) NotLoggedInError (class in deeplake.util.exceptions) num_samples (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) numpy() (deeplake.core.tensor.Tensor method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop L,https://docs.deeplake.ai/en/latest/genindex.html,"length() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) like() (deeplake.api.dataset.dataset static method) (in module deeplake) link() (in module deeplake) (in module deeplake.api.link) link_tiled() (in module deeplake) (in module deeplake.api.link_tiled) LinkedSample (class in deeplake.core.linked_sample) LinkedTiledSample (class in deeplake.core.linked_tiled_sample) list() (deeplake.api.dataset.dataset static method) (deeplake.core.tensor.Tensor method) list_jobs() (deeplake.core.vectorstore.deep_memory.DeepMemory method), [1] load() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.ViewEntry method) (in module deeplake) load_view() (deeplake.core.dataset.Dataset method) LocalProvider (class in deeplake.core.storage) LockedException (class in deeplake.util.exceptions) log() (deeplake.core.dataset.Dataset method) LoginException (class in deeplake.util.exceptions) LRUCache (class in deeplake.core.storage)",activeloop R,https://docs.deeplake.ai/en/latest/genindex.html,random_split() (deeplake.core.dataset.Dataset method) read() (in module deeplake) (in module deeplake.api.read) read_only (deeplake.core.dataset.Dataset property) ReadOnlyModeError (class in deeplake.util.exceptions) rechunk() (deeplake.core.dataset.Dataset method) register_deeplake_object() (deeplake.core.storage.LRUCache method) remove_deeplake_object() (deeplake.core.storage.LRUCache method) remove_memory_cache() (in module deeplake.util.remove_cache) rename() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.S3Provider method) (in module deeplake) rename_group() (deeplake.core.dataset.Dataset method) rename_tensor() (deeplake.core.dataset.Dataset method) RenameError (class in deeplake.util.exceptions) replace_with() (deeplake.api.info.Info method) reset() (deeplake.core.dataset.Dataset method) ResourceNotFoundException (class in deeplake.util.exceptions) root (deeplake.core.dataset.Dataset property),activeloop Q,https://docs.deeplake.ai/en/latest/genindex.html,query() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop P,https://docs.deeplake.ai/en/latest/genindex.html,parent (deeplake.core.dataset.Dataset property) PartialSample (class in deeplake.core.partial_sample) path() (deeplake.core.tensor.Tensor method) PathNotEmptyException (class in deeplake.util.exceptions) pending_commit_id (deeplake.core.dataset.Dataset property) pil (deeplake.core.sample.Sample property) Pipeline (class in deeplake.core.transform) play() (deeplake.core.tensor.Tensor method) pop() (deeplake.api.info.Info method) (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) popitem() 
(deeplake.api.info.Info method) populate_creds() (deeplake.core.dataset.Dataset method) ProviderListEmptyError (class in deeplake.util.exceptions) ProviderSizeListMismatch (class in deeplake.util.exceptions) pytorch() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop S,https://docs.deeplake.ai/en/latest/genindex.html,"S3DeletionError (class in deeplake.util.exceptions) S3Error (class in deeplake.util.exceptions) S3GetError (class in deeplake.util.exceptions) S3ListError (class in deeplake.util.exceptions) S3Provider (class in deeplake.core.storage) S3SetError (class in deeplake.util.exceptions) SamePathException (class in deeplake.util.exceptions) Sample (class in deeplake.core.sample) sample_by() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method) sample_indices (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) sample_info (deeplake.core.tensor.Tensor property) SampleCompressionError (class in deeplake.util.exceptions) SampleDecompressionError (class in deeplake.util.exceptions) SampleHtypeMismatchError (class in deeplake.util.exceptions) save_view() (deeplake.core.dataset.Dataset method) search() (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) seed() (deeplake.core.seed.DeeplakeRandom method) ServerException (class in deeplake.util.exceptions) set_bytes() (deeplake.core.storage.StorageProvider method) set_token() (deeplake.core.dataset.Dataset method) setdefault() (deeplake.api.info.Info method) shape (deeplake.core.tensor.Tensor property) shape_interval (deeplake.core.tensor.Tensor property) ShapeInterval (class in deeplake.util.shape_interval) shapes() (deeplake.core.tensor.Tensor method) shuffle() (deeplake.enterprise.DeepLakeDataLoader method) size_approx() (deeplake.core.dataset.Dataset method) slice_at_int() (in module deeplake.core.index) slice_length() (in module deeplake.core.index) status() (deeplake.core.vectorstore.deep_memory.DeepMemory method), [1] StorageProvider (class in deeplake.core.storage) structure() (deeplake.auto.unstructured.image_classification.ImageClassification method) StructuredDataset (class in deeplake.auto.structured.base) subscriptable() (deeplake.core.index.IndexEntry method) summary() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) sync() (deeplake.core.storage.GDriveProvider method)",activeloop U,https://docs.deeplake.ai/en/latest/genindex.html,UnableToReadFromUrlError (class in deeplake.util.exceptions) uncompressed_bytes() (deeplake.core.sample.Sample method) UnexpectedStatusCodeException (class in deeplake.util.exceptions) UnstructuredDataset (class in deeplake.auto.unstructured.base) UnsupportedCompressionError (class in deeplake.util.exceptions) UnsupportedSchedulerError (class in deeplake.util.exceptions) UnsupportedTensorTypeError (class in deeplake.util.exceptions) update() (deeplake.api.info.Info method) (deeplake.core.dataset.Dataset method) update_creds_key() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) update_embedding() (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) UserNotLoggedInException (class in deeplake.util.exceptions),activeloop T,https://docs.deeplake.ai/en/latest/genindex.html,"Tensor (class in deeplake.core.tensor) TensorAlreadyExistsError (class in deeplake.util.exceptions) TensorDoesNotExistError (class in deeplake.util.exceptions) 
TensorDtypeMismatchError (class in deeplake.util.exceptions) tensorflow() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method) TensorGroupAlreadyExistsError (class in deeplake.util.exceptions) TensorGroupDoesNotExistError (class in deeplake.util.exceptions) TensorInvalidSampleShapeError (class in deeplake.util.exceptions) TensorMetaInvalidHtype (class in deeplake.util.exceptions) TensorMetaInvalidHtypeOverwriteKey (class in deeplake.util.exceptions) TensorMetaInvalidHtypeOverwriteValue (class in deeplake.util.exceptions) TensorMetaMissingKey (class in deeplake.util.exceptions) TensorMetaMissingRequiredValue (class in deeplake.util.exceptions) TensorMetaMutuallyExclusiveKeysError (class in deeplake.util.exceptions) TensorMismatchError (class in deeplake.util.exceptions) TensorModifiedError (class in deeplake.util.exceptions) tensors (deeplake.core.dataset.Dataset property) tensors() (deeplake.core.vectorstore.deeplake_vectorstore.VectorStore method) TensorUnsupportedSampleType (class in deeplake.util.exceptions) text() (deeplake.core.tensor.Tensor method) tiled() (in module deeplake) (in module deeplake.api.tiled) timestamps (deeplake.core.tensor.Tensor property) tobytes() (deeplake.core.tensor.Tensor method) token (deeplake.core.dataset.Dataset property) (deeplake.core.dataset.DeepLakeCloudDataset property) TokenPermissionError (class in deeplake.util.exceptions) train() (deeplake.core.vectorstore.deep_memory.DeepMemory method), [1] transform() (deeplake.enterprise.DeepLakeDataLoader method) TransformError (class in deeplake.util.exceptions)",activeloop V,https://docs.deeplake.ai/en/latest/genindex.html,validate() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) values() (deeplake.api.info.Info method) VectorStore (class in deeplake.core.vectorstore.deeplake_vectorstore) verify (deeplake.core.tensor.Tensor property) VersionControlError (class in deeplake.util.exceptions) ViewEntry (class in deeplake.core.dataset) visualize() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method),activeloop W,https://docs.deeplake.ai/en/latest/genindex.html,WaitTimeoutException (class in deeplake.util.exceptions),activeloop Python Module Index,https://docs.deeplake.ai/en/latest/py-modindex.html,d d deeplake deeplake.api.info deeplake.integrations.wandb.wandb,activeloop Deep Lake API Reference¶,https://docs.deeplake.ai/en/v3.1.5/#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Point Cloud Htype Mesh Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read Enterprise Features Dataloader Sampler Tensor Query Language Random Split API Reference deeplake deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto 
deeplake.util deeplake.client.log deeplake.core.transform,activeloop Indices and tables¶,https://docs.deeplake.ai/en/v3.1.5/#indices-and-tables,Index Module Index Search Page,activeloop Deep Lake API Reference¶,https://docs.deeplake.ai/en/v3.1.0/#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Point Cloud Htype Mesh Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read Enterprise Features Dataloader Sampler Tensor Query Language API Reference deeplake deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log deeplake.core.transform,activeloop Indices and tables¶,https://docs.deeplake.ai/en/v3.1.0/#indices-and-tables,Index Module Index Search Page,activeloop Deep Lake API Reference¶,https://docs.deeplake.ai/en/v3.0.16/#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Point Cloud Htype Mesh Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read Experimental API Dataloader Tensor Query Language API Reference deeplake deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log,activeloop Indices and tables¶,https://docs.deeplake.ai/en/v3.0.16/#indices-and-tables,Index Module Index Search Page,activeloop Indices and tables¶,https://docs.deeplake.ai/en/v3.0.15/#indices-and-tables,Index Module Index Search Page,activeloop Deep Lake API Reference¶,https://docs.deeplake.ai/en/v3.0.15/#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype Segmentation Mask 
Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read Experimental API Dataloader Tensor Query Language API Reference deeplake deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log,activeloop Indices and tables,https://docs.deeplake.ai/en/v2.8.5/#indices-and-tables,Index Module Index Search Page,activeloop Hub API Reference,https://docs.deeplake.ai/en/v2.8.5/#hub-api-reference,Hub is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Sequence htype Link htype Sample Compressions PyTorch and Tensorflow Support Utility Functions General Functions Making Hub Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read Experimental API Dataloader & Query API Reference hub hub.core hub.core.dataset hub.core.tensor hub.api hub.auto hub.util hub.client.log hub.experimental hub.experimental.dataloader,activeloop Activeloop Storage and Managed Datasets,https://docs.activeloop.ai/storage-and-credentials/storage-options#activeloop-storage-and-managed-datasets,"In order to access datasets stored in Deep Lake, or datasets in other clouds that are managed by Activeloop, users must register and authenticate using the steps in the link below:User Authentication",activeloop AWS S3,https://docs.activeloop.ai/storage-and-credentials/storage-options#aws-s3,"Authentication with AWS S3 has 4 options:1.Use Deep Lake on a machine in the AWS ecosystem that has access to the relevant S3 bucket via AWS IAM, in which case there is no need to pass credentials in order to access datasets in that bucket.2.Configure AWS through the cli using aws configure. 
This creates a credentials file on your machine that is automatically accessed by Deep Lake during authentication.3.Save the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN (optional) in environmental variables of the same name, which are loaded as default credentials if no other credentials are specified.4.Create a dictionary with the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN (optional), and pass it to Deep Lake using:Note: the dictionary keys must be lowercase!# Vector Store APIvector_store = VectorStore('s3://<bucket_name>/<dataset_name>', creds = { 'aws_access_key_id': <your_access_key_id>, 'aws_secret_access_key': <your_aws_secret_access_key>, 'aws_session_token': <your_aws_session_token>, # Optional } )# Low Level APIds = deeplake.load('s3://<bucket_name>/<dataset_name>', creds = { 'aws_access_key_id': <your_access_key_id>, 'aws_secret_access_key': <your_aws_secret_access_key>, 'aws_session_token': <your_aws_session_token>, # Optional } ) The same creds dictionary can also be used for connecting to other object storages supporting an S3-like API, such as MinIO, StorageGrid, and others (see Custom Storage with S3 API below).",activeloop Custom Storage with S3 API,https://docs.activeloop.ai/storage-and-credentials/storage-options#custom-storage-with-s3-api,"In order to connect to other object storages supporting S3-like API such as MinIO, StorageGrid and others, simply add endpoint_url to the creds dictionary.# Vector Store APIvector_store = VectorStore('s3://...', creds = { 'aws_access_key_id': <your_access_key_id>, 'aws_secret_access_key': <your_aws_secret_access_key>, 'aws_session_token': <your_aws_session_token>, # Optional 'endpoint_url': 'http://localhost:8888' } )# Low Level APIds = deeplake.load('s3://...', creds = { 'aws_access_key_id': <your_access_key_id>, 'aws_secret_access_key': <your_aws_secret_access_key>, 'aws_session_token': <your_aws_session_token>, # Optional 'endpoint_url': 'http://localhost:8888' } )",activeloop Microsoft Azure,https://docs.activeloop.ai/storage-and-credentials/storage-options#microsoft-azure,"Authentication with Microsoft Azure has 4 options:1.Log in from your machine's CLI using az login.2.Save the AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_KEY, or other credentials in environmental variables of the same name, which are loaded as default credentials if no other credentials are specified.3.Create a dictionary with the ACCOUNT_KEY or SAS_TOKEN and pass it to Deep Lake using:Note: the dictionary keys must be lowercase!# Vector Store APIvector_store = VectorStore('azure://<account_name>/<container_name>/<dataset_name>', creds = { 'account_key': <your_account_key>, 'sas_token': <your_sas_token>, } )# Low Level APIds = deeplake.load('azure://<account_name>/<container_name>/<dataset_name>', creds = { 'account_key': <your_account_key>, #OR 'sas_token': <your_sas_token>, } )",activeloop Google Cloud Storage,https://docs.activeloop.ai/storage-and-credentials/storage-options#google-cloud-storage,"Authentication with Google Cloud Storage has 2 options:1.Create a service account, download the JSON file containing the keys, and then pass that file to the creds parameter in deeplake.load('gcs://.....', creds = 'path_to_keys.json') . It is also possible to manually pass the information from the JSON file into the creds parameter using: # Vector Store APIvector_store = VectorStore('gcs://.....', creds = {<information from the JSON file>} )# Low Level APIds = deeplake.load('gcs://.....', creds = {<information from the JSON file>} )2.Authenticate through the browser using the steps below.
This requires that the project credentials are stored on your machine, which happens after gcloud is initialized and logged in through the CLI. Afterwards, creds can be switched to creds = 'cache'.# Vector Store APIvector_store = VectorStore('gcs://.....', creds = 'browser' # Switch to 'cache' after doing this once )# Low Level APIds = deeplake.load('gcs://.....', creds = 'browser' # Switch to 'cache' after doing this once )",activeloop "How to Customize Deep Lake Vector Stores for Images, Multi-Embedding Applications, and More.",https://docs.activeloop.ai/getting-started/vector-store/step-4-customizing-vector-stores#how-to-customize-deep-lake-vector-stores-for-images-multi-embedding-applications-and-more.,"Under-the-hood, Deep Lake vector stores use the Deep Lake tabular format, where Tensors are conceptually equivalent to columns. A unique feature in Deep Lake is that Tensors can be customized to a variety of use-cases beyond simple embeddings of text.",activeloop Creating vector stores with non-text data,https://docs.activeloop.ai/getting-started/vector-store/step-4-customizing-vector-stores#creating-vector-stores-with-non-text-data,"To create a Vector Store for images, we should write a custom embedding function that embeds images from a file using a neural network, since we cannot use OpenAI for embedding images yet.import osimport torchfrom torchvision import transforms, modelsfrom torchvision.models.feature_extraction import create_feature_extractorfrom PIL import Imagemodel = models.resnet18(pretrained=True)return_nodes = { 'avgpool': 'embedding'}model = create_feature_extractor(model, return_nodes=return_nodes)model.eval()model.to('cpu')tform = transforms.Compose([ transforms.Resize((224,224)), transforms.ToTensor(), transforms.Lambda(lambda x: torch.cat([x, x, x], dim=0) if x.shape[0] == 1 else x), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),])def embedding_function(images, model = model, transform = tform, batch_size = 4): '''Creates a list of embeddings based on a list of image filenames. Images are processed in batches.''' if isinstance(images, str): images = [images] #Process the embeddings in batches, but return everything as a single list embeddings = [] for i in range(0, len(images), batch_size): batch = torch.stack([transform(Image.open(item)) for item in images[i:i+batch_size]]) batch = batch.to('cpu') with torch.no_grad(): embeddings+= model(batch)['embedding'][:,:,0,0].cpu().numpy().tolist() return embeddings Let's download and unzip 6 example images with common objects and create a list containing their filenames.data_folder = '/Users/istranic/ActiveloopCode/Datasets/common_objects'image_fns = [os.path.join(data_folder, file) for file in os.listdir(data_folder) if os.path.splitext(file)[-1]=='.jpg']Earlier in this tutorial, we did not specify any data-structure-related information when initializing the Vector Store, which by default creates a vector store with tensors for text, metadata, id (auto-populated), and embedding. Here, we create a Vector Store for image similarity search, which should contain tensors for the image, its embedding, and the filename for the image.
This can be achieved by specifying custom tensor_params. vector_store_path = '/vector_store_getting_started_images'vector_store = VectorStore( path = vector_store_path, tensor_params = [{'name': 'image', 'htype': 'image', 'sample_compression': 'jpg'}, {'name': 'embedding', 'htype': 'embedding'}, {'name': 'filename', 'htype': 'text'}],)We add data to the Vector Store just as if we were adding text data earlier in the Getting Started Guide.vector_store.add(image = image_fns, filename = image_fns, embedding_function = embedding_function, embedding_data = image_fns)",activeloop Performing image similarity search,https://docs.activeloop.ai/getting-started/vector-store/step-4-customizing-vector-stores#performing-image-similarity-search,"Let's find the image in the Vector Store that is most similar to the reference image below.image_path = '/reference_image.jpg'result = vector_store.search(embedding_data = [image_path], embedding_function = embedding_function)We can display the most similar image in the result, which shows a yellow Lamborghini that is fairly similar to the black Porsche in the reference image above. Image.fromarray(result['image'][0])",activeloop Creating Vector Stores with multiple embeddings,https://docs.activeloop.ai/getting-started/vector-store/step-4-customizing-vector-stores#creating-vector-stores-with-multiple-embeddings,COMING SOON,activeloop APIs for Search,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options#apis-for-search,"Vector search can occur via a variety of APIs in Deep Lake. They are explained in the links below: Deep Lake Vector Store API, REST API, LangChain API",activeloop Overview of Vector Search Options in Deep Lake,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options#overview-of-vector-search-options-in-deep-lake,"Deep Lake offers a variety of vector search options depending on the Storage Location of the Vector Store and the infrastructure that should run the computations: In memory or local (Client-side, Deep Lake OSS Python Code); User cloud, must be connected to Deep Lake (Client-side, Deep Lake C++ Compute Engine); Deep Lake Storage (Client-side, Deep Lake C++ Compute Engine); Deep Lake Managed Tensor Database (Managed Database, Deep Lake C++ Compute Engine).",activeloop Overview of Options for Search Computation Execution,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options#overview-of-options-for-search-computation-execution,The optimal option for search execution is automatically selected based on the Vector Store's storage location. The different computation options are explained below.,activeloop Python (Client-Side),https://docs.activeloop.ai/tutorials/vector-store/vector-search-options#python-client-side,Deep Lake OSS offers query execution logic that runs on the client (your machine) using OSS code in Python. This compute logic is accessible in all Deep Lake Python APIs and is available for Vector Stores stored in any location. See individual APIs below for details.,activeloop Managed Tensor Database (Server-Side Running Compute Engine),https://docs.activeloop.ai/tutorials/vector-store/vector-search-options#managed-tensor-database-server-side-running-compute-engine,Deep Lake offers a Managed Tensor Database that executes queries on Deep Lake infrastructure while running Compute Engine under-the-hood. This compute logic is accessible in all Deep Lake Python APIs and is only available for Vector Stores stored in the Deep Lake Managed Tensor Database.
See individual APIs below for details.",activeloop Compute Engine (Client-Side),https://docs.activeloop.ai/tutorials/vector-store/vector-search-options#compute-engine-client-side,"Deep Lake Compute Engine offers query execution logic that runs on the client (your machine) using C++ code that is called via the Python API. This compute logic is accessible in all Deep Lake Python APIs and is only available for Vector Stores stored in Deep Lake storage or in user clouds connected to Deep Lake. See individual APIs below for details. To run queries using Compute Engine, make sure to !pip install 'deeplake[enterprise]'.",activeloop How to Register and Authenticate in Deep Lake,https://docs.activeloop.ai/storage-and-credentials/user-authentication#how-to-register-and-authenticate-in-deep-lake,"In order to use Deep Lake features that require authentication (Activeloop storage, connecting your cloud dataset to the Deep Lake UI, etc.) you should register and log in to Deep Lake.",activeloop Registration,https://docs.activeloop.ai/storage-and-credentials/user-authentication#registration,"You can register in the Deep Lake App, or in the CLI using:",activeloop Authentication in Programmatic Interfaces,https://docs.activeloop.ai/storage-and-credentials/user-authentication#authentication-in-programmatic-interfaces,"After registering, you can create an API token in the Deep Lake UI (top-right corner, user settings) and authenticate in programmatic interfaces using 3 options:",activeloop Environmental Variable,https://docs.activeloop.ai/storage-and-credentials/user-authentication#environmental-variable,"Set the environmental variable ACTIVELOOP_TOKEN to your API token. In Python, this can be done using os.environ, e.g. os.environ['ACTIVELOOP_TOKEN'] = <your_token>.",activeloop CLI Login,https://docs.activeloop.ai/storage-and-credentials/user-authentication#cli-login,"Log in from the CLI using one of two options: activeloop login -u <username> -p <password> or activeloop login -t <your_token>. Credentials created using the CLI login (activeloop login) expire after 1000 hrs. Credentials created using API tokens in the Deep Lake App expire after the time specified for the individual token. Therefore, long-term workflows should be run using API tokens in order to avoid expiration of credentials mid-workflow.",activeloop Pass the Token to Individual Methods,https://docs.activeloop.ai/storage-and-credentials/user-authentication#pass-the-token-to-individual-methods,You can pass your API token to individual methods that require authentication such as:,activeloop How to Write Data Concurrently to Deep Lake Datasets,https://docs.activeloop.ai/tutorials/concurrent-writes#how-to-write-data-concurrently-to-deep-lake-datasets,"Deep Lake offers 3 solutions for concurrently writing data, depending on the required scale of the application. Concurrency is not native to the Deep Lake format, so these solutions use locks and queues to schedule and linearize the write operations to Deep Lake.",activeloop Concurrency Using External Locks,https://docs.activeloop.ai/tutorials/concurrent-writes#concurrency-using-external-locks,Concurrent writes can be supported using an in-memory database that serves as the locking mechanism for Deep Lake datasets. Tools such as ZooKeeper or Redis are highly performant and reliable and can be deployed using a few lines of code. External locks are recommended for small-to-medium workloads. See: Concurrency Using Zookeeper Locks.,activeloop Managed Concurrency,https://docs.activeloop.ai/tutorials/concurrent-writes#managed-concurrency,COMING SOON.
Default Behavior,https://docs.activeloop.ai/tutorials/concurrent-writes#default-behavior,"By default, Deep Lake datasets are loaded in write mode and a lock file is created. This can be avoided by specifying read_only = True to APIs that load datasets. An error will occur if the Deep Lake dataset is locked and the user tries to open it in write mode. To specify a waiting time for the lock to be released, you can specify lock_timeout = <timeout_in_s> to APIs that load datasets.",activeloop Concurrency Using Deep Lake Locks,https://docs.activeloop.ai/tutorials/concurrent-writes#concurrency-using-deep-lake-locks,"Deep Lake datasets internally support file-based locks. File-based locks are generally slower and less reliable than the other listed solutions, and they should only be used for prototyping.",activeloop Manipulating Locks,https://docs.activeloop.ai/tutorials/concurrent-writes#manipulating-locks,"Locks can be manually set or released using:from deeplake.core.lock import lock_dataset, unlock_datasetunlock_dataset(<dataset_path>)lock_dataset(<dataset_path>)",activeloop
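A minimal sketch of the loading options described above, using a placeholder dataset path:

import deeplake

# Open without write access, so no lock file is created.
ds = deeplake.load('hub://<org_id>/<dataset_name>', read_only = True)

# Open with write access, waiting up to 120 seconds for an existing lock to be released.
ds = deeplake.load('hub://<org_id>/<dataset_name>', lock_timeout = 120)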
How to Create Datasets in Deep Lake Format,https://docs.activeloop.ai/getting-started/deep-learning/creating-datasets-manually#how-to-create-datasets-in-deep-lake-format,"This guide creates Deep Lake datasets locally. You may create datasets in the Activeloop cloud by registering, creating an API token, and replacing the local paths below with the path to your Deep Lake organization hub://organization_name/dataset_name. You don't have to worry about uploading datasets after you've created them. They are automatically synchronized with wherever they are being stored.",activeloop Manual Creation,https://docs.activeloop.ai/getting-started/deep-learning/creating-datasets-manually#manual-creation,"Let's follow along with the example below to create our first dataset manually. First, download and unzip the small classification dataset below called animals (animals dataset). The dataset has the following folder structure:_animals|_cats |_image_1.jpg |_image_2.jpg|_dogs |_image_3.jpg |_image_4.jpgNow that you have the data, you can create a Deep Lake Dataset and initialize its tensors. Running the following code will create a Deep Lake dataset inside the ./animals_deeplake folder.import deeplakefrom PIL import Imageimport numpy as npimport osds = deeplake.empty('./animals_deeplake') # Create the dataset locallyNext, let's inspect the folder structure for the source dataset './animals' to find the class names and the files that need to be uploaded to the Deep Lake dataset.# Find the class_names and list of files that need to be uploadeddataset_folder = './animals'# Find the subfolders, but filter additional files like DS_Store that are added on Mac machines.class_names = [item for item in os.listdir(dataset_folder) if os.path.isdir(os.path.join(dataset_folder, item))]files_list = []for dirpath, dirnames, filenames in os.walk(dataset_folder): for filename in filenames: files_list.append(os.path.join(dirpath, filename))Next, let's create the dataset tensors and upload metadata. Check out our page on Storage Synchronization for details about the with syntax below.with ds: # Create the tensors with names of your choice. ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') ds.create_tensor('labels', htype = 'class_label', class_names = class_names) # Add arbitrary metadata - Optional ds.info.update(description = 'My first Deep Lake dataset') ds.images.info.update(camera_type = 'SLR')Specifying htype and dtype is not required, but it is highly recommended in order to optimize performance, especially for large datasets. Use dtype to specify the numeric type of tensor data, and use htype to specify the underlying data structure.Finally, let's populate the data in the tensors. The data is automatically uploaded to the dataset, regardless of whether it's local or in the cloud. with ds: # Iterate through the files and append to Deep Lake dataset for file in files_list: label_text = os.path.basename(os.path.dirname(file)) label_num = class_names.index(label_text) # Append data to the tensors ds.append({'images': deeplake.read(file), 'labels': np.uint32(label_num)})Appending the object deeplake.read(path) is equivalent to appending np.array(PIL.Image.open(path)). However, the deeplake.read() method is significantly faster because it does not decompress and recompress the image if the compression matches the sample_compression for that tensor. Further details are available in Understanding Compression.In order to maintain proper indexing across tensors, ds.append({...}) requires that you append to all tensors in the dataset. If you wish to skip tensors during appending, please use ds.append({...}, skip_ok = True) or append to a single tensor using ds.tensor_name.append(...).Check out the first image from this dataset. More details about Accessing Data are available in Step 4.Image.fromarray(ds.images[0].numpy())",activeloop Automatic Creation,https://docs.activeloop.ai/getting-started/deep-learning/creating-datasets-manually#automatic-creation,"If your source data conforms to one of the formats below (YOLO, COCO, or Classifications), you can ingest it directly with 1 line of code. For example, the above animals dataset can be converted to Deep Lake format using:src = './animals'dest = './animals_deeplake_auto'ds = deeplake.ingest_classification(src, dest)",activeloop Dataset inspection,https://docs.activeloop.ai/getting-started/deep-learning/creating-datasets-manually#dataset-inspection,You can print a summary of the dataset structure using:ds.summary()Congrats! You just created your first dataset! 🎉,activeloop Creating Tensor Hierarchies,https://docs.activeloop.ai/getting-started/deep-learning/creating-datasets-manually#creating-tensor-hierarchies,"Often it's important to create tensors hierarchically, because information between tensors may be inherently coupled, such as bounding boxes and their corresponding labels. 
Hierarchy can be created using tensor groups:ds = deeplake.empty('./groups_test') # Creates the dataset# Create tensor hierarchiesds.create_group('my_group')ds.my_group.create_tensor('my_tensor')# Alternatively, a group can be created using create_tensor with '/'ds.create_tensor('my_group_2/my_tensor') # Automatically creates the group 'my_group_2'Tensors in groups are accessed via:ds.my_group.my_tensor # OR ds['my_group/my_tensor']For more detailed information regarding accessing datasets and their tensors, check out Step 4.",activeloop This Deep Learning Getting Started guide is available as a Colab Notebook,https://docs.activeloop.ai/getting-started/deep-learning,Step 1: Hello World; Step 2: Creating Deep Lake Datasets; Step 3: Understanding Compression; Step 4: Accessing and Updating Data; Step 5: Visualizing Datasets; Step 6: Using Activeloop Storage; Step 7: Connecting Deep Lake Datasets to ML Frameworks; Step 8: Parallel Computing; Step 9: Dataset Version Control; Step 10: Dataset Filtering,activeloop Connecting Data From Your Cloud Using Deep Lake Managed Credentials,https://docs.activeloop.ai/storage-and-credentials/managed-credentials#connecting-data-from-your-cloud-using-deep-lake-managed-credentials,Connecting data from your own cloud and managing credentials in Deep Lake unlocks several important capabilities: access to performant features such as the Deep Lake Compute Engine; access to the Deep Lake App for datasets stored in your own cloud; simpler access to Deep Lake datasets stored in your own cloud using the Python API; and no need to continuously specify cloud access keys in Python,activeloop Managed Credentials,https://docs.activeloop.ai/storage-and-credentials/managed-credentials#managed-credentials,"In order for Deep Lake to access datasets or linked tensors stored in the user's cloud, Deep Lake must authenticate with the respective cloud resources. Access can be provided using access keys or using role-based access (provisioning steps here). The video below summarizes the UI for managing your cloud credentials. Authentication Using Managed Credentials",activeloop Default Storage,https://docs.activeloop.ai/storage-and-credentials/managed-credentials#default-storage,"Default storage enables you to map the Deep Lake path hub://org_id/dataset_name to a cloud path of your choice. Subsequently, all datasets created using the Deep Lake path will be stored at the user-specified path, and they can be accessed using API tokens and managed credentials from Deep Lake. By default, the default storage is set as Activeloop Storage, and you may change it using the UI below. Note that in order to visualize data in the Deep Lake browser application, it is necessary to enable CORS in the bucket containing any source data.",activeloop Connecting Deep Lake Datasets in your Cloud to the Deep Lake App,https://docs.activeloop.ai/storage-and-credentials/managed-credentials#connecting-deep-lake-dataset-in-your-cloud-to-the-deep-lake-to-app,"If you do not set the Default Storage as your own cloud, datasets in your own cloud can be connected to the Deep Lake App using the Python API below. 
Once a dataset is connected to Deep Lake, it is assigned a Deep Lake path hub://org_id/dataset_name, and it can be accessed using API tokens and managed credentials from Deep Lake, without continuously having to specify cloud credentials.",activeloop Connecting Datasets in the Python API,https://docs.activeloop.ai/storage-and-credentials/managed-credentials#connecting-datasets-in-the-python-api,"# Step 1: Create the dataset directly in the cloud using your own cloud credsds = deeplake.empty('s3://my_bucket/dataset_name', creds = {...})# Step 2: Connect the dataset to Deep Lake and specify the managed credentials# (creds_key) for accessing the data (See Managed Credentials above)ds.connect(org_id = 'org_id', creds_key = 'my_creds_key', token = 'my_token')# OR ds.connect(dest_path = 'hub://org_id/dataset_name', creds_key = 'my_creds_key', token = 'my_token')Specifying org_id creates the dataset in the specified org using the dataset_name from the cloud path. Specifying the dest_path creates the dataset at the org_id and dataset_name from the specified path.",activeloop Using Managed Credentials with Linked Tensors,https://docs.activeloop.ai/storage-and-credentials/managed-credentials#using-manage-credentials-with-linked-tensors,"Managed credentials can be used for accessing data stored in linked tensors. Simply add the managed credentials to the dataset's creds_keys and assign them to each sample.ds.create_tensor('images', htype = 'link[image]', sample_compression = 'jpeg')ds.add_creds_key('my_creds_key', managed=True)ds.images.append(deeplake.link(link_to_sample, creds_key = 'my_creds_key'))",activeloop Enabling CORS in AWS for Data Visualization,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/amazon-web-services/enabling-cors#enabling-cors-in-aws-for-data-visualization,"In order to visualize Deep Lake datasets stored in your own S3 buckets in the Deep Lake app, please enable Cross-Origin Resource Sharing (CORS) in the buckets containing the Deep Lake dataset and any source data in linked tensors, by inserting the snippet below in the CORS section of the Permissions tab for the bucket:[ { 'AllowedHeaders': [ '*' ], 'AllowedMethods': [ 'GET', 'HEAD' ], 'AllowedOrigins': [ '*.activeloop.ai', '*.activeloop.dev' ], 'ExposeHeaders': [] }]",activeloop Visualizing Your Datasets Locally,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/amazon-web-services/enabling-cors#visualizing-your-datasets-locally,"In order to visualize Deep Lake datasets stored in your own cloud using ds.visualize() or using our embedded visualizer, the AllowedOrigins values in CORS should be set to *.",activeloop How to optimize Deep Lake for training models at scale,https://docs.activeloop.ai/technical-details/best-practices/training-models-at-scale#how-to-optimize-deep-lake-for-training-models-at-scale,"There are several Deep Lake-related tuning parameters that affect the speed of the Deep Lake OSS and Performant dataloaders. The plot below shows the performance of the Deep Lake dataloaders under different scenarios, and it is discussed in detail below. ImageNet data streaming speeds from S3 to a p3.8xlarge EC2 instance. The average image size is 0.114 MB. 
Details on the simple and complex transform are available in the appendix at the end of this page.",activeloop Setting num_workers,https://docs.activeloop.ai/technical-details/best-practices/training-models-at-scale#setting-num_workers,"Both the OSS and Performant dataloaders in Deep Lake have a num_workers parameter that parallelizes the data fetching, decompression, and transformation. Increasing num_workers will not improve performance in GPU-bottlenecked scenarios. Therefore, we recommend starting with 2-4 workers, and increasing num_workers if GPU utilization is low. Increasing num_workers beyond the number of CPUs on a machine does not improve performance. It is common for GPU machines to have 8x CPUs per GPU. Increasing num_workers linearly improves streaming speed, with diminishing returns beyond 8+ workers. Increasing num_workers beyond 16 is generally unnecessary, unless you are running complex transformations.",activeloop Choosing the optimal decode_method for images,https://docs.activeloop.ai/technical-details/best-practices/training-models-at-scale#choosing-the-optimal-decode_method-for-images,"Faster dataloading is achieved by minimizing the number of operations that take place before data is delivered to the GPU. It is important to set the decode_method parameter in the OSS and Performant dataloaders based on the following guidelines: When transforming images using tools that require numpy arrays as inputs, such as Albumentations, decode_method should be set to numpy, which is the default (no parameter changes are needed). When transforming images using tools that require PIL images as inputs, such as torchvision transforms, decode_method should be set to {'image_tensor_name': 'pil'}. torchvision.transforms.ToPILImage() should be removed from the top of the transforms stack. Leaving the decode_method as numpy may decrease dataloading speed by up to 2X, because the image is decoded to a numpy array and then re-encoded as a PIL image, instead of being directly decoded to a PIL image.",activeloop APPENDIX TO THE PLOT ABOVE,https://docs.activeloop.ai/technical-details/best-practices/training-models-at-scale#appendix-to-the-plot-above,"The torchvision transforms used to create the comparison in the plot above are:tform_simple = transforms.Compose( [ transforms.Resize((128, 128)), transforms.RandomAffine(20), transforms.RandomHorizontalFlip(p=0.5), transforms.ToTensor(), transforms.Lambda(lambda x: x.repeat(int(3 / x.shape[0]), 1, 1)), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), ])tform_complex = transforms.Compose( [ transforms.Resize((256, 256)), transforms.RandomAffine(20), transforms.RandomHorizontalFlip(p=0.5), transforms.RandomVerticalFlip(p=0.5), transforms.RandomPerspective(distortion_scale=0.5, p=0.5), transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2), transforms.ToTensor(), transforms.Lambda(lambda x: x.repeat(int(3 / x.shape[0]), 1, 1)), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), ])",activeloop Choosing the optimal dataloader,https://docs.activeloop.ai/technical-details/best-practices/training-models-at-scale#choosing-the-optimal-dataloader,"The Deep Lake Performant dataloader streams data faster compared to the OSS dataloader, due to its C++ implementation that optimizes asynchronous data fetching and decompression. The Performant dataloader is ~1.5-3X faster compared to the OSS version, depending on the complexity of the transform and the number of workers available for parallelization. Distributed training is only available in the Performant dataloader.",activeloop
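To tie the num_workers, decode_method, and dataloader guidance together, the sketch below shows one way to build both loaders for an image dataset. The dataset path and the tensor name 'images' are assumptions for illustration, and the Performant loader requires pip install 'deeplake[enterprise]'.

import deeplake
from torchvision import transforms

tform = transforms.Compose([
    # No ToPILImage() at the top: with decode_method = 'pil', samples arrive as PIL images.
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

ds = deeplake.load('hub://<org_id>/<dataset_name>', read_only = True)

# OSS dataloader: start with 2-4 workers and increase if GPU utilization is low.
oss_loader = ds.pytorch(num_workers = 4, batch_size = 32, shuffle = True,
                        transform = {'images': tform, 'labels': None},
                        decode_method = {'images': 'pil'})

# Performant (C++) dataloader: typically ~1.5-3X faster, and required for distributed training.
fast_loader = ds.dataloader().batch(32).shuffle().pytorch(num_workers = 4)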
How to achieve full reproducibility of model training by combining Deep Lake data lineage with W&B logging,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#how-to-achieve-full-reproducibility-of-model-training-by-combining-deep-lake-data-lineage-with-w-and,"Experiment tracking tools such as Weights & Biases (W&B) improve the reproducibility of your machine learning experiments by offering logging of datasets, hyperparameters, source code, and more. When running model training with W&B and Deep Lake, Deep Lake automatically pushes the information required to reproduce the data, such as the uri, commit_id, and view_id, to the active W&B run. By fully logging the state of your dataset, model, and source code, you can achieve full reproducibility of model training runs for datasets of any size. This playbook demonstrates how to use Activeloop Deep Lake with Weights & Biases to: upload a Deep Lake dataset in a W&B run and create a W&B artifact; query the dataset using Activeloop and save the query result in an optimized format for training; train an object detection model on the saved query result and log the training parameters in a W&B run; and re-train the model with adjusted parameters and use W&B to compare the different training runs.",activeloop Prerequisites,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#prerequisites,"In addition to the commonly used packages in deep learning, this playbook requires installation of: !pip install deeplake!pip install wandb!pip install albumentationsThe required python imports are:import deeplakeimport albumentations as Afrom albumentations.pytorch import ToTensorV2import numpy as npimport torchimport wandbimport timeimport sysimport mathimport torchvisionfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictorYou should also register with Activeloop and W&B, create API tokens for both tools, and log in to your W&B account via the CLI using:!wandb login",activeloop Creating a W&B Artifact from a Deep Lake Dataset,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#creating-a-w-and-b-artifact-from-a-deep-lake-dataset,"While the most common use case for W&B is to track training and evaluation jobs, you may also wrap your dataset upload jobs in a W&B run in order to create W&B artifacts. Any future runs that consume this dataset will also consume the corresponding W&B artifact. As a result, your Deep Lake datasets will be automatically tracked by W&B and can be visualized in the W&B artifacts lineage UI. Any commit, copy or deepcopy operation inside a W&B run will create a W&B artifact. 
Here we emulate a dataset upload by copying a dataset that is already hosted by Activeloop.WANDB_PROJECT = 'deeplake-demos'run = wandb.init(project=WANDB_PROJECT, job_type='dataset_upload')ds_train = deeplake.deepcopy('hub://activeloop/visdrone-det-train', 'hub://dl-corp/visdrone-det-train', dest_token = 'Insert API Token')ds_val = deeplake.deepcopy('hub://activeloop/visdrone-det-val', 'hub://dl-corp/visdrone-det-val', dest_token = 'Insert API Token')run.finish()You may replace dl-corp in the dataset path above with your own Deep Lake organization in order to run the code.If we open the W&B page for the run above, we see that the datasets has been tracked that artifacts were created for both the training and validation datasets.",activeloop Querying the dataset,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#querying-the-dataset,"Let's use the query engine on the Deep Lake UI to filter out samples that contain any cars with a width or height below 20 pixels. Since most images with vehicles contain cars, this query is a reliable proxy for imaging distance.select * where not any(logical_and(logical_or(boxes[:,2]<20, boxes[:,3]<20), labels == 'car'))This query eliminates approximately 50% of the samples in the dataset.",activeloop Train a model using the Deep Lake dataset,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#train-a-model-using-the-deep-lake-dataset,"Suppose we are building a surveillance system to count and classify vehicles in small-to-medium size parking lots. The visdrone dataset is suitable starting point because it contains a variety of images of vehicles taken in cities using a UAV. However, many images are taken from very large viewing distances, thus resulting in small objects that are difficult to detect in object detection models, and are also not relevant to our surveillance application, like the image below.Therefore, we filter out these images and train the model on a subset of the dataset that is more appropriate for our application.",activeloop Training the model,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#training-the-model," Before training the model, which will often happen on a different machine than where the dataset was created, we first re-load the data for training and optimize if for streaming performance.ds_train = deeplake.load('hub://dl-corp/visdrone-det-train', token = 'Insert API Token', read_only = True)ds_val = deeplake.load('hub://dl-corp/visdrone-det-val', token = 'Insert API Token', read_only = True)ds_train_view = ds_train.load_view('6337166131028ba8bcf7f2ff', optimize = True, num_workers = 4)ds_val_view = ds_val.load_view('633716a10c1052d4385ab5c8', optimize = True, num_workers = 4)An object detection model can be trained using the same approach that is used for all Deep Lake datasets, with several examples in our tutorials.When using subsets of datasets, it's advised to remap the input classes for model training. In this example, the source dataset has 12 classes, but we are only interested in 9 classes containing objects we want to localize in our parking lot (bicycle, car, van, truck, tricycle, awning-tricycle, bus, motor, others). Therefore, we remap the classes of interest to values 0,1,2,3,4,6,7,8 before feeding them into the model for training. Later in this playbook, we will experiment with different transform resolutions, so we specify the transform resolution (WIDTH, HEIGHT), BATCH_SIZE, and minimum bounding box area for transformation (MIN_AREA). 
All bounding boxes below MIN_AREA are ignored in the transformation and are not fed to the modelWIDTH = 160HEIGHT = 128MIN_AREA = 32BATCH_SIZE = 16# These are the classes we care about and they will be remapped to 0,1,2,3,4,5,6,7,8 in the modelCLASSES_OF_INTEREST = ['bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor', 'others']# The classes of interest correspond to the following array values in the current datasetINDS_OF_INTEREST = [ds_train.labels.info.class_names.index(item) for item in CLASSES_OF_INTEREST]Next, let's specify an augmentation pipeline, which mostly utilizes Albumentations. We perform the remapping of the class labels inside the transformation function.# Augmentation pipeline for training using Albumentationstform_train = A.Compose([ A.RandomSizedBBoxSafeCrop(width=WIDTH, height=HEIGHT, erosion_rate=0.2), A.Rotate(limit=20, p=0.5), A.HorizontalFlip(p=0.5), A.RandomBrightnessContrast(brightness_limit=0.1, contrast_limit=0.1, p=0.5), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), ToTensorV2()], bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels', 'bbox_ids'], min_area=MIN_AREA, min_visibility=0.6)) # 'label_fields' and 'box_ids' are all the fields that will be cut when a bounding box is cut.# Augmentation pipeline for validation using Albumentationstform_val = A.Compose([ A.Resize(width=WIDTH, height=HEIGHT), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), ToTensorV2()], bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels', 'bbox_ids'], min_area=MIN_AREA, min_visibility=0.6)) # 'label_fields' and 'box_ids' are all the fields that will be cut when a bounding box is cut.# Transformation function for pre-processing the Deep Lake sample before sending it to the modeldef transform_train(sample_in): # sample_in is a row in the dataset, referenced as a dictionary",activeloop Training the model,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#training-the-model,"# sample_in['images'] is like ds.images[index].numpy() # Convert any grayscale images to RGB image = sample_in['images'] shape = image.shape if shape[2] == 1: image = np.repeat(image, int(3/shape[2]), axis = 2) # Convert boxes to Pascal VOC format boxes = coco_2_pascal(sample_in['boxes'], shape) # Filter only the labels that we care about for this training run labels_all = sample_in['labels'] indices = [l for l, label in enumerate(labels_all) if label in INDS_OF_INTEREST] labels_filtered = labels_all[indices] labels_remapped = [INDS_OF_INTEREST.index(label) for label in labels_filtered] boxes_filtered = boxes[indices,:] # Make sure the number of labels and boxes is still the same after filtering assert(len(labels_remapped)) == boxes_filtered.shape[0] # Pass all data to the Albumentations transformation transformed = tform_train(image = image, bboxes = boxes_filtered, bbox_ids = np.arange(boxes_filtered.shape[0]), class_labels = labels_remapped, ) # Convert boxes and labels from lists to torch tensors, because Albumentations does not do that automatically . 
# Be very careful with rounding and casting to integers, becuase that can create bounding boxes with invalid dimensions labels_torch = torch.tensor(transformed['class_labels'], dtype = torch.int64) boxes_torch = torch.zeros((len(transformed['bboxes']), 4), dtype = torch.int64) for b, box in enumerate(transformed['bboxes']): boxes_torch[b,:] = torch.tensor(np.round(box)) # Put annotations in a separate object target = {'labels': labels_torch, 'boxes': boxes_torch} return transformed['image'], target# Transformation function for pre-processing the Deep Lake validation sample before sending it to the modeldef transform_val(sample_in): # Convert any grayscale images to RGB image = sample_in['images'] shape = image.shape if shape[2] == 1: image = np.repeat(image, 3, axis = 2) # Convert boxes to Pascal VOC format boxes = coco_2_pascal(sample_in['boxes'], shape) # Filter only the labels that we care about for this training run labels_all = sample_in['labels'] indices = [l for l, label in enumerate(labels_all) if label in INDS_OF_INTEREST] labels_filtered = labels_all[indices] labels_remapped = [INDS_OF_INTEREST.index(label) for label in labels_filtered] boxes_filtered = boxes[indices,:] # Make sure the number of labels and boxes is still the same after filtering assert(len(labels_remapped)) == boxes_filtered.shape[0] # Pass all data to the Albumentations transformation transformed = tform_val(image = image, bboxes = boxes_filtered, bbox_ids = np.arange(boxes_filtered.shape[0]), class_labels = labels_remapped, ) # Convert boxes and labels from lists to torch tensors, because Albumentations does not do that automatically. # Be very careful with rounding and casting to integers, becuase that can create bounding boxes with invalid dimensions labels_torch = torch.tensor(transformed['class_labels'], dtype = torch.int64) boxes_torch = torch.zeros((len(transformed['bboxes']), 4), dtype = torch.int64) for b, box in enumerate(transformed['bboxes']): boxes_torch[b,:] = torch.tensor(np.round(box)) # Put annotations in a separate object target = {'labels': labels_torch, 'boxes': boxes_torch} # We also return the shape of the original image in order to resize the predictions to the dataset image size return transformed['image'], target, sample_in['index'], shape# Conversion script for bounding boxes from coco to Pascal VOC formatdef coco_2_pascal(boxes, shape): # Convert bounding boxes to Pascal VOC format and clip bounding boxes to make sure they have non-negative width and height return np.stack((np.clip(boxes[:,0], 0, None), np.clip(boxes[:,1], 0, None), np.clip(boxes[:,0]+np.clip(boxes[:,2], 1, None), 0, shape[1]), np.clip(boxes[:,1]+np.clip(boxes[:,3], 1, None), 0, shape[0])), axis = 1)def collate_fn(batch): return tuple(zip(*batch))This playbook uses a pre-trained torchvision neural network from the torchvision.models module",activeloop Training the model,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#training-the-model,"Otherwise, it's on a CPU.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')print(device)Let's initialize the model and optimizer.model = get_model_object_detection(len(CLASSES_OF_INTEREST))model.to(device)# Specify the optimizerparams = [p for p in model.parameters() if p.requires_grad]optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)Next, we initialize the W&B run and create the dataloaders for training and validation data . 
We will log the training loss and validation IOU, as well as other parameters like the transform resolution.Creation of the Deep Lake dataloaders is a trigger for the W&B run to log Deep Lake-related information. Therefore, the the W&B run should be initialized before dataloader creation.config={'width': WIDTH, 'height': HEIGHT, 'min_area': MIN_AREA, 'batch_size': BATCH_SIZE}run = wandb.init(project=WANDB_PROJECT, config = config)train_loader = ds_train_view.pytorch(num_workers = 8, shuffle = True, transform = transform_train, tensors = ['images', 'labels', 'boxes'], batch_size = BATCH_SIZE, collate_fn = collate_fn)val_loader = ds_val_view.pytorch(num_workers = 8, shuffle = False, transform = transform_val, tensors = ['images', 'labels', 'boxes'], batch_size = BATCH_SIZE, collate_fn = collate_fn, return_index = True)The model and data are ready for training 🚀!Let's train the model for 8 epochs and save the gradients, parameters, and final trained model as an artifact.# Train the model for 8 epochnum_epochs = 8lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1) wandb.watch(model, log='all', log_freq=50)wandb.define_metric('epoch')wandb.define_metric('validation_iou', step_metric='epoch')for epoch in range(num_epochs): # loop over the dataset multiple times print('------------------ Training Epoch {} ------------------'.format(epoch)) train_one_epoch(model, optimizer, train_loader, device) lr_scheduler.step() print('----------- Evaluating the Model ------------') iou = evaluate_iou(model, val_loader, device) wandb.log({'epoch': epoch, 'validation_iou': iou})torch.save(model.state_dict(), 'model_weights_wandb.pth')model_artifact = wandb.Artifact('object_detection_model', 'model')model_artifact.add_file('model_weights_wandb.pth')run.log_artifact(model_artifact)run.finish()print('Finished Training')In the W&B UI for this run, we see that in addition to the metrics and parameters that are typically logged by W&B, the Deep Lake integration also logged the dataset uri, commit_id, and view_id for the training and evaluation data, which uniquely identifies all the data that was used in this training project. 
The Deep Lake integration with W&B logs the dataset uri, commit_id, and view_id in a training run even if a W&B artifact was not created for the Deep Lake dataset.I",activeloop Training the model,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#training-the-model,"We define helper functions for loading the model, training for 1 epoch (including W&B logging), and evaluating the model by computing the average IOU (intersection-over-union) for the bounding boxes.# Helper function for loading the modeldef get_model_object_detection(num_classes): # Load an instance segmentation model pre-trained on COCO model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) # Get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) return model # Helper function for training for 1 epochdef train_one_epoch(model, optimizer, data_loader, device): model.train() start_time = time.time() for i, data in enumerate(data_loader): images = list(image.to(device) for image in data[0]) targets = [{k: v.to(device) for k, v in t.items()} for t in data[1]] loss_dict = model(images, targets) losses = sum(loss for loss in loss_dict.values()) loss_value = losses.item() wandb.log({'loss': loss_value}) # Print performance statistics if i%10 ==0: batch_time = time.time() speed = (i+1)/(batch_time-start_time) print('[%5d] loss: %.3f, speed: %.2f' % (i, loss_value, speed)) if not math.isfinite(loss_value): print(f'Loss is {loss_value}, stopping training') print(loss_dict) break optimizer.zero_grad() losses.backward() optimizer.step() # Helper function for computing the average IOU (intersection-over-union) for all the datadef evaluate_iou(model, data_loader, device = 'cpu', score_thresh = 0.5): # This function removes predictions in the output and IUO calculation that are below a confidence threshold . # Set the model to eval mode. model.eval() ious_list = [] start_time = time.time() for i, data in enumerate(data_loader): images = list(image.to(device) for image in data[0]) ground_truths = [{k: v.to(device) for k, v in t.items()} for t in data[1]] model_start = time.time() with torch.no_grad(): predictions = model(images) model_end = time.time() assert len(ground_truths) == len(predictions) # Check if data in dataloader is consistent for j, pred in enumerate(predictions): # Ignore boxes below the confidence threshold thresh_inds = pred['scores']>score_thresh pred_boxes = pred['boxes'][thresh_inds] pred_labels = pred['labels'][thresh_inds] pred_scores = pred['scores'][thresh_inds] # Find the union of prediceted and groud truth labels and iterate through it all_labels = np.union1d(pred_labels.to('cpu'), ground_truths[j]['labels'].to('cpu')) ious = np.zeros((len(all_labels))) for l, label in enumerate(all_labels): # Find the boxes corresponding to the label boxes_1 = pred_boxes[pred_labels == label] boxes_2 = ground_truths[j]['boxes'][ground_truths[j]['labels'] == label] iou = torchvision.ops.box_iou(boxes_1, boxes_2).cpu() # This method returns a matrix of the IOU of each box with every other box. # Consider the IOU as the maximum overlap of a box with any other box. Find the max along the axis that has the most boxes. 
if 0 in iou.shape: ious[l] = 0 else: if boxes_1.shape>boxes_2.shape: max_iou, _ = iou.max(dim=0) else: max_iou, _ = iou.max(dim=1) # Compute the average iou for that label ious[l] = np.mean(np.array(max_iou)) #Take the average iou for all the labels. If there are no labels, set the iou to 0. if len(ious)>0: ious_list.append(np.mean(ious)) else: ious_list.append(0) # Print progress if i%10 ==0: batch_time = time.time() speed = (i+1)/(batch_time-start_time) print('[%5d] speed: %.2f' % (i, speed)) return sum(ious_list)/len(ious_list) Training is performed on a GPU if possible",activeloop Notes on GPU Performance,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#notes-on-gpu-performance,"Using W&B automatic logging of CPU and GPU performance, we observe that the GPU utilization for the training runs on this Tesla V100 GPU was approximately 90%, aside from the dips between epochs when the shuffle buffer was filling. Note that the data was streamed from Activeloop storage (not in AWS) to an AWS SageMaker instance. This is made possible by Deep Lake's efficient data format and high-performance dataloader.",activeloop Congratulations 🚀. You can now use Activeloop Deep Lake and Weights & Biases to experiment and train models will full reproducibility!,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#congratulations-.-you-can-now-use-activeloop-deep-lake-and-weights-and-biases-to-experiment-and-trai,,activeloop Improving the Model Performance,https://docs.activeloop.ai/playbooks/training-reproducibility-with-wandb#improving-the-model-performance,"The average IOU of 0.29 achieved in the previous training run is likely unsatisfactory for a deploying a working product. Two potential explanations for the poor performance are:Despite filtering our samples with tiny objects, the dataset still contains fairly small bounding boxes that are difficult to detect by object detection modelsThe differences between some objects in a birds-eye view are subtle, even for human perception, such as the cars and vans in the image below.One remedy for both problems is to train models higher-resolution images, so let's increase the resolution of the transformation and examine its effect on model performance. In addition to changing the resolution, we must also scale MIN_AREA proportionally to the image area, so that the same bounding boxes are ignored in two training runs.WIDTH = 320HEIGHT = 256MIN_AREA = 128After retraining the model using the same code above, we observe that the average IOU increased from 0.29 to 0.37, which is substantial given the simple increase in image resolution. The model is still not production-ready, and further opportunities for improvement are:Assessing model performance on a per-image basis, which helps identify mislabeled or difficult data. A playbook for this workflow is available here.Adding random real-world images that do not contain the objects of interest. This helps the model eliminate false positives. Adding more data to the training set. 3000 images is likely not enough for a high-accuracy model.Strengthening of transformations that affect image color, blur, warping, and othersExploring different optimizers, learning rates, and schedulersFurther increasing the transform resolution until diminishing returns are achieved",activeloop CONTAINS and ==,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#contains-and,"# Exact match, which generally requires that the sample# has 1 value, i.e. 
no lists or multi-dimensional arrays select * where tensor_name == 'text_value' # If value is numeric select * where tensor_name == numeric_value # If value is text select * where contains(tensor_name, 'text_value') Any special characters in tensor or group names should be wrapped with double-quotes:select * where contains('tensor-name', 'text_value')select * where 'tensor_name/group_name' == numeric_value",activeloop SHAPE,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#shape,select * where shape(tensor_name)[dimension_index] > numeric_value select * where shape(tensor_name)[1] > numeric_value # Second array dimension > value,activeloop LIMIT,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#limit,"select * where contains(tensor_name, 'text_value') limit num_samples",activeloop "AND, OR, NOT",https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#and-or-not,"select * where contains(tensor_name, 'text_value') and NOT contains(tensor_name_2, numeric_value)select * where contains(tensor_name, 'text_value') or tensor_name_2 == numeric_valueselect * where (contains(tensor_name, 'text_value') and shape(tensor_name_2)[dimension_index]>numeric_value) or contains(tensor_name, 'text_value_2')",activeloop UNION and INTERSECT,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#union-and-intersect,"(select * where contains(tensor_name, 'value')) intersect (select * where contains(tensor_name, 'value_2'))(select * where contains(tensor_name, 'value') limit 100) union (select * where shape(tensor_name)[0] > numeric_value limit 100)",activeloop
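The statements in this reference can be run in the query interface of the Deep Lake App or from Python via ds.query, which returns a dataset view containing only the matching rows. A minimal sketch, assuming a dataset with a class_label tensor named labels:

import deeplake

ds = deeplake.load('hub://<org_id>/<dataset_name>', read_only = True)

# Run a TQL query and work with the returned view like a regular dataset.
view = ds.query("select * where contains(labels, 'car') limit 100")
print(len(view))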
ORDER BY,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#order-by,"# Order by requires that the sample is numeric and has 1 value, i.e. no lists or multi-dimensional arrays# The default order is ASCENDING (asc)select * where contains(tensor_name, 'text_value') order by tensor_name asc",activeloop "ANY, ALL, and ALL_STRICT",https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#any-all-and-all_strict,"all adheres to NumPy and list logic where all(empty_sample) returns True. all_strict is more intuitive for queries so all_strict(empty_sample) returns False. select * where all(tensor_name==0) # Returns True for empty samplesselect * where all_strict(tensor_name[:,2]>numeric_value) # Returns False for empty samplesselect * where any(tensor_name[0:6]>numeric_value)",activeloop IN and BETWEEN,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#in-and-between,"Only works for scalar numeric values and text references to class_names. select * where tensor_name in (1, 2, 6, 10)select * where class_label_tensor_name in ('car', 'truck')select * where tensor_name between 5 and 20. LOGICAL_AND and LOGICAL_OR: select * where any(logical_and(tensor_name_1[:,3]>numeric_value, tensor_name_2 == 'text_value'))",activeloop REFERENCING SAMPLES IN EXISTING TENSORS,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#referencing-samples-in-existing-tenors,"# Select based on index (row_number)select * where row_number() == 10# Referencing values of a tensor at index (row_number)select * order by l2_norm(<tensor_name> - data(<tensor_name>, index))# Finds rows of data with embeddings most similar to index 10select * order by l2_norm(embedding - data(embedding, 10))",activeloop SAMPLE BY,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#sample-by,"select * sample by weight_choice(expression_1: weight_1, expression_2: weight_2, ...) replace True limit N. weight_choice resolves the weight that is used when multiple expressions evaluate to True for a given sample. Options are max_weight, sum_weight. For example, if weight_choice is max_weight, then the maximum weight will be chosen for that sample. replace determines whether samples should be drawn with replacement. It defaults to True. limit specifies the number of samples that should be returned. If unspecified, the sampler will return the number of samples corresponding to the length of the dataset.",activeloop EMBEDDING SEARCH,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#embedding-search,"Deep Lake supports several vector operations for embedding search. Typically, vector operations are called by returning data ordered by the score based on the vector search method.select * from (select tensor_1, tensor_2, <VECTOR_OPERATION> as score) order by score desc limit 10# THE SUPPORTED VECTOR_OPERATIONS ARE:l1_norm(<embedding_tensor> - ARRAY[<search_embedding>]) # Order should be asc l2_norm(<embedding_tensor> - ARRAY[<search_embedding>]) # Order should be asc linf_norm(<embedding_tensor> - ARRAY[<search_embedding>]) # Order should be asc cosine_similarity(<embedding_tensor>, ARRAY[<search_embedding>]) # Order should be desc",activeloop VIRTUAL TENSORS,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#virtual-tensors,"Virtual tensors are the result of a computation and are not tensors in the Deep Lake dataset. 
However, they can be treated as tensors in the API.# 'score' is a virtual tensorselect * from (select tensor_1, tensor_2, <VECTOR_OPERATION> as score) order by score desc limit 10# 'box_beyond_image' is a virtual tensorselect *, any(boxes[:,0])<0 as box_beyond_image where ....# 'tensor_sum' is a virtual tensorselect *, tensor_1 + tensor_3 as tensor_sum where ......When combining embedding search with filtering (where conditions), the filter condition is evaluated prior to the embedding search.",activeloop GROUP BY AND UNGROUP BY,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#group-by-and-ungroup-by,"creates a sequence of data based on the common properties that are being grouped (i.e. frames into videos). Ungroup by splits sequences into their individual elements (i.e. videos into images).select * group by label, video_id # Groups all data with the same label and video_id in to the same sequenceselect * ungroup by split # Splits sequences into their original pieces",activeloop EXPAND BY,https://docs.activeloop.ai/performance-features/querying-datasets/query-syntax#expand-by,"includes samples before and after a query condition is satisfied.select * where <condition> expand by rows_before, rows_after",activeloop How to sample datasets using Deep Lake's query engine,https://docs.activeloop.ai/performance-features/querying-datasets/sampling-datasets#how-to-sample-datasets-using-deep-lakes-query-engine,"Sampling is often used when training models in order to modify the distribution of data that models are trained on. A common objective of samplers is to rebalance the data in order to achieve an more uniform distribution of classes in the training loop. Deep Lake provides a powerful API for several sampling methods via the query engine. Sampler queries in Deep Lake are only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.The general syntax for sampling is using the sample by keywords:select * sample by weight_choice(expression_1: weight_1, expression_2: weight_2, ...) replace True limit Nweight_choice resolves the weight that is used when multiple expressions evaluate to True for a given sample. Options are max_weight, sum_weight. For example, if weight_choice is max_weight, then the maximum weight will be chosen for that sample.replace determines whether samples should be drawn with replacement. It defaults to True.limit specifies the number of samples that should be returned. If unspecified, the sampler will return the number of samples corresponding to the length of the datasetSampling can be performed in the query interface in the Deep Lake UI, or in the Python API as shown below.",activeloop Example Usage,https://docs.activeloop.ai/performance-features/querying-datasets/sampling-datasets#example-usage,"Suppose we're working with a medical imaging dataset such as the NIH Chest X-Ray. Let's use samplers to create a more balanced view of the dataset that we can use for training a model. 
First, let's load the dataset:import deeplakeimport numpy as npfrom matplotlib import pyplot as pltds = deeplake.load('hub://activeloop/nih-chest-xray-train')Next, let's calculate the a histogram of the medical findings (findings tensor) and plot it.# Extract the list of class_names in a separate variable for re-useclass_names = ds.findings.info.class_namesnum_classes = len(class_names)class_count_raw = np.bincount(np.concatenate(ds.findings.numpy(aslist = True), axis=0))plt.bar(np.arange(num_classes), class_count_raw, tick_label = class_names)plt.xlabel('Condition', weight='bold')plt.xticks(rotation='vertical')plt.ylabel('Number of Instances', weight='bold')plt.title('Frequency per Condition', weight='bold')plt.show()We observe that findings such as Hernia, Pneumonia, Fibrosis, Edema , and Emphysema are very rare, which may cause our model to underperform when predicting these conditions. Note that even though many images have No_Finding, this is desirable for avoiding false positives when training models for medical imaging applications.We can use Deep Lake Tensor-Query-Language to upsample the under-represented findings in order to create a more balanced dataset.select * sample by max_weight(contains(findings, 'Hernia'): 20, contains(findings, 'Pneumonia'): 8, contains(findings, 'Fibrosis'): 5, contains(findings, 'Edema'): 5, contains(findings, 'Emphysema'): 2, True: 1)We can run this query in the UI or in the Python API using ds.query(...):balanced_view = ds.query('select * sample by max_weight(contains(findings, 'Hernia'): 20, contains(findings, 'Pneumonia'): 8, contains(findings, 'Fibrosis'): 5, contains(findings, 'Edema'): 5, contains(findings, 'Emphysema'): 2, True: 1)')In this sampler query, we're upsampling Hernia, by 20x, Pneumonia by 8x, Fibrosis by 5x, Edema by 5x, and Emphysema by 2x. Let's recalculate the histogram for the balanced dataset and compare it to the raw data histogram. class_count_balanced = np.bincount(np.concatenate(balanced_view.findings.numpy(aslist = True), axis=0))X_axis = np.arange(len(class_names))plt.figure(figsize=(8, 4))plt.bar(X_axis - 0.2, class_count_raw, 0.4, label = 'Raw Data')plt.bar(X_axis + 0.2, class_count_balanced, 0.4, label = 'Sampled Data') plt.xticks(X_axis, class_names)plt.xlabel('Condition', weight='bold')plt.xticks(rotation='vertical')plt.ylabel('Number of Instances', weight='bold')plt.title('Frequency per Condition', weight='bold')plt.legend()plt.show()The data in the upsampled dataset has much better representation of the rare conditions. Note that since a given image may have multiple conditions, and since conditions can be correlated, upsampling by one condition may implicitly upsample another condition, if they tend to occur in the same image.",activeloop Training Models on Sampled Views,https://docs.activeloop.ai/performance-features/querying-datasets/sampling-datasets#training-models-on-sampled-views,The sampled dataset view can be passed to a dataloader just like an ordinary Deep Lake dataset. Examples of dataset training can be found in our training tutorials.pytorch_dataloader = balanced_view.pytorch(...)for data in pytorch_dataloader: # Training loop,activeloop How to Filter and Query Data in Deep Lake,https://docs.activeloop.ai/getting-started/deep-learning/dataset-filtering#how-to-filter-and-query-data-in-deep-lake,"Filtering and querying is an important aspect of data engineering because analyzing and utilizing data in smaller units is much more productive than executing workflows on all data all the time. 
Queries can be performed in Deep Lake with user-defined functions, or they can be executed in the Activeloop Platform using our high-performance SQL-style query language.",activeloop Filtering using our Tensor Query Language (TQL),https://docs.activeloop.ai/getting-started/deep-learning/dataset-filtering#filtering-using-our-tensor-query-language-tql,"Deep Lake offers a highly-performant SQL-style query language that is built in C++ and is optimized for Deep Lake datasets. Queries and their results are executed and saved in the UI, and they can be accessed in Deep Lake using the Dataset Views API described below. Full details about the query language are described in a standalone tutorial.",activeloop Filtering with user-defined functions (UDF),https://docs.activeloop.ai/getting-started/deep-learning/dataset-filtering#filtering-with-user-defined-functions-udf,"The first step for querying using UDFs is to define a function that returns a boolean depending on whether a dataset sample meets the user-defined condition. In this example, we define a function that returns True if the labels in a tensor are in the desired labels_list. If there are inputs to the filtering function other than sample_in, it must be decorated with @deeplake.compute.import deeplakefrom PIL import Image# Let's create a local copy of the dataset (Explanation is in the next section)ds = deeplake.deepcopy('hub://activeloop/mnist-train', './mnist-train-local') labels_list = ['0', '8'] # Desired labels for filtering@deeplake.computedef filter_labels(sample_in, labels_list): return sample_in.labels.data()['text'][0] in labels_listThe filtering function is executed using the ds.filter() command below, and it returns a Dataset View that only contains the indices that met the filtering condition (more details below). Just like in the Parallel Computing API, the sample_in parameter does not need to be passed into the filter function when evaluating it, and multi-processing can be specified using the scheduler and num_workers parameters.ds_view = ds.filter(filter_labels(labels_list), scheduler = 'threaded', num_workers = 0)print(len(ds_view))In most cases, multi-processing is not necessary for queries that involve simple data such as labels or bounding boxes. However, multi-processing significantly accelerates queries that must load rich data types such as images and videos.",activeloop Dataset Views,https://docs.activeloop.ai/getting-started/deep-learning/dataset-filtering#dataset-views,"A Dataset View is any subset of a Deep Lake dataset that does not contain all of the samples. It can be an output of a query, filtering function, or regular indexing like ds[0:2:100]. In the filtering example above, we copied mnist-train locally in order to gain write access to the dataset. With write access, the views are saved as part of the dataset. Without write access, views are stored elsewhere or in custom paths, and full details are available here. Users have write access to their own datasets, regardless of whether the datasets are local or in the cloud. The data in the returned ds_view can be accessed just like a regular dataset. Image.fromarray(ds_view.images[10].numpy())A Dataset View can be saved permanently using the method below, which stores its indices without copying the data:ds_view.save_view(message = 'Samples with 0 and 8')In order to maintain data lineage, Dataset Views are immutable and are connected to specific commits. Therefore, views can only be saved if the dataset has a commit and there are no uncommitted changes in the HEAD.
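A minimal sketch of the save workflow described above, reusing the filter_labels function and labels_list defined earlier and assuming you have write access to the dataset:

# Views can only be saved on a committed dataset with no uncommitted changes in the HEAD.
ds.commit('Initial commit')

ds_view = ds.filter(filter_labels(labels_list), scheduler = 'threaded', num_workers = 0)
ds_view.save_view(message = 'Samples with 0 and 8')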
Each Dataset View has a unique id, and views can be examined or loaded using:views = ds.get_views()print(views)ds_view = views[0].load()# OR# ds_view = ds.load_view(id)print(len(ds_view))Congrats! You just learned to filter and query data with Deep Lake! 🎈",activeloop How Deep Memory Works,https://docs.activeloop.ai/performance-features/deep-memory/how-it-works#how-deep-memory-works,"Deep Memory computes a transformation of your embeddings based on several examples of embeddings for which the most relevant embedding in the Vector Store is known. The transformation is computed on a Deep Lake Managed Service, and it is applied at inference in Deep Lake's Tensor Query Language (TQL).",activeloop Tuning the Index Parameters,https://docs.activeloop.ai/performance-features/index-for-ann-search/caching-and-optimization#tuning-the-index-parameters,"The parameters of the HNSW index can be tuned using the index_params shown below:vectorstore = VectorStore(path, index_params = {'threshold': <DEFAULT_-1>, # Index turned off by default 'distance_metric':<DEFAULT_COS>, 'additional_params': { 'efConstruction': <DEFAULT_200>, 'M': <DEFAULT_16>}})Further information about the impact of the index parameters can be found here.",activeloop Caching of Embeddings and Index,https://docs.activeloop.ai/performance-features/index-for-ann-search/caching-and-optimization#caching-of-embeddings-and-index,"Either of the following operations caches the embeddings on disk and the index in RAM: the index is created, or the first vector search is executed after the Vector Store is loaded. Since the first query caches critical information, subsequent queries will execute much faster compared to the first query. Since the cache is invalidated after the Vector Store is loaded or initialized, the optimal access pattern is not to re-load the Vector Store for each search, unless you believe it was updated by another client. The embeddings are cached on disk in the following locations: Mac: /tmp/.... Linux: /var/folders/",activeloop Caching of Other Tensors,https://docs.activeloop.ai/performance-features/index-for-ann-search/caching-and-optimization#caching-of-other-tensors,"Tensors containing other information such as text and metadata are also cached in memory when they are used in queries. As a result, the first query that utilizes this data will be the slowest, with subsequent queries running much faster. If the data size exceeds the cache size, it will be re-fetched with every query, thus reducing query performance. The default cache size is 2 MB, and you may increase the cache size using the parameter below:vectorstore = VectorStore(path, memory_cache_size = <cache_in_MB>)",activeloop Querying via the REST API,https://docs.activeloop.ai/performance-features/managed-database/rest-api#querying-via-the-rest-api,"The primary input to the query API is a query string that contains all the necessary information for executing the query, including the path to the Deep Lake data. Full details on the query syntax are available here.",activeloop Overview of the Managed Database REST API,https://docs.activeloop.ai/performance-features/managed-database/rest-api#overview-of-the-managed-database-rest-api,"The REST API is currently in Alpha, and the syntax may change without announcement. The Deep Lake Tensor Database can be accessed via REST API. 
The datasets must be stored in the Tensor Database by specifying the deeplake_path = hub://org_id/dataset_name and runtime = {'tensor_db': True}. Full details on path and storage management are available here.",activeloop Input,https://docs.activeloop.ai/performance-features/managed-database/rest-api#input,"url = 'https://app.activeloop.ai/api/query/v1'headers = { 'Authorization': f'Bearer {user_token}' }# Format the embedding array or list as a string, so it can be passed in the REST API request.embedding_string = ','.join([str(item) for item in embedding])request = { 'query': f'select * from (select text, cosine_similarity(embedding, ARRAY[{embedding_string}]) as score from \'{dataset_path}\') order by score desc limit 5', 'as_list': True/False # Defaults to True. }",activeloop Response,https://docs.activeloop.ai/performance-features/managed-database/rest-api#response," If as_list = True (default). Returns a list of jsons, one per row.{ 'message': 'Query successful.', 'tensors': [ 'text', 'score' ], 'data': [ { 'text': '# Twitter's Recommendation Algorithm\n\nTwitter's Recommendation Algorithm is a set of services and jobs that are responsible for constructing and serving the\nHome Timeline. For an introduction to how the algorithm works, please refer to our [engineering blog](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). The\ndiagram below illustrates how major services and jobs interconnect.\n\n\n\nThese are the main components of the Recommendation Algorithm included in this repository:', 'score': 22.59016227722168 }, { 'text': '\n\nThese are the main components of the Recommendation Algorithm included in this repository:', 'score': 22.5976619720459 },... ]If as_list = False. Returns a list of values per tensor.{ 'message': 'Query successful.', 'tensors': [ 'text', 'score' ], 'data': { 'text': [ '# Twitter's Recommendation Algorithm\n\nTwitter's Recommendation Algorithm is a set of services and jobs that are responsible for constructing and serving the\nHome Timeline. For an introduction to how the algorithm works, please refer to our [engineering blog](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). The\ndiagram below illustrates how major services and jobs interconnect.\n\n\n\nThese are the main components of the Recommendation Algorithm included in this repository:', '\n\nThese are the main components of the Recommendation Algorithm included in this repository:', '| Type | Component | Description |\n|------------|------------|------------|\n| Feature | [SimClusters](src/scala/com/twitter/simclusters_v2/README.md) | Community detection and sparse embeddings into those communities. |\n| | [TwHIN](https://github.com/twitter/the-algorithm-ml/blob/main/projects/twhin/README.md) | Dense knowledge graph embeddings for Users and Tweets. |\n| | [trust-and-safety-models](trust_and_safety_models/README.md) | Models for detecting NSFW or abusive content. |\n| | [real-graph](src/scala/com/twitter/interaction_graph/README.md) | Model to predict the likelihood of a Twitter User interacting with another User. |\n| | [tweepcred](src/scala/com/twitter/graph/batch/job/tweepcred/README) | Page-Rank algorithm for calculating Twitter User reputation. |\n| | [recos-injector](recos-injector/README.md) | Streaming event processor for building input streams for [GraphJet](https://github.com/twitter/GraphJet) based services. 
|\n| | [graph-feature-service](graph-feature-service/README.md) | Serves graph features for a directed pair of Users (e.g. how many of User A's following liked Tweets from User B). |\n| Candidate Source | [search-index](src/java/com/twitter/search/README.md) | Find and rank In-Network Tweets. ~50% of Tweets come from this candidate source. |\n| | [cr-mixer](cr-mixer/README.md) | Coordination layer for fetching Out-of-Network tweet candidates from underlying compute services. |\n| | [user-tweet-entity-graph](src/scala/com/twitter/recos/user_tweet_entity_graph/README.md) (UTEG)| Maintains an in memory User to Tweet interaction graph, and finds candidates based on traversals of this graph. This is built on the [GraphJet](https://github.com/twitter/GraphJet) framework. Several other GraphJet based features and candidate sources are located [here](src/scala/com/twitter/recos). |\n| | [follow-recommendation-service](follow-recommendations-service/README.md) (FRS)| Provides Users with recommendations for accounts to follow, and Tweets from those accounts. |\n| Ranking | [light-ranker](src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/README.md) | Light Ranker model used by search index (Earlybird) to rank Tweets. |\n| | [heavy-ranker](https://github.com/twitter/the-algorithm-ml/blob/main/projects/home/recap/README.md) | Neural network for ranking candidate tweets. One of the main signals used to select timeline Tweets post candidate sourcing. |\n| Tweet mixing & filtering | [home-mixer](home-mixer/README.md) | Main service used to construct and serve the Home Timeline. Built on [product-mixer](product-mixer/README.md)",activeloop Response,https://docs.activeloop.ai/performance-features/managed-database/rest-api#response,"|\n| | [visibility-filters](visibilitylib/README.md) | Responsible for filtering Twitter content to support legal compliance, improve product quality, increase user trust, protect revenue through the use of hard-filtering, visible product treatments, and coarse-grained downranking . |\n| | [timelineranker](timelineranker/README.md) | Legacy service which provides relevance-scored tweets from the Earlybird Search Index and UTEG service. |\n| Software framework | [navi](navi/README.md) | High performance, machine learning model serving written in Rust. |\n| | [product-mixer](product-mixer/README.md) | Software framework for building feeds of content. |\n| | [twml](twml/README.md) | Legacy machine learning framework built on TensorFlow v1. |', 'We include Bazel BUILD files for most components, but not a top-level BUILD or WORKSPACE file.\n\n## Contributing', 'We include Bazel BUILD files for most components, but not a top-level BUILD or WORKSPACE file.\n\n## Contributing\n\nWe invite the community to submit GitHub issues and pull requests for suggestions on improving the recommendation algorithm. We are working on tools to manage these suggestions and sync changes to our internal repository. Any security concerns or issues should be routed to our official [bug bounty program](https://hackerone.com/twitter) through HackerOne. We hope to benefit from the collective intelligence and expertise of the global community in helping us identify issues and suggest improvements, ultimately leading to a better Twitter.\n\nRead our blog on the open source initiative [here](https://blog.twitter.com/en_us/topics/company/2023/a-new-era-of-transparency-for-twitter).' 
], 'score': [ 22.843185424804688, 22.83962631225586, 22.835460662841797, 22.83342170715332, 22.832916259765625 ] }}",activeloop How to migrate existing Deep Lake datasets to the Tensor Database,https://docs.activeloop.ai/performance-features/managed-database/migrating-datasets-to-the-tensor-database#how-to-migrate-existing-deep-lake-datasets-to-the-tensor-database,"Datasets are created in the Tensor Database by specifying the dataset_path = 'hub://<org_id>/<dataset_name>' and runtime = {'tensor_db': True} during dataset creation. If datasets are currently stored locally, in your cloud, or in non-database Activeloop storage, they can be migrated to the Tensor Database using:import deeplakeds_tensor_db = deeplake.deepcopy(src = <current_path>, dest = 'hub://<org_id>/<dataset_name>', runtime = {'tensor_db': True}, src_creds = {<creds_dict>}, # Only necessary if src is in your cloud )",activeloop How to use Deep Lake as a Vector Store for LLM applications,https://docs.activeloop.ai/tutorials/vector-store#how-to-use-deep-lake-as-a-vector-store-for-llm-applications,"Deep Lake can be used as a Vector Store for storing embeddings and their metadata including text, jsons, images, audio, video, and more. Its serverless architecture can be self-hosted, and it is also available via a fully managed service. Deep Lake's designated API for Vector Store applications is shown in these tutorials. Details on the low-level Deep Lake API are available here.",activeloop Vector Store Tutorials:,https://docs.activeloop.ai/tutorials/vector-store#vector-store-tutorials,Deep Lake Vector Store in LangChainVector Search Options,activeloop How to use Deep Lake for Deep Learning Applications,https://docs.activeloop.ai/tutorials/deep-learning#how-to-use-deep-lake-for-deep-learning-applications,"Deep Lake can be used as a tool for managing Deep Learning data, including rapidly training models while streaming data, running queries, tracking dataset versions, visualizing datasets, and more.These tutorials show how to use Deep Lake's low-level API for deep-learning use cases.",activeloop Deep Learning Tutorials:,https://docs.activeloop.ai/tutorials/deep-learning#deep-learning-tutorials,Creating DatasetsTraining ModelsUpdating DatasetsData Processing Using Parallel Computing,activeloop How to use queries and version control to train models with reproducible data lineage.,https://docs.activeloop.ai/playbooks/training-with-lineage#how-to-use-queries-and-version-control-to-train-models-with-reproducible-data-lineage.,"The road from raw data to a trainable deep-learning dataset can be treacherous, often involving multiple tools glued together with spaghetti code. 
Activeloop simplifies this journey so you can create high-quality datasets and train production-level deep-learning models.",activeloop This playbook demonstrates how to use Activeloop Deep Lake to:,https://docs.activeloop.ai/playbooks/training-with-lineage,"Create a Deep Lake dataset from data stored in an S3 bucketVisualize the data to gain insights about the underlying data challenges Update, edit, and store different versions of the data with reproducibilityQuery the data, save the query result, and materialize it for training a model.Train an object detection model while streaming data",activeloop Prerequisites,https://docs.activeloop.ai/playbooks/training-with-lineage#prerequisites,"In addition to installation of commonly used packages, this playbook requires installation of: pip3 install deeplakepip3 install albumentationspip3 install opencv-python-headless==4.1.2.30 #In order for Albumentations to work properlypip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'The required python imports are:import deeplakeimport numpy as npimport boto3import mathimport timeimport osfrom tqdm import tqdmfrom pycocotools.coco import COCOimport albumentations as Afrom albumentations.pytorch import ToTensorV2import torchimport torchvisionfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictorYou should also register with Activeloop and create an API token in the UI.",activeloop Creating the Dataset,https://docs.activeloop.ai/playbooks/training-with-lineage#creating-the-dataset,"Since many real-world datasets use the COCO annotation format, the COCO training dataset is used in this playbook. To avoid data duplication, linked tensors are used so that the Deep Lake dataset stores references to the images in the S3 bucket containing the original data. For simplicity, only the bounding box annotations are copied to the Deep Lake dataset.To convert the original dataset to Deep Lake format, let's establish a connection to the original data in S3.dataset_bucket = 'non-hub-datasets's3 = boto3.resource('s3', aws_access_key_id=os.environ.get('aws_access_key_id'), aws_secret_access_key=os.environ.get('aws_secret_access_key'))s3_bucket = s3.Bucket(dataset_bucket)Next, let's load the annotations so we can access them later:ann_path = 'coco/annotations/instances_train2017.json'local_ann_path = 'anns_train.json's3_bucket.download_file(ann_path, local_ann_path)coco = COCO(local_ann_path)category_info = coco.loadCats(coco.getCatIds())Moving on, let's create an empty Deep Lake dataset and pull managed credentials from Platform, so that we don't have to manually specify the credentials to access the s3 links every time we use this dataset. Since the Deep Lake dataset is stored in Deep Lake storage, we also provide an API token to identify the user.ds = deeplake.empty('hub://dl-corp/coco-train', token = 'Insert API Token')creds_name = 'my_s3_creds'ds.add_creds_key(creds_name, managed = True)The UI for managed credentials in Platform is shown below, and more details are available here.Last but not least, let's create the Deep Lake dataset's tensors. 
In this example, we ignore the segmentations and keypoints from the COCO dataset, only uploading the bounding box annotations as well as their labels.img_ids = sorted(coco.getImgIds()) # Image ids for uploadingwith ds: ds.create_tensor('images', htype = 'link[image]', sample_compression = 'jpg') ds.create_tensor('boxes', htype = 'bbox') ds.create_tensor('categories', htype = 'class_label')Finally, let's iterate through the data and append it to our Deep Lake dataset. Note that when appending data, we directly pass the s3 URL and the managed credentials key for accessing that URL using deeplake.link(url, creds_key)with ds: ## ---- Iterate through each image and upload data ----- ## for img_id in tqdm(img_ids): anns = coco.loadAnns(coco.getAnnIds(img_id)) img_coco = coco.loadImgs(img_id)[0] #First create empty objects for all the annotations boxes = np.zeros((len(anns),4)) categories = [] #Then populate the objects with the annotations data for i, ann in enumerate(anns): boxes[i,:] = ann['bbox'] categories.append([category_info[i]['name'] for i in range(len(category_info)) if category_info[i]['id']==ann['category_id']][0]) #If there are no categories present, append the empty list as None. if categories == []: categories = None img_url = 's3://{}/coco/train2017/{}'.format(dataset_bucket, img_coco['file_name']) ds.append({'images': deeplake.link(img_url, creds_key=creds_name), 'boxes': boxes.astype('float32'), 'categories': categories})Note: if dataset creation speed is a priority, it can be accelerated using 2 options:By uploading the dataset in parallel. An example is available here.By setting the optional parameters below to False. In this case, the upload machine will not load any of the data before creating the dataset, thus speeding the upload by up to 100X. The parameters below are defaulted to True because they improve the query speed on image shapes and file metadata, and they also verify the integrity of the data before uploading. More information is available here:ds.create_tensor('images', htype = 'link[image]', verify = False, create_shape_tensor = False, create_sample_info_tensor = False )",activeloop Inspecting the Dataset,https://docs.activeloop.ai/playbooks/training-with-lineage#inspecting-the-dataset,"In this example, we will train an object detection model for driving applications. Therefore, we are interested in images containing cars, busses, trucks, bicycles, motorcycles, traffic lights, and stop signs, which we can find by running a SQL query on the dataset in Platform. More details on the query syntax are available here.(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'bus') limit 1000) union (select * where contains(categories, 'truck') limit 1000) union (select * where contains(categories, 'bicycle') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000) union (select * where contains(categories, 'traffic light') limit 1000) union (select * where contains(categories, 'stop sign') limit 1000)A quick visual inspection of the dataset reveals several problems with the data, including:Sample 61 is a low-quality image where it's very difficult to discern the features, and it is not clear whether the small object in the distance is an actual traffic light. 
Images like this do not positively contribute to model performance, so let's delete all the data in this sample.ds.pop(61)ds.commit('Deleted index 61 because the image is low quality.')In sample 8, a road sign is labeled as a stop sign, even though the sign is facing away from the camera. Even though it may be a stop sign, computer vision systems should positively identify the type of a road sign based on its visible text. Therefore, let's remove the stop sign label from this image. ds.categories[8] = ds.categories[8].numpy()[np.arange(0,4)!=2]ds.boxes[8] = ds.boxes[8].numpy()[np.arange(0,4)!=2,:]ds.commit('Deleted bad label at index 8')Both changes are now evident in the visualizer, and they were both logged as separate commits in the version control history. A summary of this inspection workflow is shown below:",activeloop Optimizing the Dataset for Training,https://docs.activeloop.ai/playbooks/training-with-lineage#optimizing-the-dataset-for-training,"Now that the dataset has been improved, we save the query result containing the samples of interest and optimize the data for training. Since query results are associated with a particular commit, they are immutable and can be retrieved at any point in time.First, let's re-run the query and save the result as a dataset view, which is uniquely identified by an id (a rough sketch of saving a view from the Python API is shown a bit further below, after the class-remapping setup).The dataset is currently storing references to the images in S3, so the images are not rapidly streamable for training. Therefore, we materialize the query result (Dataset View) by copying and re-chunking the data for maximum performance:ds.load_view('62d6d490e49d0d7bab4e251f', optimize = True, num_workers = 4)Once we're finished using the materialized dataset view, we may choose to delete it using:# ds.delete_view('62d6d490e49d0d7bab4e251f')",activeloop Training an Object Detection Model,https://docs.activeloop.ai/playbooks/training-with-lineage#training-an-object-detection-model," An object detection model can be trained using the same approach that is used for all Deep Lake datasets, with several examples in our tutorials. Typically the training would occur on another machine with more GPU power, so we start by loading the dataset and the corresponding dataset view:ds = deeplake.load('hub://dl-corp/coco-train', token = 'Insert API Token')ds_view = ds.load_view('62d6d490e49d0d7bab4e251f')When using subsets of datasets, it's advised to remap the input classes for model training. In this example, the source dataset has 81 classes, but we are only interested in 7 classes (cars, busses, trucks, bicycles, motorcycles, traffic lights, and stop signs). Therefore, we remap the classes of interest to values 0,1,2,3,4,5,6 before feeding them into the model for training. We also specify the resolution for resizing the data before training the model.WIDTH = 128HEIGHT = 128# These are the classes we care about and they will be remapped to 0,1,2,3,4,5,6 in the modelCLASSES_OF_INTEREST = ['car', 'truck', 'bus', 'motorcycle', 'bicycle', 'traffic light', 'stop sign']# The classes of interest correspond to the following array values in the current datasetINDS_OF_INTEREST = [ds.categories.info.class_names.index(item) for item in CLASSES_OF_INTEREST]Next, let's specify an augmentation pipeline, which mostly utilizes Albumentations. 
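As referenced in the Optimizing the Dataset for Training section above, re-running the query and saving its result as a view from the Python API might look roughly like the sketch below. This is hypothetical: it assumes the query and save_view methods from the Dataset Views API, the TQL string is abbreviated from the Inspecting the Dataset section, and the returned id is only illustrative.

# Hypothetical sketch - assumes ds.query (TQL) and save_view are available in your Deep Lake installation
tql = "(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'bus') limit 1000)"  # abbreviated; use the full query from the Inspecting the Dataset section
view = ds.query(tql)
view_id = view.save_view(message='Vehicles, traffic lights, and stop signs')  # returns a unique id such as '62d6d490e49d0d7bab4e251f'
print(view_id)

The saved id is what is passed to ds.load_view(..., optimize = True) above to materialize the view. Returning to the training workflow, the augmentation pipeline is defined next.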
We perform the remapping of the class labels inside the transformation function.# Augmentation pipeline using Albumentationstform_train = A.Compose([ A.RandomSizedBBoxSafeCrop(width=WIDTH, height=HEIGHT, erosion_rate=0.2), A.Rotate(limit=20, p=0.5), A.RandomBrightnessContrast(brightness_limit=0.1, contrast_limit=0.1, p=0.5), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), ToTensorV2()], bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels', 'bbox_ids'], min_area=16, min_visibility=0.6)) # 'label_fields' and 'box_ids' are all the fields that will be cut when a bounding box is cut.# Transformation function for pre-processing the deeplake sample before sending it to the modeldef transform_train(sample_in): # Convert any grayscale images to RGB image = sample_in['images'] shape = image.shape if shape[2] == 1: image = np.repeat(image, int(3/shape[2]), axis = 2) # Convert boxes to Pascal VOC format boxes = coco_2_pascal(sample_in['boxes'], shape) # Filter only the labels that we care about for this training run labels_all = sample_in['categories'] indices = [l for l, label in enumerate(labels_all) if label in INDS_OF_INTEREST] labels_filtered = labels_all[indices] labels_remapped = [INDS_OF_INTEREST.index(label) for label in labels_filtered] boxes_filtered = boxes[indices,:] # Make sure the number of labels and boxes is still the same after filtering assert(len(labels_remapped)) == boxes_filtered.shape[0] # Pass all data to the Albumentations transformation transformed = tform_train(image = image, bboxes = boxes_filtered, bbox_ids = np.arange(boxes_filtered.shape[0]), class_labels = labels_remapped, ) # Convert boxes and labels from lists to torch tensors, because Albumentations does not do that automatically",activeloop Training an Object Detection Model,https://docs.activeloop.ai/playbooks/training-with-lineage#training-an-object-detection-model,"# Be very careful with rounding and casting to integers, becuase that can create bounding boxes with invalid dimensions labels_torch = torch.tensor(transformed['class_labels'], dtype = torch.int64) boxes_torch = torch.zeros((len(transformed['bboxes']), 4), dtype = torch.int64) for b, box in enumerate(transformed['bboxes']): boxes_torch[b,:] = torch.tensor(np.round(box)) # Put annotations in a separate object target = {'labels': labels_torch, 'boxes': boxes_torch} return transformed['image'], target# Conversion script for bounding boxes from coco to Pascal VOC formatdef coco_2_pascal(boxes, shape): # Convert bounding boxes to Pascal VOC format and clip bounding boxes to make sure they have non-negative width and height return np.stack((np.clip(boxes[:,0], 0, None), np.clip(boxes[:,1], 0, None), np.clip(boxes[:,0]+np.clip(boxes[:,2], 1, None), 0, shape[1]), np.clip(boxes[:,1]+np.clip(boxes[:,3], 1, None), 0, shape[0])), axis = 1)def collate_fn(batch): return tuple(zip(*batch))You can now create a PyTorch dataloader that connects the Deep Lake dataset to the PyTorch model using the provided method ds_view.pytorch() . This method automatically applies the transformation function and takes care of random shuffling (if desired). 
The num_workers parameter can be used to parallelize data preprocessing, which is critical for ensuring that preprocessing does not bottleneck the overall training workflow.train_loader = ds_view.pytorch(num_workers = 8, shuffle = True, transform = transform_train, tensors = ['images', 'categories', 'boxes'], batch_size = 16, collate_fn = collate_fn)This playbook uses a pre-trained torchvision neural network from the torchvision.models module. We define helper functions for loading the model and for training 1 epoch.# Helper function for loading the modeldef get_model_object_detection(num_classes): # Load an instance segmentation model pre-trained on COCO model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) # Get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # Replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) return model # Helper function for training for 1 epochdef train_one_epoch(model, optimizer, data_loader, device): model.train() start_time = time.time() for i, data in enumerate(data_loader): images = list(image.to(device) for image in data[0]) targets = [{k: v.to(device) for k, v in t.items()} for t in data[1]] loss_dict = model(images, targets) losses = sum(loss for loss in loss_dict.values()) loss_value = losses.item() # Print performance statistics if i%10 ==0: batch_time = time.time() speed = (i+1)/(batch_time-start_time) print('[%5d] loss: %.3f, speed: %.2f' % (i, loss_value, speed)) if not math.isfinite(loss_value): print(f'Loss is {loss_value}, stopping training') print(loss_dict) break optimizer.zero_grad() losses.backward() optimizer.step()Training is performed on a GPU if possible. Otherwise, it's on a CPU.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')print(device)Let's initialize the model and optimizer.model = get_model_object_detection(len(CLASSES_OF_INTEREST))model.to(device)# Specify the optimizerparams = [p for p in model.parameters() if p.requires_grad]optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)The model and data are ready for training 🚀!# Train the model for 1 epochnum_epochs = 3lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1) for epoch in range(num_epochs): # Loop over the dataset multiple times print('------------------ Training Epoch {} ------------------'.format(epoch+1)) train_one_epoch(model, optimizer, train_loader, device) lr_scheduler.step() # --- Insert Testing Code Here ---print('Finished Training')",activeloop How to evaluate model performance and compare ground-truth annotations with model predictions.,https://docs.activeloop.ai/playbooks/evaluating-model-performance#how-to-evaluate-model-performance-and-compare-ground-truth-annotations-with-model-predictions.,"Models are never perfect after the first training, and model predictions need to be compared with ground-truth annotations in order to iterate on the training process. 
This comparison often reveals incorrectly annotated data and sheds light on the types of data where the model fails to make the correct prediction.This playbook demonstrates how to use Activeloop Deep Lake to:Improve training data by finding data for which the model has poor performanceTrain an object detection model using a Deep Lake datasetUpload the training loss per image to a branch on the dataset designated for evaluating model performanceSort the training dataset based on model loss and identify bad samplesEdit and clean the bad training data and commit the changesEvaluate model performance on validation data and identify difficult dataCompute model predictions of object detections for a validation Deep Lake datasetUpload the model predictions to the validation dataset, compare them to ground-truth annotations, and identify samples for which the model fails to make the correct predictions.",activeloop Prerequisites,https://docs.activeloop.ai/playbooks/evaluating-model-performance#prerequisites,"In addition to installation of commonly used packages, this playbook requires installation of: pip3 install deeplakepip3 install albumentationspip3 install opencv-python-headless==4.1.2.30 #In order for Albumentations to work properlyThe required python imports are:import deeplakeimport numpy as npimport mathimport sysimport timefrom PIL import Imageimport albumentations as Afrom albumentations.pytorch import ToTensorV2import torchimport torchvisionfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictorYou should also register with Activeloop and create an API token in the UI.",activeloop Training an Object Detection Model,https://docs.activeloop.ai/playbooks/evaluating-model-performance#training-an-object-detection-model," An object detection model can be trained using the same approach that is used for all Deep Lake datasets, with several examples in our tutorials. First, let's specify an augmentation pipeline, which mostly utilizes Albumentations. 
We also define several helper functions for resizing and converting the format of bounding boxes.WIDTH = 128HEIGHT = 64NUM_CLASSES = len(ds_train.labels.info.class_names)# Augmentation pipeline for training using Albumentationstform_train = A.Compose([ A.RandomSizedBBoxSafeCrop(width=WIDTH, height=HEIGHT, erosion_rate=0.2), A.Rotate(limit=20, p=0.5), A.RandomBrightnessContrast(brightness_limit=0.1, contrast_limit=0.1, p=0.5), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), ToTensorV2()], bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels', 'bbox_ids'], min_area=8, min_visibility=0.6)) # 'label_fields' and 'box_ids' are all the fields that will be cut when a bounding box is cut.# Augmentation pipeline for validation using Albumentationstform_val = A.Compose([ A.Resize(width=WIDTH, height=HEIGHT), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), ToTensorV2()], bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels', 'bbox_ids'], min_area=8, min_visibility=0.6)) # 'label_fields' and 'box_ids' are all the fields that will be cut when a bounding box is cut.# Transformation function for pre-processing the Deep Lake training sample before sending it to the modeldef transform_train(sample_in): # Convert any grayscale images to RGB image = sample_in['images'] shape = image.shape if shape[2] == 1: image = np.repeat(image, 3, axis = 2) # Convert boxes to Pascal VOC format boxes = coco_2_pascal(sample_in['boxes'], shape) # Pass all data to the Albumentations transformation transformed = tform_train(image = image, bboxes = boxes, bbox_ids = np.arange(boxes.shape[0]), class_labels = sample_in['labels'], ) # Convert boxes and labels from lists to torch tensors, because Albumentations does not do that automatically. 
# Be very careful with rounding and casting to integers, because that can create bounding boxes with invalid dimensions labels_torch = torch.tensor(transformed['class_labels'], dtype = torch.int64) boxes_torch = torch.zeros((len(transformed['bboxes']), 4), dtype = torch.int64) for b, box in enumerate(transformed['bboxes']): boxes_torch[b,:] = torch.tensor(np.round(box)) # Put annotations in a separate object target = {'labels': labels_torch, 'boxes': boxes_torch} return transformed['image'], target# Transformation function for pre-processing the Deep Lake validation sample before sending it to the modeldef transform_val(sample_in): # Convert any grayscale images to RGB image = sample_in['images'] shape = image.shape if shape[2] == 1: image = np.repeat(image, 3, axis = 2) # Convert boxes to Pascal VOC format boxes = coco_2_pascal(sample_in['boxes'], shape) # Pass all data to the Albumentations transformation transformed = tform_val(image = image, bboxes = boxes, bbox_ids = np.arange(boxes.shape[0]), class_labels = sample_in['labels'], ) # Convert boxes and labels from lists to torch tensors, because Albumentations does not do that automatically",activeloop Training an Object Detection Model,https://docs.activeloop.ai/playbooks/evaluating-model-performance#training-an-object-detection-model,"# Be very careful with rounding and casting to integers, because that can create bounding boxes with invalid dimensions labels_torch = torch.tensor(transformed['class_labels'], dtype = torch.int64) boxes_torch = torch.zeros((len(transformed['bboxes']), 4), dtype = torch.int64) for b, box in enumerate(transformed['bboxes']): boxes_torch[b,:] = torch.tensor(np.round(box)) # Put annotations in a separate object target = {'labels': labels_torch, 'boxes': boxes_torch} # We also return the shape of the original image in order to resize the predictions to the dataset image size return transformed['image'], target, sample_in['index'], shape# Conversion script for bounding boxes from coco to Pascal VOC formatdef coco_2_pascal(boxes, shape): # Convert bounding boxes to Pascal VOC format and clip bounding boxes to make sure they have non-negative width and height return np.stack((np.clip(boxes[:,0], 0, None), np.clip(boxes[:,1], 0, None), np.clip(boxes[:,0]+np.clip(boxes[:,2], 1, None), 0, shape[1]), np.clip(boxes[:,1]+np.clip(boxes[:,3], 1, None), 0, shape[0])), axis = 1)# Conversion script for resizing the model predictions back to the shape of the dataset imagedef model_2_image(boxes, model_shape, img_shape): # Resize the bounding boxes and convert them from Pascal VOC to COCO format m_h, m_w = model_shape i_h, i_w = img_shape x0 = boxes[:,0]*(i_w/m_w) y0 = boxes[:,1]*(i_h/m_h) x1 = boxes[:,2]*(i_w/m_w) y1 = boxes[:,3]*(i_h/m_h) return np.stack((x0, y0, x1-x0, y1-y0), axis = 1)def collate_fn(batch): return tuple(zip(*batch))We can now create a PyTorch dataloader that connects the Deep Lake dataset to the PyTorch model using the provided method ds.pytorch(). This method automatically applies the transformation function and takes care of random shuffling (if desired). The num_workers parameter can be used to parallelize data preprocessing, which is critical for ensuring that preprocessing does not bottleneck the overall training workflow.train_loader = ds_train.pytorch(num_workers = 8, shuffle = True, transform = transform_train, tensors = ['images', 'labels', 'boxes'], batch_size = 4, collate_fn = collate_fn)This playbook uses a pre-trained torchvision neural network from the torchvision.models module. 
We define helper functions for loading the model and for training 1 epoch.# Helper function for loading the modeldef get_model_object_detection(num_classes): # Load an instance segmentation model pre-trained on COCO model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) # Get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # Replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) return model # Helper function for training for 1 epochdef train_one_epoch(model, optimizer, data_loader, device): model.train() start_time = time.time() for i, data in enumerate(data_loader): images = list(image.to(device) for image in data[0]) targets = [{k: v.to(device) for k, v in t.items()} for t in data[1]] loss_dict = model(images, targets) losses = sum(loss for loss in loss_dict.values()) loss_value = losses.item() # Print performance statistics if i%100 ==0: batch_time = time.time() speed = (i+1)/(batch_time-start_time) print('[%5d] loss: %.3f, speed: %.2f' % (i, loss_value, speed)) if not math.isfinite(loss_value): print(f'Loss is {loss_value}, stopping training') print(loss_dict) break optimizer.zero_grad() losses.backward() optimizer.step()Training is performed on a GPU if possible",activeloop Creating the Dataset,https://docs.activeloop.ai/playbooks/evaluating-model-performance#creating-the-dataset,"In this playbook we will use the svhn-train and -test datasets that are already hosted by Activeloop. Let's copy them to our own organization dl-corp in order to have write access:ds_train = deeplake.deepcopy('hub://activeloop/svhn-train', 'hub://dl-corp/svhn-train', )ds_test = deeplake.deepcopy('hub://activeloop/svhn-test', 'hub://dl-corp/svhn-test')These are object detection datasets that localize address numbers on buildings:Let's create a branch called training_run on both datasets for storing the model results.ds_train.checkout('training_run', create = True)ds_test.checkout('training_run', create = True)Since we will write the model results back to the Deep Lake datasets, let's create a group called model_evaluation in the datasets and add tensors that will store the model results.Putting the model results in a separate group will prevent the visualizer from confusing the predictions and ground-truth data.# Store the loss in the training datasetds_train.create_group('model_evaluation')ds_train.model_evaluation.create_tensor('loss')# Store the predictions for the labels, boxes, and the average iou of the # boxes, for the test datasetds_test.create_group('model_evaluation')ds_test.model_evaluation.create_tensor('labels', htype = 'class_label', class_names = ds_test.labels.info.class_names)ds_test.model_evaluation.create_tensor('boxes', htype = 'bbox', coords = {'type': 'pixel', 'mode': 'LTWH'})ds_test.model_evaluation.create_tensor('iou')",activeloop Evaluating Model Performance on Training Data,https://docs.activeloop.ai/playbooks/evaluating-model-performance#evaluating-model-performance-on-training-data,"Evaluating the performance of the model on a per-image basis can be a powerful tool for identifying bad or difficult data. First, we define a helper function that does a forward-pass through the model and computes the loss per image, without updating the weights. 
Since the model outputs the loss per batch, this function requires that the batch size is 1.def evaluate_loss(model, data_loader, device): # This function assumes the data loader may be shuffled, and it returns the loss in a sorted fashion # using knowledge of the indices that are being trained in each batch. # Set the model to train mode in order to get the loss, even though we're not training. model.train() loss_list = [] indices_list = [] assert data_loader.batch_size == 1 start_time = time.time() for i, data in enumerate(data_loader): images = list(image.to(device) for image in data[0]) targets = [{k: v.to(device) for k, v in t.items()} for t in data[1]] indices = data[2] with torch.no_grad(): loss_dict = model(images, targets) losses = sum(loss for loss in loss_dict.values()) loss_value = losses.item() loss_list.append(loss_value) indices_list.append(indices) # Print performance statistics if i%100 ==0: batch_time = time.time() speed = (i+1)/(batch_time-start_time) print('[%5d] loss: %.3f, speed: %.2f' % (i, loss_value, speed)) loss_list = [x for _, x in sorted(zip(indices_list, loss_list))] return loss_listNext, let's create another PyTorch dataloader on the training dataset that is not shuffled, has a batch size of 1, uses the evaluation transform, and returns the indices of the current batch from the dataloader using return_index = True:train_loader_eval = ds_train.pytorch(num_workers = 8, shuffle = False, transform = transform_val, tensors = ['images', 'labels', 'boxes'], batch_size = 1, collate_fn = collate_fn, return_index = True)Finally, we evaluate the loss for each image, write it back to the dataset, and add a commit to the training_run branch that we created at the start of this playbook:loss_per_image = evaluate_loss(model, train_loader_eval, device)with ds_train: ds_train.model_evaluation.loss.extend(loss_per_image) ds_train.commit('Trained the model and computed the loss for each image.')",activeloop Inspecting the Training Dataset based on Model Results,https://docs.activeloop.ai/playbooks/evaluating-model-performance#inspecting-the-training-dataset-based-on-model-results,"The dataset can be sorted based on loss in Activeloop Platform. An inspection of the high-loss images immediately reveals that many of them have poor quality or are incorrectly annotated.The sort feature in the video below was removed. To sort, please run the query:We can edit some of the bad data by deleting the incorrect annotation of '1' at index 14997, and by removing the poor-quality samples at indices 2899 and 32467. # Remove label '1' from 14997. It's in the first position in the labels and boxes arraysds_train.labels[14997] = ds_train.labels[14997].numpy()[1:]ds_train.boxes[14997] = ds_train.boxes[14997].numpy()[1:,:]# Delete bad samplesds_train.pop(32467)ds_train.pop(2899)Lastly, we commit the edits in order to permanently store this snapshot of the data. 
ds_train.commit('Updated labels at index 14997 and deleted samples at 2899 and 32467')The next step would be to perform a more exhaustive inspection of the high-loss data and make further improvements to the dataset, after which the model should be re-trained.",activeloop Training an Object Detection Model,https://docs.activeloop.ai/playbooks/evaluating-model-performance#training-an-object-detection-model,"Otherwise, it's on a CPU.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')print(device)Let's initialize the model and optimizer:model = get_model_object_detection(NUM_CLASSES)model.to(device)# Specify the optimizerparams = [p for p in model.parameters() if p.requires_grad]optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)The model and data are ready for training 🚀!# Train the modelnum_epochs = 3lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1) for epoch in range(num_epochs): # loop over the dataset multiple times print('------------------ Training Epoch {} ------------------'.format(epoch+1)) train_one_epoch(model, optimizer, train_loader, device) lr_scheduler.step() print('Finished Training')torch.save(model.state_dict(), 'model_weights_svhn_first_train.pth') ",activeloop Evaluating Model Performance on Validation Data,https://docs.activeloop.ai/playbooks/evaluating-model-performance#evaluating-model-performance-on-validation-data," After iterating on the training data and re-training the model, a general assessment of model performance should be performed on validation data that was not used to train the model. We create a helper function for running an inference of the model on the validation data that returns the model predictions and the average IOU (intersection-over-union) for each sample:# Run an inference of the model and compute the average IOU (intersection-over-union) for each sampledef evaluate_iou(model, data_loader, num_classes, device = 'cpu', score_thresh = 0.5): # This function removes predictions from the output and the IOU calculation that are below a confidence threshold. # This function assumes the data loader may be shuffled, and it returns the results in a sorted fashion # using knowledge of the indices that are being trained in each batch. # Set the model to eval mode. 
model.eval() ious_list = [] boxes_list = [] labels_list = [] indices_list = [] start_time = time.time() for i, data in enumerate(data_loader): images = list(image.to(device) for image in data[0]) ground_truths = [{k: v.to(device) for k, v in t.items()} for t in data[1]] indices = data[2] model_start = time.time() with torch.no_grad(): predictions = model(images) model_end = time.time() assert len(ground_truths) == len(predictions) == len(indices) # Check if data in dataloader is consistent for j, pred in enumerate(predictions): # Ignore boxes below the confidence threshold thresh_inds = pred['scores']>score_thresh pred_boxes = pred['boxes'][thresh_inds] pred_labels = pred['labels'][thresh_inds] pred_scores = pred['scores'][thresh_inds] # Find the union of predicted and ground truth labels and iterate through it all_labels = np.union1d(pred_labels.to('cpu'), ground_truths[j]['labels'].to('cpu')) ious = np.zeros((len(all_labels))) for l, label in enumerate(all_labels): # Find the boxes corresponding to the label boxes_1 = pred_boxes[pred_labels == label] boxes_2 = ground_truths[j]['boxes'][ground_truths[j]['labels'] == label] iou = torchvision.ops.box_iou(boxes_1, boxes_2).cpu() # This method returns a matrix of the IOU of each box with every other box. # Consider the IOU as the maximum overlap of a box with any other box. Find the max along the axis that has the most boxes. if 0 in iou.shape: ious[l] = 0 else: if boxes_1.shape>boxes_2.shape: max_iou, _ = iou.max(dim=0) else: max_iou, _ = iou.max(dim=1) # Compute the average iou for that label ious[l] = np.mean(np.array(max_iou)) #Take the average iou for all the labels. If there are no labels, set the iou to 0",activeloop Evaluating Model Performance on Validation Data,https://docs.activeloop.ai/playbooks/evaluating-model-performance#evaluating-model-performance-on-validation-data,"if len(ious)>0: ious_list.append(np.mean(ious)) else: ious_list.append(0) boxes_list.append(model_2_image(pred_boxes.cpu(), (HEIGHT, WIDTH), (data[3][j][0], data[3][j][1]))) # Convert the bounding box back to the shape of the original image labels_list.append(np.array(pred_labels.cpu())) indices_list.append(indices[j]) # Print progress if i%100 ==0: batch_time = time.time() speed = (i+1)/(batch_time-start_time) print('[%5d] speed: %.2f' % (i, speed)) # Sort the data based on index, just in case shuffling was used in the dataloader ious_list = [x for _, x in sorted(zip(indices_list, ious_list))] boxes_list = [x for _, x in sorted(zip(indices_list, boxes_list))] labels_list = [x for _, x in sorted(zip(indices_list, labels_list))] return ious_list, boxes_list, labels_listLet's create a PyTorch dataloader using the validation data and run the inference using evaluate_iou above.val_loader = ds_test.pytorch(num_workers = 8, shuffle = False, transform = transform_val, tensors = ['images', 'labels', 'boxes'], batch_size = 16, collate_fn = collate_fn, return_index = True)iou_val, boxes_val, labels_val = evaluate_iou(model, val_loader, NUM_CLASSES, device, score_thresh = 0.5)Finally, we write the predictions back to the dataset and add a commit to the training_run branch that we created at the start of this playbook:with ds_test: ds_test.model_evaluation.labels.extend(labels_val) ds_test.model_evaluation.boxes.extend(boxes_val) ds_test.model_evaluation.iou.extend(iou_val) ds_test.commit('Added model predictions.') ",activeloop Comparing Model Results to Ground-Truth 
Annotations.,https://docs.activeloop.ai/playbooks/evaluating-model-performance#comparing-model-results-to-ground-truth-annotations.,"When sorting the model predictions based on IOU, we observe that the model successfully makes the correct predictions in images with one street number and where the street letters are large relative to the image. However, the model predictions are very poor for data with small street numbers, and there exist artifacts in the data where the model interprets vertical objects, such as narrow windows that the model thinks are the number '1'.The sort feature in the video below was removed. To sort, please run the query:Understanding the edge cases for which the model makes incorrect predictions is critical for improving the model performance. If the edge cases are irrelevant given the model's intended use, they should be eliminated from both the training and validation data. If they are applicable, more representative edge cases should be added to the training dataset, or the edge cases should be sampled more frequently while training.",activeloop Congratulations 🚀. You can now use Activeloop Deep Lake to evaluate the performance of your Deep-Learning models and compare their predictions to the ground-truth!,https://docs.activeloop.ai/playbooks/evaluating-model-performance#congratulations-.-you-can-now-use-activeloop-deep-lake-to-evaluate-the-performance-of-your-deep-lear,,activeloop Cleanup and Reverting Mistakes in The Workflow,https://docs.activeloop.ai/playbooks/evaluating-model-performance#cleanup-and-reverting-mistakes-in-the-workflow,"If you make a mistake you can use the following commands to start over or delete the new data:Delete data in a tensor: ds.<tensor_name>.clear()Delete the entire tensor and its data: ds.delete_tensor(<tensor_name>)Reset all edits since the prior commit: ds.reset()Delete the branch you just created: ds.delete_branch(<branch_name>)Must be on another branch, and deleted branch must not have been merged to another.",activeloop How to manage video datasets and train models using Deep Lake.,https://docs.activeloop.ai/playbooks/working-with-videos#how-to-manage-video-datasets-and-train-models-using-deep-lake.,"Performing deep-learning on video data can be challenging due to the large size of video files, especially when they are uncompressed to raw numeric data that is fed into neural networks. Deep Lake abstracts these challenges away from the user so you can focus on building performant models.",activeloop Setup,https://docs.activeloop.ai/playbooks/working-with-videos#setup,Make sure to install Deep Lake with pip install 'deeplake[av]' in order to use Deep Lake's audio and video features.import deeplakeds = deeplake.empty('demo/video') # create a local dataset,activeloop Adding annotations,https://docs.activeloop.ai/playbooks/working-with-videos#adding-annotations,"See a complete example for this section in this notebook.Annotations like bounding boxes can be added and visualized in Deep Lake along with the video samples. We use tensors of htype sequence[bbox] for this purpose. Every sample in a sequence[bbox] tensor will be a sequence of bounding boxes which represents the annotations for the corresponding video sample in the video tensor.Learn more about sequences here.ds.create_tensor('boxes', htype='sequence[bbox]', coords={'type': 'pixel', 'mode': 'LTWH'})See this page for more details about the bbox htype.Next, here's an example of an annotations file taken from the LaSOT dataset. 
It contains annotations for every frame of a video.Example of an annotations fileWe convert this to a numpy array and append it to our boxes tensor.import pandas as pddf = pd.read_csv('groundtruth.txt', header=None)boxes = df.to_numpy().astype(np.float32)[:, np.newaxis]# boxes.shape == (2788, 1, 4) == (number of frames, number of boxes, 4)ds.boxes.append(boxes)Visualize the bounding boxes within your notebook using ds.visualize().The shapes of the samples in the video and sequence[bbox] tensors have to match in order for visualization to work properly.If the shape of the video tensor is (# frames, height, width, 3), the shape of the sequence tensor should be (# frames, # of boxes in a frame, 4)Visualize video annotations in notebook",activeloop Shape,https://docs.activeloop.ai/playbooks/working-with-videos#shape,"We can get the shape of a video sample in (N, H, W, C) format usingds.videos[0].shape(400, 360, 640, 3)",activeloop Creating a video tensor,https://docs.activeloop.ai/playbooks/working-with-videos#creating-a-video-tensor,"To create a video tensor, we specify an htype of 'video' and set sample_compression to the format of the video.ds.create_tensor('videos', htype='video', sample_compression='mp4')",activeloop Adding video samples,https://docs.activeloop.ai/playbooks/working-with-videos#adding-video-samples,"We append videos to the newly created tensor by reading the video files with deeplake.read:ds.videos.append(deeplake.read('./videos/example1.mp4'))ds.videos.append(deeplake.read('./videos/example2.mp4'))deeplake.read can also read videos from http://, gcs:// and s3:// urls given you have the credentials to access them. Examples include:ds.videos.append( deeplake.read( 'http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4', creds=None, ))ds.videos.append( deeplake.read( 's3://bucket-name/sample_video.mp4', creds={ 'aws_access_key_id': '...', 'aws_secret_access_key': '...', 'aws_session_token': '...', }, ))See deeplake.read and check out this notebook to see this in action.",activeloop Sample info,https://docs.activeloop.ai/playbooks/working-with-videos#sample-info,"Info about a video sample can be accessed usingds.videos[0].sample_infoThis returns info about the first sample as a dict:{ 'duration': 400400, 'fps': 29.97002997002997, 'timebase': 3.3333333333333335e-05, 'shape': [400, 360, 640, 3], 'format': 'mp4', 'filename': './videos/example1.mp4', 'modified': False}duration is in units of timebase",activeloop Accessing video timestamps,https://docs.activeloop.ai/playbooks/working-with-videos#accessing-video-timestamps,"Presentation timestamps (PTS) of frames can be obtained (in seconds) through a video tensor's .timestamp attribute after indexing it just like in the previous section:# timestamps of frames 10 - 15 of 0th sampleds.videos[0, 10:15].timestamparray([0.36703333, 0.4004 , 0.43376666, 0.46713334, 0.5005 ], dtype=float32)",activeloop Accessing video frames,https://docs.activeloop.ai/playbooks/working-with-videos#accessing-video-frames,"The most important part of working with videos on Deep Lake is retrieving the frames of a video sample as a numpy array.video = ds.videos[0].numpy()This decompresses the entire first video sample and returns the frames as a numpy array.print(type(video))print(video.shape)<class 'numpy.ndarray'>(400, 360, 640, 3)Be careful when decompressing an entire large video sample because it can blow up your memory.Deep Lake allows you to index the video tensor like a numpy array and return the frames you want. 
Only the required frames are decompressed. See a few examples below:Getting 100 frames from index 100 - 200# 1st sample, frames 100 - 200video = ds.videos[1, 100:200].numpy()video.shape(100, 360, 640, 3)Indexing with step# 0th sample, frames 100 - 200 with step of 5 framesvideo = ds.videos[0, 100:200:5].numpy()video.shape(20, 360, 640, 3)Getting a single frame# 1st sample, last framelast_frame = ds.videos[1, -1].numpy()last_frame.shape(360, 640, 3)",activeloop .data(),https://docs.activeloop.ai/playbooks/working-with-videos#.data,"Calling ds.videos[index].data() will return a dict with keys 'frames' and 'timestamps' with the corresponding numpy arrays as values. Indexing works the same way as it does with .numpy().data = ds.videos[1, 15:20].data()data['frames'].shapedata['timestamps'](5, 360, 640, 3)array([0.5005 , 0.5672333 , 0.6006 , 0.6339667 , 0.76743335], dtype=float32)",activeloop .play(),https://docs.activeloop.ai/playbooks/working-with-videos#.play,Individual video samples can be instantly visualized by calling .play() on them:ds.videos[1].play()This will play the video in your web browser:video playback in browserOn a jupyter notebook this will look like:video playback on jupyter notebook,activeloop ds.visualize(),https://docs.activeloop.ai/playbooks/working-with-videos#ds.visualize,"The whole Deep Lake dataset can be visualized by calling .visualize() on your dataset in a jupyter or colab notebook.ds.visualize()ds.visualize() on colabTry this out for yourself here!On colab, we only support visualizing hub:// datasets",activeloop Linked videos,https://docs.activeloop.ai/playbooks/working-with-videos#linked-videos,"Tensors of Deep Lake type link[video] can be used to store links to videos. All of the above features are supported for linked videos. https://, gcs://, s3:// and gdrive:// links are accepted.# create linked tensorlinks = ds.create_tensor('video_links', htype='link[video]')# append linked sampleslinks.append(deeplake.link('http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4', creds_key=None)) # link to public video# .numpy()video = links[0].numpy()# shape of numpy arrayvideo.shape(360, 720, 1280, 3)You will need to set credentials to link to private data on your S3 or GCS.",activeloop For Activeloop cloud datasets,https://docs.activeloop.ai/playbooks/working-with-videos#for-activeloop-cloud-datasets,"This process is easy and streamlined for deeplake:// datasets.First, go to your Activeloop platform, log in and choose 'Managed credentials' in settings.And then choose 'Add Credentials'.Select a credentials provider, set the credentials name (say, 'MY_KEY'), fill the fields and save it.Done! Your credentials have now been set.",activeloop Add managed credentials to your dataset,https://docs.activeloop.ai/playbooks/working-with-videos#add-managed-credentials-to-your-dataset,"Use ds.add_creds_key with managed set to True to add the credentials to your dataset. 
Multiple credentials can be added.ds.add_creds_key('MY_KEY', managed=True)ds.add_creds_key('S3_KEY', managed=True)",activeloop Use credentials,https://docs.activeloop.ai/playbooks/working-with-videos#use-credentials,"And when adding linked data using deeplake.link, simply mention which credentials to use through the creds_key argument.ds.links.append(deeplake.link('s3://my-bucket/sample-video.mp4', creds_key='MY_KEY'))",activeloop Video streaming,https://docs.activeloop.ai/playbooks/working-with-videos#video-streaming,"This section describes some implementation details regarding how video data is fetched and decompressed in Deep Lake.Large video samples (> 16MB by default) stored in remote Deep Lake datasets are not downloaded in their entirety on calling .numpy(). Instead, they are streamed from storage. Only the required packets are decompressed and converted to numpy arrays based on how the tensor is indexed.Playback via .play() also streams videos from storage.",activeloop For non Activeloop cloud datasets,https://docs.activeloop.ai/playbooks/working-with-videos#for-non-activeloop-cloud-datasets,"For non-hub:// datasets, you can use credentials set in your environment by mentioning creds_key='ENV'ds.links.append(deeplake.link('s3://my-bucket/sample-video.mp4', creds_key='ENV'))Or you can temporarily add creds to your datasetcreds={ 'aws_access_key_id': '...', 'aws_secret_access_key': '...', 'aws_session_token': '...', }# add creds key (Note that managed is False)ds.add_creds_key('TEMP_KEY')# populate creds with a credentials dictds.populate_creds('TEMP_KEY', creds)and thends.links.append(deeplake.link('s3://my-bucket/sample-video.mp4', creds_key='TEMP_KEY'))See deeplake.link",activeloop How to Implement External Locks using Zookeeper,https://docs.activeloop.ai/tutorials/concurrent-writes/concurrency-using-zookeeper-locks#how-to-implement-external-locks-using-zookeeper,Apache Zookeeper is a tool that can be used to manage Deep Lake locks and ensure that only 1 worker is writing to a Deep Lake dataset at a time. It offers a simple API for managing locks using a few lines of code.,activeloop Setup,https://docs.activeloop.ai/tutorials/concurrent-writes/concurrency-using-zookeeper-locks#setup,"First, let's install the kazoo Zookeeper client and launch a local Zookeeper server using Docker in the CLI.pip install kazoodocker run --rm -p 2181:2181 zookeeper",activeloop Write Locks,https://docs.activeloop.ai/tutorials/concurrent-writes/concurrency-using-zookeeper-locks#write-locks,"All write operations should be executed while respecting the lock.Let's connect a Python client to the local server and create a WriteLock using:from kazoo.client import KazooClientzk = KazooClient(hosts='127.0.0.1:2181')zk.start()deeplake_writelock = zk.WriteLock('/deeplake')The code below prevents the client from performing write operations until it acquires the WriteLock. The code will wait until the lock becomes available, and the internal Deep Lake lock should be disabled by specifying lock_enabled=False:from deeplake.core.vectorstore import VectorStorewith deeplake_writelock: # Initialize the Vector Store vector_store = VectorStore(<vector_store_path>, lock_enabled=False) # Add data vector_store.add(text = <your_text>, metadata = <your_metadata>, embedding_function = <your_embedding_function>) # This code can also be used with the Deep Lake LangChain Integration # from langchain.vectorstores import DeepLake # db = DeepLake(<dataset_path>, embedding = <your_embedding_function>) # db.add_texts(texts = <your_texts>, metadatas = <your_metadatas>, ...) 
# This code can also be used with the low-level Deep Lake API # import deeplake # ds = deeplake.load(dataset_path) # ds.append({...})",activeloop When Writes Update and Delete Data,https://docs.activeloop.ai/tutorials/concurrent-writes/concurrency-using-zookeeper-locks#when-writes-update-and-delete-data,"If the write operations are updating or deleting rows of data, the read operations should also lock the dataset in order to avoid corrupted read operations. Let's connect a Python client to the same local server above and create a ReadLock. Multiple clients can have a ReadLock without blocking each other, but they will all be blocked by the WriteLock above.from kazoo.client import KazooClientzk = KazooClient(hosts='127.0.0.1:2181')zk.start()deeplake_readlock = zk.ReadLock('/deeplake')The syntax for restricting operations using the ReadLock is:from deeplake.core.vectorstore import VectorStorewith deeplake_readlock: # Initialize the Vector Store vector_store = VectorStore(<vector_store_path>, read_only = True) # Search for data search_results = vector_store.search(embedding_data = <your_prompt>, embedding_function = <your_embedding_function>) # This code can also be used with the Deep Lake LangChain Integration # from langchain.vectorstores import DeepLake # db = DeepLake(<dataset_path>, embedding = <your_embedding_function>, read_only = True) # retriever = db.as_retriever() # qa = RetrievalQA.from_llm(llm = <your_model>, retriever = retriever) # This code can also be used with the low-level Deep Lake API # import deeplake # ds = deeplake.load(<dataset_path>, read_only = True) # dataloader = ds.dataloader().pytorch(...)Congrats! You just learned how to manage your own lock for Deep Lake using Zookeeper! 🎉",activeloop When Writes are Append-Only,https://docs.activeloop.ai/tutorials/concurrent-writes/concurrency-using-zookeeper-locks#when-writes-are-append-only,"If the write operations are only appending data, it is not necessary to use locks during read operations such as vector search. However, the Deep Lake datasets must be reloaded or re-initialized in order to have the latest available information from the write operations. from deeplake.core.vectorstore import VectorStore# Initialize the Vector Store vector_store = VectorStore(<vector_store_path>, read_only = True)# Search for datasearch_results = vector_store.search(embedding_data = <your_prompt>, embedding_function = <your_embedding_function>)# This code can also be used with the Deep Lake LangChain Integration# from langchain.vectorstores import DeepLake# db = DeepLake(<dataset_path>, embedding = <your_embedding_function>, read_only = True)# retriever = db.as_retriever()# qa = RetrievalQA.from_llm(llm = <your_model>, retriever = retriever)# This code can also be used with the low-level Deep Lake API# import deeplake# ds = deeplake.load(<dataset_path>, read_only = True)# dataloader = ds.dataloader().pytorch(...)",activeloop How Deep Lake Datasets are Synchronized with Long-Term Storage,https://docs.activeloop.ai/technical-details/best-practices/storage-synchronization#how-deep-lake-datasets-are-synchronized-with-long-term-storage,Using the with context when updating Deep Lake datasets is critical for achieving rapid write performance.,activeloop BAD PRACTICE - Code without with context,https://docs.activeloop.ai/technical-details/best-practices/storage-synchronization#bad-practice-code-without-with-context,"Any standalone update to a Deep Lake dataset is immediately pushed to the dataset's long-term storage location. 
Due to the high number of write operations, there may be a significant increase in runtime when the data is stored in the cloud. In the example below, an update is pushed to storage for every call to the .append() command.for i in range(10): ds.my_tensor.append(i)",activeloop Code using with context,https://docs.activeloop.ai/technical-details/best-practices/storage-synchronization#code-using-with-context,"The with syntax significantly improves write speeds when using Deep Lake because it only pushes updates to long-term storage after the code block inside the with statement has been executed, or when the local cache is full. This significantly reduces the number of discrete write operations, thereby increasing the speed by up to 100X. with ds: for i in range(10): ds.my_tensor.append(i)",activeloop How to create Deep Lake datasets at scale,https://docs.activeloop.ai/technical-details/best-practices/creating-datasets-at-scale#how-to-create-deep-lake-datasets-at-scale,This workflow assumes the reader has experience uploading datasets using Deep Lake's distributed framework deeplake.compute.,activeloop "When creating large Deep Lake datasets, it is recommended to:",https://docs.activeloop.ai/technical-details/best-practices/creating-datasets-at-scale#when-creating-large-deep-lake-datasets-it-is-recommended-to,"Parallelize the ingestion using deeplake.compute with a large num_workers (8-32)Use checkpointing to periodically auto-commit data using .eval(... checkpoint_interval = <commit_every_N_samples>)If there is an error during the data ingestion, the dataset is automatically reset to the last auto-commit with valid data.Additional recommendations are:If upload errors are intermittent and error-causing samples may be skipped (like bad links), you can run .eval(... ignore_errors=True).When uploading linked data, if a data integrity check is not necessary, and if querying based on shape information is not important, you can increase the upload speed by 10-100X by setting the following parameters to False when creating the linked tensor: verify, create_shape_tensor, create_sample_info_tensorWe highly recommend performing integrity checks for linked data during dataset creation, even though it slows data ingestion. This one-time check will significantly reduce debugging during querying, training, or other workflows.",activeloop Example Dataset Creation Using Checkpointing,https://docs.activeloop.ai/technical-details/best-practices/creating-datasets-at-scale#example-dataset-creation-using-checkpointing,"The dataset has a total of ~118000 samples.coco_2_deeplake(coco_api = coco, bucket = coco_bucket, category_names = category_names, category_info = category_info, creds_key = creds_key).eval(img_ids, ds, num_workers = 8, checkpoint_interval=25000)After the upload is complete, we see commits like the one below in ds.log().Commit : firstdbf9474d461a19e9333c2fd19b46115348f (main) Author : <username>Time : 2023-03-27 19:18:14Message: Auto-commit during deeplake.compute of coco_2_deeplake after 20.0% progressTotal samples processed in transform: 25000If an upload error occurs but the script completes, the dataset will be reset to the prior checkpoint and you will see a message such as:If the script does not complete due to a system failure or keyboard interrupt, you should load the dataset and run ds.reset(), or load the dataset using ds = deeplake.load(..., reset = True). This will restore the dataset to the prior checkpoint. 
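For example, a minimal sketch of this recovery step (the dataset path is a placeholder):

# Reload the dataset, discarding the uncommitted changes left by the interrupted upload.
# This restores the dataset to the last auto-commit created by checkpointing.
ds = deeplake.load(<dataset_path>, reset = True)
ds.log()  # confirm which checkpoint commit is now at the HEAD of the branch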
You can find out how many samples were successfully processed using:len(ds) -> length of the shortest tensords.max_len -> length of the longest tensords.log() -> Prints how many samples were processed by the checkpointing",activeloop Example Dataset Creation Using Checkpointing,https://docs.activeloop.ai/technical-details/best-practices/creating-datasets-at-scale#example-dataset-creation-using-checkpointing," In this example, we upload the COCO dataset, originally stored in an S3 bucket, to a Deep Lake dataset stored in another S3 bucket. The images are uploaded as links and the annotations (categories, masks, bounding boxes) are stored in the Deep Lake dataset. Annotations such as pose keypoints or supercategories are omitted.import deeplakeimport numpy as npimport boto3import osfrom pycocotools.coco import COCOimport getpassFirst, let's define the S3 buckets where the source COCO data is stored, and where the Deep Lake dataset will be stored. Let's also connect to the source data via boto3 and define a credentials dictionary (on some systems, credentials can be pulled automatically from the environment).coco_bucket = <bucket_containing_the_source_data>deeplake_bucket = <bucket_for_storing_the_deep_lake_dataset>creds = {'aws_access_key_id': os.environ.get('aws_access_key_id'), 'aws_secret_access_key': os.environ.get('aws_secret_access_key')}# Create the connection to the source datas3 = boto3.resource('s3', aws_access_key_id = creds['aws_access_key_id'], aws_secret_access_key = creds['aws_secret_access_key'])s3_bucket = s3.Bucket(coco_bucket)The annotations are downloaded locally to simplify the upload code, since the COCO API was designed to read the annotations from a local file.cloud_ann_path = 'coco/annotations/instances_train2017.json'local_ann_path = 'anns_train.json's3_bucket.download_file(cloud_ann_path, local_ann_path)coco = COCO(local_ann_path)category_info = coco.loadCats(coco.getCatIds())Next, let's create an empty Deep Lake dataset at the desired path and connect it to the Deep Lake backend. We also add managed credentials for accessing linked data. In this case, the managed credentials for accessing the dataset are the same as those for accessing the linked data, but that's not a general requirement. More details on managed credentials are available here. ds = deeplake.empty('s3://{}/coco-train'.format(deeplake_bucket), creds = creds, overwrite = True)creds_key = <managed_creds_key>ds.connect(org_id = <org_id>, creds_key = creds_key, token = <your_token>)ds.add_creds_key(creds_key, managed = True)Next, we define the list category_names, which maps each numerical label to its index in the list. If label annotations are uploaded as text (which is not the case here), the list is auto-populated. We pass category_names to the class_names parameter during tensor creation, though it can also be updated later, or omitted entirely if the numerical labels are sufficient.category_names = [category['name'] for category in category_info]with ds: ds.create_tensor('images', htype = 'link[image]', sample_compression = 'jpg') ds.create_tensor('categories', htype = 'class_label', class_names = category_names) ds.create_tensor('boxes', htype = 'bbox') ds.create_tensor('masks', htype = 'binary_mask', sample_compression = 'lz4')
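If integrity checks and shape-based queries on the linked images are not needed, the upload-speed recommendation above (setting verify, create_shape_tensor, and create_sample_info_tensor to False) could be applied when creating the images tensor. This is a minimal sketch under that assumption, not part of the original example:

with ds:
    # Skipping link verification and the auxiliary shape / sample-info tensors
    # can speed up linked-data ingestion by 10-100X, at the cost of integrity
    # checks and shape-based queries on the linked images.
    ds.create_tensor('images', htype = 'link[image]', sample_compression = 'jpg',
                     verify = False,
                     create_shape_tensor = False,
                     create_sample_info_tensor = False)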
Next, we define the input iterable and the deeplake.compute function. The elements in the iterable are parallelized among the workers during the execution of the function.img_ids = sorted(coco.getImgIds())@deeplake.computedef coco_2_deeplake(img_id, sample_out, coco_api, category_names, category_info, bucket, creds_key): anns = coco_api.loadAnns(coco_api.getAnnIds(img_id)) img_coco = coco_api.loadImgs(img_id)[0] # First create empty arrays for all annotations categories = np.zeros((len(anns)), dtype = np.uint32) boxes = np.zeros((len(anns),4), dtype = np.float32) masks = np.zeros((img_coco['height'], img_coco['width'], len(anns)), dtype = bool) # Then populate the arrays with the annotations data for i, ann in enumerate(anns): mask = coco_api.annToMask(ann) # Convert annotation to binary mask masks[:, :, i] = mask boxes[i,:] = ann['bbox'] # Find the deep lake category_names index from the coco category_id categories[i] = category_names.index([category_info[i]['name'] for i in range(len(category_info)) if category_info[i]['id']==ann['category_id']][0]) # Append the data to a deeplake sample sample_out.append({'images': deeplake.link('s3://{}/coco/train2017/{}'.format(bucket, img_coco['file_name']), creds_key = creds_key), 'categories': categories, 'boxes': boxes, 'masks': masks})Finally, execute the deeplake.compute function and set checkpoint_interval to 25000",activeloop How to restore a corrupted Deep Lake dataset,https://docs.activeloop.ai/technical-details/best-practices/restoring-corrupted-datasets#how-to-restore-a-corrupted-deep-lake-dataset,"Deliberate or accidental interruption of code may make a Deep Lake dataset or some of its tensors unreadable. At scale, code interruption is more likely to occur, and Deep Lake's version control is the primary tool for recovery.",activeloop How to Use Version Control to Retrieve Data,https://docs.activeloop.ai/technical-details/best-practices/restoring-corrupted-datasets#how-to-use-version-control-to-retrieve-data,"When manipulating Deep Lake datasets, it is recommended to commit periodically in order to create snapshots of the dataset that can be accessed later. This can be done automatically when creating datasets with deeplake.compute, or manually using our version control API (a minimal sketch of the manual pattern is shown further below).If a dataset becomes corrupted, when loading the dataset, you may see an error like:DatasetCorruptError: Exception occured (see Traceback). The dataset maybe corrupted. Try using `reset=True` to reset HEAD changes and load the previous commit. This will delete all uncommitted changes on the branch you are trying to load.To reset the uncommitted corrupted changes, load the dataset with the reset = True flag:ds = deeplake.load(<dataset_path>, reset = True)Note: this operation deletes all uncommitted changes.",activeloop How to Use Version Control in Deep Lake,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#how-to-use-version-control-in-deep-lake,"Deep Lake dataset version control allows you to manage changes to datasets with commands very similar to Git. It provides critical insights into how your data is evolving, and it works with datasets of any size!Let's check out how dataset version control works in Deep Lake! If you haven't done so already, please download and unzip the animals dataset from Step 2. 
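As a minimal sketch of the manual periodic-commit pattern referenced above (the iterable and tensor name are hypothetical placeholders, and the interval should be tuned to your workload):

# 'samples' and 'my_tensor' are hypothetical; at large scale, prefer deeplake.compute
# with checkpoint_interval as shown in the best-practices section above.
for i, sample in enumerate(samples):
    ds.my_tensor.append(sample)
    if (i + 1) % 10000 == 0:
        ds.commit('Snapshot after {} samples'.format(i + 1))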
First, let's create a Deep Lake dataset in the ./version_control_deeplake folder.import deeplakeimport numpy as npfrom PIL import Image# Set overwrite = True for re-runnabilityds = deeplake.dataset('./version_control_deeplake', overwrite = True)# Create a tensor and add an imagewith ds: ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') ds.images.append(deeplake.read('./animals/cats/image_1.jpg'))The first image in this dataset is a picture of a cat:Image.fromarray(ds.images[0].numpy())",activeloop Commit,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#commit,"To commit the data added above, simply run ds.commit:first_commit_id = ds.commit('Added image of a cat')print('Dataset in commit {} has {} samples'.format(first_commit_id, len(ds)))Next, let's add another image and commit the update:with ds: ds.images.append(deeplake.read('./animals/dogs/image_3.jpg')) second_commit_id = ds.commit('Added an image of a dog')print('Dataset in commit {} has {} samples'.format(second_commit_id, len(ds)))The second image in this dataset is a picture of a dog: Image.fromarray(ds.images[1].numpy())",activeloop Log,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#log,The commit history starting from the current commit can be shown using ds.log:log = ds.log()This command prints the log to the console and also assigns it to the specified variable log. The author of the commit is the username of the Activeloop account that logged in on the machine.,activeloop Branch,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#branch,"Branching takes place by running the ds.checkout command with the parameter create = True. Let's create a new branch dog_flipped, flip the second image (dog), and create a new commit on that branch.ds.checkout('dog_flipped', create = True)with ds: ds.images[1] = np.transpose(ds.images[1], axes=[1,0,2])flipped_commit_id = ds.commit('Flipped the dog image')The dog image is now flipped and the log shows a commit on the dog_flipped branch as well as the previous commits on main: Image.fromarray(ds.images[1].numpy())ds.log()",activeloop Checkout,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#checkout,"A previous commit or another branch can be checked out using ds.checkout:ds.checkout('main')Image.fromarray(ds.images[1].numpy())As expected, the dog image on main is not flipped.",activeloop Diff,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#diff,"Understanding changes between commits is critical for managing the evolution of datasets. Deep Lake's ds.diff function enables users to determine the number of samples that were added, removed, or updated for each tensor. The function can be used in 3 ways (a minimal sketch using the commits above is shown below):ds.diff() # Diff between the current state and the last commitds.diff(commit_id) # Diff between the current state and a specific commitds.diff(commit_id_1, commit_id_2) # Diff between two specific commits",activeloop Let's see how this works:,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#lets-see-how-this-works,"You should currently be on the main branch, which has 2 samples. 
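As an aside, here is a minimal sketch of the three ds.diff call forms listed above, using the commit ids created in the Commit section:

# Diff of the current (HEAD) state against the last commit
ds.diff()
# Diff of the current state against the first commit
ds.diff(first_commit_id)
# Diff between the two commits created earlier
ds.diff(first_commit_id, second_commit_id)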
You can check for uncommitted changes using:ds.has_head_changesLet's add another image:print('Dataset on {} branch has {} samples'.format('main', len(ds)))with ds: ds.images.append(deeplake.read('./animals/dogs/image_4.jpg')) print('After updating, the HEAD commit on {} branch has {} samples'.format('main', len(ds)))The 3rd sample is also an image of a dog:Image.fromarray(ds.images[2].numpy())Next, if you check out the dog_flipped branch, the dataset contains 2 samples, which is the sample count from when that branch was created. Therefore, the additional uncommitted third sample that was added to the main branch above is not reflected when other branches or commits are checked out.ds.checkout('dog_flipped')print('Dataset in {} branch has {} samples'.format('dog_flipped', len(ds)))Finally, when checking out the main branch again, the prior uncommitted changes are available, and they are stored in the HEAD commit on main:ds.checkout('main')print('Dataset in {} branch has {} samples'.format('main', len(ds)))The dataset now contains 3 samples and the uncommitted dog image is visible:Image.fromarray(ds.images[2].numpy())You can delete any uncommitted changes using the reset command below, which will bring the main branch back to the state with 2 samples.ds.reset()print('Dataset in {} branch has {} samples'.format('main', len(ds)))",activeloop HEAD Commit,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#head-commit,"Unlike Git, Deep Lake's dataset version control does not have a local staging area because all dataset updates are immediately synced with the permanent storage location (cloud or local). Therefore, any changes to a dataset are automatically stored in a HEAD commit on the current branch. This means that the uncommitted changes do not appear on other branches, and uncommitted changes are visible to all users.",activeloop Merge,https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control#merge,"Merging is a critical feature for collaborating on datasets. It enables you to modify data on separate branches before making those changes available on the main branch, thus enabling you to experiment on your data without affecting the workflows of other collaborators.We are currently on the main branch, where the picture of the dog is right-side-up.ds.log()Image.fromarray(ds.images[1].numpy())We can merge the dog_flipped branch into main using the command below:ds.merge('dog_flipped')After merging the dog_flipped branch, we observe that the image of the dog is flipped. The dataset log now has a commit indicating that a commit from another branch was merged to main.Image.fromarray(ds.images[1].numpy())ds.log()Congrats! You are now an expert in dataset version control! 🎓",activeloop How to Visualize Datasets in Deep Lake,https://docs.activeloop.ai/getting-started/deep-learning/visualizing-datasets#how-to-visualize-datasets-in-deep-lake,"One of Deep Lake's core features is to enable users to visualize and interpret large amounts of data. Let's load the COCO dataset, which is one of the most popular datasets in computer vision.import deeplakeds = deeplake.load('hub://activeloop/coco-train')The tensor layout for this dataset can be inspected using:ds.summary()The dataset can be visualized in the Activeloop UI or using an iframe in a Jupyter notebook. 
If you don't already have flask and ipython installed, make sure to install Deep Lake using pip install deeplake[visualizer].ds.visualize()Visualizing datasets in Activeloop Platform will unlock more features and faster performance compared to visualization in Jupyter notebooks.",activeloop Visualizing your own datasets,https://docs.activeloop.ai/getting-started/deep-learning/visualizing-datasets#visualizing-your-own-datasets,Any Deep Lake dataset can be visualized using the methods above as long as it follows the conventions necessary for the visualization engine to interpret and parse the data. These conventions are explained in the link below:Dataset Visualization,activeloop How to use Deeplake with PyTorch or TensorFlow in Python,https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks#how-to-use-deeplake-with-pytorch-or-tensorflow-in-python,"Deep Lake Datasets can be connected to popular ML frameworks such as PyTorch and TensorFlow, so you can train models while streaming data from the cloud without bottlenecking the training process!Data Streaming using Deep Lake",activeloop Training models with PyTorch,https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks#training-models-with-pytorch,"There are two syntaxes that can be used to train models in Pytorch using Deep Lake datasets:1.Deep Lake Data Loaders are highly-optimized and unlock the fastest streaming and shuffling using Deep Lake's internal shuffling method. However, they do not support custom sampling or fully-random shuffling that is possible using PyTorch datasets + data loaders.2.Pytorch Datasets + PyTorch Data Loaders enable all the customizability supported by PyTorch. However, they have highly sub-optimal streaming using Deep Lake datasets and may result in 5X+ slower performance compared to using Deep Lake data loaders.",activeloop 1. Deep Lake Data Loaders for PyTorch,https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks#1.-deep-lake-data-loaders-for-pytorch,"Best option for fast streaming!The fastest streaming of data to GPUs using PyTorch is achieved using Deep Lake's built-in PyTorch dataloader ds.pytorch() . If your model training is highly sensitive to the randomization of the input data, please pre-shuffle the data, or explore our writeup onShuffling in dataloaders.import deeplakefrom torchvision import datasets, transforms, modelsds = deeplake.load('hub://activeloop/cifar100-train') # Deep Lake Dataset",activeloop Transform syntax #2 - For complex or dependent transforms per tensor,https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks#transform-syntax-2-for-complex-or-dependent-transforms-per-tensor,"Sometimes a single transformation function might need to be applied to all tensors, or tensors need to be combined in a transform. In this case, you can use the syntax below to perform the exact same transform as above:def transform(sample_in): return {'images': tform(sample_in['images']), 'labels': sample_in['labels']}#PyTorch Dataloaderdataloader= ds.pytorch(batch_size = 16, num_workers = 2, transform = transform, tensors = ['images', 'labels'], shuffle = True)Some datasets such as ImageNet contain both grayscale and color images, which can cause errors when the transformed images are passed to the model. 
To convert only the grayscale images to color format, you can add a Torchvision transform to your pipeline (a minimal sketch of such a transform is shown further below).",activeloop Transform syntax #1 - For independent transforms per tensor,https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks#transform-syntax-1-for-independent-transforms-per-tensor,"The transform parameter in ds.pytorch() is a dictionary where the key is the tensor name and the value is the transformation function for that tensor. If a tensor does not need to be returned, the tensor should be omitted from the keys. If no transformation is necessary on a tensor, the transformation function is set as None.tform = transforms.Compose([ transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run transforms.RandomRotation(20), # Image augmentation transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),])#PyTorch Dataloaderdataloader= ds.pytorch(batch_size = 16, num_workers = 2, transform = {'images': tform, 'labels': None}, shuffle = True)",activeloop Training models with TensorFlow,https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks#training-models-with-tensorflow,"Deep Lake Datasets can be converted to TensorFlow Datasets using ds.tensorflow(). Downstream, functions from the tf.Data API such as map, shuffle, etc. can be applied to process the data before training.ds # Deep Lake Dataset object, to be used for trainingds_tf = ds.tensorflow() # A TensorFlow Dataset",activeloop 2. PyTorch Datasets + PyTorch Data Loaders using Deep Lake,https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks#2.-pytorch-datasets-+-pytorch-data-loaders-using-deep-lake,"Best option for full customizability.Deep Lake datasets can be integrated into the PyTorch Dataset class by passing the ds object to the PyTorch Dataset's constructor and pulling data in the __getitem__ method using self.ds.images[idx].numpy():from torch.utils.data import DataLoader, Datasetclass ClassificationDataset(Dataset): def __init__(self, ds, transform = None): self.ds = ds self.transform = transform def __len__(self): return len(self.ds) def __getitem__(self, idx): image = self.ds.images[idx].numpy() label = self.ds.labels[idx].numpy(fetch_chunks = True).astype(np.int32) if self.transform is not None: image = self.transform(image) sample = {'images': image, 'labels': label} return sampleWhen loading data sequentially, or when randomly loading samples from a tensor that fits into the cache (such as class_labels), it is recommended to set fetch_chunks = True. This increases the data loading speed by avoiding separate requests for each individual sample. This is not recommended when randomly loading large tensors, because the data is deleted from the cache before adjacent samples from a chunk are used.The PyTorch dataset + data loader is instantiated using the built-in PyTorch functions:cifar100_pytorch = ClassificationDataset(ds_train, transform = tform)dataloader_pytorch = DataLoader(cifar100_pytorch, batch_size = 16, num_workers = 2, shuffle = True)",activeloop Iteration and Training,https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks#iteration-and-training,"You can iterate through both data loaders above using the exact same syntax. 
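Here is a minimal sketch of the grayscale-to-RGB conversion mentioned in the transform discussion above. One common approach, similar to the one used in the image similarity tutorial later in this document, is to repeat the single channel three times; the exact transform from the original tutorial is not shown here, so treat this as an assumption:

from torchvision import transforms

tform_color = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomRotation(20),
    transforms.ToTensor(),
    # Repeat the channel dimension of grayscale images so every image has 3 channels
    transforms.Lambda(lambda x: x.repeat(3, 1, 1) if x.shape[0] == 1 else x),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
# Used exactly like tform in Transform syntax #1, e.g.:
# dataloader = ds.pytorch(batch_size = 16, num_workers = 2, transform = {'images': tform_color, 'labels': None}, shuffle = True)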
Loading the first batch of data using the Deep Lake data loader may take up to 30 seconds because the shuffle buffer is filled before any data is returned.for data in dataloader: print(data) break # Training Loopfor data in dataloader_pytorch: print(data) break # Training LoopFor more information on training, check out the tutorial on Training an Image Classification Model in PyTorch",activeloop How to Use Deep Lake as a Vector Store for Images,https://docs.activeloop.ai/tutorials/vector-store/image-similarity-search#how-to-use-deep-lake-as-a-vector-store-for-images,"Deep Lake is a unique Vector Store because it supports storage of various data types including images, video, and audio. In this tutorial, we show how to use Deep Lake to perform similarity search for images.",activeloop Similarity Search,https://docs.activeloop.ai/tutorials/vector-store/image-similarity-search#similarity-search,"Let's perform a similarity search on a reference image to find similar images in our Vector Store. First we download the image:The similarity search will return data for the top k (defaults to 4) similar samples, including numpy arrays for the underlying images.image_path = '/image_similarity.jpg'result = vector_store.search(embedding_data = image_path, embedding_function = embedding_function)The key-value pairs in the result contain the tensor name as the key and a list of values for the data:result.keys() # Returns: dict_keys(['filename', 'id', 'image', 'score'])len(result['score']) # Returns: 4result['image'][0].shape# Returns: (427, 640, 3)Since images can be quite large, we may not want to return them as numpy arrays, so we use return_tensors to specify that only the filename and id tensors should be returned:result = vector_store.search(embedding_data = image_path, embedding_function = embedding_function, return_tensors = ['id', 'filename'])result.keys() # Returns: dict_keys(['filename', 'id', 'score'])",activeloop Creating the Vector Store,https://docs.activeloop.ai/tutorials/vector-store/image-similarity-search#creating-the-vector-store,"We will use ~5k images in the COCO Validation Dataset as a source of diverse images. First, let's download the data.!wget -O '<download_path>' http://images.cocodataset.org/zips/val2017.zip# MAC !curl -o '<download_path>' http://images.cocodataset.org/zips/val2017.zipWe must unzip the images and specify their parent folder below.images_path = <download_path>Next, let's define a ResNet18 PyTorch model to embed the images based on the output from the second-to-last layer. We use the torchvision feature extractor to return the output of the avgpool layer to the embedding key, and we run on a GPU if available. (Note: the DeepLakeVectorStore class was deprecated, but you can still use it. The new API for calling Deep Lake's Vector Store is VectorStore.)from deeplake.core.vectorstore.deeplake_vectorstore import VectorStoreimport osimport torchfrom torchvision import transforms, modelsfrom torchvision.models.feature_extraction import create_feature_extractorfrom PIL import Imagedevice = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')model = models.resnet18(pretrained=True)return_nodes = { 'avgpool': 'embedding'}model = create_feature_extractor(model, return_nodes=return_nodes)model.eval()model.to(device)Let's define an embedding function that will embed a list of image filenames and return a list of embeddings. 
A transformation must be applied to the images so they can be fed into the model, including handling of grayscale images.tform= transforms.Compose([ transforms.Resize((224,224)), transforms.ToTensor(), transforms.Lambda(lambda x: torch.cat([x, x, x], dim=0) if x.shape[0] == 1 else x), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),])def embedding_function(images, model = model, transform = tform, batch_size = 4): '''Creates a list of embeddings based on a list of image filenames. Images are processed in batches.''' if isinstance(images, str): images = [images] #Proceess the embeddings in batches, but return everything as a single list embeddings = [] for i in range(0, len(images), batch_size): batch = torch.stack([transform(Image.open(item)) for item in images[i:i+batch_size]]) batch = batch.to(device) with torch.no_grad(): embeddings+= model(batch)['embedding'][:,:,0,0].cpu().numpy().tolist() return embeddingsNow we can create the vector store for storing the data. The Vector Store does not have the default configuration with text, embedding, and metadata tensors, so we use the tensor_params input to define the structure of the Vector Store. vector_store_path = 'hub://<org_id>/<dataset_name>'vector_store = VectorStore( path = vector_store_path, tensor_params = [{'name': 'image', 'htype': 'image', 'sample_compression': 'jpg'}, {'name': 'embedding', 'htype': 'embedding'}, {'name': 'filename', 'htype': 'text'}],)Finally, we can create a list of images from the source data and add it to the vector store. image_fns = [os.path.join(images_path, item) for item in os.listdir(images_path) if os.path.splitext(item)[-1]=='.jpg']vector_store.add(image = image_fns, filename = image_fns, embedding_function = embedding_function, embedding_data = image_fns)We observe in the automatically printed summary that the Vector Store has tensors for the image, their filename, their embedding, and an id, with 5000 samples each. This summary is also available via vector_store.summary(). tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (5000, 512) float32 None filename text (5000, 1) str None id text (5000, 1) str None image image (5000, 145:640, 200:640, 1:3) uint8 jpeg",activeloop Visualizing the Similarity Results,https://docs.activeloop.ai/tutorials/vector-store/image-similarity-search#visualizing-the-similarity-results,"Instead of returning the results of the similarity search directly, we can use return_view = True to get the Deep Lake dataset view, which is a lazy pointer to the underlying data that satisfies the similarity search (no data is retrieved locally).view = vector_store.search(embedding_data = image_path, embedding_function = embedding_function, return_view = True)We can then save the view and visualize it in the Deep Lake UI:view.save_view()The images are all fairly similar to the reference image, so it looks like the similarity search worked well!Congrats! You just used the Deep Lake VectorStore in for image similarity search! 🎉",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/vector-store/improving-search-accuracy-using-deep-memory,"Deep Memory computes a transformation that converts your embeddings into an embedding space that is tailored for your use case, based on several examples for which the most relevant embedding is known. 
This can increase the accuracy of your Vector Search by up to 22%.In this example, we'll use Deep Memory to improve the accuracy of Vector Search on the SciFact dataset, where the input prompt is a scientific claim, and the search result is the corresponding abstract.",activeloop Downloading the Data,https://docs.activeloop.ai/tutorials/vector-store/improving-search-accuracy-using-deep-memory#downloading-the-data,"First, let's specify our Activeloop and OpenAI tokens. Make sure to run pip install datasets because we'll download the source data from HuggingFace.from deeplake import VectorStoreimport osimport getpassimport datasetsimport openaifrom pathlib import Pathos.environ['OPENAI_API_KEY'] = getpass.getpass()# Skip this step if you logged in through the CLIos.environ['ACTIVELOOP_TOKEN'] = getpass.getpass()Next, let's download the dataset locally:corpus = datasets.load_dataset('scifact', 'corpus')",activeloop Creating the Vector Store,https://docs.activeloop.ai/tutorials/vector-store/improving-search-accuracy-using-deep-memory#creating-the-vector-store,"Now let's define an embedding function for the text data and create a Deep Lake Vector Store in our Managed Database. Deep Memory is only available for Vector Stores in our Managed Database.def embedding_function(texts, model='text-embedding-ada-002'): if isinstance(texts, str): texts = [texts] texts = [t.replace('\n', ' ') for t in texts] return [data['embedding'] for data in openai.Embedding.create(input = texts, model=model)['data']]path = 'hub://<org_id>/<vector_store_name>'vectorstore = VectorStore( path=path, embedding_function=embedding_function, runtime={'tensor_db': True},)",activeloop Adding data to the Vector Store,https://docs.activeloop.ai/tutorials/vector-store/improving-search-accuracy-using-deep-memory#adding-data-to-the-vector-store,"Next, let's extract the data from the SciFact dataset and add it to our Vector Store. In this example, we embed the abstracts of the scientific papers. Normally, the id tensor is auto-populated, but in this case, we want to use the ids in the SciFact dataset, in order to use the internal connection between ids, abstracts, and claims that already exists in SciFact.ids = [f'{id_}' for id_ in corpus['train']['doc_id']]texts = [text[0] for text in corpus['train']['abstract']]metadata = [{'title': title} for title in corpus['train']['title']]vectorstore.add( text=texts, id=ids, embedding_data=texts, embedding_function=embedding_function, metadata=metadata,)",activeloop Generating claims,https://docs.activeloop.ai/tutorials/vector-store/improving-search-accuracy-using-deep-memory#generating-claims,"We must create a relationship between the claims and their corresponding most relevant abstracts. 
This correspondence already exists in the SciFact dataset, and we extract that information using the helper function below.def preprocess_scifact(claims_dataset, dataset_type='train'): # Using a dictionary to store unique claims and their associated relevances claims_dict = {} for item in claims_dataset[dataset_type]: claim = item['claim'] relevance = (item['evidence_doc_id'], 1) # 1 indicates that the evidence is relevant to the question # Check for non-empty relevance if relevance[0] != '': if claim not in claims_dict: claims_dict[claim] = [relevance] else: # If the claim already exists, append the relevance only if it is not already present if relevance not in claims_dict[claim]: claims_dict[claim].append(relevance) # Split the dictionary into two lists: claims and relevances claims = list(claims_dict.keys()) relevances = list(claims_dict.values()) return claims, relevancesclaims_dataset = datasets.load_dataset('scifact', 'claims')claims, relevances = preprocess_scifact(claims_dataset, dataset_type='train')Let's print the first 10 claims and their relevant abstracts. The relevances are a list of tuples, where each id corresponds to the id tensor value in the Abstracts Vector Store, and 1 indicates a positive relevance.claims[:10]['1 in 5 million in UK have abnormal PrP positivity.', '32% of liver transplantation programs required patients to discontinue methadone treatment in 2001.', '40mg/day dosage of folic acid and 2mg/day dosage of vitamin B12 does not affect chronic kidney disease (CKD) progression.', '76-85% of people with severe mental disorder receive no treatment in low and middle income countries.', 'A T helper 2 cell (Th2) environment impedes disease development in patients with systemic lupus erythematosus (SLE).', 'A breast cancer patient's capacity to metabolize tamoxifen influences treatment outcome.', 'A country's Vaccine Alliance (GAVI) eligibility is not indictivate of accelerated adoption of the Hub vaccine.', 'A deficiency of folate increases blood levels of homocysteine.', 'A diminished ovarian reserve does not solely indicate infertility in an a priori non-infertile population.', 'A diminished ovarian reserve is a very strong indicator of infertility, even in an a priori non-infertile population.']relevances[:10][[('13734012', 1)], [('44265107', 1)], [('33409100', 1)], [('6490571', 1)], [('12670680', 1)], [('24341590', 1)], [('12428497', 1)], [('11705328', 1)], [('13497630', 1)], [('13497630', 1)]]
The PROGRESS column tells us the state of the training job, as well as the recall improvement on the data.recall@k corresponds to the percentage of rows for which the correct (most relevant) answer was returned in the top k vector search resultsvectorstore.deep_memory.list_jobs()This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/activeloop-test/test-deepmemory-ivoID STATUS RESULTS PROGRESS 6525a94bbfacbf7e75a08c76 completed recall@10: 0.00% (+0.00%) eta: 45.5 seconds recall@10: 0.00% (+0.00%)6538186bc1d2ffd8e8cd3b49 completed recall@10: 85.81% (+21.78%) eta: 1.9 seconds recall@10: 85.81% (+21.78%)",activeloop Evaluating Deep Memory's Performance,https://docs.activeloop.ai/tutorials/vector-store/improving-search-accuracy-using-deep-memory#evaluating-deep-memorys-performance,"Let's evaluate the recall improvement for an evaluation dataset that was not used in the training process. Deep Memory inference, and by extension this evaluation process, runs on the client.validation_claims, validation_relevances = preprocess_scifact(claims_dataset, dataset_type='validation')recalls = vectorstore.deep_memory.evaluate( queries = validation_claims, relevance = validation_relevances, embedding_function = embedding_function,)We observe that the recall has improved by up to 30%, depending on the k value.recalls---- Evaluating without model ---- Recall@1: 29.5%Recall@3: 45.0%Recall@5: 51.8%Recall@10: 58.1%Recall@50: 77.4%Recall@100: 84.9%---- Evaluating with model ---- Recall@1: 55.1%Recall@3: 68.2%Recall@5: 71.7%Recall@10: 77.9%Recall@50: 90.1%Recall@100: 92.6%",activeloop Using Deep Memory in your Application,https://docs.activeloop.ai/tutorials/vector-store/improving-search-accuracy-using-deep-memory#using-deep-memory-in-your-application,"To use Deep Memory in your applications, specify the deep_memory = True parameter during vector search. If you are using the LangChain integration, you may specify this parameter during Vector Store initialization. 
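For the LangChain integration mentioned above, a minimal hedged sketch is shown below; the parameter name is assumed to mirror the deep_memory flag described here, so check the integration docs for the exact signature:

# Assumption: the LangChain DeepLake wrapper accepts the Deep Memory flag at initialization
from langchain.vectorstores import DeepLake

db = DeepLake(<dataset_path>, embedding = <your_embedding_function>, read_only = True, deep_memory = True)
retriever = db.as_retriever()
# retriever can then be used as in the earlier LangChain examples, e.g. RetrievalQA.from_llm(llm = <your_model>, retriever = retriever)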
Let's try searching embedding using a prompt, with and without Deep Memory.prompt = 'Which diseases are inflammation-related processes'results = vectorstore.search(embedding_data = prompt)results['text']['Inflammation is a fundamental protective response that sometimes goes awry and becomes a major cofactor in the pathogenesis of many chronic human diseases, including cancer.', 'Kidney diseases, including chronic kidney disease (CKD) and acute kidney injury (AKI), are associated with inflammation.', 'BACKGROUND Persistent inflammation has been proposed to contribute to various stages in the pathogenesis of cardiovascular disease.', 'Inflammation accompanies obesity and its comorbidities-type 2 diabetes, non-alcoholic fatty liver disease and atherosclerosis, among others-and may contribute to their pathogenesis.']results_dm = vectorstore.search(embedding_data = prompt, deep_memory = True)results_dm['text']['Kidney diseases, including chronic kidney disease (CKD) and acute kidney injury (AKI), are associated with inflammation.', 'OBJECTIVES Calcific aortic valve (AV) disease is known to be an inflammation-related process.', 'Crohn's disease and ulcerative colitis, the two main types of chronic inflammatory bowel disease, are multifactorial conditions of unknown aetiology.', 'BACKGROUND Two inflammatory disorders, type 1 diabetes and celiac disease, cosegregate in populations, suggesting a common genetic origin.']We observe that there are overlapping results for both search methods, but 50% of the answers differ.Congrats! You just used Deep Memory to improve the accuracy of Vector Search on a specific use-case! 🎉",activeloop Deep Lake API Reference,https://docs.deeplake.ai/en/latest/index.html#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Vector Store Creating a Deep Lake Vector Store Vector Store Operations Vector Store Properties VectorStore.DeepMemory Creating a Deep Memory Deep Memory Operations Deep Memory Properties Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype 3D Bounding Box Htype Intrinsics Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Nifti Htype Point Cloud Htype Mesh Htype Embedding Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read MMDetection High-Performance Features Dataloader Sampler Tensor Query Language Random Split Deep Memory API Reference deeplake deeplake.VectorStore deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log deeplake.core.transform deeplake.core.vectorstore.deep_memory deeplake.random.seed,activeloop Indices and tables,https://docs.deeplake.ai/en/latest/index.html#indices-and-tables,Index Module Index Search Page,activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#htypes," Htype is the class of a tensor: image, bounding box, generic tensor, etc.The htype of a tensor can be specified at 
its creation Copy to clipboard If not specified, the tensor's htype defaults to 'generic'.Specifying an htype allows for strict settings and error handling, and it is critical for increasing the performance of Deep Lake datasets containing rich data such as images and videos.Supported htypes and their respective defaults are: Htype configs¶ HTYPE DTYPE COMPRESSION generic None None image uint8 Required arg image.rgb uint8 Required arg image.gray uint8 Required arg video uint8 Required arg audio float64 Required arg class_label uint32 None bbox float32 None segment_mask uint32 None binary_mask bool None keypoints_coco int32 None point int32 None polygon float32 None text str None json Any None list List None dicom None dcm point_cloud None las mesh None ply instance_label uint32 None link str None sequence None None Image Htype¶ Sample dimensions: (height, width, # channels) or (height, width). Images can be stored in Deep Lake as compressed bytes or as raw arrays. Due to the high compression ratio for most image formats, it is highly recommended to store compressed images using the sample_compression input to the create_tensor method. Creating an image tensor¶ An image tensor can be created using >>> ds.create_tensor('images', htype='image', sample_compression='jpg') Copy to clipboard OR >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', ... 'webp', 'wmf', 'xbm', 'eps', 'fli', 'im', 'msp', 'mpo'] Copy to clipboard Appending image samples¶ Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). Examples Appending pixel data with array >>> ds.images.append(np.zeros((5, 5, 3), dtype=np.uint8)) Copy to clipboard Appening Deep Lake image sample >>> ds.images.append(deeplake.read('images/0001.jpg')) Copy to clipboard You can append multiple samples at the same time using extend(). >>> ds.images.extend([deeplake.read(f'images/000{i}.jpg') for i in range(10)]) Copy to clipboard Note If the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression. image.rgb and image.gray htypes¶ image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type. i.e., if RGB images are appened to an image.gray tensor, Deep Lake will convert them to grayscale and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format. 
image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') Copy to clipboard >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...') Copy to clipboard Video Htype¶ Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor¶ A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Copy to clipboard Optional args: dtype: Defaults to uint8",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#htypes,"Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Copy to clipboard Appending video samples¶ Video samples can be of type np.ndarray or Sample which is returned by deeplake.read() . Deep Lake does not support compression of raw video frames. Therefore, array of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Copy to clipboard Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)]) Copy to clipboard Audio Htype¶ Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor¶ An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Copy to clipboard Optional args: dtype: Defaults to float64. Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Copy to clipboard Appending audio samples¶ Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Copy to clipboard Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'videos/00{i}.mp3') for i in range(10)]) Copy to clipboard Class Label Htype¶ Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor¶ A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Copy to clipboard Note If specifying compression, since the number of labels in one sample will be too low, chunk_compression would be the better option to use. Appending class labels¶ Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically. 
Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Copy to clipboard Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Copy to clipboard Appending text labels >>> ds.labels.append(['cars', 'airplanes']) Copy to clipboard Bounding Box Htype¶ Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of formats such as YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#htypes,"Creating a bbox tensor¶ A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Copy to clipboard Optional args: coords: A dictionary with keys 'type' and 'mode' . type: Specifies the units of bounding box coordinates. 'pixel': is in unit of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(coords = {'type': 'pixel', 'LTRB'}) Copy to clipboard Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH). Appending bounding boxes¶ Bounding boxes can be appended as np.ndarrays or list or lists of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Copy to clipboard Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes) Copy to clipboard Segmentation Mask Htype¶ Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of same shape as the image. The numerical values are indices of the list tensor.info.class_names. Creating a segment_mask tensor¶ A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Copy to clipboard Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks¶ Segmentation masks can be appended as np.ndarray. 
Examples >>> ds.masks.append(np.zeros((512, 512))) Copy to clipboard Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'. Binary Mask Htype¶ Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#htypes,"Each point cloud is a list / array of points . All points in a sample should have the same number of co-ordinates. Different point clouds can have different number of points. Creating a point cloud tensor¶ A point cloud tensor can be created using >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Copy to clipboard Optional args: sample_compression Supported compressions: >>> [None, 'las'] Copy to clipboard Appending point clouds¶ Point clouds can be appended as a np.ndarray. Examples Appending point clouds with numpy arrays >>> import numpy as np >>> point_cloud1 = np.random.randint(0, 10, (5, 3)) >>> ds.point_clouds.append(point_cloud1) >>> point_cloud2 = np.random.randint(0, 10, (15, 3)) >>> ds.point_clouds.append(point_cloud2) >>> ds.point_clouds.shape >>> (2, None, 3) Copy to clipboard Or we can use deeplake.read() method to add samples >>> import deeplake as dp >>> sample = dp.read('example.las') # point cloud with 100 points >>> ds.point_cloud.append(sample) >>> ds.point_cloud.shape >>> (1, 100, 3) Copy to clipboard Mesh Htype¶ Sample dimensions: (# num_points, 3) Mesh samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each sample in a tensor of mesh htype is a mesh array (3-D object data). Each mesh is a list / array of points. Different meshes can have different number of points. Creating a mesh tensor¶ A mesh tensor can be created using >>> ds.create_tensor('mesh', htype='mesh', sample_compression='ply') Copy to clipboard Optional args: sample_compression Supported compressions: >>> ['ply'] Copy to clipboard Appending meshes¶ Examples Appending a ply file contatining a mesh data to tensor >>> import deeplake as dp >>> sample = dp.read('example.ply') # mesh with 100 points and 200 faces >>> ds.mesh.append(sample) Copy to clipboard >>> ds.mesh.shape >>> (1, 100, 3) Copy to clipboard Sequence htype¶ A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype. It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc. Examples >>> ds.create_tensor('seq', htype='sequence') >>> ds.seq.append([1, 2, 3]) >>> ds.seq.append([4, 5, 6]) >>> ds.seq.numpy() array([[[1], [2], [3]], [[4], [5], [6]]]) Copy to clipboard >>> ds.create_tensor('image_seq', htype='sequence[image]', sample_compression='jpg') >>> ds.image_seq.append([deeplake.read('img01.jpg'), deeplake.read('img02.jpg')]) Copy to clipboard Link htype¶ Link htype is a special meta htype that allows linking of external data (files) to the dataset, without storing the data in the dataset itself. Moreover, there can be variations in this htype, such as link[image], link[video], link[audio], etc. that would enable the activeloop visualizer to correctly display the data. 
No data is actually loaded until you try to read the sample from a dataset. There are a few exceptions to this:- If create_shape_tensor=True was specified during create_tensor of the tensor to which this is being added, the shape of the sample is read. This is True by default. If create_sample_info_tensor=True was specified during create_tensor of the tensor to which this is being added, the sample info is read. This is True by default",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#htypes,"For keypoints that are not present in an image, they can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer . Point Htype¶ Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Points does not contain a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. Creating a point tensor¶ A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending point samples¶ Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Copy to clipboard Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3))) Copy to clipboard Polygon Htype¶ Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points. All points in a sample should have the same number of co-ordinates (eg., cannot mix 2-D points with 3-D points). Different samples can have different number of polygons. Different polygons can have different number of points. Creating a polygon tensor¶ A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending polygons¶ Polygons can be appended as a list of list of tuples or np.ndarray. 
Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points >>> ds.polygons.append(sample) >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Point Cloud Htype¶ Sample dimensions: (# num_points, 3) Point cloud samples can be of type np.ndarray or Sample which is returned by deeplake.read()",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#htypes,"A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise. The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor¶ A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Optional args: sample_compression or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks¶ Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird']) COCO Keypoints Htype¶ Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0: keypoint not in image. 1: keypoint in image but not visible. 2: keypoint in image and visible. Creating a keypoints_coco tensor¶ A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Optional args: keypoints: List of strings describing the i-th keypoint. tensor.info.keypoints will be set to this list. connections: List of keypoint index pairs describing which points should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32.
Supported compressions: >>> ['lz4'] You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Appending keypoints¶ Keypoints can be appended as np.ndarray or list. Examples Appending keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.info.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.info.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of K keypoints in the same order",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#htypes,"If verify=True was specified during create_tensor of the tensor to which this is being added, some metadata is read from them to verify the integrity of the link samples. This is True by default. If you do not want to verify your links, all three of verify, create_shape_tensor and create_sample_info_tensor have to be set to False. Examples >>> ds = deeplake.dataset('......') Adding credentials to the dataset You can add the names of the credentials you want to use (not needed for http/local urls) >>> ds.add_creds_key('MY_S3_KEY') >>> ds.add_creds_key('GCS_KEY') and populate the added names with credentials dictionaries >>> ds.populate_creds('MY_S3_KEY', {}) # add creds here >>> ds.populate_creds('GCS_KEY', {}) # add creds here These creds are only present temporarily and will have to be repopulated on every reload. For datasets connected to Activeloop Platform, you can store your credentials on the platform as Managed Credentials and use them just by adding the keys to your dataset. For example, if you have managed credentials with names 'my_s3_creds', 'my_gcs_creds', you can add them to your dataset using Dataset.add_creds_key without having to populate them. >>> ds.add_creds_key('my_s3_creds', managed=True) >>> ds.add_creds_key('my_gcs_creds', managed=True) Create a link tensor >>> ds.create_tensor('img', htype='link[image]', sample_compression='jpg') Populate the tensor with links >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='MY_S3_KEY')) >>> ds.img.append(deeplake.link('gcs://ghi/jkl.png', creds_key='GCS_KEY')) >>> ds.img.append(deeplake.link('https://picsum.photos/200/300')) # http path doesn't need creds >>> ds.img.append(deeplake.link('./path/to/cat.jpeg')) # local path doesn't need creds >>> ds.img.append(deeplake.link('s3://abc/def.jpeg')) # this will throw an exception as cloud paths always need creds_key >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='ENV')) # this will use creds from environment Accessing the data >>> for i in range(5): ... ds.img[i].numpy() ... Updating a sample >>> ds.img[0] = deeplake.link('./data/cat.jpeg')",activeloop
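The prose above notes that link verification, shape reading, and sample-info reading all happen unless the corresponding flags are disabled at tensor creation. As a minimal, hedged sketch (not taken verbatim from the docs), assuming verify, create_shape_tensor, and create_sample_info_tensor are accepted as create_tensor arguments exactly as described above, a link tensor that skips all verification could be created like this:
>>> ds.create_tensor('img_noverify', htype='link[image]', sample_compression='jpg',
...                  verify=False, create_shape_tensor=False, create_sample_info_tensor=False)
>>> ds.img_noverify.append(deeplake.link('./path/to/cat.jpeg'))  # nothing is read or verified at append time
Skipping verification speeds up appending many links, at the cost of only discovering broken paths when a sample is first read.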
Image Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#image-htype,"Sample dimensions: (height, width, # channels) or (height, width). Images can be stored in Deep Lake as compressed bytes or as raw arrays. Due to the high compression ratio for most image formats, it is highly recommended to store compressed images using the sample_compression input to the create_tensor method. Creating an image tensor¶ An image tensor can be created using >>> ds.create_tensor('images', htype='image', sample_compression='jpg') OR >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', ... 'webp', 'wmf', 'xbm', 'eps', 'fli', 'im', 'msp', 'mpo'] Appending image samples¶ Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). Examples Appending pixel data with array >>> ds.images.append(np.zeros((5, 5, 3), dtype=np.uint8)) Appending Deep Lake image sample >>> ds.images.append(deeplake.read('images/0001.jpg')) You can append multiple samples at the same time using extend(). >>> ds.images.extend([deeplake.read(f'images/000{i}.jpg') for i in range(10)]) Note If the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression. image.rgb and image.gray htypes¶ image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type, i.e., if RGB images are appended to an image.gray tensor, Deep Lake will convert them to grayscale, and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format. image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...')",activeloop
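Putting the image and class_label pieces together, here is a small, hedged end-to-end sketch (the dataset path and class names are invented for illustration) that stores a couple of images alongside labels using only the calls shown in this section:
>>> import numpy as np
>>> import deeplake
>>> ds = deeplake.dataset('./animals_example')  # hypothetical local path
>>> ds.create_tensor('images', htype='image', sample_compression='jpg')
>>> ds.create_tensor('labels', htype='class_label', class_names=['cat', 'dog'])
>>> ds.images.append(np.zeros((64, 64, 3), dtype=np.uint8))  # raw pixel data, compressed to jpg on write
>>> ds.labels.append('cat')                                  # stored as index 0 of class_names
>>> ds.images.append(deeplake.read('images/0001.jpg'))       # pre-compressed bytes read from disk
>>> ds.labels.append('dog')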
Video Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#video-htype,"Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor¶ A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Appending video samples¶ Video samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw video frames. Therefore, arrays of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)])",activeloop Audio Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#audio-htype,"Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor¶ An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Optional args: dtype: Defaults to float64. Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Appending audio samples¶ Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'audios/00{i}.mp3') for i in range(10)])",activeloop
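Both the video and audio notes above state that raw np.ndarray samples can only be appended to tensors created with None compression. A minimal, hedged sketch of that case, assuming the default dtypes listed above (uint8 frames, float64 waveforms); the tensor names and array shapes are invented for illustration:
>>> import numpy as np
>>> ds.create_tensor('raw_frames', htype='video', sample_compression=None)
>>> ds.raw_frames.append(np.zeros((4, 32, 32, 3), dtype=np.uint8))   # 4 uncompressed RGB frames
>>> ds.create_tensor('waveforms', htype='audio', sample_compression=None)
>>> ds.waveforms.append(np.zeros((16000,), dtype=np.float64))        # 1 second of silence at 16 kHz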
Class Label Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#class-label-htype,"Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor¶ A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Note If specifying compression, chunk_compression is the better option, since the number of labels in a single sample is typically low. Appending class labels¶ Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically. Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Appending text labels >>> ds.labels.append(['cars', 'airplanes'])",activeloop Bounding Box Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#bounding-box-htype,"Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of formats such as YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. Creating a bbox tensor¶ A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Optional args: coords: A dictionary with keys 'type' and 'mode'. type: Specifies the units of bounding box coordinates. 'pixel': is in unit of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] You can also choose to set the coords after tensor creation. >>> ds.boxes.info.update(coords = {'type': 'pixel', 'mode': 'LTRB'}) Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH).
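As a hedged illustration of the coords metadata described above (the box values are invented), a pixel-space LTWH bounding box tensor and one sample could look like the following sketch; it follows the stated conventions rather than reproducing text from the docs:
>>> import numpy as np
>>> ds.create_tensor('coco_boxes', htype='bbox', coords={'type': 'pixel', 'mode': 'LTWH'})
>>> box = np.array([[50, 40, 120, 80]], dtype=np.float32)  # left_x, top_y, width, height in pixels
>>> ds.coco_boxes.append(box)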
Appending bounding boxes¶ Bounding boxes can be appended as np.ndarray, a list, or a list of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes)",activeloop Segmentation Mask Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#segmentation-mask-htype,"Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of the same shape as the image. The numerical values are indices of the list tensor.info.class_names. Creating a segment_mask tensor¶ A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] You can also choose to set the class names after tensor creation. >>> ds.masks.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks¶ Segmentation masks can be appended as np.ndarray. Examples >>> ds.masks.append(np.zeros((512, 512))) Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'.",activeloop
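A short, hedged sketch tying the segment_mask pieces above together (the region and class list are invented): the mask values are plain integers that index into the tensor's class_names.
>>> import numpy as np
>>> classes = ['background', 'bird']
>>> ds.create_tensor('seg_masks', htype='segment_mask', class_names=classes, sample_compression='lz4')
>>> mask = np.zeros((512, 512), dtype=np.uint32)
>>> mask[100:200, 150:250] = 1          # this region is labeled 'bird' (index 1 in class_names)
>>> ds.seg_masks.append(mask)
>>> ds.seg_masks.info.class_names
['background', 'bird']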
Binary Mask Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#binary-mask-htype,"Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object. A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise. The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor¶ A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Optional args: sample_compression or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks¶ Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird'])",activeloop
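Since the channel count of a binary mask must match the number of labels stored at the same index of the adjacent class_label tensor, here is a hedged two-object sketch (tensor names and regions are made up) illustrating that pairing:
>>> import numpy as np
>>> ds.create_tensor('bin_masks', htype='binary_mask', sample_compression='lz4')
>>> ds.create_tensor('bin_labels', htype='class_label')
>>> masks = np.zeros((512, 512, 2), dtype=bool)   # 2 objects -> 2 channels
>>> masks[10:60, 10:60, 0] = True                 # channel 0: first object
>>> masks[200:300, 220:260, 1] = True             # channel 1: second object
>>> ds.bin_masks.append(masks)
>>> ds.bin_labels.append(['bottle', 'bird'])      # one label per channel, in channel order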
COCO Keypoints Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#coco-keypoints-htype,"Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0: keypoint not in image. 1: keypoint in image but not visible. 2: keypoint in image and visible. Creating a keypoints_coco tensor¶ A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Optional args: keypoints: List of strings describing the i-th keypoint. tensor.info.keypoints will be set to this list. connections: List of keypoint index pairs describing which points should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Appending keypoints¶ Keypoints can be appended as np.ndarray or list. Examples Appending keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.info.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.info.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of K keypoints in the same order. For keypoints that are not present in an image, they can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer.",activeloop
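Because the (3 x K, # objects) layout above is easy to get wrong, here is a hedged sketch (coordinates invented) that builds such an array from per-object [x1, y1, v1, ..., xK, yK, vK] lists by stacking them as columns:
>>> import numpy as np
>>> K = 3                                             # keypoints per object, in a fixed order
>>> obj1 = [465, 178, 2, 454, 177, 2, 407, 271, 2]    # 3*K values for the first object
>>> obj2 = [398, 363, 2, 387, 322, 2, 0, 0, 0]        # third keypoint missing: dummy coords with v = 0
>>> kp_arr = np.stack([obj1, obj2], axis=1)           # shape (3*K, # objects) = (9, 2)
>>> ds.keypoints.append(kp_arr)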
Point Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#point-htype,"Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Point samples do not have a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. Creating a point tensor¶ A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Appending point samples¶ Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3)))",activeloop Polygon Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#polygon-htype,"Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points. All points in a sample should have the same number of co-ordinates (e.g., cannot mix 2-D points with 3-D points). Different samples can have different numbers of polygons. Different polygons can have different numbers of points. Creating a polygon tensor¶ A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Appending polygons¶ Polygons can be appended as a list of lists of tuples or np.ndarray. Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points >>> ds.polygons.append(sample) >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample)",activeloop
Point Cloud Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#point-cloud-htype,"Sample dimensions: (# num_points, 3) Point cloud samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each point cloud is a list / array of points. All points in a sample should have the same number of co-ordinates. Different point clouds can have different numbers of points. Creating a point cloud tensor¶ A point cloud tensor can be created using >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Optional args: sample_compression Supported compressions: >>> [None, 'las'] Appending point clouds¶ Point clouds can be appended as np.ndarray. Examples Appending point clouds with numpy arrays >>> import numpy as np >>> point_cloud1 = np.random.randint(0, 10, (5, 3)) >>> ds.point_clouds.append(point_cloud1) >>> point_cloud2 = np.random.randint(0, 10, (15, 3)) >>> ds.point_clouds.append(point_cloud2) >>> ds.point_clouds.shape >>> (2, None, 3) Or we can use the deeplake.read() method to add samples >>> import deeplake as dp >>> sample = dp.read('example.las') # point cloud with 100 points >>> ds.point_cloud.append(sample) >>> ds.point_cloud.shape >>> (1, 100, 3)",activeloop Mesh Htype¶,https://docs.deeplake.ai/en/v3.1.5/Htypes.html#mesh-htype,"Sample dimensions: (# num_points, 3) Mesh samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each sample in a tensor of mesh htype is a mesh array (3-D object data). Each mesh is a list / array of points. Different meshes can have different numbers of points. Creating a mesh tensor¶ A mesh tensor can be created using >>> ds.create_tensor('mesh', htype='mesh', sample_compression='ply') Optional args: sample_compression Supported compressions: >>> ['ply'] Appending meshes¶ Examples Appending a ply file containing mesh data to the tensor >>> import deeplake as dp >>> sample = dp.read('example.ply') # mesh with 100 points and 200 faces >>> ds.mesh.append(sample) >>> ds.mesh.shape >>> (1, 100, 3)",activeloop
Htypes¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#htypes,"Htype is the class of a tensor: image, bounding box, generic tensor, etc. The htype of a tensor can be specified at its creation. If not specified, the tensor's htype defaults to 'generic'. Specifying an htype allows for strict settings and error handling, and it is critical for increasing the performance of Deep Lake datasets containing rich data such as images and videos. Supported htypes and their respective defaults are: Htype configs¶ HTYPE (DTYPE / COMPRESSION): generic (None / None), image (uint8 / Required arg), image.rgb (uint8 / Required arg), image.gray (uint8 / Required arg), video (uint8 / Required arg), audio (float64 / Required arg), class_label (uint32 / None), bbox (float32 / None), segment_mask (uint32 / None), binary_mask (bool / None), keypoints_coco (int32 / None), point (int32 / None), polygon (float32 / None), text (str / None), json (Any / None), list (List / None), dicom (None / dcm), point_cloud (None / las), mesh (None / ply), instance_label (uint32 / None), link (str / None), sequence (None / None).",activeloop
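The creation snippet referenced above is not reproduced in this extract; as a hedged illustration of the same point (tensor names invented), an htype can be passed at creation, and omitting it falls back to 'generic':
>>> ds.create_tensor('my_tensor')                                           # htype defaults to 'generic'
>>> ds.create_tensor('my_images', htype='image', sample_compression='jpg')  # strict settings and better performance for rich data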
image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') Copy to clipboard >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...') Copy to clipboard Video Htype¶ Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor¶ A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Copy to clipboard Optional args: dtype: Defaults to uint8",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#htypes,"Creating a bbox tensor¶ A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Copy to clipboard Optional args: coords: A dictionary with keys 'type' and 'mode' . type: Specifies the units of bounding box coordinates. 'pixel': is in unit of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(coords = {'type': 'pixel', 'LTRB'}) Copy to clipboard Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH). Appending bounding boxes¶ Bounding boxes can be appended as np.ndarrays or list or lists of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Copy to clipboard Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes) Copy to clipboard Segmentation Mask Htype¶ Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of same shape as the image. The numerical values are indices of the list tensor.info.class_names. Creating a segment_mask tensor¶ A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Copy to clipboard Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks¶ Segmentation masks can be appended as np.ndarray. Examples >>> ds.masks.append(np.zeros((512, 512))) Copy to clipboard Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. 
For these use cases, please use htype = 'binary_mask'. Binary Mask Htype¶ Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#htypes,"Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Copy to clipboard Appending video samples¶ Video samples can be of type np.ndarray or Sample which is returned by deeplake.read() . Deep Lake does not support compression of raw video frames. Therefore, array of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Copy to clipboard Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)]) Copy to clipboard Audio Htype¶ Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor¶ An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Copy to clipboard Optional args: dtype: Defaults to float64. Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Copy to clipboard Appending audio samples¶ Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Copy to clipboard Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'videos/00{i}.mp3') for i in range(10)]) Copy to clipboard Class Label Htype¶ Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor¶ A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Copy to clipboard Note If specifying compression, since the number of labels in one sample will be too low, chunk_compression would be the better option to use. Appending class labels¶ Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically. 
Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Copy to clipboard Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Copy to clipboard Appending text labels >>> ds.labels.append(['cars', 'airplanes']) Copy to clipboard Bounding Box Htype¶ Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of formats such as YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#htypes,"A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise . The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor¶ A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Copy to clipboard Optional args: ref:sample_compression <sample_compression> or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Copy to clipboard Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks¶ Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird']) Copy to clipboard COCO Keypoints Htype¶ Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0: keypoint not in image. 1: keypoint in image but not visible. 2: keypoint in image and visible. Creating a keypoints_coco tensor¶ A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Copy to clipboard Optional args: keypoints: List of strings describing the i th keypoint. tensor.info.keypoints will be set to this list. connections: List of strings describing which points should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Copy to clipboard Appending keypoints¶ Keypoints can be appended as np.ndarray or list. 
Examples Appending keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Copy to clipboard Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of K keypoints in the same order",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#htypes,"Each point cloud is a list / array of points . All points in a sample should have the same number of co-ordinates. Different point clouds can have different number of points. Creating a point cloud tensor¶ A point cloud tensor can be created using >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Copy to clipboard Optional args: sample_compression Supported compressions: >>> [None, 'las'] Copy to clipboard Appending point clouds¶ Point clouds can be appended as a np.ndarray. Examples Appending point clouds with numpy arrays >>> import numpy as np >>> point_cloud1 = np.random.randint(0, 10, (5, 3)) >>> ds.point_clouds.append(point_cloud1) >>> point_cloud2 = np.random.randint(0, 10, (15, 3)) >>> ds.point_clouds.append(point_cloud2) >>> ds.point_clouds.shape >>> (2, None, 3) Copy to clipboard Or we can use deeplake.read() method to add samples >>> import deeplake as dp >>> sample = dp.read('example.las') # point cloud with 100 points >>> ds.point_cloud.append(sample) >>> ds.point_cloud.shape >>> (1, 100, 3) Copy to clipboard Mesh Htype¶ Sample dimensions: (# num_points, 3) Mesh samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each sample in a tensor of mesh htype is a mesh array (3-D object data). Each mesh is a list / array of points. Different meshes can have different number of points. Creating a mesh tensor¶ A mesh tensor can be created using >>> ds.create_tensor('mesh', htype='mesh', sample_compression='ply') Copy to clipboard Optional args: sample_compression Supported compressions: >>> ['ply'] Copy to clipboard Appending meshes¶ Examples Appending a ply file contatining a mesh data to tensor >>> import deeplake as dp >>> sample = dp.read('example.ply') # mesh with 100 points and 200 faces >>> ds.mesh.append(sample) Copy to clipboard >>> ds.mesh.shape >>> (1, 100, 3) Copy to clipboard Sequence htype¶ A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype. It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc. Examples >>> ds.create_tensor('seq', htype='sequence') >>> ds.seq.append([1, 2, 3]) >>> ds.seq.append([4, 5, 6]) >>> ds.seq.numpy() array([[[1], [2], [3]], [[4], [5], [6]]]) Copy to clipboard >>> ds.create_tensor('image_seq', htype='sequence[image]', sample_compression='jpg') >>> ds.image_seq.append([deeplake.read('img01.jpg'), deeplake.read('img02.jpg')]) Copy to clipboard Link htype¶ Link htype is a special meta htype that allows linking of external data (files) to the dataset, without storing the data in the dataset itself. Moreover, there can be variations in this htype, such as link[image], link[video], link[audio], etc. 
that would enable the activeloop visualizer to correctly display the data. No data is actually loaded until you try to read the sample from a dataset. There are a few exceptions to this:- If create_shape_tensor=True was specified during create_tensor of the tensor to which this is being added, the shape of the sample is read. This is True by default. If create_sample_info_tensor=True was specified during create_tensor of the tensor to which this is being added, the sample info is read. This is True by default",activeloop Image Htype¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#image-htype,"Sample dimensions: (height, width, # channels) or (height, width). Images can be stored in Deep Lake as compressed bytes or as raw arrays. Due to the high compression ratio for most image formats, it is highly recommended to store compressed images using the sample_compression input to the create_tensor method. Creating an image tensor¶ An image tensor can be created using >>> ds.create_tensor('images', htype='image', sample_compression='jpg') Copy to clipboard OR >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', ... 'webp', 'wmf', 'xbm', 'eps', 'fli', 'im', 'msp', 'mpo'] Copy to clipboard Appending image samples¶ Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). Examples Appending pixel data with array >>> ds.images.append(np.zeros((5, 5, 3), dtype=np.uint8)) Copy to clipboard Appening Deep Lake image sample >>> ds.images.append(deeplake.read('images/0001.jpg')) Copy to clipboard You can append multiple samples at the same time using extend(). >>> ds.images.extend([deeplake.read(f'images/000{i}.jpg') for i in range(10)]) Copy to clipboard Note If the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression. image.rgb and image.gray htypes¶ image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type. i.e., if RGB images are appened to an image.gray tensor, Deep Lake will convert them to grayscale and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format. image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') Copy to clipboard >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...') Copy to clipboard",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#htypes,"For keypoints that are not present in an image, they can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer . Point Htype¶ Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Points does not contain a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. 
Creating a point tensor¶ A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Appending point samples¶ Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3))) Polygon Htype¶ Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points. All points in a sample should have the same number of co-ordinates (e.g., you cannot mix 2-D points with 3-D points). Different samples can have different numbers of polygons. Different polygons can have different numbers of points. Creating a polygon tensor¶ A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Appending polygons¶ Polygons can be appended as a list of lists of tuples or as np.ndarray. Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points each >>> ds.polygons.append(sample) >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Point Cloud Htype¶ Sample dimensions: (# num_points, 3) Point cloud samples can be of type np.ndarray or Sample which is returned by deeplake.read()",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#htypes,"If verify=True was specified during create_tensor of the tensor to which this is being added, some metadata is read from them to verify the integrity of the link samples. This is True by default. If you do not want to verify your links, all three of verify, create_shape_tensor and create_sample_info_tensor have to be set to False.
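As a minimal sketch (assuming the three flags above are passed as keyword arguments to create_tensor; the tensor name 'img_links' is illustrative), a link tensor that skips verification entirely could be created as: >>> ds.create_tensor('img_links', htype='link[image]', sample_compression='jpg', verify=False, create_shape_tensor=False, create_sample_info_tensor=False) # nothing is read from the linked files when samples are appended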
Examples >>> ds = deeplake.dataset('......') Adding credentials to the dataset You can add the names of the credentials you want to use (not needed for http/local urls) >>> ds.add_creds_key('MY_S3_KEY') >>> ds.add_creds_key('GCS_KEY') and populate the added names with credentials dictionaries >>> ds.populate_creds('MY_S3_KEY', {}) # add creds here >>> ds.populate_creds('GCS_KEY', {}) # add creds here These creds are only present temporarily and will have to be repopulated on every reload. For datasets connected to Activeloop Platform, you can store your credentials on the platform as Managed Credentials and use them just by adding the keys to your dataset. For example, if you have managed credentials with the names 'my_s3_creds' and 'my_gcs_creds', you can add them to your dataset using Dataset.add_creds_key without having to populate them. >>> ds.add_creds_key('my_s3_creds', managed=True) >>> ds.add_creds_key('my_gcs_creds', managed=True) Create a link tensor >>> ds.create_tensor('img', htype='link[image]', sample_compression='jpg') Populate the tensor with links >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='MY_S3_KEY')) >>> ds.img.append(deeplake.link('gcs://ghi/jkl.png', creds_key='GCS_KEY')) >>> ds.img.append(deeplake.link('https://picsum.photos/200/300')) # http path doesn't need creds >>> ds.img.append(deeplake.link('./path/to/cat.jpeg')) # local path doesn't need creds >>> ds.img.append(deeplake.link('s3://abc/def.jpeg')) # this will throw an exception as cloud paths always need creds_key >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='ENV')) # this will use creds from the environment Accessing the data >>> for i in range(5): ... ds.img[i].numpy() ... Updating a sample >>> ds.img[0] = deeplake.link('./data/cat.jpeg')",activeloop Audio Htype¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#audio-htype,"Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor¶ An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Optional args: dtype: Defaults to float64.
Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Appending audio samples¶ Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending a Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'audios/00{i}.mp3') for i in range(10)])
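As a minimal sketch of the None-compression case described above (the tensor name 'audios_raw' and the 1-second stereo waveform are illustrative): >>> import numpy as np >>> ds.create_tensor('audios_raw', htype='audio', sample_compression=None) >>> ds.audios_raw.append(np.zeros((16000, 2))) # raw float64 samples; allowed only because sample_compression is None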
",activeloop Class Label Htype¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#class-label-htype,"Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor¶ A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Note If specifying compression, chunk_compression is the better option, since the number of labels in a single sample is typically too small for sample_compression to be effective. Appending class labels¶ Class labels can be appended as int, str, np.ndarray, or a list of int or str. In the case of strings, tensor.info.class_names is updated automatically. Examples Appending an index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Extending with a list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Appending text labels >>> ds.labels.append(['cars', 'airplanes'])",activeloop Video Htype¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#video-htype,"Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor¶ A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Appending video samples¶ Video samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw video frames. Therefore, arrays of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending a Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)])",activeloop Bounding Box Htype¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#bounding-box-htype,"Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of formats such as YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in the tensor meta information mentioned below. Creating a bbox tensor¶ A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Optional args: coords: A dictionary with keys 'type' and 'mode'. type: Specifies the units of the bounding box coordinates. 'pixel': coordinates are in units of pixels. 'fractional': coordinates are relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] You can also choose to set the coords after tensor creation. >>> ds.boxes.info.update(coords = {'type': 'pixel', 'mode': 'LTRB'}) Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH). Appending bounding boxes¶ Bounding boxes can be appended as np.ndarrays, lists, or lists of arrays.
Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Appending a sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes)
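As a minimal sketch of the COCO-style convention mentioned in the note above (pixel + LTWH; the tensor name 'boxes_coco' and the coordinate values are illustrative): >>> import numpy as np >>> ds.create_tensor('boxes_coco', htype='bbox', coords={'type': 'pixel', 'mode': 'LTWH'}) >>> ds.boxes_coco.append(np.array([[50, 100, 200, 150]], dtype=np.float32)) # left_x, top_y, width, height in pixels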
",activeloop Segmentation Mask Htype¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#segmentation-mask-htype,"Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of the same shape as the image. The numerical values are indices of the list tensor.info.class_names. Creating a segment_mask tensor¶ A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] You can also choose to set the class names after tensor creation. >>> ds.masks.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks¶ Segmentation masks can be appended as np.ndarray. Examples >>> ds.masks.append(np.zeros((512, 512))) Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'.",activeloop Binary Mask Htype¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#binary-mask-htype,"Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object. A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise. The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor¶ A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Optional args: sample_compression or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Note Since binary masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks¶ Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird'])",activeloop COCO Keypoints Htype¶,https://docs.deeplake.ai/en/v3.1.0/Htypes.html#coco-keypoints-htype,"Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0: keypoint not in image. 1: keypoint in image but not visible. 2: keypoint in image and visible. Creating a keypoints_coco tensor¶ A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Optional args: keypoints: List of strings describing the i-th keypoint.
tensor.info.keypoints will be set to this list. connections: List of [i, j] index pairs describing which keypoints should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Appending keypoints¶ Keypoints can be appended as np.ndarray or list. Examples Appending a keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.info.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.info.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of keypoints (K), in the same order. For keypoints that are not present in an image, they can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer.
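As a minimal sketch of the dummy-coordinate convention above (the coordinate values are illustrative), a sample with a single object for the 3 keypoints ['left ear', 'right ear', 'nose'], where the nose is not present in the image, stores the missing keypoint as x = 0, y = 0, v = 0: >>> import numpy as np >>> one_obj = np.array([[465], [398], [2], [454], [387], [2], [0], [0], [0]]) # shape (9, 1): 3 keypoints x 3 values for 1 object; the last keypoint is a dummy >>> ds.keypoints.append(one_obj)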
",activeloop
Htypes¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#htypes,"Htype is the class of a tensor: image, bounding box, generic tensor, etc. The htype of a tensor can be specified at its creation. If not specified, the tensor's htype defaults to 'generic'. Specifying an htype allows for strict settings and error handling, and it is critical for increasing the performance of Deep Lake datasets containing rich data such as images and videos.
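For instance (a minimal sketch; the tensor name 'scores' is illustrative): >>> ds.create_tensor('scores') # no htype specified, so it defaults to 'generic' >>> ds.create_tensor('images', htype='image', sample_compression='jpg') # explicit htype; image requires a compression arg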
Supported htypes and their respective defaults are: Htype configs¶
HTYPE            DTYPE     COMPRESSION
generic          None      None
image            uint8     Required arg
image.rgb        uint8     Required arg
image.gray       uint8     Required arg
video            uint8     Required arg
audio            float64   Required arg
class_label      uint32    None
bbox             float32   None
segment_mask     uint32    None
binary_mask      bool      None
keypoints_coco   int32     None
point            int32     None
polygon          float32   None
text             str       None
json             Any       None
list             List      None
dicom            None      dcm
point_cloud      None      las
mesh             None      ply
instance_label   uint32    None
link             str       None
sequence         None      None",activeloop
For these use cases, please use htype = 'binary_mask'. Binary Mask Htype¶ Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#htypes,"Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Copy to clipboard Appending video samples¶ Video samples can be of type np.ndarray or Sample which is returned by deeplake.read() . Deep Lake does not support compression of raw video frames. Therefore, array of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Copy to clipboard Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)]) Copy to clipboard Audio Htype¶ Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor¶ An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Copy to clipboard Optional args: dtype: Defaults to float64. Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Copy to clipboard Appending audio samples¶ Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Copy to clipboard Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'videos/00{i}.mp3') for i in range(10)]) Copy to clipboard Class Label Htype¶ Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor¶ A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Copy to clipboard Note If specifying compression, since the number of labels in one sample will be too low, chunk_compression would be the better option to use. Appending class labels¶ Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically. 
Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Copy to clipboard Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Copy to clipboard Appending text labels >>> ds.labels.append(['cars', 'airplanes']) Copy to clipboard Bounding Box Htype¶ Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of formats such as YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#htypes,"For keypoints that are not present in an image, they can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer . Point Htype¶ Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Points does not contain a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. Creating a point tensor¶ A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending point samples¶ Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Copy to clipboard Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3))) Copy to clipboard Polygon Htype¶ Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points. All points in a sample should have the same number of co-ordinates (eg., cannot mix 2-D points with 3-D points). Different samples can have different number of polygons. Different polygons can have different number of points. Creating a polygon tensor¶ A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending polygons¶ Polygons can be appended as a list of list of tuples or np.ndarray. 
Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points >>> ds.polygons.append(sample) Copy to clipboard >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Point Cloud Htype¶ Sample dimensions: (# num_points, 3) Point cloud samples can be of type np.ndarray or Sample which is returned by deeplake.read()",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#htypes,"A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise . The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor¶ A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Copy to clipboard Optional args: ref:sample_compression <sample_compression> or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Copy to clipboard Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks¶ Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird']) Copy to clipboard COCO Keypoints Htype¶ Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0: keypoint not in image. 1: keypoint in image but not visible. 2: keypoint in image and visible. Creating a keypoints_coco tensor¶ A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Copy to clipboard Optional args: keypoints: List of strings describing the i th keypoint. tensor.info.keypoints will be set to this list. connections: List of strings describing which points should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set keypoints and / or connections after tensor creation. 
>>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Copy to clipboard Appending keypoints¶ Keypoints can be appended as np.ndarray or list. Examples Appending keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Copy to clipboard Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of K keypoints in the same order",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#htypes,"Each point cloud is a list / array of points . All points in a sample should have the same number of co-ordinates. Different point clouds can have different number of points. Creating a point cloud tensor¶ A point cloud tensor can be created using >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Copy to clipboard Optional args: sample_compression Supported compressions: >>> [None, 'las'] Copy to clipboard Appending point clouds¶ Point clouds can be appended as a np.ndarray. Examples Appending point clouds with numpy arrays >>> import numpy as np >>> point_cloud1 = np.random.randint(0, 10, (5, 3)) >>> ds.point_clouds.append(point_cloud1) >>> point_cloud2 = np.random.randint(0, 10, (15, 3)) >>> ds.point_clouds.append(point_cloud2) >>> ds.point_clouds.shape >>> (2, None, 3) Copy to clipboard Or we can use deeplake.read() method to add samples >>> import deeplake as dp >>> sample = dp.read('example.las') # point cloud with 100 points >>> ds.point_cloud.append(sample) >>> ds.point_cloud.shape >>> (1, 100, 3) Copy to clipboard Mesh Htype¶ Sample dimensions: (# num_points, 3) Mesh samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each sample in a tensor of mesh htype is a mesh array (3-D object data). Each mesh is a list / array of points. Different meshes can have different number of points. Creating a mesh tensor¶ A mesh tensor can be created using >>> ds.create_tensor('mesh', htype='mesh', sample_compression='ply') Copy to clipboard Optional args: sample_compression Supported compressions: >>> ['ply'] Copy to clipboard Appending meshes¶ Examples Appending a ply file contatining a mesh data to tensor >>> import deeplake as dp >>> sample = dp.read('example.ply') # mesh with 100 points and 200 faces >>> ds.mesh.append(sample) Copy to clipboard >>> ds.mesh.shape >>> (1, 100, 3) Copy to clipboard Sequence htype¶ A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype. It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc. 
Examples >>> ds.create_tensor('seq', htype='sequence') >>> ds.seq.append([1, 2, 3]) >>> ds.seq.append([4, 5, 6]) >>> ds.seq.numpy() array([[[1], [2], [3]], [[4], [5], [6]]]) Copy to clipboard >>> ds.create_tensor('image_seq', htype='sequence[image]', sample_compression='jpg') >>> ds.image_seq.append([deeplake.read('img01.jpg'), deeplake.read('img02.jpg')]) Copy to clipboard Link htype¶ Link htype is a special meta htype that allows linking of external data (files) to the dataset, without storing the data in the dataset itself. Moreover, there can be variations in this htype, such as link[image], link[video], link[audio], etc. that would enable the activeloop visualizer to correctly display the data. No data is actually loaded until you try to read the sample from a dataset. There are a few exceptions to this:- If create_shape_tensor=True was specified during create_tensor of the tensor to which this is being added, the shape of the sample is read. This is True by default. If create_sample_info_tensor=True was specified during create_tensor of the tensor to which this is being added, the sample info is read. This is True by default",activeloop Creating an image tensor¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#creating-an-image-tensor,An image tensor can be created using Copy to clipboard OR Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: Copy to clipboard,activeloop Image Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#image-htype,"Sample dimensions: (height, width, # channels) or (height, width). Images can be stored in Deep Lake as compressed bytes or as raw arrays. Due to the high compression ratio for most image formats, it is highly recommended to store compressed images using the sample_compression input to the create_tensor method. Creating an image tensor¶ An image tensor can be created using >>> ds.create_tensor('images', htype='image', sample_compression='jpg') Copy to clipboard OR >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', ... 'webp', 'wmf', 'xbm', 'eps', 'fli', 'im', 'msp', 'mpo'] Copy to clipboard Appending image samples¶ Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). Examples Appending pixel data with array >>> ds.images.append(np.zeros((5, 5, 3), dtype=np.uint8)) Copy to clipboard Appening Deep Lake image sample >>> ds.images.append(deeplake.read('images/0001.jpg')) Copy to clipboard You can append multiple samples at the same time using extend(). >>> ds.images.extend([deeplake.read(f'images/000{i}.jpg') for i in range(10)]) Copy to clipboard Note If the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression. image.rgb and image.gray htypes¶ image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type. i.e., if RGB images are appened to an image.gray tensor, Deep Lake will convert them to grayscale and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format. 
image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') Copy to clipboard >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...') Copy to clipboard",activeloop Appending image samples¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#appending-image-samples,"Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). ExamplesAppending pixel data with array Copy to clipboard Appening Deep Lake image sample Copy to clipboard You can append multiple samples at the same time using extend(). Copy to clipboard NoteIf the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression.",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#htypes,"If verify=True was specified during create_tensor of the tensor to which this is being added, some metadata is read from them to verify the integrity of the link samples . This is True by default. If you do not want to verify your links, all three of verify, create_shape_tensor and create_sample_info_tensor have to be set to False. Examples >>> ds = deeplake.dataset('......') Copy to clipboard Adding credentials to the dataset You can add the names of the credentials you want to use (not needed for http/local urls) >>> ds.add_creds_key('MY_S3_KEY') >>> ds.add_creds_key('GCS_KEY') Copy to clipboard and populate the added names with credentials dictionaries >>> ds.populate_creds('MY_S3_KEY', {}) # add creds here >>> ds.populate_creds('GCS_KEY', {}) # add creds here Copy to clipboard These creds are only present temporarily and will have to be repopulated on every reload. For datasets connected to Activeloop Platform, you can store your credentials on the platform as Managed Credentials and use them just by adding the keys to your dataset. For example if you have managed credentials with names 'my_s3_creds', 'my_gcs_creds', you can add them to your dataset using Dataset.add_creds_key without having to populate them. >>> ds.add_creds_key('my_s3_creds', managed=True) >>> ds.add_creds_key('my_gcs_creds', managed=True) Copy to clipboard Create a link tensor >>> ds.create_tensor('img', htype='link[image]', sample_compression='jpg') Copy to clipboard Populate the tensor with links >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='my_s3_key')) >>> ds.img.append(deeplake.link('gcs://ghi/jkl.png', creds_key='GCS_KEY')) >>> ds.img.append(deeplake.link('https://picsum.photos/200/300')) # http path doesn't need creds >>> ds.img.append(deeplake.link('./path/to/cat.jpeg')) # local path doesn't need creds >>> ds.img.append(deeplake.link('s3://abc/def.jpeg')) # this will throw an exception as cloud paths always need creds_key >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='ENV')) # this will use creds from environment Copy to clipboard Accessing the data >>> for i in range(5): ... ds.img[i].numpy() ... Copy to clipboard Updating a sample >>> ds.img[0] = deeplake.link('./data/cat.jpeg') Copy to clipboard",activeloop Creating a video tensor¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#creating-a-video-tensor,A video tensor can be created using Copy to clipboard Optional args: dtype: Defaults to uint8. 
Audio Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#audio-htype,"Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor¶ An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Optional args: dtype: Defaults to float64. Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Appending audio samples¶ Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'audios/00{i}.mp3') for i in range(10)])",activeloop Video Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#video-htype,"Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor¶ A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Appending video samples¶ Video samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw video frames. Therefore, arrays of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)])",activeloop
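Because raw arrays can only go into audio and video tensors whose compression is None (as noted above), a dataset often ends up with both kinds of tensors. A brief sketch, assuming ds is an existing dataset; the media file paths are hypothetical.
>>> import numpy as np
>>> import deeplake
>>> ds.create_tensor('videos', htype='video', sample_compression='mp4')
>>> ds.videos.append(deeplake.read('clips/0001.mp4'))               # compressed file, stored without recompression
>>> ds.create_tensor('raw_frames', htype='video', sample_compression=None)
>>> ds.raw_frames.append(np.zeros((8, 64, 64, 3), dtype=np.uint8))  # raw frame arrays require None compression
>>> ds.create_tensor('audios', htype='audio', sample_compression='mp3')
>>> ds.audios.append(deeplake.read('clips/0001.mp3'))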
Class Label Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#class-label-htype,"Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor¶ A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Note If specifying compression, chunk_compression is the better option to use, since the number of labels in a single sample is too small for sample_compression to be effective. Appending class labels¶ Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically. Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Appending text labels >>> ds.labels.append(['cars', 'airplanes'])",activeloop
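To illustrate how text labels map to indices of tensor.info.class_names, here is a short sketch; the dataset path and label values are hypothetical, and the expected results are noted only in comments.
>>> import deeplake
>>> ds = deeplake.dataset('./labels_demo')      # hypothetical local path
>>> ds.create_tensor('labels', htype='class_label', class_names=['cars', 'airplanes'], chunk_compression='lz4')
>>> ds.labels.append('airplanes')               # stored as the index of 'airplanes' in class_names
>>> ds.labels.append(['ships'])                 # a new string extends tensor.info.class_names automatically
>>> ds.labels[0].numpy()                        # numeric indices, dtype uint32 by default
>>> ds.labels.info.class_names                  # expected: ['cars', 'airplanes', 'ships']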
Bounding Box Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#bounding-box-htype,"Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of formats such as YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in the tensor meta information mentioned below. Creating a bbox tensor¶ A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Optional args: coords: A dictionary with keys 'type' and 'mode'. type: Specifies the units of bounding box coordinates. 'pixel': is in units of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates. 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] You can also choose to set the coords after tensor creation. >>> ds.boxes.info.update(coords = {'type': 'pixel', 'mode': 'LTRB'}) Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH). Appending bounding boxes¶ Bounding boxes can be appended as np.ndarray, list, or list of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes)",activeloop
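As a concrete illustration of the coords metadata, the sketch below stores pixel-space, COCO-style (LTWH) boxes. The tensor and variable names are hypothetical and ds is assumed to be an existing dataset.
>>> import numpy as np
>>> ds.create_tensor('boxes_coco', htype='bbox', coords={'type': 'pixel', 'mode': 'LTWH'})
>>> sample_boxes = np.array([[10, 20, 200, 100],
...                          [35, 64, 120, 150]], dtype=np.float32)   # each row: left_x, top_y, width, height
>>> ds.boxes_coco.append(sample_boxes)                                # one sample with 2 bounding boxes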
Segmentation Mask Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#segmentation-mask-htype,"Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of the same shape as the image. The numerical values are indices of the list tensor.info.class_names. Creating a segment_mask tensor¶ A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] You can also choose to set the class names after tensor creation. >>> ds.masks.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks¶ Segmentation masks can be appended as np.ndarray. Examples >>> ds.masks.append(np.zeros((512, 512))) Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'.",activeloop Binary Mask Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#binary-mask-htype,"Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object. A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise. The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor¶ A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Optional args: sample_compression or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Note Since binary masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks¶ Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird'])",activeloop
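Because each object gets its own channel, a common pattern is to stack per-object boolean masks and append a matching class_label row. A hedged sketch, assuming ds already has the 'masks' (binary_mask) and 'labels' (class_label) tensors created as above; the object names and arrays are hypothetical.
>>> import numpy as np
>>> cat_mask = np.zeros((512, 512), dtype=bool)      # hypothetical per-object masks
>>> dog_mask = np.zeros((512, 512), dtype=bool)
>>> sample = np.stack([cat_mask, dog_mask], axis=-1)  # (height, width, # objects)
>>> sample.shape
(512, 512, 2)
>>> ds.masks.append(sample)
>>> ds.labels.append(['cat', 'dog'])                  # one label per mask channel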
COCO Keypoints Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#coco-keypoints-htype,"Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0: keypoint not in image. 1: keypoint in image but not visible. 2: keypoint in image and visible. Creating a keypoints_coco tensor¶ A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Optional args: keypoints: List of strings describing the i-th keypoint. tensor.info.keypoints will be set to this list. connections: List of keypoint index pairs specifying which points should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Appending keypoints¶ Keypoints can be appended as np.ndarray or list. Examples Appending keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.info.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.info.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same K keypoints in the same order. For keypoints that are not present in an image, they can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer.",activeloop
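The (3 x # keypoints, # objects) layout can be unintuitive when annotations are stored per object. One way to convert per-object (x, y, v) arrays into that layout is sketched below; the coordinate arrays are hypothetical and ds.keypoints is assumed to be the tensor created above.
>>> import numpy as np
>>> obj1 = np.array([[465, 178, 2], [398, 363, 2], [684, 177, 2]])   # K=3 keypoints, rows are (x, y, v)
>>> obj2 = np.array([[469, 177, 1], [478, 161, 2], [492, 143, 2]])
>>> stacked = np.stack([obj1, obj2])                       # (# objects, K, 3)
>>> kp_arr = stacked.transpose(1, 2, 0).reshape(3 * 3, 2)  # rows become x1, y1, v1, x2, y2, v2, ... per object column
>>> kp_arr.shape
(9, 2)
>>> ds.keypoints.append(kp_arr)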
Point Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#point-htype,"Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Points do not have a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. Creating a point tensor¶ A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Appending point samples¶ Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3)))",activeloop Polygon Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#polygon-htype,"Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points. All points in a sample should have the same number of co-ordinates (e.g., you cannot mix 2-D points with 3-D points). Different samples can have different numbers of polygons. Different polygons can have different numbers of points. Creating a polygon tensor¶ A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Appending polygons¶ Polygons can be appended as a list of lists of tuples or np.ndarray. Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points each >>> ds.polygons.append(sample) >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample)",activeloop
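Because both the polygon count and the point count can vary, the aggregate tensor shape reports ragged dimensions as None (the point cloud example below shows the same behavior). A small sketch, assuming ds exists; the tensor name is hypothetical and no exact output is asserted.
>>> import numpy as np
>>> ds.create_tensor('polys_demo', htype='polygon', sample_compression=None)   # hypothetical tensor name
>>> ds.polys_demo.append(np.random.randint(0, 10, (5, 7, 2)))       # 5 polygons, 7 points each
>>> ds.polys_demo.append([np.random.randint(0, 10, (4, 2)),
...                       np.random.randint(0, 10, (9, 2))])        # 2 polygons with different point counts
>>> ds.polys_demo.shape                                              # ragged dimensions are reported as None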
Point Cloud Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#point-cloud-htype,"Sample dimensions: (# num_points, 3) Point cloud samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each point cloud is a list / array of points. All points in a sample should have the same number of co-ordinates. Different point clouds can have different numbers of points. Creating a point cloud tensor¶ A point cloud tensor can be created using >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Optional args: sample_compression Supported compressions: >>> [None, 'las'] Appending point clouds¶ Point clouds can be appended as np.ndarray. Examples Appending point clouds with numpy arrays >>> import numpy as np >>> point_cloud1 = np.random.randint(0, 10, (5, 3)) >>> ds.point_clouds.append(point_cloud1) >>> point_cloud2 = np.random.randint(0, 10, (15, 3)) >>> ds.point_clouds.append(point_cloud2) >>> ds.point_clouds.shape (2, None, 3) Alternatively, the deeplake.read() method can be used to add samples >>> import deeplake as dp >>> sample = dp.read('example.las') # point cloud with 100 points >>> ds.point_clouds.append(sample) >>> ds.point_clouds.shape (3, None, 3)",activeloop
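Point cloud samples are simply (N, 3) arrays, so columnar coordinate data can be stacked before appending. A tiny sketch with hypothetical coordinate arrays, assuming ds has the 'point_clouds' tensor created above.
>>> import numpy as np
>>> x = np.random.rand(100)            # hypothetical per-point coordinates
>>> y = np.random.rand(100)
>>> z = np.random.rand(100)
>>> points = np.stack([x, y, z], axis=1)   # shape (100, 3)
>>> ds.point_clouds.append(points)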
Mesh Htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#mesh-htype,"Sample dimensions: (# num_points, 3) Mesh samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Each sample in a tensor of mesh htype is a mesh array (3-D object data). Each mesh is a list / array of points. Different meshes can have different numbers of points. Creating a mesh tensor¶ A mesh tensor can be created using >>> ds.create_tensor('mesh', htype='mesh', sample_compression='ply') Optional args: sample_compression Supported compressions: >>> ['ply'] Appending meshes¶ Examples Appending a ply file containing mesh data to the tensor >>> import deeplake as dp >>> sample = dp.read('example.ply') # mesh with 100 points and 200 faces >>> ds.mesh.append(sample) >>> ds.mesh.shape (1, 100, 3)",activeloop Sequence htype¶,https://docs.deeplake.ai/en/v3.0.16/Htypes.html#sequence-htype,"A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype. It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc. Examples >>> ds.create_tensor('seq', htype='sequence') >>> ds.seq.append([1, 2, 3]) >>> ds.seq.append([4, 5, 6]) >>> ds.seq.numpy() array([[[1], [2], [3]], [[4], [5], [6]]]) >>> ds.create_tensor('image_seq', htype='sequence[image]', sample_compression='jpg') >>> ds.image_seq.append([deeplake.read('img01.jpg'), deeplake.read('img02.jpg')])",activeloop
Htypes¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#htypes,"Htype is the class of a tensor: image, bounding box, generic tensor, etc. The htype of a tensor can be specified at its creation, for example >>> ds.create_tensor('images', htype='image', sample_compression='jpg') If not specified, the tensor's htype defaults to 'generic'. Specifying an htype allows for strict settings and error handling, and it is critical for increasing the performance of Deep Lake datasets containing rich data such as images and videos. Supported htypes and their respective defaults are: Htype configs¶
HTYPE | DTYPE | COMPRESSION
generic | None | None
image | uint8 | Required arg
image.rgb | uint8 | Required arg
image.gray | uint8 | Required arg
video | uint8 | Required arg
audio | float64 | Required arg
class_label | uint32 | None
bbox | float32 | None
segment_mask | uint32 | None
binary_mask | bool | None
keypoints_coco | int32 | None
point | int32 | None
polygon | float32 | None
text | str | None
json | Any | None
list | List | None
dicom | None | dcm
point_cloud | None | las
instance_label | uint32 | None
link | str | None
sequence | None | None",activeloop
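Following the table above, a tensor created without an htype is treated as 'generic' with no default compression. A short sketch for inspection, assuming ds is an existing dataset; the tensor name is hypothetical and the htype attribute is used only to check the assigned class.
>>> ds.create_tensor('features')              # no htype given, so it defaults to 'generic'
>>> ds.features.append([0.1, 0.2, 0.3])
>>> ds.features.htype                         # expected: 'generic'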
image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') Copy to clipboard >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...') Copy to clipboard Video Htype¶ Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor¶ A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Copy to clipboard Optional args: dtype: Defaults to uint8",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#htypes,"Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Copy to clipboard Appending video samples¶ Video samples can be of type np.ndarray or Sample which is returned by deeplake.read() . Deep Lake does not support compression of raw video frames. Therefore, array of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Copy to clipboard Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)]) Copy to clipboard Audio Htype¶ Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor¶ An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Copy to clipboard Optional args: dtype: Defaults to float64. Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Copy to clipboard Appending audio samples¶ Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Copy to clipboard Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'videos/00{i}.mp3') for i in range(10)]) Copy to clipboard Class Label Htype¶ Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor¶ A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Copy to clipboard Note If specifying compression, since the number of labels in one sample will be too low, chunk_compression would be the better option to use. Appending class labels¶ Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically. 
Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Copy to clipboard Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Copy to clipboard Appending text labels >>> ds.labels.append(['cars', 'airplanes']) Copy to clipboard Bounding Box Htype¶ Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of formats such as YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#htypes,"For keypoints that are not present in an image, they can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer . Point Htype¶ Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Points does not contain a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. Creating a point tensor¶ A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending point samples¶ Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Copy to clipboard Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3))) Copy to clipboard Polygon Htype¶ Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points. All points in a sample should have the same number of co-ordinates (eg., cannot mix 2-D points with 3-D points). Different samples can have different number of polygons. Different polygons can have different number of points. Creating a polygon tensor¶ A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Copy to clipboard Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Copy to clipboard Appending polygons¶ Polygons can be appended as a list of list of tuples or np.ndarray. 
Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points >>> ds.polygons.append(sample) Copy to clipboard >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Copy to clipboard Sequence htype¶ A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#htypes,"Creating a bbox tensor¶ A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Copy to clipboard Optional args: coords: A dictionary with keys 'type' and 'mode' . type: Specifies the units of bounding box coordinates. 'pixel': is in unit of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(coords = {'type': 'pixel', 'LTRB'}) Copy to clipboard Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH). Appending bounding boxes¶ Bounding boxes can be appended as np.ndarrays or list or lists of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Copy to clipboard Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes) Copy to clipboard Segmentation Mask Htype¶ Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of same shape as the image. The numerical values are indices of the list tensor.info.class_names. Creating a segment_mask tensor¶ A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. 
>>> ds.labels.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Copy to clipboard Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks¶ Segmentation masks can be appended as np.ndarray. Examples >>> ds.masks.append(np.zeros((512, 512))) Copy to clipboard Note Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'. Binary Mask Htype¶ Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#htypes,"A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise . The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor¶ A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Copy to clipboard Optional args: ref:sample_compression <sample_compression> or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Copy to clipboard Note Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks¶ Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird']) Copy to clipboard COCO Keypoints Htype¶ Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0: keypoint not in image. 1: keypoint in image but not visible. 2: keypoint in image and visible. Creating a keypoints_coco tensor¶ A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Copy to clipboard Optional args: keypoints: List of strings describing the i th keypoint. tensor.info.keypoints will be set to this list. connections: List of strings describing which points should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Copy to clipboard Appending keypoints¶ Keypoints can be appended as np.ndarray or list. 
Examples Appending keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Copy to clipboard Warning In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of K keypoints in the same order",activeloop Htypes¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#htypes,"It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc . Examples >>> ds.create_tensor('seq', htype='sequence') >>> ds.seq.append([1, 2, 3]) >>> ds.seq.append([4, 5, 6]) >>> ds.seq.numpy() array([[[1], [2], [3]], [[4], [5], [6]]]) Copy to clipboard >>> ds.create_tensor('image_seq', htype='sequence[image]', sample_compression='jpg') >>> ds.image_seq.append([deeplake.read('img01.jpg'), deeplake.read('img02.jpg')]) Copy to clipboard Link htype¶ Link htype is a special meta htype that allows linking of external data (files) to the dataset, without storing the data in the dataset itself. Moreover, there can be variations in this htype, such as link[image], link[video], link[audio], etc. that would enable the activeloop visualizer to correctly display the data. No data is actually loaded until you try to read the sample from a dataset. There are a few exceptions to this:- If create_shape_tensor=True was specified during create_tensor of the tensor to which this is being added, the shape of the sample is read. This is True by default. If create_sample_info_tensor=True was specified during create_tensor of the tensor to which this is being added, the sample info is read. This is True by default. If verify=True was specified during create_tensor of the tensor to which this is being added, some metadata is read from them to verify the integrity of the link samples. This is True by default. If you do not want to verify your links, all three of verify, create_shape_tensor and create_sample_info_tensor have to be set to False. Examples >>> ds = deeplake.dataset('......') Copy to clipboard Adding credentials to the dataset You can add the names of the credentials you want to use (not needed for http/local urls) >>> ds.add_creds_key('MY_S3_KEY') >>> ds.add_creds_key('GCS_KEY') Copy to clipboard and populate the added names with credentials dictionaries >>> ds.populate_creds('MY_S3_KEY', {}) # add creds here >>> ds.populate_creds('GCS_KEY', {}) # add creds here Copy to clipboard These creds are only present temporarily and will have to be repopulated on every reload. For datasets connected to Activeloop Platform, you can store your credentials on the platform as Managed Credentials and use them just by adding the keys to your dataset. For example if you have managed credentials with names 'my_s3_creds', 'my_gcs_creds', you can add them to your dataset using Dataset.add_creds_key without having to populate them. 
>>> ds.add_creds_key('my_s3_creds', managed=True) >>> ds.add_creds_key('my_gcs_creds', managed=True) Copy to clipboard Create a link tensor >>> ds.create_tensor('img', htype='link[image]', sample_compression='jpg') Copy to clipboard Populate the tensor with links >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='my_s3_key')) >>> ds.img.append(deeplake.link('gcs://ghi/jkl.png', creds_key='GCS_KEY')) >>> ds.img.append(deeplake.link('https://picsum.photos/200/300')) # http path doesn't need creds >>> ds.img.append(deeplake.link('./path/to/cat.jpeg')) # local path doesn't need creds >>> ds.img.append(deeplake.link('s3://abc/def.jpeg')) # this will throw an exception as cloud paths always need creds_key >>> ds.img.append(deeplake.link('s3://abc/def.jpeg', creds_key='ENV')) # this will use creds from environment Copy to clipboard Accessing the data >>> for i in range(5): ... ds.img[i].numpy() ... Copy to clipboard Updating a sample >>> ds.img[0] = deeplake.link('./data/cat.jpeg') Copy to clipboard",activeloop Appending image samples¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#appending-image-samples,"Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). ExamplesAppending pixel data with array Copy to clipboard Appening Deep Lake image sample Copy to clipboard You can append multiple samples at the same time using extend(). Copy to clipboard NoteIf the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression.",activeloop Image Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#image-htype,"Sample dimensions: (height, width, # channels) or (height, width). Images can be stored in Deep Lake as compressed bytes or as raw arrays. Due to the high compression ratio for most image formats, it is highly recommended to store compressed images using the sample_compression input to the create_tensor method. Creating an image tensor¶ An image tensor can be created using >>> ds.create_tensor('images', htype='image', sample_compression='jpg') Copy to clipboard OR >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', ... 'webp', 'wmf', 'xbm', 'eps', 'fli', 'im', 'msp', 'mpo'] Copy to clipboard Appending image samples¶ Image samples can be of type np.ndarray or Deep Lake Sample which can be created using deeplake.read(). Examples Appending pixel data with array >>> ds.images.append(np.zeros((5, 5, 3), dtype=np.uint8)) Copy to clipboard Appening Deep Lake image sample >>> ds.images.append(deeplake.read('images/0001.jpg')) Copy to clipboard You can append multiple samples at the same time using extend(). >>> ds.images.extend([deeplake.read(f'images/000{i}.jpg') for i in range(10)]) Copy to clipboard Note If the compression format of the input sample does not match the sample_compression of the tensor, Deep Lake will decompress and recompress the image for storage, which may significantly slow down the upload process. The upload process is fastest when the image compression matches the sample_compression. 
image.rgb and image.gray htypes¶ image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type. i.e., if RGB images are appened to an image.gray tensor, Deep Lake will convert them to grayscale and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format. image.rgb and image.gray tensors can be created using >>> ds.create_tensor('rgb_images', htype='image.rgb', sample_compression='...') Copy to clipboard >>> ds.create_tensor('gray_images', htype='image.gray', sample_compression='...') Copy to clipboard",activeloop Creating an image tensor¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#creating-an-image-tensor,An image tensor can be created using Copy to clipboard OR Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: Copy to clipboard,activeloop image.rgb and image.gray htypes¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#image-rgb-and-image-gray-htypes,"image.rgb and image.gray htypes can be used to force your samples to be of RGB or grayscale type. i.e., if RGB images are appened to an image.gray tensor, Deep Lake will convert them to grayscale and if grayscale images are appended to an image.rgb tensor, Deep Lake will convert them to RGB format.image.rgb and image.gray tensors can be created using Copy to clipboard Copy to clipboard",activeloop Video Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#video-htype,"Sample dimensions: (# frames, height, width, # channels) or (# frames, height, width) Creating a video tensor¶ A video tensor can be created using >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: >>> [None, 'mp4', 'mkv', 'avi'] Copy to clipboard Appending video samples¶ Video samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw video frames. Therefore, array of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. Examples Appending Deep Lake video sample >>> ds.videos.append(deeplake.read('videos/0012.mp4')) Copy to clipboard Extending with multiple videos >>> ds.videos.extend([deeplake.read(f'videos/00{i}.mp4') for i in range(10)]) Copy to clipboard",activeloop Creating a video tensor¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#creating-a-video-tensor,A video tensor can be created using Copy to clipboard Optional args: dtype: Defaults to uint8. Supported compressions: Copy to clipboard,activeloop Appending video samples¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#appending-video-samples,"Video samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Deep Lake does not support compression of raw video frames. Therefore, array of raw frames can only be appended to tensors with None compression. Recompression of samples read with deeplake.read is also not supported. ExamplesAppending Deep Lake video sample Copy to clipboard Extending with multiple videos Copy to clipboard",activeloop Audio Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#audio-htype,"Sample dimensions: (# samples in audio, # channels) or (# samples in audio,) Creating an audio tensor¶ An audio tensor can be created using >>> ds.create_tensor('audios', htype='audio', sample_compression='mp3') Copy to clipboard Optional args: dtype: Defaults to float64. 
Supported compressions: >>> [None, 'mp3', 'wav', 'flac'] Copy to clipboard Appending audio samples¶ Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. Examples Appending Deep Lake audio sample >>> ds.audios.append(deeplake.read('audios/001.mp3')) Copy to clipboard Extending with Deep Lake audio samples >>> ds.audios.extend([deeplake.read(f'videos/00{i}.mp3') for i in range(10)]) Copy to clipboard",activeloop Class Label Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#class-label-htype,"Sample dimensions: (# labels,) Class labels are stored as numerical values in tensors, which are indices of the list tensor.info.class_names. Creating a class label tensor¶ A class label tensor can be created using >>> classes = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks'] >>> ds.create_tensor('labels', htype='class_label', class_names=classes, chunk_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(class_names = ['airplanes', 'cars', 'birds', 'cats', 'deer', 'dogs', 'frogs', 'horses', 'ships', 'trucks']) Copy to clipboard Note If specifying compression, since the number of labels in one sample will be too low, chunk_compression would be the better option to use. Appending class labels¶ Class labels can be appended as int, str, np.ndarray or list of int or str. In case of strings, tensor.info.class_names is updated automatically. Examples Appending index >>> ds.labels.append(0) >>> ds.labels.append(np.zeros((5,), dtype=np.uint32)) Copy to clipboard Extending with list of indices >>> ds.labels.extend([[0, 1, 2], [1, 3]]) Copy to clipboard Appending text labels >>> ds.labels.append(['cars', 'airplanes']) Copy to clipboard",activeloop Creating an audio tensor¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#creating-an-audio-tensor,An audio tensor can be created using Copy to clipboard Optional args: dtype: Defaults to float64. Supported compressions: Copy to clipboard,activeloop Appending audio samples¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#appending-audio-samples,"Audio samples can be of type np.ndarray or Sample which is returned by deeplake.read(). Like videos, Deep Lake does not support compression or recompression of input audio samples. Thus, samples of type np.ndarray can only be appended to tensors with None compression. ExamplesAppending Deep Lake audio sample Copy to clipboard Extending with Deep Lake audio samples Copy to clipboard",activeloop Bounding Box Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#bounding-box-htype,"Sample dimensions: (# bounding boxes, 4) Bounding boxes have a variety of formats such as YOLO, COCO, Pascal-VOC and others. In order for bounding boxes to be correctly displayed by the visualizer, the format of the bounding box must be specified in the coords key in tensor meta information mentioned below. 
Creating a bbox tensor¶ A bbox tensor can be created using >>> ds.create_tensor('boxes', htype='bbox', coords={'type': 'fractional', 'mode': 'CCWH'}) Copy to clipboard Optional args: coords: A dictionary with keys 'type' and 'mode'. type: Specifies the units of bounding box coordinates. 'pixel': is in unit of pixels. 'fractional': is in units relative to the width and height of the image, such as in YOLO format. mode: Specifies the convention for the 4 coordinates 'LTRB': left_x, top_y, right_x, bottom_y 'LTWH': left_x, top_y, width, height 'CCWH': center_x, center_y, width, height dtype: Defaults to float32. sample_compression or chunk_compression. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. >>> ds.labels.info.update(coords = {'type': 'pixel', 'LTRB'}) Copy to clipboard Note If the bounding box format is not specified, the visualizer will assume a YOLO format (fractional + CCWH) if the box coordinates are < 1 on average. Otherwise, it will assume the COCO format (pixel + LTWH). Appending bounding boxes¶ Bounding boxes can be appended as np.ndarrays or list or lists of arrays. Examples Appending one bounding box >>> box array([[462, 123, 238, 98]]) >>> ds.boxes.append(box) Copy to clipboard Appending sample with 3 bounding boxes >>> boxes array([[965, 110, 262, 77], [462, 123, 238, 98], [688, 108, 279, 116]]) >>> boxes.shape (3, 4) >>> ds.boxes.append(boxes) Copy to clipboard",activeloop Creating a class label tensor¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#creating-a-class-label-tensor,"A class label tensor can be created using Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression. dtype: Defaults to uint32. Supported compressions: Copy to clipboard You can also choose to set the class names after tensor creation. Copy to clipboard NoteIf specifying compression, since the number of labels in one sample will be too low, chunk_compression would be the better option to use.",activeloop Appending bounding boxes¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#appending-bounding-boxes,Bounding boxes can be appended as np.ndarrays or list or lists of arrays. ExamplesAppending one bounding box Copy to clipboard Appending sample with 3 bounding boxes Copy to clipboard,activeloop Segmentation Mask Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#segmentation-mask-htype,"Sample dimensions: (height, width) Segmentation masks are 2D representations of class labels where the numerical label data is encoded in an array of same shape as the image. The numerical values are indices of the list tensor.info.class_names. Creating a segment_mask tensor¶ A segment_mask tensor can be created using >>> classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle'] >>> ds.create_tensor('masks', htype='segment_mask', class_names=classes, sample_compression='lz4') Copy to clipboard Optional args: class_names: This must be a list of strings. tensor.info.class_names will be set to this list. sample_compression or chunk_compression dtype: Defaults to uint32. Supported compressions: >>> ['lz4'] Copy to clipboard You can also choose to set the class names after tensor creation. 
>>> ds.masks.info.update(class_names = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle']) Note: Since segmentation masks often contain large amounts of data, it is recommended to compress them using lz4. Appending segmentation masks¶ Segmentation masks can be appended as np.ndarray. Examples >>> ds.masks.append(np.zeros((512, 512))) Note: Since each pixel can only be labeled once, segmentation masks are not appropriate for datasets where objects might overlap, or where multiple objects within the same class must be distinguished. For these use cases, please use htype = 'binary_mask'.",activeloop Binary Mask Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#binary-mask-htype,"Sample dimensions: (height, width, # objects in a sample) Binary masks are similar to segmentation masks, except that each object is represented by a channel in the mask. Each channel in the mask encodes values for a single object. A pixel in a mask channel should have a value of 1 if the pixel of the image belongs to this object and 0 otherwise. The labels corresponding to the channels should be stored in an adjacent tensor of htype class_label, in which the number of labels at a given index is equal to the number of objects (number of channels) in the binary mask. Creating a binary_mask tensor¶ A binary_mask tensor can be created using >>> ds.create_tensor('masks', htype='binary_mask', sample_compression='lz4') Optional args: sample_compression or chunk_compression dtype: Defaults to bool. Supported compressions: >>> ['lz4'] Note: Since binary masks often contain large amounts of data, it is recommended to compress them using lz4. Appending binary masks¶ Binary masks can be appended as np.ndarray. Examples Appending a binary mask with 5 objects >>> ds.masks.append(np.zeros((512, 512, 5), dtype='bool')) >>> ds.labels.append(['aeroplane', 'aeroplane', 'bottle', 'bottle', 'bird'])",activeloop
COCO Keypoints Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#coco-keypoints-htype,"Sample dimensions: (3 x # keypoints, # objects in a sample) COCO keypoints are a convention for storing points of interest in an image. Each keypoint consists of 3 values: x - coordinate, y - coordinate and v - visibility. A set of K keypoints of an object is represented as: [x1, y1, v1, x2, y2, v2, …, xk, yk, vk] The visibility v can be one of three values: 0: keypoint not in image. 1: keypoint in image but not visible. 2: keypoint in image and visible. Creating a keypoints_coco tensor¶ A keypoints_coco tensor can be created using >>> ds.create_tensor('keypoints', htype='keypoints_coco', keypoints=['knee', 'elbow', 'head'], connections=[[0, 1], [1, 2]]) Optional args: keypoints: List of strings describing the i-th keypoint. tensor.info.keypoints will be set to this list. connections: List of two-element lists of keypoint indices describing which points should be connected by lines in the visualizer. sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] You can also choose to set keypoints and / or connections after tensor creation. >>> ds.keypoints.info.update(keypoints = ['knee', 'elbow',...]) >>> ds.keypoints.info.update(connections = [[0,1], [2,3], ...]) Appending keypoints¶ Keypoints can be appended as np.ndarray or list. Examples Appending a keypoints sample with 3 keypoints and 4 objects >>> ds.keypoints.info.update(keypoints = ['left ear', 'right ear', 'nose']) >>> ds.keypoints.info.update(connections = [[0, 2], [1, 2]]) >>> kp_arr array([[465, 398, 684, 469], [178, 363, 177, 177], [ 2, 2, 2, 1], [454, 387, 646, 478], [177, 322, 137, 161], [ 2, 2, 2, 2], [407, 379, 536, 492], [271, 335, 150, 143], [ 2, 1, 2, 2]]) >>> kp_arr.shape (9, 4) >>> ds.keypoints.append(kp_arr) Warning: In order to correctly use the keypoints and connections metadata, it is critical that all objects in every sample have the same number of keypoints (K) in the same order. Keypoints that are not present in an image can be stored with dummy coordinates of x = 0, y = 0, and v = 0, and the visibility will prevent them from being drawn in the visualizer.",activeloop
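The (3 x # keypoints, # objects) layout above is easy to get backwards, so here is a minimal sketch, not from the original docs, of one way to assemble such an array from per-object [x, y, v] rows. The coordinate values are made up, and it assumes a dataset ds with a keypoints tensor created as shown above.
>>> import numpy as np
>>> # Two objects, each with the same 3 keypoints given as [x, y, v] rows (K = 3)
>>> person_a = np.array([[465, 178, 2], [398, 363, 2], [684, 177, 2]])
>>> person_b = np.array([[469, 177, 1], [478, 161, 2], [492, 143, 2]])
>>> # Flatten each object to [x1, y1, v1, ..., xK, yK, vK] and stack objects as columns
>>> kp_arr = np.stack([obj.reshape(-1) for obj in (person_a, person_b)], axis=1)
>>> kp_arr.shape  # (3 * K, # objects)
(9, 2)
>>> ds.keypoints.append(kp_arr)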
Point Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#point-htype,"Sample dimensions: (# points, 2) in case of 2-D (X, Y) co-ordinates or (# points, 3) in case of 3-D (X, Y, Z) co-ordinates of the point. Points do not have a fixed mapping across samples between the point order and real-world objects (i.e., point 0 is an elbow, point 1 is a knee, etc.). If you require such a mapping, use COCO Keypoints Htype. Creating a point tensor¶ A point tensor can be created using >>> ds.create_tensor('points', htype='point', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to int32. Supported compressions: >>> ['lz4'] Appending point samples¶ Points can be appended as np.ndarray or list. Examples Appending 2 2-D points >>> ds.points.append([[0, 1], [1, 3]]) Appending 2 3-D points >>> ds.points.append(np.zeros((2, 3)))",activeloop Polygon Htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#polygon-htype,"Sample dimensions: (# polygons, # points per polygon, # co-ordinates per point) Each sample in a tensor of polygon htype is a list of polygons. Each polygon is a list / array of points. All points in a sample should have the same number of co-ordinates (e.g., you cannot mix 2-D points with 3-D points). Different samples can have different numbers of polygons. Different polygons can have different numbers of points.
Creating a polygon tensor¶ A polygon tensor can be created using >>> ds.create_tensor('polygons', htype='polygon', sample_compression=None) Optional args: sample_compression or chunk_compression dtype: Defaults to float32. Supported compressions: >>> ['lz4'] Appending polygons¶ Polygons can be appended as a list of lists of tuples or as np.ndarray. Examples Appending polygons with 2-D points >>> poly1 = [(1, 2), (2, 3), (3, 4)] >>> poly2 = [(10, 12), (14, 19)] >>> poly3 = [(33, 32), (54, 67), (67, 43), (56, 98)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with 3-D points >>> poly1 = [(10, 2, 9), (12, 3, 8), (12, 10, 4)] >>> poly2 = [(10, 1, 8), (5, 17, 11)] >>> poly3 = [(33, 33, 31), (45, 76, 13), (60, 24, 17), (67, 87, 83)] >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample) Appending polygons with numpy arrays >>> import numpy as np >>> sample = np.random.randint(0, 10, (5, 7, 2)) # 5 polygons with 7 points each >>> ds.polygons.append(sample) >>> import numpy as np >>> poly1 = np.random.randint(0, 10, (5, 2)) >>> poly2 = np.random.randint(0, 10, (8, 2)) >>> poly3 = np.random.randint(0, 10, (3, 2)) >>> sample = [poly1, poly2, poly3] >>> ds.polygons.append(sample)",activeloop Link htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#link-htype,"Link htype is a special meta htype that allows linking of external data (files) to the dataset, without storing the data in the dataset itself. Moreover, there can be variations in this htype, such as link[image], link[video], link[audio], etc., that enable the Activeloop visualizer to correctly display the data. No data is actually loaded until you try to read the sample from a dataset. There are a few exceptions to this: If create_shape_tensor=True was specified during create_tensor of the tensor to which this is being added, the shape of the sample is read. This is True by default. If create_sample_info_tensor=True was specified during create_tensor of the tensor to which this is being added, the sample info is read. This is True by default. If verify=True was specified during create_tensor of the tensor to which this is being added, some metadata is read from the links to verify their integrity. This is True by default. If you do not want to verify your links, all three of verify, create_shape_tensor and create_sample_info_tensor have to be set to False. Examples Adding credentials to the dataset: you can add the names of the credentials you want to use (not needed for http/local urls) and populate the added names with credentials dictionaries. These creds are only present temporarily and will have to be repopulated on every reload. For datasets connected to Activeloop Platform, you can store your credentials on the platform as Managed Credentials and use them just by adding the keys to your dataset. For example, if you have managed credentials with names 'my_s3_creds' and 'my_gcs_creds', you can add them to your dataset using Dataset.add_creds_key without having to populate them. The remaining steps are creating a link tensor, populating the tensor with links, accessing the data, and updating a sample; a sketch of this workflow follows below.",activeloop
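A minimal end-to-end sketch of the link workflow just described, using the v3 deeplake namespace. The dataset path, file paths, and credential name are hypothetical, and the add_creds_key / populate_creds calls mirror the v2.8.5 example shown further below; treat it as an illustration rather than a verbatim recipe.
>>> import deeplake
>>> ds = deeplake.dataset('./link_demo_ds')  # hypothetical local dataset
>>> # Register a credentials name and populate it (not needed for http/local paths)
>>> ds.add_creds_key('MY_S3_KEY')
>>> ds.populate_creds('MY_S3_KEY', {})  # add real credentials here
>>> # Create a tensor that stores links instead of raw image data
>>> ds.create_tensor('images', htype='link[image]', verify=True)
>>> # Populate the tensor with links; cloud paths need a creds_key, local/http paths do not
>>> ds.images.append(deeplake.link('s3://my-bucket/image_001.jpeg', creds_key='MY_S3_KEY'))
>>> ds.images.append(deeplake.link('./path/to/cat.jpeg'))
>>> # Data is only fetched when the sample is read
>>> arr = ds.images[0].numpy()
>>> # Updating a sample replaces the stored link
>>> ds.images[1] = deeplake.link('./path/to/dog.jpeg')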
Sequence htype¶,https://docs.deeplake.ai/en/v3.0.15/Htypes.html#sequence-htype,"A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype. It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc.",activeloop Htypes,https://docs.deeplake.ai/en/v2.8.5/Htypes.html#module-hub.htype,"Examples >>> ds = hub.dataset('......') Add the names of the creds you want to use (not needed for http/local urls) >>> ds.add_creds_key('MY_S3_KEY') >>> ds.add_creds_key('GCS_KEY') Populate the added names with a creds dictionary. These creds are only present temporarily and will have to be repopulated on every reload. >>> ds.populate_creds('MY_S3_KEY', {}) # add creds here >>> ds.populate_creds('GCS_KEY', {}) # add creds here Create a tensor that can contain links >>> ds.create_tensor('img', htype='link[image]', verify=True, create_shape_tensor=False, create_sample_info_tensor=False) Populate the tensor with links >>> ds.img.append(hub.link('s3://abc/def.jpeg', creds_key='MY_S3_KEY')) >>> ds.img.append(hub.link('gcs://ghi/jkl.png', creds_key='GCS_KEY')) >>> ds.img.append(hub.link('https://picsum.photos/200/300')) # http path doesn't need creds >>> ds.img.append(hub.link('./path/to/cat.jpeg')) # local path doesn't need creds >>> ds.img.append(hub.link('s3://abc/def.jpeg')) # this will throw an exception as cloud paths always need creds_key >>> ds.img.append(hub.link('s3://abc/def.jpeg', creds_key='ENV')) # this will use creds from environment Accessing the data >>> for i in range(5): ...     ds.img[i].numpy()
Updating a sample >>> ds.img[0] = hub.link('./data/cat.jpeg')",activeloop Htypes,https://docs.deeplake.ai/en/v2.8.5/Htypes.html#module-hub.htype," 'htype' is the class of a tensor: image, bounding box, generic tensor, etc. When not specified, the unspecified options will be inferred from the data. If you know the htype beforehand, you can specify it at creation; a short sketch follows below. Specifying an htype allows for strict settings and error handling, and it is critical for increasing the performance of hub datasets containing rich data such as images and videos. Supported htypes and their respective defaults (HTYPE: DTYPE / COMPRESSION) are: image: uint8 / None; image.rgb: uint8 / None; image.gray: uint8 / None; class_label: uint32 / None; bbox: float32 / None; video: uint8 / None; binary_mask: bool / None; segment_mask: uint32 / None; keypoints_coco: int32 / None; point: int32 / None; audio: float64 / None; text: str / None; json: Any / None; list: List / None; dicom: None / dcm; link: str / None; sequence: None / None. Sequence htype A special meta htype for tensors where each sample is a sequence. The items in the sequence are samples of another htype. It is a wrapper htype that can wrap other htypes like sequence[image], sequence[video], sequence[text], etc. Examples >>> ds.create_tensor('seq', htype='sequence') >>> ds.seq.append([1, 2, 3]) >>> ds.seq.append([4, 5, 6]) >>> ds.seq.numpy() array([[[1], [2], [3]], [[4], [5], [6]]]) >>> ds.create_tensor('image_seq', htype='sequence[image]', sample_compression='jpg') >>> ds.image_seq.append([hub.read('img01.jpg'), hub.read('img02.jpg')]) Link htype Link htype is a special meta htype that allows linking of external data (files) to the dataset, without storing the data in the dataset itself. Moreover, there can be variations in this htype, such as link[image], link[video], link[audio], etc. that enable the activeloop visualizer to correctly display the data. No data is actually loaded until you try to read the sample from a dataset. There are a few exceptions to this: If verify=True was specified during create_tensor of the tensor to which this is being added, some metadata is read to verify the integrity of the sample. If create_shape_tensor=True was specified during create_tensor of the tensor to which this is being added, the shape of the sample is read. If create_sample_info_tensor=True was specified during create_tensor of the tensor to which this is being added, the sample info is read.",activeloop
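The inline snippets for "inferred from the data" versus "htype at creation" did not survive extraction above, so here is a small sketch in the same v2.8.5 hub namespace as the surrounding examples. The dataset path, tensor names, and image file are hypothetical.
>>> import hub
>>> import numpy as np
>>> ds = hub.dataset('./htype_demo_ds')  # hypothetical local dataset
>>> # Without an htype, dtype and compression are inferred from the appended data
>>> ds.create_tensor('generic')
>>> ds.generic.append(np.zeros((10, 10)))
>>> # With an explicit htype, the documented defaults and strict settings apply (image -> uint8)
>>> ds.create_tensor('images', htype='image', sample_compression='jpg')
>>> ds.images.append(hub.read('img01.jpg'))
>>> ds.create_tensor('labels', htype='class_label', class_names=['cat', 'dog'])
>>> ds.labels.append(0)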
How to query datasets using the Deep Lake Tensor Query Language (TQL),https://docs.activeloop.ai/enterprise-features/querying-datasets#how-to-query-datasets-using-the-deep-lake-tensor-query-language-tql,"Querying datasets is a critical aspect of data science workflows that enables users to filter datasets and focus their work on the most relevant data. Deep Lake offers a highly-performant query engine built in C++ and optimized for the Deep Lake data format. The Deep Lake query engine is only accessible to registered and authenticated users, and it applies usage restrictions based on your Deep Lake Plan.",activeloop Querying in the low-level Python API,https://docs.activeloop.ai/enterprise-features/querying-datasets#querying-in-the-low-level-python-api,Queries can also be performed in the Python API using: view = ds.query(<query_string>),activeloop Saving and utilizing dataset query results in the low-level Python API,https://docs.activeloop.ai/enterprise-features/querying-datasets#saving-and-utilizing-dataset-query-results-in-the-low-level-python-api,"The query results (Dataset Views) can be saved in the UI as shown above, or if the view is generated in Python, it can be saved using the Python API below. Full details are available here. ds_view.save_view(message = 'Samples with monarchs') In order to maintain data lineage, Dataset Views are immutable and are connected to specific commits. Therefore, views can only be saved if the dataset has a commit and there are no uncommitted changes in the HEAD. You can check for this using ds.has_head_changes. Saved Dataset Views can be loaded in the Python API and passed to ML frameworks just like regular datasets: ds_view = ds.load_view(view_id, optimize = True, num_workers = 2) for data in ds_view.pytorch(): # Training loop here The optimize parameter in ds.load_view(..., optimize = True) materializes the Dataset View into a new sub-dataset that is optimized for streaming.
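A compact sketch of the query-and-view workflow described above (and completed just below). The dataset path, tensor name, and TQL string are hypothetical, and view_id stands for the id of a previously saved view; this illustrates the documented calls, not an official recipe.
>>> import deeplake
>>> ds = deeplake.load('hub://my_org/butterflies')  # hypothetical dataset path
>>> # Filter samples with a TQL query (tensor name and class are hypothetical)
>>> view = ds.query("select * where contains(labels, 'monarch')")
>>> # Views can only be saved when the dataset has a commit and no uncommitted HEAD changes
>>> if ds.has_head_changes:
...     ds.commit('pre-view checkpoint')
>>> view.save_view(message = 'Samples with monarchs')
>>> # Later, load the saved view (view_id as shown in the UI) and stream it to PyTorch
>>> ds_view = ds.load_view(view_id, optimize = True, num_workers = 2)
>>> for data in ds_view.pytorch():
...     pass  # training loop here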
If the original dataset uses linked tensors, the data will be copied to Deep Lake format.Optimizing the Dataset View is critical for achieving rapid streaming.If the saved Dataset View is no longer needed, it can be deleted using:ds.delete_view(view_id)",activeloop Query Syntax,https://docs.activeloop.ai/enterprise-features/querying-datasets#query-syntax,TQL Syntax,activeloop Querying in the Vector Store Python API,https://docs.activeloop.ai/enterprise-features/querying-datasets#querying-in-the-vector-store-python-api,"view = vector_store.search(query = <query_string>, exec_option = 'compute_engine')",activeloop deeplake.core.sample,https://docs.deeplake.ai/en/latest/deeplake.core.sample.html#deeplake-core-sample,"class deeplake.core.sample.Sample __init__(path: Optional[str] = None, array: Optional[ndarray] = None, buffer: Optional[Union[bytes, memoryview]] = None, compression: Optional[str] = None, verify: bool = False, shape: Optional[Tuple[int]] = None, dtype: Optional[str] = None, creds: Optional[Dict] = None, storage: Optional[StorageProvider] = None) Represents a single sample for a tensor. Provides all important meta information in one place. Note If self.is_lazy is True, this Sample doesn't actually have any data loaded. To read this data, simply try to read it into a numpy array (sample.array) Parameters path (str) – Path to a sample stored on the local file system that represents a single sample. If path is provided, array should not be. Implicitly makes self.is_lazy == True. array (np.ndarray) – Array that represents a single sample. If array is provided, path should not be. Implicitly makes self.is_lazy == False. buffer – (bytes): Byte buffer that represents a single sample. If compressed, compression argument should be provided. compression (str) – Specify in case of byte buffer. verify (bool) – If a path is provided, verifies the sample if True. shape (Tuple[int]) – Shape of the sample. dtype (optional, str) – Data type of the sample. creds (optional, Dict) – Credentials for s3, gcp and http urls. storage (optional, StorageProvider) – Storage provider. property array: ndarray Return numpy array corresponding to the sample. Decompresses the sample if necessary. Example >>> sample = deeplake.read('./images/dog.jpg') >>> arr = sample.array >>> arr.shape (323, 480, 3) Copy to clipboard compressed_bytes(compression: Optional[str]) → bytes Returns this sample as compressed bytes. Note If this sample is pointing to a path and the requested compression is the same as it's stored in, the data is returned without re-compressing. Parameters compression (Optional[str]) – self.array will be compressed into this format. If compression is None, return uncompressed_bytes(). Returns Bytes for the compressed sample. Contains all metadata required to decompress within these bytes. Return type bytes Raises ValueError – On recompression of unsupported formats. property pil: Image Return PIL image corresponding to the sample. Decompresses the sample if necessary. Example >>> sample = deeplake.read('./images/dog.jpg') >>> pil = sample.pil >>> pil.size (480, 323) Copy to clipboard uncompressed_bytes() → Optional[bytes] Returns uncompressed bytes.",activeloop deeplake.core.linked_sample,https://docs.deeplake.ai/en/latest/deeplake.core.linked-sample.html#deeplake-core-linked-sample,"class deeplake.core.linked_sample.LinkedSample(path: str, creds_key: Optional[str] = None) Represents a sample that is initialized using external links. 
See deeplake.link().",activeloop deeplake.core.partial_sample,https://docs.deeplake.ai/en/latest/deeplake.core.partial-sample.html#deeplake-core-partial-sample,"class deeplake.core.partial_sample.PartialSample(sample_shape: Tuple[int, ...], tile_shape: Optional[Tuple[int, ...]] = None, dtype: Optional[Union[str, dtype]] = dtype('uint8')) Represents a sample that is initialized by just shape and the data is updated later.",activeloop deeplake.core.linked_tiled_sample,https://docs.deeplake.ai/en/latest/deeplake.core.linked-tiled-sample.html#deeplake-core-linked-tiled-sample,"class deeplake.core.linked_tiled_sample.LinkedTiledSample(path_array: ndarray, creds_key: Optional[str] = None) Represents a sample that is initialized using external links. See deeplake.link_tiled().",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage," Base Storage Provider class deeplake.core.storage.StorageProvider abstract __delitem__(path: str) Delete the object present at the path. Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. abstract __getitem__(path: str) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. abstract __iter__() Generator function that iterates over the keys of the provider. Yields str – the path of the object that it is iterating over, relative to the root of the provider. abstract __len__() Returns the number of files present inside the root of the provider. Returns the number of files present inside the root. Return type int abstract __setitem__(path: str, value: bytes) Sets the object present at the path with the value Parameters path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. __weakref__ list of weak references to the object (if defined) abstract _all_keys() → Set[str] Generator function that iterates over the keys of the provider. Returns set of all keys present at the root of the provider. Return type set _is_hub_path = False An abstract base class for implementing a storage provider. To add a new provider using Provider, create a subclass and implement all 5 abstract methods below. check_readonly() Raises an exception if the provider is in read-only mode. abstract clear(prefix='') Delete the contents of the provider. copy() Returns a copy of the provider. Returns A copy of the provider. Return type StorageProvider disable_readonly() Disables read-only mode for the provider. enable_readonly() Enables read-only mode for the provider. flush() Only needs to be implemented for caches. Flushes the data to the next storage provider. Should be a no op for Base Storage Providers like local, s3, azure, gcs, etc. get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. 
Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. maybe_flush() Flush cache if autoflush has been enabled",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"Called at the end of methods which write data, to ensure consistency as a default . set_bytes(path: str, value: bytes, start_byte: Optional[int] = None, overwrite: Optional[bool] = False) Sets the object present at the path with the value Parameters path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. start_byte (int, optional) – If only specific bytes starting from start_byte are to be assigned. overwrite (boolean, optional) – If the value is True, if there is an object present at the path it is completely overwritten, without fetching it's data. Raises InvalidBytesRequestedError – If start_byte < 0. ReadOnlyModeError – If the provider is in read-only mode. LRU Cache class deeplake.core.storage.LRUCache Bases: StorageProvider LRU Cache that uses StorageProvider for caching __delitem__(path: str) Deletes the object present at the path from the cache and the underlying storage. Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str) If item is in cache_storage, retrieves from there and returns. If item isn't in cache_storage, retrieves from next storage, stores in cache_storage (if possible) and returns. Parameters path (str) – The path relative to the root of the underlying storage. Raises KeyError – if an object is not found at the path. Returns The bytes of the object present at the path. Return type bytes __getstate__() → Dict[str, Any] Returns the state of the cache, for pickling __init__(cache_storage: StorageProvider, next_storage: Optional[StorageProvider], cache_size: int) Initializes the LRUCache. It can be chained with other LRUCache objects to create multilayer caches. Parameters cache_storage (StorageProvider) – The storage being used as the caching layer of the cache. This should be a base provider such as MemoryProvider, LocalProvider or S3Provider but not another LRUCache. next_storage (StorageProvider) – The next storage layer of the cache. This can either be a base provider (i.e. it is the final storage) or another LRUCache (i.e. in case of chained cache). While reading data, all misses from cache would be retrieved from here. While writing data, the data will be written to the next_storage when cache_storage is full or flush is called. cache_size (int) – The total space that can be used from the cache_storage in bytes. This number may be less than the actual space available on the cache_storage. Setting it to a higher value than actually available space may lead to unexpected behaviors. __iter__() Generator function that iterates over the keys of the cache and the underlying storage. Yields str – the path of the object that it is iterating over, relative to the root of the provider",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"__len__() Returns the number of files present in the cache and the underlying storage . Returns the number of files present inside the root. 
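A small sketch of the cache chaining described in LRUCache.__init__ above. The paths and cache size are made up, and in normal use these providers are constructed for you when a dataset is opened; this only illustrates how the layers relate.
>>> from deeplake.core.storage import LRUCache, MemoryProvider, LocalProvider
>>> # Chain an in-memory cache in front of a local storage provider (path is hypothetical)
>>> base = LocalProvider('/tmp/deeplake_demo_storage')
>>> cache = LRUCache(cache_storage=MemoryProvider(), next_storage=base, cache_size=256 * 1024 * 1024)
>>> cache['chunks/abc'] = b'some bytes'  # kept in the memory layer while it fits
>>> cache.flush()                        # write dirty keys through to the local provider
>>> data = cache['chunks/abc']           # reads hit the cache first, then next_storage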
Return type int __setitem__(path: str, value: Union[bytes, DeepLakeMemoryObject]) Puts the item in the cache_storage (if possible), else writes to next_storage. Parameters path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises ReadOnlyError – If the provider is in read-only mode. __setstate__(state: Dict[str, Any]) Recreates a cache with the same configuration as the state. Parameters state (dict) – The state to be used to recreate the cache. Note While restoring the cache, we reset its contents. In case the cache storage was local/s3 and is still accessible when unpickled (if same machine/s3 creds present respectively), the earlier cache contents are no longer accessible. _all_keys() Helper function that lists all the objects present in the cache and the underlying storage. Returns set of all the objects found in the cache and the underlying storage. Return type set _flush_if_not_read_only() Flushes the cache if not in read-only mode. _forward(path) Forward the value at a given path to the next storage, and un-marks its key. _forward_value(path, value) Forwards a path-value pair to the next storage, and un-marks its key. Parameters path (str) – the path to the object relative to the root of the provider. value (bytes, DeepLakeMemoryObject) – the value to send to the next storage. _free_up_space(extra_size: int) Helper function that frees up space the requred space in cache.No action is taken if there is sufficient space in the cache. Parameters extra_size (int) – the space that needs is required in bytes. _insert_in_cache(path: str, value: Union[bytes, DeepLakeMemoryObject]) Helper function that adds a key value pair to the cache. Parameters path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises ReadOnlyError – If the provider is in read-only mode. _pop_from_cache() Helper function that pops the least recently used key, value pair from the cache clear(prefix='') Deletes ALL the data from all the layers of the cache and the actual storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. clear_cache() Flushes the content of all the cache layers if not in read mode and and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. clear_deeplake_objects() Removes all DeepLakeMemoryObjects from the cache. flush() Writes data from cache_storage to next_storage. Only the dirty keys are written. This is a cascading function and leads to data being written to the final storage in case of a chained cache. get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"Warning Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range . Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. 
KeyError – If an object is not found at the path. S3GetAccessError – Invalid credentials for the object path storage. S3GetError – Any other error while retrieving the object. need_to_reload_creds(err: ClientError) → bool Checks if the credentials need to be reloaded. This happens if the credentials were loaded from the environment and have now expired. rename(root) Rename root folder. Google Cloud Storage Provider class deeplake.core.storage.GCSProvider Bases: StorageProvider Provider class for using GC storage. __contains__(key) Checks if key exists in mapping. __delitem__(key) Remove key. __getitem__(key) Retrieve data. __init__(root: str, token: Optional[Union[str, Dict]] = None, project: Optional[str] = None) Initializes the GCSProvider. Example >>> gcs_provider = GCSProvider('gcs://my-bucket/gcs_ds') Copy to clipboard Parameters root (str) – The root of the provider. All read/write request keys will be appended to root. token (str/Dict) – GCP token, used for fetching credentials for storage). Can be a path to the credentials file, actual credential dictionary or one of the folowing: - google_default: Tries to load default credentials for the specified project. - cache: Retrieves the previously used credentials from cache if exist. - anon: Sets credentials=None. - browser: Generates and stores new token file using cli. project (str) – Name of the project from GCloud. Raises ModuleNotFoundError – If google cloud packages aren't installed. __iter__() Iterating over the structure. __len__() Returns length of the structure. __setitem__(key, value) Store value in key. _all_keys() Generator function that iterates over the keys of the provider. Returns set of all keys present at the root of the provider. Return type set _set_hub_creds_info(hub_path: str, expiration: str, db_engine: bool = True, repository: Optional[str] = None) Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. db_engine (bool) – Whether Activeloop DB Engine enabled. repository (str, Optional) – Backend repository where the dataset is stored. clear(prefix='') Remove all keys with given prefix below root - empties out mapping",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"Parameters path (str) – The path relative to the root of the provider . start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. get_deeplake_object(path: str, expected_class, meta: Optional[Dict] = None, url=False, partial_bytes: int = 0) If the data at path was stored using the output of a DeepLakeMemoryObject's tobytes function, this function will read it back into object form & keep the object in cache. Parameters path (str) – Path to the stored object. expected_class (callable) – The expected subclass of DeepLakeMemoryObject. 
meta (dict, optional) – Metadata associated with the stored object url (bool) – Get presigned url instead of downloading chunk (only for videos) partial_bytes (int) – Number of bytes to read from the beginning of the file. If 0, reads the whole file. Defaults to 0. Raises ValueError – If the incorrect expected_class was provided. ValueError – If the type of the data at path is invalid. ValueError – If url is True but expected_class is not a subclass of BaseChunk. Returns An instance of expected_class populated with the data. register_deeplake_object(path: str, obj: DeepLakeMemoryObject) Registers a new object in the cache. remove_deeplake_object(path: str) Removes a DeepLakeMemoryObject from the cache. S3 Storage Provider class deeplake.core.storage.S3Provider Bases: StorageProvider Provider class for using S3 storage. __delitem__(path) Delete the object present at the path. Parameters path (str) – the path to the object relative to the root of the S3Provider. Note If the object is not found, s3 won't raise KeyError. Raises S3DeletionError – Any S3 error encountered while deleting the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path) Gets the object present at the path. Parameters path (str) – the path relative to the root of the S3Provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. S3GetError – Any other error other than KeyError while retrieving the object. __init__(root: str, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, endpoint_url: Optional[str] = None, aws_region: Optional[str] = None, profile_name: Optional[str] = None, token: Optional[str] = None, **kwargs) Initializes the S3Provider Example >>> s3_provider = S3Provider('snark-test/benchmarks') Copy to clipboard Parameters root (str) – The root of the provider. All read/write request keys will be appended to root. aws_access_key_id (str, optional) – Specifies the AWS access key used as part of the credentials to authenticate the user",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"aws_secret_access_key (str, optional) – Specifies the AWS secret key used as part of the credentials to authenticate the user . aws_session_token (str, optional) – Specifies an AWS session token used as part of the credentials to authenticate the user. endpoint_url (str, optional) – The complete URL to use for the constructed client. This needs to be provided for cases in which you're interacting with MinIO, Wasabi, etc. aws_region (str, optional) – Specifies the AWS Region to send requests to. profile_name (str, optional) – Specifies the AWS profile name to use. token (str, optional) – Activeloop token, used for fetching credentials for Deep Lake datasets (if this is underlying storage for Deep Lake dataset). This is optional, tokens are normally autogenerated. **kwargs – Additional arguments to pass to the S3 client. Includes: expiration. __iter__() Generator function that iterates over the keys of the S3Provider. Yields str – the name of the object that it is iterating over. __len__() Returns the number of files present at the root of the S3Provider. Note This is an expensive operation. Returns the number of files present inside the root. Return type int Raises S3ListError – Any S3 error encountered while listing the objects. 
__setitem__(path, content) Sets the object present at the path with the value Parameters path (str) – the path relative to the root of the S3Provider. content (bytes) – the value to be assigned at the path. Raises S3SetError – Any S3 error encountered while setting the value at the path. ReadOnlyError – If the provider is in read-only mode. _all_keys() Helper function that lists all the objects present at the root of the S3Provider. Returns set of all the objects found at the root of the S3Provider. Return type set Raises S3ListError – Any S3 error encountered while listing the objects. _check_update_creds(force=False) If the client has an expiration time, check if creds are expired and fetch new ones. This would only happen for datasets stored on Deep Lake storage for which temporary 12 hour credentials are generated. _set_hub_creds_info(hub_path: str, expiration: str, db_engine: bool = True, repository: Optional[str] = None) Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. db_engine (bool) – Whether Activeloop DB Engine enabled. repository (str, Optional) – Backend repository where the dataset is stored. _state_keys() Keys used to store the state of the provider. clear(prefix='') Deletes ALL data with keys having given prefix on the s3 bucket (under self.root)",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"sync() Sync provider keys with actual storage Local Storage Provider class deeplake.core.storage.LocalProvider Bases: StorageProvider Provider class for using the local filesystem . __delitem__(path: str) Delete the object present at the path. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> del local_provider['abc.txt'] Copy to clipboard Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str) Gets the object present at the path within the given byte range. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> my_data = local_provider['abc.txt'] Copy to clipboard Parameters path (str) – The path relative to the root of the provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. __init__(root: str) Initializes the LocalProvider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') Copy to clipboard Parameters root (str) – The root of the provider. All read/write request keys will be appended to root.' Raises FileAtPathException – If the root is a file instead of a directory. __iter__() Generator function that iterates over the keys of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> for my_data in local_provider: ... 
pass Copy to clipboard Yields str – the path of the object that it is iterating over, relative to the root of the provider. __len__() Returns the number of files present inside the root of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> len(local_provider) Copy to clipboard Returns the number of files present inside the root. Return type int __setitem__(path: str, value: bytes) Sets the object present at the path with the value Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> local_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises Exception – If unable to set item due to directory at path or permission or space issues. FileAtPathException – If the directory to the path is a file instead of a directory. ReadOnlyError – If the provider is in read-only mode. _all_keys(refresh: bool = False) → Set[str] Lists all the objects present at the root of the Provider. Parameters refresh (bool) – refresh keys Returns set of all the objects found at the root of the Provider. Return type set _check_is_file(path: str) Checks if the path is a file. Returns the full_path to file if True",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"Example >>> memory_provider = MemoryProvider('xyz') >>> len(memory_provider) Copy to clipboard Returns the number of files present inside the root . Return type int __setitem__(path: str, value: bytes) Sets the object present at the path with the value Example >>> memory_provider = MemoryProvider('xyz') >>> memory_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises ReadOnlyError – If the provider is in read-only mode. _all_keys() Lists all the objects present at the root of the Provider. Returns set of all the objects found at the root of the Provider. Return type set clear(prefix='') Clears the provider.",activeloop Base Storage Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#base-storage-provider," class deeplake.core.storage.StorageProvider abstract __delitem__(path: str) Delete the object present at the path. Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. abstract __getitem__(path: str) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. abstract __iter__() Generator function that iterates over the keys of the provider. Yields str – the path of the object that it is iterating over, relative to the root of the provider. abstract __len__() Returns the number of files present inside the root of the provider. Returns the number of files present inside the root. Return type int abstract __setitem__(path: str, value: bytes) Sets the object present at the path with the value Parameters path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. __weakref__ list of weak references to the object (if defined) abstract _all_keys() → Set[str] Generator function that iterates over the keys of the provider. 
Returns set of all keys present at the root of the provider. Return type set _is_hub_path = False An abstract base class for implementing a storage provider. To add a new provider using Provider, create a subclass and implement all 5 abstract methods below. check_readonly() Raises an exception if the provider is in read-only mode. abstract clear(prefix='') Delete the contents of the provider. copy() Returns a copy of the provider. Returns A copy of the provider. Return type StorageProvider disable_readonly() Disables read-only mode for the provider. enable_readonly() Enables read-only mode for the provider. flush() Only needs to be implemented for caches. Flushes the data to the next storage provider. Should be a no op for Base Storage Providers like local, s3, azure, gcs, etc. get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. maybe_flush() Flush cache if autoflush has been enabled. Called at the end of methods which write data, to ensure consistency as a default",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"Parameters path (str) – the path to the object relative to the root of the provider . Returns the full path to the requested file. Return type str Raises DirectoryAtPathException – If a directory is found at the path. _set_hub_creds_info(hub_path: str, expiration: str, db_engine: bool = True, repository: Optional[str] = None) Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. db_engine (bool) – Whether Activeloop DB Engine enabled. repository (str, Optional) – Backend repository where the dataset is stored. clear(prefix='') Deletes ALL data with keys having given prefix on the local machine (under self.root). Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(path) Renames root folder Memory Provider class deeplake.core.storage.MemoryProvider Bases: StorageProvider Provider class for using the memory. __delitem__(path: str) Delete the object present at the path. 
Example >>> memory_provider = MemoryProvider('xyz') >>> del memory_provider['abc.txt'] Copy to clipboard Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str) Gets the object present at the path within the given byte range. Example >>> memory_provider = MemoryProvider('xyz') >>> my_data = memory_provider['abc.txt'] Copy to clipboard Parameters path (str) – The path relative to the root of the provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. __getstate__() → str Does NOT save the in memory data in state. __init__(root: str = '') __iter__() Generator function that iterates over the keys of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> for my_data in memory_provider: ... pass Copy to clipboard Yields str – the path of the object that it is iterating over, relative to the root of the provider. __len__() Returns the number of files present inside the root of the provider",activeloop LRU Cache,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#lru-cache,"get_deeplake_object(path: str, expected_class, meta: Optional[Dict] = None, url=False, partial_bytes: int = 0) If the data at path was stored using the output of a DeepLakeMemoryObject's tobytes function, this function will read it back into object form & keep the object in cache . Parameters path (str) – Path to the stored object. expected_class (callable) – The expected subclass of DeepLakeMemoryObject. meta (dict, optional) – Metadata associated with the stored object url (bool) – Get presigned url instead of downloading chunk (only for videos) partial_bytes (int) – Number of bytes to read from the beginning of the file. If 0, reads the whole file. Defaults to 0. Raises ValueError – If the incorrect expected_class was provided. ValueError – If the type of the data at path is invalid. ValueError – If url is True but expected_class is not a subclass of BaseChunk. Returns An instance of expected_class populated with the data. register_deeplake_object(path: str, obj: DeepLakeMemoryObject) Registers a new object in the cache. remove_deeplake_object(path: str) Removes a DeepLakeMemoryObject from the cache.",activeloop LRU Cache,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#lru-cache,"In case the cache storage was local/s3 and is still accessible when unpickled (if same machine/s3 creds present respectively), the earlier cache contents are no longer accessible . _all_keys() Helper function that lists all the objects present in the cache and the underlying storage. Returns set of all the objects found in the cache and the underlying storage. Return type set _flush_if_not_read_only() Flushes the cache if not in read-only mode. _forward(path) Forward the value at a given path to the next storage, and un-marks its key. _forward_value(path, value) Forwards a path-value pair to the next storage, and un-marks its key. Parameters path (str) – the path to the object relative to the root of the provider. value (bytes, DeepLakeMemoryObject) – the value to send to the next storage. _free_up_space(extra_size: int) Helper function that frees up space the requred space in cache.No action is taken if there is sufficient space in the cache. Parameters extra_size (int) – the space that needs is required in bytes. 
_insert_in_cache(path: str, value: Union[bytes, DeepLakeMemoryObject]) Helper function that adds a key value pair to the cache. Parameters path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises ReadOnlyError – If the provider is in read-only mode. _pop_from_cache() Helper function that pops the least recently used key, value pair from the cache. clear(prefix='') Deletes ALL the data from all the layers of the cache and the actual storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. clear_cache() Flushes the content of all the cache layers if not in read-only mode and then deletes the contents of all the layers. This doesn't delete data from the actual storage. clear_deeplake_objects() Removes all DeepLakeMemoryObjects from the cache. flush() Writes data from cache_storage to next_storage. Only the dirty keys are written. This is a cascading function and leads to data being written to the final storage in case of a chained cache. get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path",activeloop S3 Storage Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#s3-storage-provider," class deeplake.core.storage.S3Provider Bases: StorageProvider Provider class for using S3 storage. __delitem__(path) Delete the object present at the path. Parameters path (str) – the path to the object relative to the root of the S3Provider. Note If the object is not found, s3 won't raise KeyError. Raises S3DeletionError – Any S3 error encountered while deleting the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path) Gets the object present at the path. Parameters path (str) – the path relative to the root of the S3Provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. S3GetError – Any error other than KeyError encountered while retrieving the object. __init__(root: str, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, endpoint_url: Optional[str] = None, aws_region: Optional[str] = None, profile_name: Optional[str] = None, token: Optional[str] = None, **kwargs) Initializes the S3Provider. Example >>> s3_provider = S3Provider('snark-test/benchmarks') Parameters root (str) – The root of the provider. All read/write request keys will be appended to root. aws_access_key_id (str, optional) – Specifies the AWS access key used as part of the credentials to authenticate the user. aws_secret_access_key (str, optional) – Specifies the AWS secret key used as part of the credentials to authenticate the user. aws_session_token (str, optional) – Specifies an AWS session token used as part of the credentials to authenticate the user. endpoint_url (str, optional) – The complete URL to use for the constructed client.
This needs to be provided for cases in which you're interacting with MinIO, Wasabi, etc. aws_region (str, optional) – Specifies the AWS Region to send requests to. profile_name (str, optional) – Specifies the AWS profile name to use. token (str, optional) – Activeloop token, used for fetching credentials for Deep Lake datasets (if this is underlying storage for Deep Lake dataset). This is optional, tokens are normally autogenerated. **kwargs – Additional arguments to pass to the S3 client. Includes: expiration. __iter__() Generator function that iterates over the keys of the S3Provider. Yields str – the name of the object that it is iterating over. __len__() Returns the number of files present at the root of the S3Provider. Note This is an expensive operation. Returns the number of files present inside the root. Return type int Raises S3ListError – Any S3 error encountered while listing the objects. __setitem__(path, content) Sets the object present at the path with the value Parameters path (str) – the path relative to the root of the S3Provider. content (bytes) – the value to be assigned at the path. Raises S3SetError – Any S3 error encountered while setting the value at the path. ReadOnlyError – If the provider is in read-only mode",activeloop S3 Storage Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#s3-storage-provider,"_all_keys() Helper function that lists all the objects present at the root of the S3Provider . Returns set of all the objects found at the root of the S3Provider. Return type set Raises S3ListError – Any S3 error encountered while listing the objects. _check_update_creds(force=False) If the client has an expiration time, check if creds are expired and fetch new ones. This would only happen for datasets stored on Deep Lake storage for which temporary 12 hour credentials are generated. _set_hub_creds_info(hub_path: str, expiration: str, db_engine: bool = True, repository: Optional[str] = None) Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. db_engine (bool) – Whether Activeloop DB Engine enabled. repository (str, Optional) – Backend repository where the dataset is stored. _state_keys() Keys used to store the state of the provider. clear(prefix='') Deletes ALL data with keys having given prefix on the s3 bucket (under self.root). Warning Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. S3GetAccessError – Invalid credentials for the object path storage. S3GetError – Any other error while retrieving the object. need_to_reload_creds(err: ClientError) → bool Checks if the credentials need to be reloaded. 
This happens if the credentials were loaded from the environment and have now expired. rename(root) Rename root folder.",activeloop Google Cloud Storage Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#google-cloud-storage-provider,"class deeplake.core.storage.GCSProvider Bases: StorageProvider Provider class for using Google Cloud Storage. __contains__(key) Checks if key exists in mapping. __delitem__(key) Remove key. __getitem__(key) Retrieve data. __init__(root: str, token: Optional[Union[str, Dict]] = None, project: Optional[str] = None) Initializes the GCSProvider. Example >>> gcs_provider = GCSProvider('gcs://my-bucket/gcs_ds') Parameters root (str) – The root of the provider. All read/write request keys will be appended to root. token (str/Dict) – GCP token, used for fetching credentials for storage. Can be a path to the credentials file, actual credential dictionary or one of the following: - google_default: Tries to load default credentials for the specified project. - cache: Retrieves the previously used credentials from cache if they exist. - anon: Sets credentials=None. - browser: Generates and stores new token file using the CLI. project (str) – Name of the project from GCloud. Raises ModuleNotFoundError – If google cloud packages aren't installed. __iter__() Iterating over the structure. __len__() Returns length of the structure. __setitem__(key, value) Store value in key. _all_keys() Generator function that iterates over the keys of the provider. Returns set of all keys present at the root of the provider. Return type set _set_hub_creds_info(hub_path: str, expiration: str, db_engine: bool = True, repository: Optional[str] = None) Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. db_engine (bool) – Whether Activeloop DB Engine enabled. repository (str, Optional) – Backend repository where the dataset is stored. clear(prefix='') Remove all keys with given prefix below root - empties out mapping. Warning Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(root) Rename root folder.",activeloop Google Drive Storage Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#google-drive-storage-provider,"class deeplake.core.storage.GDriveProvider Bases: StorageProvider Provider class for using Google Drive storage. __delitem__(path) Delete the object present at the path. Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. __getitem__(path) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider.
Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. __init__(root: str, token: Optional[Union[str, Dict]] = None, makemap: bool = True) Initializes the GDriveProvider. Example >>> gdrive_provider = GDriveProvider('gdrive://folder_name/folder_name') Parameters root (str) – The root of the provider. All read/write request keys will be appended to root. token (dict, str, optional) – Google Drive token. Can be path to the token file or the actual credentials dictionary. makemap (bool) – Creates path to id map if True. Note Requires client_secrets.json in working directory if token is not provided. Due to limits on requests per 100 seconds on the Google Drive API, continuous requests such as uploading many small files can be slow. Users can request to increase their quotas on their Google Cloud Platform. __iter__() Generator function that iterates over the keys of the provider. Yields str – the path of the object that it is iterating over, relative to the root of the provider. __len__() Returns the number of files present inside the root of the provider. Returns the number of files present inside the root. Return type int __setitem__(path, content) Sets the object present at the path with the value Parameters path (str) – the path relative to the root of the provider. content (bytes) – the value to be assigned at the path. _all_keys() Generator function that iterates over the keys of the provider. Returns set of all keys present at the root of the provider. Return type set clear(prefix='') Delete the contents of the provider. sync() Sync provider keys with actual storage",activeloop Local Storage Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#local-storage-provider," class deeplake.core.storage.LocalProvider Bases: StorageProvider Provider class for using the local filesystem. __delitem__(path: str) Delete the object present at the path. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> del local_provider['abc.txt'] Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str) Gets the object present at the path within the given byte range. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> my_data = local_provider['abc.txt'] Parameters path (str) – The path relative to the root of the provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. __init__(root: str) Initializes the LocalProvider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') Parameters root (str) – The root of the provider. All read/write request keys will be appended to root. Raises FileAtPathException – If the root is a file instead of a directory. __iter__() Generator function that iterates over the keys of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> for my_data in local_provider: ...
pass Copy to clipboard Yields str – the path of the object that it is iterating over, relative to the root of the provider. __len__() Returns the number of files present inside the root of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> len(local_provider) Copy to clipboard Returns the number of files present inside the root. Return type int __setitem__(path: str, value: bytes) Sets the object present at the path with the value Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> local_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises Exception – If unable to set item due to directory at path or permission or space issues. FileAtPathException – If the directory to the path is a file instead of a directory. ReadOnlyError – If the provider is in read-only mode. _all_keys(refresh: bool = False) → Set[str] Lists all the objects present at the root of the Provider. Parameters refresh (bool) – refresh keys Returns set of all the objects found at the root of the Provider. Return type set _check_is_file(path: str) Checks if the path is a file. Returns the full_path to file if True. Parameters path (str) – the path to the object relative to the root of the provider",activeloop Local Storage Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#local-storage-provider,"Returns the full path to the requested file . Return type str Raises DirectoryAtPathException – If a directory is found at the path. _set_hub_creds_info(hub_path: str, expiration: str, db_engine: bool = True, repository: Optional[str] = None) Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. db_engine (bool) – Whether Activeloop DB Engine enabled. repository (str, Optional) – Backend repository where the dataset is stored. clear(prefix='') Deletes ALL data with keys having given prefix on the local machine (under self.root). Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(path) Renames root folder",activeloop Memory Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#memory-provider,"class deeplake.core.storage.MemoryProvider Bases: StorageProvider Provider class for using the memory. __delitem__(path: str) Delete the object present at the path. Example >>> memory_provider = MemoryProvider('xyz') >>> del memory_provider['abc.txt'] Copy to clipboard Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. 
ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str) Gets the object present at the path within the given byte range. Example >>> memory_provider = MemoryProvider('xyz') >>> my_data = memory_provider['abc.txt'] Copy to clipboard Parameters path (str) – The path relative to the root of the provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. __getstate__() → str Does NOT save the in memory data in state. __init__(root: str = '') __iter__() Generator function that iterates over the keys of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> for my_data in memory_provider: ... pass Copy to clipboard Yields str – the path of the object that it is iterating over, relative to the root of the provider. __len__() Returns the number of files present inside the root of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> len(memory_provider) Copy to clipboard Returns the number of files present inside the root. Return type int __setitem__(path: str, value: bytes) Sets the object present at the path with the value Example >>> memory_provider = MemoryProvider('xyz') >>> memory_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises ReadOnlyError – If the provider is in read-only mode. _all_keys() Lists all the objects present at the root of the Provider. Returns set of all the objects found at the root of the Provider. Return type set clear(prefix='') Clears the provider.",activeloop deeplake.core.storage,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#deeplake-core-storage,"Warning Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None) Gets the object present at the path within the given byte range . Parameters path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns The bytes of the object present at the path within the given byte range. Return type bytes Raises InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(root) Rename root folder. Google Drive Storage Provider class deeplake.core.storage.GDriveProvider Bases: StorageProvider Provider class for using Google Drive storage. __delitem__(path) Delete the object present at the path. Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. __getitem__(path) Gets the object present at the path within the given byte range. Parameters path (str) – The path relative to the root of the provider. Returns The bytes of the object present at the path. Return type bytes Raises KeyError – If an object is not found at the path. __init__(root: str, token: Optional[Union[str, Dict]] = None, makemap: bool = True) Initializes the GDriveProvider Example >>> gdrive_provider = GDriveProvider('gdrive://folder_name/folder_name') Copy to clipboard Parameters root (str) – The root of the provider. All read/write request keys will be appended to root. token (dict, str, optional) – Google Drive token. Can be path to the token file or the actual credentials dictionary. 
makemap (bool) – Creates path to id map if True. Note Requires client_secrets.json in working directory if token is not provided. Due to limits on requests per 100 seconds on the Google Drive API, continuous requests such as uploading many small files can be slow. Users can request to increase their quotas on their Google Cloud Platform. __iter__() Generator function that iterates over the keys of the provider. Yields str – the path of the object that it is iterating over, relative to the root of the provider. __len__() Returns the number of files present inside the root of the provider. Returns the number of files present inside the root. Return type int __setitem__(path, content) Sets the object present at the path with the value Parameters path (str) – the path relative to the root of the provider. content (bytes) – the value to be assigned at the path. _all_keys() Generator function that iterates over the keys of the provider. Returns set of all keys present at the root of the provider. Return type set clear(prefix='') Delete the contents of the provider",activeloop Base Storage Provider,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#base-storage-provider,"set_bytes(path: str, value: bytes, start_byte: Optional[int] = None, overwrite: Optional[bool] = False) Sets the object present at the path with the value Parameters path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. start_byte (int, optional) – If only specific bytes starting from start_byte are to be assigned. overwrite (boolean, optional) – If True and an object is present at the path, it is completely overwritten without fetching its data. Raises InvalidBytesRequestedError – If start_byte < 0. ReadOnlyModeError – If the provider is in read-only mode.",activeloop LRU Cache,https://docs.deeplake.ai/en/latest/deeplake.core.storage.html#lru-cache," class deeplake.core.storage.LRUCache Bases: StorageProvider LRU Cache that uses StorageProvider for caching. __delitem__(path: str) Deletes the object present at the path from the cache and the underlying storage. Parameters path (str) – the path to the object relative to the root of the provider. Raises KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str) If item is in cache_storage, retrieves from there and returns. If item isn't in cache_storage, retrieves from next storage, stores in cache_storage (if possible) and returns. Parameters path (str) – The path relative to the root of the underlying storage. Raises KeyError – if an object is not found at the path. Returns The bytes of the object present at the path. Return type bytes __getstate__() → Dict[str, Any] Returns the state of the cache, for pickling. __init__(cache_storage: StorageProvider, next_storage: Optional[StorageProvider], cache_size: int) Initializes the LRUCache. It can be chained with other LRUCache objects to create multilayer caches. Parameters cache_storage (StorageProvider) – The storage being used as the caching layer of the cache. This should be a base provider such as MemoryProvider, LocalProvider or S3Provider but not another LRUCache. next_storage (StorageProvider) – The next storage layer of the cache. This can either be a base provider (i.e. it is the final storage) or another LRUCache (i.e. in case of chained cache). While reading data, all misses from cache would be retrieved from here.
While writing data, the data will be written to the next_storage when cache_storage is full or flush is called. cache_size (int) – The total space that can be used from the cache_storage in bytes. This number may be less than the actual space available on the cache_storage. Setting it to a higher value than actually available space may lead to unexpected behaviors. __iter__() Generator function that iterates over the keys of the cache and the underlying storage. Yields str – the path of the object that it is iterating over, relative to the root of the provider. __len__() Returns the number of files present in the cache and the underlying storage. Returns the number of files present inside the root. Return type int __setitem__(path: str, value: Union[bytes, DeepLakeMemoryObject]) Puts the item in the cache_storage (if possible), else writes to next_storage. Parameters path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises ReadOnlyError – If the provider is in read-only mode. __setstate__(state: Dict[str, Any]) Recreates a cache with the same configuration as the state. Parameters state (dict) – The state to be used to recreate the cache. Note While restoring the cache, we reset its contents",activeloop deeplake.core.index,https://docs.deeplake.ai/en/latest/deeplake.core.index.html#deeplake-core-index,"deeplake.core.index.slice_length(s: slice, parent_length: int) → int Returns the length of a slice given the length of its parent. ",activeloop deeplake.core.index,https://docs.deeplake.ai/en/latest/deeplake.core.index.html#deeplake-core-index,"__str__() Return str(self) . __weakref__ list of weak references to the object (if defined) apply(samples: List[ndarray]) Applies an Index to a list of ndarray samples with the same number of entries as the first entry in the Index. apply_squeeze(samples: List[ndarray]) Applies the primary axis of an Index to a list of ndarray samples. Will either return the list as given, or return the first sample. compose_at(item: Union[int, slice, Tuple[int, ...]], i: Optional[int] = None) Returns a new Index representing the addition of an IndexValue, or the composition with a given axis. Examples >>> Index([slice(None), slice(None)]).compose_at(5) Index([slice(None), slice(None), 5]) Copy to clipboard >>> Index([slice(None), slice(5, 10), slice(None)]).compose_at(3, 1) Index([slice(None), 8, slice(None)]) Copy to clipboard Parameters item (IndexValue) – The value to append or compose with the Index. i (int, optional) – The axis to compose with the given item. Defaults to None, meaning that the item will be appended instead. Returns The result of the addition or composition. Return type Index downsample(factor: int, shape: Tuple[int, ...]) Downsamples an Index by the given factor. Parameters factor (int) – The factor to downsample by. shape (Tuple[int, ...]) – The shape of the downsampled data. Returns The downsampled Index. Return type Index find_axis(offset: int = 0) Returns the index for the nth subscriptable axis in the values of an Index. Parameters offset (int) – The number of subscriptable axes to skip before returning. Defaults to 0, meaning that the first valid axis is returned. Returns The index of the found axis, or None if no match is found. Return type int is_trivial() Checks if an Index is equivalent to the trivial slice [:], aka slice(None). length(parent_length: int) Returns the primary length of an Index given the length of the parent it is indexing. 
See: IndexEntry.length() validate(parent_length) Checks that the index is not accessing values outside the range of the parent. deeplake.core.index.merge_slices(existing_slice: slice, new_slice: slice) → slice Compose two slice objects. Given an iterable x, the following should be equivalent: x[existing_slice][new_slice] == x[merge_slices(existing_slice, new_slice)] Parameters existing_slice (slice) – The existing slice to be restricted. new_slice (slice) – The new slice to be applied to the existing slice. Returns the composition of the given slices Return type slice Raises NotImplementedError – Composing slices with negative values is not supported. Negative indexing for slices is only supported for the first slice. deeplake.core.index.slice_at_int(s: slice, i: int) Returns the i-th element of a slice s. Examples >>> slice_at_int(slice(None), 10) 10 >>> slice_at_int(slice(10, 20, 2), 3) 16 Parameters s (slice) – The slice to index into. i (int) – The integer offset into the slice. Returns The index corresponding to the offset into the slice. Return type int Raises NotImplementedError – Nontrivial slices should not be indexed with negative integers. IndexError – If step is negative and start is not greater than stop",activeloop deeplake.core.index,https://docs.deeplake.ai/en/latest/deeplake.core.index.html#deeplake-core-index," class deeplake.core.index.IndexEntry(value: Union[int, slice, Tuple[int, ...]] = slice(None, None, None)) __getitem__(item: Union[int, slice, Tuple[int, ...]]) Combines the given item and this IndexEntry. Returns a new IndexEntry representing the composition of the two. Examples >>> IndexEntry()[0:100] IndexEntry(slice(0, 100, None)) >>> IndexEntry()[100:200][5] IndexEntry(105) >>> IndexEntry()[(0, 1, 2, 3)] IndexEntry((0, 1, 2, 3)) >>> IndexEntry()[1, 2, 3] IndexEntry((1, 2, 3)) Parameters item – The desired sub-index to be composed with this IndexEntry. Can be an int, a slice, or a tuple of ints. Returns The new IndexEntry object. Return type IndexEntry Raises TypeError – An integer IndexEntry should not be indexed further. __init__(value: Union[int, slice, Tuple[int, ...]] = slice(None, None, None)) __str__() Return str(self). __weakref__ list of weak references to the object (if defined) downsample(factor: int, length: int) Downsamples an IndexEntry by a given factor. Parameters factor (int) – The factor by which to downsample. length (int) – The length of the downsampled IndexEntry. Returns The downsampled IndexEntry. Return type IndexEntry Raises TypeError – If the IndexEntry cannot be downsampled. indices(length: int) Generates the sequence of integer indices for a target of a given length. is_trivial() Checks if an IndexEntry represents the entire slice. length(parent_length: int) → int Returns the length of an IndexEntry given the length of the parent it is indexing. Examples >>> IndexEntry(slice(5, 10)).length(100) 5 >>> len(list(range(100))[5:10]) 5 >>> IndexEntry(slice(5, 100)).length(50) 45 >>> len(list(range(50))[5:100]) 45 >>> IndexEntry(0).length(10) 1 Parameters parent_length (int) – The length of the target that this IndexEntry is indexing. Returns The length of the index if it were applied to a parent of the given length. Return type int subscriptable() Returns whether an IndexEntry can be further subscripted. validate(parent_length: int) Checks that the index is not accessing values outside the range of the parent.
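A short illustration of the slice-composition contract documented above (x[existing_slice][new_slice] == x[merge_slices(existing_slice, new_slice)]) and of slice_at_int. This is a minimal sketch that assumes only NumPy and the deeplake.core.index module shown here; the example slices are arbitrary.

import numpy as np
from deeplake.core.index import merge_slices, slice_at_int

x = np.arange(100)
a, b = slice(10, 60), slice(5, 30, 2)

# Documented equivalence: indexing twice equals indexing once with the merged slice
assert np.array_equal(x[a][b], x[merge_slices(a, b)])

# slice_at_int returns the i-th element of a slice without materializing the range
assert slice_at_int(slice(10, 20, 2), 3) == 16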
class deeplake.core.index.Index(item: Union[int, slice, Tuple[int, ...], Index, List[IndexEntry]] = slice(None, None, None)) __getitem__(item: Union[int, slice, List[int], Tuple[Union[int, slice, Tuple[int, ...]]], Index]) Returns a new Index representing a subscripting with the given item. Modeled after NumPy's advanced integer indexing. See: https://numpy.org/doc/stable/reference/arrays.indexing.html Examples >>> Index([5, slice(None)])[5] Index([5, 5]) Copy to clipboard >>> Index([5])[5:6] Index([5, slice(5, 6)]) Copy to clipboard >>> Index()[0, 1, 2:5, 3] Index([0, 1, slice(2, 5), 3]) Copy to clipboard >>> Index([slice(5, 6)])[(0, 1, 2:5, 3),] Index([(5, 1, slice(2, 5), 3)]) Copy to clipboard Parameters item – The contents of the subscript expression to add to this Index. Returns The Index representing the result of the subscript operation. Return type Index Raises TypeError – Given item should be another Index, or compatible with NumPy's advanced integer indexing. __init__(item: Union[int, slice, Tuple[int, ...], Index, List[IndexEntry]] = slice(None, None, None)) Initializes an Index from an IndexValue, another Index, or the values from another Index. Represents a list of IndexEntry objects corresponding to indexes into each axis of an ndarray. __repr__() Return repr(self)",activeloop How Shuffling Works in Deep Lake's PyTorch DataLoader,https://docs.activeloop.ai/how-it-works/shuffling-in-ds.pytorch#how-shuffling-works-in-deep-lakes-pytorch-dataloader,"The Deep Lake shuffling algorithm is based upon a shuffle buffer that preloads a specified amount of data (in MB) determined by the buffer_size parameter in ds.pytorch(buffer_size = 2048). First, the dataloader randomly selects chunks from the applicable tensors until the shuffle buffer is full. Next, the indices in shuffle buffer are randomly sampled to construct the batches that are returned by the dataloader. As the data in the shuffle buffer is consumed, new chunks are randomly selected and added to the buffer.In the OSS dataloader, the shuffle buffer contains the decompressed, decoded, and transformed samples. When using the PyTorch dataloaders, this corresponds to torch tensors. In the Performant dataloader, the shuffle buffer contains the non-decompressed data in the format they are stored in. For images, this typically corresponds to compressed bytes in jpeg, png, or other compressions. Since compressed data is stored more efficiently than uncompressed data, there are typically more distinct samples of data in the Performant dataloader shuffle buffer compared to the OSS shuffle buffer. If many chunks in the buffer contain data from the same class, which may occur if data was uploaded in non-random order, the shuffle buffer may contain fewer unique classes than if the samples were chosen fully randomly based on index. The most extreme case of reduced randomness occurs when datasets are much larger than the shuffle buffer, when they have many classes, and when those classes occur in sequence within the dataset indices. One example dataset is Unshuffled ImageNet, which has 1000 classes, 1.2M images, 140GB of data, and approximately 140 images per 16MB chunk. When the images are uploaded in sequence, the plot below shows how many unique classes are returned by the loader vs the number of images that have been returned in total. It is evident that fully randomly sampling returns more unique values than the Deep Lake dataloader. 
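To connect the buffer_size description above to code, here is an illustrative sketch. The buffer_size parameter (in MB) is the one discussed in this passage; the dataset path, batch_size, and shuffle arguments are assumptions added for completeness.

import deeplake

ds = deeplake.load("hub://activeloop/mnist-train")   # assumed example dataset path

dataloader = ds.pytorch(
    batch_size=32,        # assumed argument, not part of the passage
    shuffle=True,         # enables the shuffle-buffer behaviour described above
    buffer_size=2048,     # shuffle buffer size in MB, as in ds.pytorch(buffer_size=2048)
)

batch = next(iter(dataloader))   # batches hold decompressed, transformed torch tensors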
If reduced randomness has an impact on model performance in your workflows, the recommended countermeasures are: Store the dataset in a shuffled fashion such that the data does not appear in order by class. This completely mitigates the randomness concerns at the output of the data loader. Store the dataset with a smaller chunk size. This increases randomness because the shuffle buffer selects more discrete chunks before filling up. The current default size is 8MB, and reducing chunk size to 4MB significantly increases randomness (see plot above) with only a modest slowdown in data transfer speed. Increase the size of the shuffle buffer. This mitigates the randomness concerns but may not completely alleviate them.",activeloop deeplake.api.dataset,https://docs.deeplake.ai/en/latest/deeplake.api.dataset.html#deeplake-api-dataset,"static ingest_dataframe(src, dest: Union[str, pathlib.Path], column_params: Optional[Dict] = None, src_creds: Optional[Dict] = None, dest_creds: Optional[Dict] = None, creds_key: Optional[Dict] = None, progressbar: bool = True, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) See deeplake.ingest_dataframe(). static list(workspace: str = '', token: Optional[str] = None) → None See deeplake.list().",activeloop deeplake.api.dataset,https://docs.deeplake.ai/en/latest/deeplake.api.dataset.html#deeplake-api-dataset," class deeplake.api.dataset.dataset static exists(path: Union[str, pathlib.Path], creds: Optional[dict] = None, token: Optional[str] = None) → bool See deeplake.exists(). static empty(path: Union[str, pathlib.Path], overwrite: bool = False, public: bool = False, memory_cache_size: int = DEFAULT_MEMORY_CACHE_SIZE, local_cache_size: int = DEFAULT_LOCAL_CACHE_SIZE, creds: Optional[dict] = None, token: Optional[str] = None) → Dataset See deeplake.empty(). static load(path: Union[str, pathlib.Path], read_only: Optional[bool] = None, memory_cache_size: int = DEFAULT_MEMORY_CACHE_SIZE, local_cache_size: int = DEFAULT_LOCAL_CACHE_SIZE, creds: Optional[dict] = None, token: Optional[str] = None, verbose: bool = True, access_method: str = 'stream') → Dataset See deeplake.load(). static rename(old_path: Union[str, pathlib.Path], new_path: Union[str, pathlib.Path], creds: Optional[dict] = None, token: Optional[str] = None) → Dataset See deeplake.rename(). static delete(path: Union[str, pathlib.Path], force: bool = False, large_ok: bool = False, creds: Optional[dict] = None, token: Optional[str] = None, verbose: bool = False) → None See deeplake.delete(). static like(dest: Union[str, pathlib.Path], src: Union[str, Dataset, pathlib.Path], tensors: Optional[List[str]] = None, overwrite: bool = False, creds: Optional[dict] = None, token: Optional[str] = None, public: bool = False) → Dataset See deeplake.like(). static copy(src: Union[str, pathlib.Path, Dataset], dest: Union[str, pathlib.Path], tensors: Optional[List[str]] = None, overwrite: bool = False, src_creds=None, dest_creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True) See deeplake.copy(). static deepcopy(src: Union[str, pathlib.Path], dest: Union[str, pathlib.Path], tensors: Optional[List[str]] = None, overwrite: bool = False, src_creds=None, dest_creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False, verbose: bool = True) See deeplake.deepcopy().
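The static methods above mirror the module-level functions (deeplake.exists, deeplake.empty, deeplake.load, deeplake.delete, and so on). A minimal sketch of the common open-or-create pattern; the local path is an assumption, and the delete call is shown commented out because it is irreversible.

import deeplake

path = "./quickstart_ds"                        # illustrative local path

if deeplake.exists(path):
    ds = deeplake.load(path, read_only=True)    # open an existing dataset
else:
    ds = deeplake.empty(path)                   # create a new, empty dataset

# deeplake.delete(path, force=True)             # would remove the dataset permanently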
static connect(src_path: str, creds_key: str, dest_path: Optional[str], org_id: Optional[str], ds_name: Optional[str], token: Optional[str]) See deeplake.connect(). static ingest_classification(src: Union[str, pathlib.Path], dest: Union[str, pathlib.Path], image_params: Optional[Dict] = None, label_params: Optional[Dict] = None, dest_creds: Optional[Dict] = None, progressbar: bool = True, summary: bool = True, num_workers: int = 0, shuffle: bool = False, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) → Dataset See deeplake.ingest_classification(). static ingest_coco(images_directory: Union[str, pathlib.Path], annotation_files: Union[str, pathlib.Path, List[str]], dest: Union[str, pathlib.Path], key_to_tensor_mapping: Optional[Dict] = None, file_to_group_mapping: Optional[Dict] = None, ignore_one_group: bool = False, ignore_keys: Optional[List[str]] = None, image_settings: Optional[Dict] = None, src_creds: Optional[Dict] = None, dest_creds: Optional[Dict] = None, inspect_limit: int = 1000000, progressbar: bool = True, shuffle: bool = False, num_workers: int = 0, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) → Dataset See deeplake.ingest_coco(). static ingest_yolo(data_directory: Union[str, pathlib.Path], dest: Union[str, pathlib.Path], class_names_file: Optional[Union[str, pathlib.Path]] = None, annotations_directory: Optional[Union[str, pathlib.Path]] = None, allow_no_annotation: bool = False, image_params: Optional[Dict] = None, label_params: Optional[Dict] = None, coordinates_params: Optional[Dict] = None, src_creds: Optional[Dict] = None, dest_creds: Optional[Dict] = None, image_creds_key: Optional[str] = None, inspect_limit: int = 1000, progressbar: bool = True, shuffle: bool = False, num_workers: int = 0, token: Optional[str] = None, connect_kwargs: Optional[Dict] = None, **dataset_kwargs) → Dataset See deeplake.ingest_yolo(). static ingest_kaggle(tag: str, src: Union[str, pathlib.Path], dest: Union[str, pathlib.Path], exist_ok: bool = False, images_compression: str = 'auto', dest_creds: dict = None, kaggle_credentials: dict = None, progressbar: bool = True, summary: bool = True, **dataset_kwargs) → Dataset See deeplake.ingest_kaggle().",activeloop deeplake.api.info,https://docs.deeplake.ai/en/latest/deeplake.api.info.html#module-deeplake.api.info,"class deeplake.api.info.Info Contains optional key-value pairs that can be stored for datasets/tensors. clear() Clear info. get(key, default=None) Get value for key from info. items() Return all items in info. keys() Return all keys in info. property nbytes Returns size of info stored in bytes. pop(key, default=None) Pop item from info by key. popitem() Pop item from info. replace_with(d) Replace info with another dictionary. setdefault(key, default=None) Set default value for a key in info. update(*args, **kwargs) Update info.
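The Info object above behaves like a small metadata dictionary attached to a dataset or tensor. A hedged sketch follows; the dataset path and the metadata keys are assumptions, and Dataset.info itself is documented later in the Dataset Properties listing.

import deeplake

ds = deeplake.empty("./info_demo", overwrite=True)   # illustrative path

ds.info.update(source="internal", license="CC-BY-4.0")   # Update info
print(ds.info.keys())            # Return all keys in info
print(ds.info.get("source"))     # Get value for key from info
print(ds.info.nbytes)            # size of info stored in bytes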
values() Return all values in info.",activeloop deeplake.api.link,https://docs.deeplake.ai/en/latest/deeplake.api.link.html#deeplake-api-link,"deeplake.api.link.link(path: str, creds_key: Optional[str] = None) → LinkedSample See deeplake.link().",activeloop deeplake.api.read,https://docs.deeplake.ai/en/latest/deeplake.api.read.html#deeplake-api-read,"deeplake.api.read.read(path: str, verify: bool = False, creds: Optional[Dict] = None, compression: Optional[str] = None, storage: Optional[StorageProvider] = None) → Sample See deeplake.read().",activeloop deeplake.api.tiled,https://docs.deeplake.ai/en/latest/deeplake.api.tiled.html#deeplake-api-tiled,"deeplake.api.tiled.tiled(sample_shape: Tuple[int, ...], tile_shape: Optional[Tuple[int, ...]] = None, dtype: Union[str, np.dtype] = np.dtype('uint8')) See deeplake.tiled().",activeloop deeplake.api.link_tiled,https://docs.deeplake.ai/en/latest/deeplake.api.link_tiled.html#deeplake-api-link-tiled,"deeplake.api.link_tiled.link_tiled(path_array: np.ndarray, creds_key: Optional[str] = None) → LinkedTiledSample: See deeplake.link_tiled().",activeloop deeplake.auto.structured,https://docs.deeplake.ai/en/latest/deeplake.auto.structured.html#deeplake-auto-structured,deeplake.auto.structured.base StructuredDataset deeplake.auto.structured.dataframe DataFrame,activeloop deeplake.auto.unstructured,https://docs.deeplake.ai/en/latest/deeplake.auto.unstructured.html#deeplake-auto-unstructured,deeplake.auto.unstructured.base UnstructuredDataset deeplake.auto.unstructured.image_classification ImageClassification deeplake.auto.unstructured.kaggle download_kaggle_dataset(),activeloop deeplake.auto.structured.base,https://docs.deeplake.ai/en/latest/deeplake.auto.structured.base.html#deeplake-auto-structured-base,class deeplake.auto.structured.base.StructuredDataset(source) Initializes a structured dataset. Parameters source (str) – The local path to folder or file containing a structured dataset and of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset.,activeloop deeplake.auto.structured.dataframe,https://docs.deeplake.ai/en/latest/deeplake.auto.structured.dataframe.html#deeplake-auto-structured-dataframe,"class deeplake.auto.structured.dataframe.DataFrame(source, column_params=None, creds=None, creds_key=None) fill_dataset(ds: Dataset, progressbar: bool = True) → Dataset Fill dataset with data from the dataframe - one tensor per column Parameters ds (Dataset) – A Deep Lake dataset object. progressbar (bool) – Defines if the method uses a progress bar. Defaults to True. Returns A Deep Lake dataset.",activeloop deeplake.auto.unstructured.base,https://docs.deeplake.ai/en/latest/deeplake.auto.unstructured.base.html#deeplake-auto-unstructured-base,class deeplake.auto.unstructured.base.UnstructuredDataset(source: str),activeloop deeplake.auto.unstructured.image_classification,https://docs.deeplake.ai/en/latest/deeplake.auto.unstructured.image-classification.html#deeplake-auto-unstructured-image-classification,"class deeplake.auto.unstructured.image_classification.ImageClassification(source: str) structure(ds: Dataset, progressbar: bool = True, generate_summary: bool = True, shuffle: bool = True, image_tensor_args: dict = {}, label_tensor_args: dict = {}, num_workers: int = 0) → Dataset Create a structured dataset. Parameters ds (Dataset) – A Deep Lake dataset object. progressbar (bool) – Defines if the method uses a progress bar. Defaults to True. generate_summary (bool) – Defines if the method generates ingestion summary. Defaults to True. 
shuffle (bool) – Defines if the file paths should be shuffled prior to ingestion. Defaults to True. image_tensor_args (dict) – Defines the parameters for the images tensor. label_tensor_args (dict) – Defines the parameters for the class_labels tensor. num_workers (int) – The number of workers passed to compute. Returns A Deep Lake dataset.",activeloop deeplake.auto.unstructured.kaggle,https://docs.deeplake.ai/en/latest/deeplake.auto.unstructured.kaggle.html#deeplake-auto-unstructured-kaggle,"deeplake.auto.unstructured.kaggle.download_kaggle_dataset(tag: str, local_path: str, kaggle_credentials: Optional[dict] = None, exist_ok: bool = False) Calls the Kaggle API (https://www.kaggle.com/docs/api) to download a kaggle dataset and unzip its contents. Parameters tag (str) – Kaggle dataset tag. Example: 'coloradokb/dandelionimages' points to https://www.kaggle.com/coloradokb/dandelionimages local_path (str) – Path where the kaggle dataset will be downloaded and unzipped. Only local path downloading is supported. kaggle_credentials (dict) – Credentials are gathered from the environment variables or ~/kaggle.json. If those don't exist, the kaggle_credentials argument will be used. exist_ok (bool) – If the kaggle dataset was already downloaded, and exist_ok is True, no error is thrown. Raises KaggleMissingCredentialsError – If no kaggle credentials are found. KaggleDatasetAlreadyDownloadedError – If the dataset tag already exists in local_path.",activeloop deeplake.util.shape_interval,https://docs.deeplake.ai/en/latest/deeplake.util.shape-interval.html#deeplake-util-shape-interval,"class deeplake.util.shape_interval.ShapeInterval __init__(lower: Sequence[int], upper: Optional[Sequence[int]] = None) Shapes in Deep Lake are best represented as intervals; this is to support dynamic tensors. Instead of having a single tuple of integers representing shape, we use 2 tuples of integers to represent the lower and upper bounds of the represented shape. If lower == upper for all cases, the shape is considered 'fixed'. If lower != upper for any cases, the shape is considered 'dynamic'. Parameters lower (sequence) – Sequence of integers that represent the lower-bound shape. upper (sequence) – Sequence of integers that represent the upper-bound shape. If None is provided, lower is used as upper (implicitly fixed-shape). Raises InvalidShapeIntervalError – If the provided lower/upper bounds are incompatible to represent a shape.",activeloop deeplake.util.remove_cache,https://docs.deeplake.ai/en/latest/deeplake.util.remove-cache.html#deeplake-util-remove-cache,"deeplake.util.remove_cache.remove_memory_cache(storage: StorageProvider) Removes the memory cache. deeplake.util.remove_cache.get_base_storage(storage: StorageProvider) Removes all layers of caching and returns the underlying storage. deeplake.util.remove_cache.get_dataset_with_zero_size_cache(ds) Returns a dataset with same storage but cache size set to zero. deeplake.util.remove_cache.create_read_copy_dataset(dataset, commit_id: Optional[str] = None) Creates a read-only copy of the given dataset object, without copying underlying data. Parameters dataset – The Dataset object to copy. commit_id – The commit id to checkout the new read-only copy to. Returns A new Dataset object in read-only mode.",activeloop deeplake.util.notebook,https://docs.deeplake.ai/en/latest/deeplake.util.notebook.html#deeplake-util-notebook,deeplake.util.notebook.is_notebook() Whether running in a notebook. deeplake.util.notebook.is_jupyter() Whether running in a Jupyter notebook.
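The ShapeInterval utility documented above can be constructed directly. A minimal sketch using only the signature shown here; the example shapes are assumptions.

from deeplake.util.shape_interval import ShapeInterval

# Fixed shape: upper defaults to lower when omitted
fixed = ShapeInterval(lower=(100, 100, 3))

# Dynamic shape: lower and upper bounds differ per dimension
dynamic = ShapeInterval(lower=(50, 50, 3), upper=(200, 200, 3))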
deeplake.util.notebook.is_colab() Whether running in a colab notebook.,activeloop deeplake.util.exceptions,https://docs.deeplake.ai/en/latest/deeplake.util.exceptions.html#deeplake-util-exceptions," class deeplake.util.exceptions.ExternalCommandError(command: str, status: int) Bases: Exception class deeplake.util.exceptions.KaggleError Bases: Exception class deeplake.util.exceptions.KaggleMissingCredentialsError(env_var_name: str) Bases: KaggleError class deeplake.util.exceptions.KaggleDatasetAlreadyDownloadedError(tag: str, path: str) Bases: KaggleError class deeplake.util.exceptions.InvalidPathException(directory) Bases: Exception class deeplake.util.exceptions.AutoCompressionError(directory) Bases: Exception class deeplake.util.exceptions.InvalidFileExtension(directory) Bases: Exception class deeplake.util.exceptions.SamePathException(directory) Bases: Exception class deeplake.util.exceptions.TensorInvalidSampleShapeError(shape: Sequence[int], expected_dims: int) Bases: Exception class deeplake.util.exceptions.TensorMetaMissingKey(key: str, meta: dict) Bases: Exception class deeplake.util.exceptions.TensorDoesNotExistError(tensor_name: str) Bases: KeyError, AttributeError class deeplake.util.exceptions.TensorAlreadyExistsError(key: str) Bases: Exception class deeplake.util.exceptions.TensorGroupDoesNotExistError(group_name: str) Bases: KeyError class deeplake.util.exceptions.TensorGroupAlreadyExistsError(key: str) Bases: Exception class deeplake.util.exceptions.InvalidTensorNameError(name: str) Bases: Exception class deeplake.util.exceptions.InvalidTensorGroupNameError(name: str) Bases: Exception class deeplake.util.exceptions.DynamicTensorNumpyError(key: str, index, property_key: str) Bases: Exception class deeplake.util.exceptions.InvalidShapeIntervalError(message: str, lower: Optional[Sequence[int]] = None, upper: Optional[Sequence[int]] = None) Bases: Exception class deeplake.util.exceptions.InvalidKeyTypeError(item: Any) Bases: TypeError class deeplake.util.exceptions.UnsupportedTensorTypeError(item: Any) Bases: TypeError class deeplake.util.exceptions.InvalidBytesRequestedError Bases: Exception class deeplake.util.exceptions.ProviderListEmptyError Bases: Exception class deeplake.util.exceptions.DirectoryAtPathException Bases: Exception class deeplake.util.exceptions.FileAtPathException(path) Bases: Exception class deeplake.util.exceptions.ProviderSizeListMismatch Bases: Exception class deeplake.util.exceptions.ModuleNotInstalledException(message) Bases: Exception class deeplake.util.exceptions.LoginException(message='Error while logging in, invalid auth token. Please try logging in again.') Bases: Exception class deeplake.util.exceptions.UserNotLoggedInException Bases: Exception class deeplake.util.exceptions.InvalidHubPathException(path) Bases: Exception class deeplake.util.exceptions.PathNotEmptyException(use_hub=True) Bases: Exception class deeplake.util.exceptions.AuthenticationException(message='Authentication failed. 
Please try logging in again.') Bases: Exception class deeplake.util.exceptions.AuthorizationException(message='You are not authorized to access this resource on Activeloop Server.', response=None) Bases: Exception class deeplake.util.exceptions.InvalidPasswordException(message='The password you provided was invalid.') Bases: AuthorizationException class deeplake.util.exceptions.CouldNotCreateNewDatasetException(path: str) Bases: AuthorizationException class deeplake.util.exceptions.ResourceNotFoundException(message='The resource you are looking for was not found",activeloop deeplake.util.exceptions,https://docs.deeplake.ai/en/latest/deeplake.util.exceptions.html#deeplake-util-exceptions,"Check if the name or id is correct.') Bases: Exception class deeplake.util.exceptions.BadRequestException(message) Bases: Exception class deeplake.util.exceptions.OverLimitException(message='You are over the allowed limits for this operation.') Bases: Exception class deeplake.util.exceptions.ServerException(message='Internal Activeloop server error.') Bases: Exception class deeplake.util.exceptions.BadGatewayException(message='Invalid response from Activeloop server.') Bases: Exception class deeplake.util.exceptions.GatewayTimeoutException(message='Activeloop server took too long to respond.') Bases: Exception class deeplake.util.exceptions.WaitTimeoutException(message='Timeout waiting for server state update.') Bases: Exception class deeplake.util.exceptions.LockedException(message='The resource is currently locked.') Bases: Exception class deeplake.util.exceptions.UnexpectedStatusCodeException(message) Bases: Exception class deeplake.util.exceptions.EmptyTokenException(message='The authentication token is empty.') Bases: Exception class deeplake.util.exceptions.S3Error Bases: Exception Catchall for all errors encountered while working with S3 class deeplake.util.exceptions.S3GetError Bases: S3Error Catchall for all errors encountered while working getting an object from S3 class deeplake.util.exceptions.S3SetError Bases: S3Error Catchall for all errors encountered while working setting an object in S3 class deeplake.util.exceptions.S3DeletionError Bases: S3Error Catchall for all errors encountered while working deleting an object in S3 class deeplake.util.exceptions.S3ListError Bases: S3Error Catchall for all errors encountered while retrieving a list of objects present in S3 class deeplake.util.exceptions.UnsupportedCompressionError(compression: Optional[str], htype: Optional[str] = None) Bases: CompressionError class deeplake.util.exceptions.SampleCompressionError(sample_shape: Tuple[int, ...], compression_format: Optional[str], message: str) Bases: CompressionError class deeplake.util.exceptions.SampleDecompressionError(path: Optional[str] = None) Bases: CompressionError class deeplake.util.exceptions.InvalidImageDimensions(actual_dims, expected_dims) Bases: Exception class deeplake.util.exceptions.TensorUnsupportedSampleType Bases: Exception class deeplake.util.exceptions.MetaError Bases: Exception class deeplake.util.exceptions.MetaDoesNotExistError(key: str) Bases: MetaError class deeplake.util.exceptions.MetaAlreadyExistsError(key: str, required_meta: dict) Bases: MetaError class deeplake.util.exceptions.MetaInvalidKey(name: str, available_keys: List[str]) Bases: MetaError class deeplake.util.exceptions.MetaInvalidRequiredMetaKey(key: str, subclass_name: str) Bases: MetaError class deeplake.util.exceptions.TensorMetaInvalidHtype(htype: str, available_htypes: Sequence[str]) Bases: MetaError class 
deeplake.util.exceptions.TensorMetaInvalidHtypeOverwriteValue(key: str, value: Any, explanation: str = '') Bases: MetaError class deeplake.util.exceptions.TensorMetaMissingRequiredValue(htype: str, key: Union[str, List[str]]) Bases: MetaError class deeplake.util.exceptions.TensorMetaInvalidHtypeOverwriteKey(htype: str, key: str, available_keys: Sequence[str]) Bases: MetaError class deeplake.util.exceptions.TensorDtypeMismatchError(expected: Union[dtype, str], actual: str, htype: str) Bases: MetaError class deeplake.util.exceptions.InvalidTensorLinkError(msg='Invalid tensor link.') Bases: MetaError class deeplake.util.exceptions.TensorMetaMutuallyExclusiveKeysError(keys: Optional[List[str]] = None, custom_message: Optional[str] = None) Bases: MetaError class deeplake.util.exceptions.ReadOnlyModeError(custom_message: Optional[str] = None) Bases: Exception class deeplake.util.exceptions.TransformError(index=None, sample=None, samples_processed=0, suggest=False) Bases: Exception class deeplake.util.exceptions.FilterError Bases: Exception class deeplake.util.exceptions.InvalidInputDataError(operation) Bases: TransformError class deeplake.util.exceptions.UnsupportedSchedulerError(scheduler) Bases: TransformError class deeplake.util.exceptions.TensorMismatchError(tensors, output_keys, skip_ok=False) Bases: TransformError class deeplake.util.exceptions.InvalidOutputDatasetError(message='The output Dataset to transform should not be `read_only`.') Bases: TransformError class deeplake.util.exceptions.InvalidTransformDataset(message='The TransformDataset (2nd argument to transform function) of one of the functions is invalid ",activeloop deeplake.util.exceptions,https://docs.deeplake.ai/en/latest/deeplake.util.exceptions.html#deeplake-util-exceptions,"All the tensors should have equal length for it to be valid.') Bases: TransformError class deeplake.util.exceptions.HubComposeEmptyListError(message='Cannot deeplake.compose an empty list.') Bases: TransformError class deeplake.util.exceptions.HubComposeIncompatibleFunction(index: int) Bases: TransformError class deeplake.util.exceptions.DatasetUnsupportedPytorch(reason) Bases: Exception class deeplake.util.exceptions.CorruptedMetaError Bases: Exception class deeplake.util.exceptions.ChunkEngineError Bases: Exception class deeplake.util.exceptions.FullChunkError Bases: ChunkEngineError class deeplake.util.exceptions.ChunkIdEncoderError Bases: ChunkEngineError class deeplake.util.exceptions.ChunkSizeTooSmallError(message='If the size of the last chunk is given, it must be smaller than the requested chunk size.') Bases: ChunkEngineError class deeplake.util.exceptions.DatasetHandlerError(message) Bases: Exception class deeplake.util.exceptions.MemoryDatasetCanNotBePickledError Bases: Exception class deeplake.util.exceptions.CorruptedSampleError(compression, path: Optional[str] = None) Bases: Exception class deeplake.util.exceptions.VersionControlError Bases: Exception class deeplake.util.exceptions.MergeError Bases: Exception class deeplake.util.exceptions.MergeNotSupportedError Bases: MergeError class deeplake.util.exceptions.MergeMismatchError(tensor_name, mismatch_type, original_value, target_value) Bases: MergeError class deeplake.util.exceptions.MergeConflictError(conflict_tensors=None, message='') Bases: MergeError class deeplake.util.exceptions.CheckoutError Bases: VersionControlError class deeplake.util.exceptions.CommitError Bases: VersionControlError class deeplake.util.exceptions.EmptyCommitError Bases: CommitError class 
deeplake.util.exceptions.TensorModifiedError Bases: Exception class deeplake.util.exceptions.GCSDefaultCredsNotFoundError Bases: Exception class deeplake.util.exceptions.InvalidOperationError(method: str, type: str) Bases: Exception class deeplake.util.exceptions.AgreementError Bases: Exception class deeplake.util.exceptions.AgreementNotAcceptedError(agreements=None) Bases: AgreementError class deeplake.util.exceptions.NotLoggedInError(msg=None) Bases: AgreementError class deeplake.util.exceptions.RenameError(msg='Only name of the dataset can be different in new path.') Bases: Exception class deeplake.util.exceptions.BufferError Bases: Exception class deeplake.util.exceptions.InfoError Bases: Exception class deeplake.util.exceptions.OutOfChunkCountError Bases: Exception class deeplake.util.exceptions.OutOfSampleCountError Bases: Exception class deeplake.util.exceptions.SampleHtypeMismatchError(htype, sample_type) Bases: Exception class deeplake.util.exceptions.EmptyTensorError(message) Bases: Exception class deeplake.util.exceptions.DatasetViewSavingError Bases: Exception class deeplake.util.exceptions.ManagedCredentialsNotFoundError(org_id, creds_key) Bases: Exception class deeplake.util.exceptions.UnableToReadFromUrlError(url, status_code) Bases: Exception class deeplake.util.exceptions.InvalidTokenException Bases: Exception class deeplake.util.exceptions.TokenPermissionError(message=None) Bases: Exception ",activeloop Installation¶,https://docs.deeplake.ai/en/v3.1.5/Installation.html#installation,"Deep Lake can be installed with pip Copy to clipboard Deep Lake has the following extras that you can choose to install according to your needs. Installation commands¶ Install command Description Dependencies installed pip install 'deeplake[av]' Audio and video support via PyAV av pip install 'deeplake[visualizer]' Visualize Deep Lake datasets within notebooks. This is required for Dataset.visualize to work. IPython, flask pip install 'deeplake[gcp]' GCS support google-cloud-storage, google-auth, google-auth-oauthlib pip install 'deeplake[dicom]' DICOM data support pydicom pip install 'deeplake[gdrive]' Google Drive support google-api-python-client, oauth2client, google-auth, google-auth-oauthlib pip install 'deeplake[point_cloud]' Support for LiDAR point cloud data laspy pip install 'deeplake[enterprise]' Installs Deep Lake's enterprise features libdeeplake pip install 'deeplake[all]' Installs all of the above",activeloop Datasets¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#datasets," Creating Datasets¶ deeplake.dataset Returns a Dataset object referencing either a new or existing dataset. deeplake.empty Creates an empty dataset deeplake.like Creates a new dataset by copying the source dataset's structure to a new location. deeplake.ingest Ingests a dataset from a source and stores it as a structured dataset to destination. deeplake.ingest_kaggle Download and ingest a kaggle dataset and store it as a structured dataset to destination. deeplake.ingest_dataframe Convert pandas dataframe to a Deep Lake Dataset. deeplake.ingest_huggingface Converts Hugging Face datasets to Deep Lake format. Loading Datasets¶ deeplake.load Loads an existing dataset Deleting and Renaming Datasets¶ deeplake.delete Deletes a dataset at a given path. deeplake.rename Renames dataset at old_path to new_path. Copying Datasets¶ deeplake.copy Copies dataset at src to dest. deeplake.deepcopy Copies dataset at src to dest including version control history. 
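As a minimal sketch of how the dataset creation, loading, copying, and deletion functions above fit together (the local path and tensor names here are hypothetical, and deeplake.load is assumed to raise the DatasetHandlerError listed in the exceptions reference when nothing exists at the given path):
>>> import deeplake
>>> from deeplake.util.exceptions import DatasetHandlerError
>>> ds = deeplake.empty('./quickstart_ds', overwrite=True)    # create an empty local dataset
>>> ds.create_tensor('labels', htype='class_label')           # minimal, hypothetical tensor layout
>>> ds.labels.append(0)
>>> try:
...     missing = deeplake.load('./does_not_exist')           # assumed to raise when the path holds no dataset
... except DatasetHandlerError:
...     pass
>>> deeplake.copy('./quickstart_ds', './quickstart_ds_copy')  # copy the dataset to a new location
>>> deeplake.delete('./quickstart_ds_copy')                   # remove the copy again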
Dataset Operations¶ Dataset.summary Prints a summary of the dataset. Dataset.append Append samples to multiple tensors at once. Dataset.extend Appends multiple rows of samples to multiple tensors at once. Dataset.query Returns a sliced Dataset with given query results. Dataset.copy Copies this dataset or dataset view to dest. Dataset.delete Deletes the entire dataset from the cache layers (if any) and the underlying storage. Dataset.rename Renames the dataset to path. Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset.pop Removes a sample from all the tensors of the dataset. Dataset.rechunk Rewrites the underlying chunks to make their sizes optimal. Dataset.flush Necessary operation after writes if caches are being used. Dataset.clear_cache Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. Dataset.size_approx Estimates the size in bytes of the dataset. Dataset Visualization¶ Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset Credentials¶ Dataset.add_creds_key Adds a new creds key to the dataset. Dataset.populate_creds Populates the creds key added in add_creds_key with the given creds. Dataset.update_creds_key Replaces the old creds key with the new creds key. Dataset.change_creds_management Changes the management status of the creds key. Dataset.get_creds_keys Returns the list of creds keys added to the dataset. Dataset Properties¶ Dataset.tensors All tensors belonging to this group, including those within sub groups. Dataset.groups All sub groups in this group Dataset.num_samples Returns the length of the smallest tensor. Dataset.read_only Returns True if dataset is in read-only mode and False otherwise. Dataset.info Returns the information about the dataset. Dataset.max_len Return the maximum length of the tensor. Dataset.min_len Return the minimum length of the tensor. Dataset Version Control¶ Dataset.commit Stores a snapshot of the current state of the dataset. Dataset.diff Returns/displays the differences between commits/branches. Dataset.checkout Checks out to a specific commit_id or branch. Dataset.merge Merges the target_id into the current dataset. Dataset.log Displays the details of all the past commits. Dataset.reset Resets the uncommitted changes present in the branch. Dataset.get_commit_details Get details of a particular commit",activeloop Deleting and Renaming Datasets¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#deleting-and-renaming-datasets,deeplake.delete Deletes a dataset at a given path. deeplake.rename Renames dataset at old_path to new_path.,activeloop Loading Datasets¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#loading-datasets,deeplake.load Loads an existing dataset,activeloop Copying Datasets¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#copying-datasets,deeplake.copy Copies dataset at src to dest. deeplake.deepcopy Copies dataset at src to dest including version control history.,activeloop Datasets¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#datasets,"Dataset.commit_id The latest committed commit id of the dataset. Dataset.branch The current branch of the dataset Dataset.pending_commit_id The commit_id of the next commit that will be made to the dataset. Dataset.has_head_changes Returns True if currently at head node and uncommitted changes are present. Dataset.commits Lists all the commits leading to the current dataset state. Dataset.branches Lists all the branches of the dataset.
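To make the version control surface above concrete, here is a minimal sketch continuing the hypothetical local dataset from the previous example; the branch name, commit messages, and the assumption that the default branch is called 'main' are illustrative rather than taken from this reference.
>>> first_commit = ds.commit('initial labels')   # snapshot the current state
>>> ds.checkout('experiment', create=True)       # create and switch to a new branch (create=True assumed)
>>> ds.labels.append(1)
>>> ds.commit('append one more label on experiment')
>>> ds.log()                                     # display the details of past commits
>>> ds.diff(first_commit)                        # differences relative to the first commit
>>> ds.checkout('main')                          # switch back to the assumed default branch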
Dataset Views¶ A dataset view is a subset of a dataset that points to specific samples (indices) in an existing dataset. Dataset views can be created by indexing a dataset, filtering a dataset with Dataset.filter(), querying a dataset with Dataset.query() or by sampling a dataset with Dataset.sample_by(). Filtering is done with user-defined functions or simplified expressions whereas query can perform SQL-like queries with our Tensor Query Language. See the full TQL spec here. Dataset views can only be saved when a dataset has been committed and has no changes on the HEAD node, in order to preserve data lineage and prevent the underlying data from changing after the query or filter conditions have been evaluated. Example >>> import deeplake >>> # load dataset >>> ds = deeplake.load('hub://activeloop/mnist-train') >>> # filter dataset >>> zeros = ds.filter('labels == 0') >>> # save view >>> zeros.save_view(id='zeros') >>> # load_view >>> zeros = ds.load_view(id='zeros') >>> len(zeros) 5923 Copy to clipboard Dataset.query Returns a sliced Dataset with given query results. Dataset.sample_by Returns a sliced Dataset with given weighted sampler applied. Dataset.filter Filters the dataset in accordance of filter function f(x: sample) -> bool Dataset.save_view Saves a dataset view as a virtual dataset (VDS) Dataset.get_view Returns the dataset view corresponding to id. Dataset.load_view Loads the view and returns the Dataset by id. Dataset.delete_view Deletes the view with given view id. Dataset.get_views Returns list of views stored in this Dataset. Dataset.is_view Returns True if this dataset is a view and False otherwise. Dataset.min_view Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Dataset.max_view Returns a view of the dataset in which shorter tensors are padded with None s to have the same length as the longest tensor.",activeloop Dataset Visualization¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#dataset-visualization,Dataset.visualize Visualizes the dataset in the Jupyter notebook.,activeloop Creating Datasets¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#creating-datasets,deeplake.dataset Returns a Dataset object referencing either a new or existing dataset. deeplake.empty Creates an empty dataset deeplake.like Creates a new dataset by copying the source dataset's structure to a new location. deeplake.ingest Ingests a dataset from a source and stores it as a structured dataset to destination. deeplake.ingest_kaggle Download and ingest a kaggle dataset and store it as a structured dataset to destination. deeplake.ingest_dataframe Convert pandas dataframe to a Deep Lake Dataset. deeplake.ingest_huggingface Converts Hugging Face datasets to Deep Lake format.,activeloop Dataset Operations¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#dataset-operations,Dataset.summary Prints a summary of the dataset. Dataset.append Append samples to mutliple tensors at once. Dataset.extend Appends multiple rows of samples to mutliple tensors at once. Dataset.query Returns a sliced Dataset with given query results. Dataset.copy Copies this dataset or dataset view to dest. Dataset.delete Deletes the entire dataset from the cache layers (if any) and the underlying storage. Dataset.rename Renames the dataset to path. Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset.pop Removes a sample from all the tensors of the dataset. Dataset.rechunk Rewrites the underlying chunks to make their sizes optimal. 
Dataset.flush Necessary operation after writes if caches are being used. Dataset.clear_cache Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. Dataset.size_approx Estimates the size in bytes of the dataset.,activeloop Dataset Credentials¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#dataset-credentials,Dataset.add_creds_key Adds a new creds key to the dataset. Dataset.populate_creds Populates the creds key added in add_creds_key with the given creds. Dataset.update_creds_key Replaces the old creds key with the new creds key. Dataset.change_creds_management Changes the management status of the creds key. Dataset.get_creds_keys Returns the list of creds keys added to the dataset.,activeloop Dataset Views¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#dataset-views,"A dataset view is a subset of a dataset that points to specific samples (indices) in an existing dataset. Dataset views can be created by indexing a dataset, filtering a dataset with Dataset.filter(), querying a dataset with Dataset.query() or by sampling a dataset with Dataset.sample_by(). Filtering is done with user-defined functions or simplified expressions whereas query can perform SQL-like queries with our Tensor Query Language. See the full TQL spec here.Dataset views can only be saved when a dataset has been committed and has no changes on the HEAD node, in order to preserve data lineage and prevent the underlying data from changing after the query or filter conditions have been evaluated.Example Copy to clipboard Dataset.query Returns a sliced Dataset with given query results. Dataset.sample_by Returns a sliced Dataset with given weighted sampler applied. Dataset.filter Filters the dataset in accordance of filter function f(x: sample) -> bool Dataset.save_view Saves a dataset view as a virtual dataset (VDS) Dataset.get_view Returns the dataset view corresponding to id. Dataset.load_view Loads the view and returns the Dataset by id. Dataset.delete_view Deletes the view with given view id. Dataset.get_views Returns list of views stored in this Dataset. Dataset.is_view Returns True if this dataset is a view and False otherwise. Dataset.min_view Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Dataset.max_view Returns a view of the dataset in which shorter tensors are padded with None s to have the same length as the longest tensor.",activeloop Dataset Properties¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#dataset-properties,"Dataset.tensors All tensors belonging to this group, including those within sub groups. Dataset.groups All sub groups in this group Dataset.num_samples Returns the length of the smallest tensor. Dataset.read_only Returns True if dataset is in read-only mode and False otherwise. Dataset.info Returns the information about the dataset. Dataset.max_len Return the maximum length of the tensor. Dataset.min_len Return the minimum length of the tensor.",activeloop Dataset Version Control¶,https://docs.deeplake.ai/en/v3.1.5/Datasets.html#dataset-version-control,Dataset.commit Stores a snapshot of the current state of the dataset. Dataset.diff Returns/displays the differences between commits/branches. Dataset.checkout Checks out to a specific commit_id or branch. Dataset.merge Merges the target_id into the current dataset. Dataset.log Displays the details of all the past commits. Dataset.reset Resets the uncommitted changes present in the branch. 
Dataset.get_commit_details Get details of a particular commit. Dataset.commit_id The lasted committed commit id of the dataset. Dataset.branch The current branch of the dataset Dataset.pending_commit_id The commit_id of the next commit that will be made to the dataset. Dataset.has_head_changes Returns True if currently at head node and uncommitted changes are present. Dataset.commits Lists all the commits leading to the current dataset state. Dataset.branches Lists all the branches of the dataset.,activeloop Tensors¶,https://docs.deeplake.ai/en/v3.1.5/Tensors.html#tensors,Creating Tensors¶ Dataset.create_tensor Creates a new tensor in the dataset. Dataset.create_group Creates a tensor group. Dataset.create_tensor_like Copies the source tensor's meta information and creates a new tensor with it. Deleting and Renaming Tensors¶ Dataset.delete_tensor Delete a tensor from the dataset. Dataset.delete_group Delete a tensor group from the dataset. Dataset.rename_tensor Renames tensor with name name to new_name Dataset.rename_group Renames group with name name to new_name Adding and deleting samples¶ Tensor.append Appends a single sample to the end of the tensor. Tensor.extend Extends the end of the tensor by appending multiple elements from a sequence. Tensor.pop Removes an element at the given index. Tensor.clear Deletes all samples from the tensor Tensor.__setitem__ Update samples with new values. Retrieving samples¶ Tensor.numpy Computes the contents of the tensor in numpy format. Tensor.data Returns data in the tensor in a format based on the tensor's base htype. Tensor.tobytes Returns the bytes of the tensor. Tensor.text Return text data. Tensor.dict Return json data. Tensor.list Return list data. Tensor._linked_sample Returns the linked sample at the given index. Tensor Properties¶ Tensor.htype Htype of the tensor. Tensor.base_htype Base htype of the tensor. Tensor.dtype Dtype of the tensor. Tensor.shape Get the shape of this tensor. Tensor.shape_interval Returns a ShapeInterval object that describes this tensor's shape more accurately. Tensor.ndim Number of dimensions of the tensor. Tensor.num_samples Returns the length of the primary axis of the tensor. Tensor.__len__ Returns the length of the primary axis of the tensor. Tensor.is_dynamic Will return True if samples in this tensor have shapes that are unequal. Tensor.is_sequence Whether this tensor is a sequence tensor. Tensor.is_link Whether this tensor is a link tensor. Tensor.verify Whether linked data will be verified when samples are added. Info¶ Tensor.info Returns the information about the tensor. Tensor.sample_info Returns info about particular samples in a tensor. Video features¶ Tensor.play Play video sample. Tensor.timestamps Returns timestamps (in seconds) for video sample as numpy array.,activeloop Creating Tensors¶,https://docs.deeplake.ai/en/v3.1.5/Tensors.html#creating-tensors,Dataset.create_tensor Creates a new tensor in the dataset. Dataset.create_group Creates a tensor group. Dataset.create_tensor_like Copies the source tensor's meta information and creates a new tensor with it.,activeloop Retrieving samples¶,https://docs.deeplake.ai/en/v3.1.5/Tensors.html#retrieving-samples,Tensor.numpy Computes the contents of the tensor in numpy format. Tensor.data Returns data in the tensor in a format based on the tensor's base htype. Tensor.tobytes Returns the bytes of the tensor. Tensor.text Return text data. Tensor.dict Return json data. Tensor.list Return list data. 
Tensor._linked_sample Returns the linked sample at the given index.,activeloop Deleting and Renaming Tensors¶,https://docs.deeplake.ai/en/v3.1.5/Tensors.html#deleting-and-renaming-tensors,Dataset.delete_tensor Delete a tensor from the dataset. Dataset.delete_group Delete a tensor group from the dataset. Dataset.rename_tensor Renames tensor with name name to new_name Dataset.rename_group Renames group with name name to new_name,activeloop Adding and deleting samples¶,https://docs.deeplake.ai/en/v3.1.5/Tensors.html#adding-and-deleting-samples,Tensor.append Appends a single sample to the end of the tensor. Tensor.extend Extends the end of the tensor by appending multiple elements from a sequence. Tensor.pop Removes an element at the given index. Tensor.clear Deletes all samples from the tensor Tensor.__setitem__ Update samples with new values.,activeloop Video features¶,https://docs.deeplake.ai/en/v3.1.5/Tensors.html#video-features,Tensor.play Play video sample. Tensor.timestamps Returns timestamps (in seconds) for video sample as numpy array.,activeloop Tensor Properties¶,https://docs.deeplake.ai/en/v3.1.5/Tensors.html#tensor-properties,Tensor.htype Htype of the tensor. Tensor.base_htype Base htype of the tensor. Tensor.dtype Dtype of the tensor. Tensor.shape Get the shape of this tensor. Tensor.shape_interval Returns a ShapeInterval object that describes this tensor's shape more accurately. Tensor.ndim Number of dimensions of the tensor. Tensor.num_samples Returns the length of the primary axis of the tensor. Tensor.__len__ Returns the length of the primary axis of the tensor. Tensor.is_dynamic Will return True if samples in this tensor have shapes that are unequal. Tensor.is_sequence Whether this tensor is a sequence tensor. Tensor.is_link Whether this tensor is a link tensor. Tensor.verify Whether linked data will be verified when samples are added.,activeloop Info¶,https://docs.deeplake.ai/en/v3.1.5/Tensors.html#info,Tensor.info Returns the information about the tensor. Tensor.sample_info Returns info about particular samples in a tensor.,activeloop Compressions¶,https://docs.deeplake.ai/en/v3.1.5/Compressions.html#compressions,"Deep Lake can read, compress, decompress and recompress data to different formats. The supported htype-compression configurations are given below. Sample Type Htype Compressions Image image bmp, dib, gif, ico, jpeg, jpeg2000, pcx, png, ppm, sgi, tga, tiff, webp, wmf, xbm, eps, fli, im, msp, mpo, apng Video video mp4, mkv, avi Audio audio flac, mp3, wav Dicom dicom dcm Point Cloud point_cloud las Mesh mesh ply Other bbox, text, list, json, generic, etc. lz4 Sample Compression¶ If sample compression is specified when creating tensors, samples will be compressed to the given format if possible. If given data is already compressed and matches the provided sample_compression, it will be stored as is. If left as None, given samples are uncompressed. Note For audio and video, we don't support compressing raw frames but only reading compressed audio and video data. Examples: >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Copy to clipboard Structure of sample-wise compressed tensor.¶ Chunk Compression¶ If chunk compression is specified when creating tensors, addded samples will be clubbed together and compressed to the given format chunk-wise. 
If given data is already compressed, it will be uncompressed and then recompressed chunk-wise. Note Chunk-wise compression is not supported for audio, video and point_cloud htypes. Examples: >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') >>> ds.create_tensor('boxes', htype='bbox', chunk_compression='lz4') Copy to clipboard Structure of chunk-wise compressed tensor.¶ Note See deeplake.read() to learn how to read data from files and populate these tensors.",activeloop Sample Compression¶,https://docs.deeplake.ai/en/v3.1.5/Compressions.html#sample-compression,"If sample compression is specified when creating tensors, samples will be compressed to the given format if possible. If given data is already compressed and matches the provided sample_compression, it will be stored as is. If left as None, given samples are uncompressed.NoteFor audio and video, we don't support compressing raw frames but only reading compressed audio and video data.Examples: Copy to clipboard Structure of sample-wise compressed tensor.¶",activeloop Chunk Compression¶,https://docs.deeplake.ai/en/v3.1.5/Compressions.html#chunk-compression,"If chunk compression is specified when creating tensors, addded samples will be clubbed together and compressed to the given format chunk-wise. If given data is already compressed, it will be uncompressed and then recompressed chunk-wise.NoteChunk-wise compression is not supported for audio, video and point_cloud htypes.Examples: Copy to clipboard Structure of chunk-wise compressed tensor.¶ NoteSee deeplake.read() to learn how to read data from files and populate these tensors.",activeloop PyTorch and Tensorflow Support¶,https://docs.deeplake.ai/en/v3.1.5/Pytorch-and-Tensorflow-Support.html#pytorch-and-tensorflow-support,Deep Lake datasets can be easily converted to Torch dataloaders or Tensorflow datasets for training. Dataset.pytorch Converts the dataset into a pytorch Dataloader. Dataset.tensorflow Converts the dataset into a tensorflow compatible format.,activeloop Utility Functions¶,https://docs.deeplake.ai/en/v3.1.5/Utility-Functions.html#utility-functions,General Functions¶ list List all available Deep Lake cloud datasets. exists Checks if a dataset exists at the given path. Making Deep Lake Samples¶ read Utility that reads raw data from supported files into Deep Lake format. link Utility that stores a link to raw data. Parallelism¶ compute Compute is a decorator for functions. compose Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Transform pipelines returned by compute() and compose() are evaluated using eval: eval Evaluates the pipeline on data_in to produce an output dataset ds_out.,activeloop General Functions¶,https://docs.deeplake.ai/en/v3.1.5/Utility-Functions.html#general-functions,list List all available Deep Lake cloud datasets. exists Checks if a dataset exists at the given path.,activeloop Parallelism¶,https://docs.deeplake.ai/en/v3.1.5/Utility-Functions.html#parallelism,compute Compute is a decorator for functions. 
compose Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Transform pipelines returned by compute() and compose() are evaluated using eval: eval Evaluates the pipeline on data_in to produce an output dataset ds_out.,activeloop Making Deep Lake Samples¶,https://docs.deeplake.ai/en/v3.1.5/Utility-Functions.html#making-deep-lake-samples,read Utility that reads raw data from supported files into Deep Lake format. link Utility that stores a link to raw data.,activeloop Logging Dataset Creation¶,https://docs.deeplake.ai/en/v3.1.5/Weights-and-Biases.html#logging-dataset-creation,"If you create a Deep Lake dataset using any of the functions mentioned in Creating Datasets, just perform a commit on the dataset to log its creation on W&B. Copy to clipboard NoteIf you created your dataset using deeplake.deepcopy(), perform the commit only if you have head changes.NoteIf you make changes to an existing dataset, commit the changes with an active Weights and Biases run to log it's state.",activeloop Weights and Biases¶,https://docs.deeplake.ai/en/v3.1.5/Weights-and-Biases.html#module-deeplake.integrations.wandb.wandb,"Deep Lake's Weights and Biases integration allows you to track and improve reproducibility of your machine learning experiments. Deep Lake will automatically push all information required to reproduce the snapshot of the data like your dataset's URI, commit ID, and view IDs of any views that you have used in your training workflow.Learn more about Weights and Biases here. Logging Dataset Creation¶ If you create a Deep Lake dataset using any of the functions mentioned in Creating Datasets, just perform a commit on the dataset to log its creation on W&B. >>> run = wandb.init(project='deeplake_wandb', job_type='dataset_upload') >>> ds = deeplake.empty('hub://fayazrahman4u/my_dataset') # create dataset >>> ds.create_tensor('images', htype='image', sample_compression='jpg') # create a tensor >>> ds.images.append(deeplake.read('files/images/dog.jpg')) # add a sample >>> ds.commit('creation') # commit -> trigger logging >>> run.finish() Copy to clipboard Note If you created your dataset using deeplake.deepcopy(), perform the commit only if you have head changes. Note If you make changes to an existing dataset, commit the changes with an active Weights and Biases run to log it's state. Logging Dataset Read¶ A dataset read will be logged if you iterate over a dataset or call Dataset.pytorch() or Tensor.numpy() on its tensors. >>> run = wandb.init(project='deeplake_wandb', job_type='torch dataloader') >>> train_loader = ds.pytorch() >>> run.finish() Copy to clipboard >>> run = wandb.init(project='deeplake_wandb', job_type='iteration') >>> for sample in ds: >>> print(sample['images'].shape) >>> run.finish() Copy to clipboard",activeloop Logging Dataset Read¶,https://docs.deeplake.ai/en/v3.1.5/Weights-and-Biases.html#logging-dataset-read,A dataset read will be logged if you iterate over a dataset or call Dataset.pytorch() or Tensor.numpy() on its tensors. Copy to clipboard Copy to clipboard,activeloop Dataloader¶,https://docs.deeplake.ai/en/v3.1.5/Dataloader.html#dataloader,"Return type: DeepLakeDataLoader Examples Sample the dataloader with labels == 5 twice more than labels == 6 >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Copy to clipboard Sample the dataloader treating labels tensor as weights . 
>>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('labels') Copy to clipboard Sample the dataloader with the given weights: >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(0, len(ds_train)): ... weights.append(i % 5) ... >>> sampled_ds = ds.dataloader().sample_by(weights, replace=False) Copy to clipboard shuffle(shuffle: bool = True, buffer_size: int = 2048)¶ Returns a shuffled DeepLakeDataLoader object. Parameters: shuffle (bool) – shows whether we need to shuffle elements or not. Defaults to True. buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .shuffle() has already been called. ValueError – If dataset is view and shuffle is True transform(transform: Callable | Dict[str, Callable | None], **kwargs: Dict)¶ Returns a transformed DeepLakeDataLoader object. Parameters: transform (Callable or Dict[Callable]) – A function or dictionary of functions to apply to the data. kwargs – Additional arguments to be passed to transform. Only applicable if transform is a callable. Ignored if transform is a dictionary. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .transform() has already been called.",activeloop Dataloader¶,https://docs.deeplake.ai/en/v3.1.5/Dataloader.html#dataloader,"Defaults to 2. distributed (bool) – Used for DDP training. Distributes different sections of the dataset to different ranks. Defaults to False. return_index (bool) – Used to identify whether the loader needs to return the sample index or not. Defaults to True. persistent_workers (bool) – If True, the data loader will not shutdown the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformations use torchvision transforms that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .pytorch() or .numpy() has already been called. Examples >>> import deeplake >>> from torchvision import transforms >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size and returns batch of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch(decode_method={'images': 'pil'}) # return samples as PIL images for transforms ... >>> # iterate over dataloader >>> for i, sample in enumerate(train_loader): ... pass ... Copy to clipboard query(query_string: str)¶ Returns a sliced DeepLakeDataLoader object with given query results. It allows running SQL-like queries on the dataset and extracting results.
See supported keywords and the Tensor Query Language documentation here. Parameters: query_string (str) – An SQL string adjusted with new functionalities to run on the dataset object Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Examples >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds_train = ds_train.dataloader().query('select * where labels != 5') Copy to clipboard >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.dataloader().query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)') Copy to clipboard sample_by(weights: str | list | tuple | ndarray, replace: bool | None = True, size: int | None = None)¶ Returns a sliced DeepLakeDataLoader with given weighted sampler applied Parameters: weights – (Union[str, list, tuple, np.ndarray]): If it's string then tql will be run to calculate the weights based on the expression. list, tuple and ndarray will be treated as the list of the weights per sample replace – Optional[bool] If true the samples can be repeated in the result view. (default: True). size – Optional[int] The length of the result view. (default: len(dataset)) Returns: A DeepLakeDataLoader object",activeloop Dataloader¶,https://docs.deeplake.ai/en/v3.1.5/Dataloader.html#dataloader," Train your models using the new high performance C++ dataloader. See the dataloader method on how to create dataloaders from your datasets: Dataset.dataloader Returns a DeepLakeDataLoader object. DeepLakeDataLoader¶ class deeplake.enterprise.DeepLakeDataLoader¶ batch(batch_size: int, drop_last: bool = False)¶ Returns a batched DeepLakeDataLoader object. Parameters: batch_size (int) – Number of samples in each batch. drop_last (bool) – If True, the last batch will be dropped if its size is less than batch_size. Defaults to False. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .batch() has already been called. close()¶ Shuts down the workers and releases the resources. numpy(num_workers: int = 0, tensors: List[str] | None = None, num_threads: int | None = None, prefetch_factor: int = 2, decode_method: Dict[str, str] | None = None, persistent_workers: bool = False)¶ Returns a DeepLakeDataLoader object. Parameters: num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. persistent_workers (bool) – If True, the data loader will not shutdown the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are:- ‘numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or ‘png'. Returns: A DeepLakeDataLoader object. 
Return type: DeepLakeDataLoader Raises: ValueError – If .pytorch() or .numpy() has already been called. pytorch(num_workers: int = 0, collate_fn: Callable | None = None, tensors: List[str] | None = None, num_threads: int | None = None, prefetch_factor: int = 2, distributed: bool = False, return_index: bool = True, decode_method: Dict[str, str] | None = None, persistent_workers: bool = False)¶ Returns a DeepLakeDataLoader object. Parameters: num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker",activeloop DeepLakeDataLoader¶,https://docs.deeplake.ai/en/v3.1.5/Dataloader.html#deeplakedataloader," class deeplake.enterprise.DeepLakeDataLoader¶ batch(batch_size: int, drop_last: bool = False)¶ Returns a batched DeepLakeDataLoader object. Parameters: batch_size (int) – Number of samples in each batch. drop_last (bool) – If True, the last batch will be dropped if its size is less than batch_size. Defaults to False. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .batch() has already been called. close()¶ Shuts down the workers and releases the resources. numpy(num_workers: int = 0, tensors: List[str] | None = None, num_threads: int | None = None, prefetch_factor: int = 2, decode_method: Dict[str, str] | None = None, persistent_workers: bool = False)¶ Returns a DeepLakeDataLoader object. Parameters: num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. persistent_workers (bool) – If True, the data loader will not shutdown the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are:- ‘numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or ‘png'. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .pytorch() or .numpy() has already been called. pytorch(num_workers: int = 0, collate_fn: Callable | None = None, tensors: List[str] | None = None, num_threads: int | None = None, prefetch_factor: int = 2, distributed: bool = False, return_index: bool = True, decode_method: Dict[str, str] | None = None, persistent_workers: bool = False)¶ Returns a DeepLakeDataLoader object. Parameters: num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. 
collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. distributed (bool) – Used for DDP training. Distributes different sections of the dataset to different ranks. Defaults to False",activeloop DeepLakeDataLoader¶,https://docs.deeplake.ai/en/v3.1.5/Dataloader.html#deeplakedataloader,"Return type: DeepLakeDataLoader Examples Sample the dataloader with labels == 5 twice more than labels == 6 >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Copy to clipboard Sample the dataloader treating labels tensor as weights . >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('labels') Copy to clipboard Sample the dataloader with the given weights; >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(0, len(ds_train)): ... weights.append(i % 5) ... >>> sampled_ds = ds.dataloader().sample_by(weights, replace=False) Copy to clipboard shuffle(shuffle: bool = True, buffer_size: int = 2048)¶ Returns a shuffled DeepLakeDataLoader object. Parameters: shuffle (bool) – shows wheter we need to shuffle elements or not. Defaults to True. buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .shuffle() has already been called. ValueError – If dataset is view and shuffle is True transform(transform: Callable | Dict[str, Callable | None], **kwargs: Dict)¶ Returns a transformed DeepLakeDataLoader object. Parameters: transform (Callable or Dict[Callable]) – A function or dictionary of functions to apply to the data. kwargs – Additional arguments to be passed to transform. Only applicable if transform is a callable. Ignored if transform is a dictionary. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .transform() has already been called.",activeloop DeepLakeDataLoader¶,https://docs.deeplake.ai/en/v3.1.5/Dataloader.html#deeplakedataloader,"return_index (bool) – Used to idnetify where loader needs to retur sample index or not . Defaults to True. persistent_workers (bool) – If True, the data loader will not shutdown the worker processes after a dataset has been consumed once. Defaults to False. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .pytorch() or .numpy() has already been called. 
Examples >>> import deeplake >>> from torchvision import transforms >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size and returns batch of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch(decode_method={'images': 'pil'}) # return samples as PIL images for transforms ... >>> # iterate over dataloader >>> for i, sample in enumerate(train_loader): ... pass ... Copy to clipboard query(query_string: str)¶ Returns a sliced DeepLakeDataLoader object with given query results. It allows to run SQL like queries on dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters: query_string (str) – An SQL string adjusted with new functionalities to run on the dataset object Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Examples >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds_train = ds_train.dataloader().query('select * where labels != 5') Copy to clipboard >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.dataloader().query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)') Copy to clipboard sample_by(weights: str | list | tuple | ndarray, replace: bool | None = True, size: int | None = None)¶ Returns a sliced DeepLakeDataLoader with given weighted sampler applied Parameters: weights – (Union[str, list, tuple, np.ndarray]): If it's string then tql will be run to calculate the weights based on the expression. list, tuple and ndarray will be treated as the list of the weights per sample replace – Optional[bool] If true the samples can be repeated in the result view. (default: True). size – Optional[int] The length of the result view. (default: len(dataset)) Returns: A DeepLakeDataLoader object",activeloop Sampler¶,https://docs.deeplake.ai/en/v3.1.5/Sampler.html#sampler,"The sampler applies weighted sampling on the dataset and returns the sampled view. It creates a discrete distribution with given weights and randomly picks samples based on it. The resulting view is generated in such a way that when creating a dataloader from the view and training on it, the performance impact is minimal. See the sample_by method on how to use this feature: Dataset.sample_by Returns a sliced Dataset with given weighted sampler applied.",activeloop Tensor Query Language¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#tensor-query-language," This page describes the Tensor Query Language (TQL), an SQL-like language used for Querying in Activeloop Platform as well as in ds.query in our Python API. To use queries, install deeplake with pip install deeplake[enterprise]. Querying datasets is part of our Growth and Enterprise Plan. Users of our Community plan can only perform queries on Activeloop datasets ('hub://activeloop/…' datasets). To run queries on your own datasets, upgrade your organization's plan. Language¶ SELECT¶ TQL supports only SELECT statement. Every TQL expression starts with SELECT *. 
TQL supports only * which means to select all tensors. The common syntax for the select statement is the following: SELECT * [FROM string] [WHERE expression] [LIMIT number [OFFSET number]] [ORDER BY expression [ASC/DESC]] Copy to clipboard Each part of the SELECT statement can be omitted. FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context. WHERE¶ WHERE expression is used to filter the samples in the dataset by conditions. The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error. ORDER BY¶ ORDER BY expression orders the output of the query by the given criteria. The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string. ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. It is ascending by default. LIMIT OFFSET¶ LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL. Expressions¶ TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value. The value can be numeric scalar or array as well as string value. String literal should be provided within single quotes (') and can be used on class_label, json and text tensors. For class labels it will get corresponding numeric value from the class_names list and do numeric comparison. For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array. Numeric multidimensional tensors can be indexed by numbers, e.g. categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]",activeloop Language¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#language,"For example, labels == 0 OR labels == 1 Copy to clipboard From SQL we also support the following two keywords: BETWEEN labels BETWEEN 0 and 5 Copy to clipboard IN labels in ARRAY[0, 2, 4, 6, 8] Copy to clipboard Functions¶ There are predefined functions which can be used in WHERE expression as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains given value - CONTAINS(categories, 'person') RANDOM - returns a random number. May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference. ALL returns True on empty array, while ALL_STRICT returns False ANY - takes an array of booleans and returns single boolean, True if any of the elements in the input array is True LOGICAL_AND - takes two boolean arrays, does element-wise logical and, returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element-wise logical or, returns the result array.
This will return False if the input arrays have different sizes. UNION, INTERSECT, EXCEPT¶ Query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT.",activeloop Tensor Query Language¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#tensor-query-language,"This last expression returns array containing the third elements of the initial two dimensional array boxes . TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions. For example, labels == 0 OR labels == 1 Copy to clipboard From SQL we also support the following two keywords: BETWEEN labels BETWEEN 0 and 5 Copy to clipboard IN labels in ARRAY[0, 2, 4, 6, 8] Copy to clipboard Functions¶ There are predefined functions which can be used in WHERE expression as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains given value - CONTAINS(categories, 'person') RANDOM - returns random number. May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference. ALL returns True on empty array, while ALL_STRICT return False ANY - takes an array of booleans and returns single boolean, True if any of the elements int the input array is True LOGICAL_AND - takes two boolean arrays, does element wise logical and, returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element wise logical or, returns the result array. This will return False if the input arrays have different sizes. UNION, INTERSECT, EXCEPT¶ Query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT. Examples¶ Querying for images containing 0 in MNIST Train Dataset with ds.query. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/mnist-train') >>> result = ds.query('select * where labels == 0') >>> len(result) 5923 Copy to clipboard Querying for samples with car or motorcycle in categories of COCO Train Dataset. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/coco-train') >>> result = ds.query('(select * where contains(categories, 'car')) union (select * where contains(categories, 'motorcycle'))') >>> len(result) 14376 Copy to clipboard",activeloop Language¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#language," SELECT¶ TQL supports only SELECT statement. Every TQL expression starts with SELECT *. TQL supports only * which means to select all tensors. The common syntax for select statement is the following: SELECT * [FROM string] [WHERE expression] [LIMIT number [OFFSET number]] [ORDER BY expression [ASC/DESC]] Copy to clipboard Each part of the SELECT statement can be omitted. FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context WHERE¶ WHERE expression is used to filter the samples in the dataset by conditions. The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error. 
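As a small sketch of how a WHERE clause combines with LIMIT in ds.query, using the public mnist-train dataset from the examples above (the result length here simply reflects the limit):
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/mnist-train')
>>> view = ds.query('select * where labels == 0 limit 100')  # filter by label, keep at most 100 samples
>>> len(view)
100
The same filtered view could also be shuffled by adding ORDER BY RANDOM(), as described in the ORDER BY and Functions entries.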
ORDER BY¶ ORDER BY expression orders the output of the query by the given criteria. The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string. ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. It is ascending by default. LIMIT OFFSET¶ LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL. Expressions¶ TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value. The value can be numeric scalar or array as well as string value. String literal should be provided within single quotes (') and can be used on class_label, json and text tensors. For class labels it will get corresponding numeric value from the class_names list and do numeric comparison. For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array. Numeric multidimensional tensors can be indexed by numbers, e.g. categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]. This last expression returns array containing the third elements of the initial two dimensional array boxes. TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions",activeloop SELECT¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#select,"TQL supports only SELECT statement. Every TQL expression starts with SELECT *. TQL supports only * which means to select all tensors. The common syntax for select statement is the following: Copy to clipboard Each part of the SELECT statement can be omitted.FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context",activeloop ORDER BY¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#order-by,"ORDER BY expression orders the output of the query by the given criteria. The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string.ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. It is ascending by default.",activeloop Expressions¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#expressions,"TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value.The value can be numeric scalar or array as well as string value.String literal should be provided within single quotes (') and can be used on class_label, json and text tensors.For class labels it will get corresponding numeric value from the class_names list and do numeric comparison.For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array.Numeric multidimensional tensors can be indexed by numbers, e.g. 
categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]. This last expression returns array containing the third elements of the initial two dimensional array boxes.TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions. For example, Copy to clipboard From SQL we also support the following two keywords: BETWEEN Copy to clipboard IN Copy to clipboard",activeloop Functions¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#functions,"There are predefined functions which can be used in WHERE expression as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains given value - CONTAINS(categories, 'person') RANDOM - returns random number. May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference. ALL returns True on empty array, while ALL_STRICT return False ANY - takes an array of booleans and returns single boolean, True if any of the elements int the input array is True LOGICAL_AND - takes two boolean arrays, does element wise logical and, returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element wise logical or, returns the result array. This will return False if the input arrays have different sizes.",activeloop "UNION, INTERSECT, EXCEPT¶",https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#union-intersect-except,"Query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT.",activeloop Examples¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#examples,Querying for images containing 0 in MNIST Train Dataset with ds.query. Copy to clipboard Querying for samples with car or motorcycle in categories of COCO Train Dataset. Copy to clipboard,activeloop WHERE¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#where,"WHERE expression is used to filter the samples in the dataset by conditions. The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error.",activeloop LIMIT OFFSET¶,https://docs.deeplake.ai/en/v3.1.5/Tensor-Query-Language.html#limit-offset,"LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL.",activeloop Random Split¶,https://docs.deeplake.ai/en/v3.1.5/Random-Split.html#random-split,"Splits the dataset into non overlapping new datasets of given lengths. The resulting datasets are generated in such a way that when creating a dataloader from the view and training on it, the performance impact is minimal. Using the outputs of this function with .pytorch method of dataset (instead of .dataloader) may result in poor performance. See the random_split method on how to use this feature: Dataset.random_split Splits the dataset into non-overlapping Dataset objects of given lengths.",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"local_cache_size (int) – The size of the local filesystem cache to be used in MB . 
deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"local_cache_size (int) – The size of the local filesystem cache to be used in MB. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in the credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. verbose (bool) – If True, logs will be printed. Defaults to True. Returns: Dataset created using the arguments provided. Return type: Dataset Raises: DatasetHandlerError – If a Dataset already exists at the given path and overwrite is False. UserNotLoggedInException – When the user is not logged in InvalidTokenException – If the specified token is invalid TokenPermissionError – When there are permission or other errors related to the token Danger Setting overwrite to True will delete all of your data if it exists! Be very careful when setting this parameter. deeplake.like(dest: str | Path, src: str | Dataset | Path, tensors: List[str] | None = None, overwrite: bool = False, creds: dict | None = None, token: str | None = None, public: bool = False) → Dataset¶ Creates a new dataset by copying the source dataset's structure to a new location. No samples are copied, only the meta/info for the dataset and its tensors. Parameters: dest – Empty Dataset or Path where the new dataset will be created. src (Union[str, Dataset]) – Path or dataset object that will be used as the template for the new dataset. tensors (List[str], optional) – Names of tensors (and groups) to be replicated. If not specified, all tensors in the source dataset are considered. overwrite (bool) – If True and a dataset exists at the destination, it will be overwritten. Defaults to False. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in the credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. Returns: New dataset object
The ‘local' access method can be modified to specify num_workers and/or scheduler to be used in case dataset needs to be downloaded. If dataset needs to be downloaded, ‘local:2:processed' will use 2 workers and use processed scheduler, while ‘local:3' will use 3 workers and default scheduler (threaded), and ‘local:processed' will use a single worker and use processed scheduler. Returns: Dataset created using the arguments provided. Return type: Dataset Raises: AgreementError – When agreement is rejected UserNotLoggedInException – When user is not logged in InvalidTokenException – If the specified token is invalid TokenPermissionError – When there are permission or other errors related to token Danger Setting overwrite to True will delete all of your data if it exists! Be very careful when setting this parameter. Warning Setting access_method to download will overwrite the local copy of the dataset if it was previously downloaded. Note Any changes made to the dataset in download / local mode will only be made to the local copy and will not be reflected in the original dataset. deeplake.empty(path: str | Path, overwrite: bool = False, public: bool = False, memory_cache_size: int = 256, local_cache_size: int = 0, creds: dict | None = None, token: str | None = None, verbose: bool = True) → Dataset¶ Creates an empty dataset Parameters: path (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. overwrite (bool) – If set to True this overwrites the dataset if it already exists. Defaults to False. public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. memory_cache_size (int) – The size of the memory cache to be used in MB",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"Return type: Dataset deeplake.ingest(src: str | Path, dest: str | Path, images_compression: str = 'auto', dest_creds: Dict | None = None, progressbar: bool = True, summary: bool = True, **dataset_kwargs) → Dataset¶ Ingests a dataset from a source and stores it as a structured dataset to destination . Parameters: src (str, pathlib.Path) – Local path to where the unstructured dataset is stored or path to csv file. dest (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. 
images_compression (str) – For image classification datasets, this compression will be used for the images tensor. If images_compression is 'auto', compression will be automatically determined by the most common extension in the directory. dest_creds (Optional[Dict]) – A dictionary containing credentials used to access the destination path of the dataset. progressbar (bool) – Enables or disables ingestion progress bar. Defaults to True. summary (bool) – If True, a summary of skipped files will be printed after completion. Defaults to True. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. Returns: New dataset object with structured dataset. Return type: Dataset Raises: InvalidPathException – If the source directory does not exist. SamePathException – If the source and destination path are the same. AutoCompressionError – If the source directory is empty or does not contain a valid extension. InvalidFileExtension – If the most frequent file extension is found to be ‘None' during auto-compression. Note Currently only local source paths and image classification datasets / csv files are supported for automatic ingestion. Supported filetypes: png/jpeg/jpg/csv. All files and sub-directories with unsupported filetypes are ignored. Valid source directory structures for image classification look like: data/ img0.jpg img1.jpg ... or: data/ class0/ cat0.jpg ... class1/ dog0.jpg ... ... or: data/ train/ class0/ img0.jpg ... ... val/ class0/ img0.jpg ... ... ... Classes defined as sub-directories can be accessed at ds['test/labels'].info.class_names. Support for train and test sub-directories is present under ds['train/images'], ds['train/labels'] and ds['test/images'], ds['test/labels']. Mapping filenames to classes from an external file is currently not supported",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake," The deeplake package provides a database which stores data as compressed chunked arrays that can be stored anywhere and later streamed to deep learning models. deeplake.dataset(path: str | Path, read_only: bool | None = None, overwrite: bool = False, public: bool = False, memory_cache_size: int = 256, local_cache_size: int = 0, creds: str | Dict | None = None, token: str | None = None, verbose: bool = True, access_method: str = 'stream')¶ Returns a Dataset object referencing either a new or existing dataset. Examples >>> ds = deeplake.dataset('hub://username/dataset') >>> ds = deeplake.dataset('s3://mybucket/my_dataset') >>> ds = deeplake.dataset('./datasets/my_dataset', overwrite=True) Parameters: path (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from the command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. read_only (bool, optional) – Opens the dataset in read-only mode if this is passed as True. Defaults to False.
Datasets stored on Deep Lake cloud that your account does not have write access to will automatically open in read mode. overwrite (bool) – If set to True this overwrites the dataset if it already exists. Defaults to False. public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to True. memory_cache_size (int) – The size of the memory cache to be used in MB. local_cache_size (int) – The size of the local filesystem cache to be used in MB. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. verbose (bool) – If True, logs will be printed. Defaults to True. access_method (str) – The access method to use for the dataset. Can be: 'stream' Streams the data from the dataset i.e. only fetches data when required. This is the default value. 'download' Downloads the data to the local filesystem to the path specified in environment variable DEEPLAKE_DOWNLOAD_PATH. This will overwrite DEEPLAKE_DOWNLOAD_PATH",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset . a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. dest_creds (Optional[Dict]) – A dictionary containing credentials used to access the destination path of the dataset. progressbar (bool) – Enables or disables ingestion progress bar. Set to True by default. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. See deeplake.dataset(). Returns: New dataset created from the dataframe. Return type: Dataset Raises: Exception – If src is not a valid pandas dataframe object. deeplake.ingest_huggingface(src, dest, use_progressbar=True) → Dataset¶ Converts Hugging Face datasets to Deep Lake format. Parameters: src (hfDataset, DatasetDict) – Hugging Face Dataset or DatasetDict to be converted. Data in different splits of a DatasetDict will be stored under respective tensor groups. dest (Dataset, str, pathlib.Path) – Destination dataset or path to it. use_progressbar (bool) – Defines if progress bar should be used to show conversion progress. Returns: The destination Deep Lake dataset. Return type: Dataset Note if DatasetDict looks like: >>> { ... train: Dataset({ ... features: ['data'] ... }), ... validation: Dataset({ ... features: ['data'] ... }), ... test: Dataset({ ... features: ['data'] ... }), ... } Copy to clipboard it will be converted to a Deep Lake Dataset with tensors ['train/data', 'validation/data', 'test/data']. Features of the type Sequence(feature=Value(dtype='string')) are not supported. Columns of such type are skipped. 
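As an illustrative sketch of the Hugging Face conversion described above (the dataset name 'mnist' and the destination path are assumptions, not taken from this reference):

>>> import deeplake
>>> from datasets import load_dataset
>>> hf_ds = load_dataset('mnist')  # a DatasetDict with 'train' and 'test' splits
>>> # each split becomes a tensor group, e.g. 'train/image', 'train/label', 'test/image', 'test/label'
>>> ds = deeplake.ingest_huggingface(hf_ds, './mnist_deeplake')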
deeplake.load(path: str | Path, read_only: bool | None = None, memory_cache_size: int = 256, local_cache_size: int = 0, creds: dict | None = None, token: str | None = None, verbose: bool = True, access_method: str = 'stream') → Dataset¶ Loads an existing dataset Parameters: path (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. read_only (bool, optional) – Opens dataset in read only mode if this is passed as True. Defaults to False. Datasets stored on Deep Lake cloud that your account does not have write access to will automatically open in read mode. memory_cache_size (int) – The size of the memory cache to be used in MB",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"local_cache_size (int) – The size of the local filesystem cache to be used in MB . creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. verbose (bool) – If True, logs will be printed. Defaults to True. access_method (str) – The access method to use for the dataset. Can be: 'stream' Streams the data from the dataset i.e. only fetches data when required. This is the default value. 'download' Downloads the data to the local filesystem to the path specified in environment variable DEEPLAKE_DOWNLOAD_PATH. This will overwrite DEEPLAKE_DOWNLOAD_PATH. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set or if the dataset does not exist. The ‘download' access method can be modified to specify num_workers and/or scheduler. For example: ‘download:2:processed' will use 2 workers and use processed scheduler, while ‘download:3' will use 3 workers and default scheduler (threaded), and ‘download:processed' will use a single worker and use processed scheduler. 'local' Downloads the dataset if it doesn't already exist, otherwise loads from local storage. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set. The ‘local' access method can be modified to specify num_workers and/or scheduler to be used in case dataset needs to be downloaded. If dataset needs to be downloaded, ‘local:2:processed' will use 2 workers and use processed scheduler, while ‘local:3' will use 3 workers and default scheduler (threaded), and ‘local:processed' will use a single worker and use processed scheduler. Returns: Dataset loaded using the arguments provided. 
Return type: Dataset Raises: DatasetHandlerError – If a Dataset does not exist at the given path. AgreementError – When agreement is rejected UserNotLoggedInException – When the user is not logged in InvalidTokenException – If the specified token is invalid TokenPermissionError – When there are permission or other errors related to the token Warning Setting access_method to download will overwrite the local copy of the dataset if it was previously downloaded. Note Any changes made to the dataset in download / local mode will only be made to the local copy and will not be reflected in the original dataset. deeplake.delete(path: str | Path, force: bool = False, large_ok: bool = False, creds: dict | None = None, token: str | None = None, verbose: bool = False) → None¶ Deletes a dataset at a given path. Parameters: path (str, pathlib.Path) – The path to the dataset to be deleted. force (bool) – Delete data regardless of whether it looks like a deeplake dataset",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"All data at the path will be removed if set to True. large_ok (bool) – Delete datasets larger than 1GB. Disabled by default. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in the credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. verbose (bool) – If True, logs will be printed. Defaults to True. Raises: DatasetHandlerError – If a Dataset does not exist at the given path and force = False. NotImplementedError – When attempting to delete a managed view. Warning This is an irreversible operation. Data once deleted cannot be recovered. deeplake.rename(old_path: str | Path, new_path: str | Path, creds: dict | None = None, token: str | None = None) → Dataset¶ Renames the dataset at old_path to new_path. Examples >>> deeplake.rename('hub://username/image_ds', 'hub://username/new_ds') >>> deeplake.rename('s3://mybucket/my_ds', 's3://mybucket/renamed_ds') Parameters: old_path (str, pathlib.Path) – The path to the dataset to be renamed. new_path (str, pathlib.Path) – Path to the dataset after renaming. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. This takes precedence over credentials present in the environment. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url' and ‘aws_region' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. Returns: The renamed Dataset. Return type: Dataset Raises: DatasetHandlerError – If a Dataset does not exist at the given path or if the new path is to a different directory. deeplake.copy(src: str | Path | Dataset, dest: str | Path, tensors: List[str] | None = None, overwrite: bool = False, src_creds=None, token=None, dest_creds=None, num_workers: int = 0, scheduler='threaded', progressbar=True, **kwargs)¶ Copies dataset at src to dest.
Version control history is not included. Parameters: src (Union[str, Dataset, pathlib.Path]) – The Dataset or the path to the dataset to be copied. dest (str, pathlib.Path) – Destination path to copy to. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at dest, it will be overwritten. Defaults to False. src_creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"deeplake.read(path: str | Path, verify: bool = False, creds: Dict | None = None, compression: str | None = None, storage: StorageProvider | None = None) → Sample¶ Utility that reads raw data from supported files into Deep Lake format . Recompresses data into format required by the tensor if permitted by the tensor htype. Simply copies the data in the file if file format matches sample_compression of the tensor, thus maximizing upload speeds. Examples >>> ds.create_tensor('images', htype='image', sample_compression='jpeg') >>> ds.images.append(deeplake.read('path/to/cat.jpg')) >>> ds.images.shape (1, 399, 640, 3) Copy to clipboard >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.videos.append(deeplake.read('path/to/video.mp4')) >>> ds.videos.shape (1, 136, 720, 1080, 3) Copy to clipboard >>> ds.create_tensor('images', htype='image', sample_compression='jpeg') >>> ds.images.append(deeplake.read('https://picsum.photos/200/300')) >>> ds.images[0].shape (300, 200, 3) Copy to clipboard Supported file types: Image: 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', 'webp', 'wmf', 'xbm' Audio: 'flac', 'mp3', 'wav' Video: 'mp4', 'mkv', 'avi' Dicom: 'dcm' Copy to clipboard Parameters: path (str) – Path to a supported file. verify (bool) – If True, contents of the file are verified. creds (optional, Dict) – Credentials for s3, gcp and http urls. compression (optional, str) – Format of the file. Only required if path does not have an extension. storage (optional, StorageProvider) – Storage provider to use to retrieve remote files. Useful if multiple files are being read from same storage to minimize overhead of creating a new provider. Returns: Sample object. Call sample.array to get the np.ndarray. Return type: Sample Note No data is actually loaded until you try to get a property of the returned Sample. This is useful for passing along to Tensor.append and Tensor.extend. deeplake.link(path: str, creds_key: str | None = None) → LinkedSample¶ Utility that stores a link to raw data. Used to add data to a Deep Lake Dataset without copying it. See Link htype. Supported file types: Image: 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', 'webp', 'wmf', 'xbm' Audio: 'flac', 'mp3', 'wav' Video: 'mp4', 'mkv', 'avi' Dicom: 'dcm' Copy to clipboard Parameters: path (str) – Path to a supported file. creds_key (optional, str) – The credential key to use to read data for this sample. The actual credentials are fetched from the dataset. Returns: LinkedSample object that stores path and creds. 
Return type: LinkedSample Examples >>> ds = deeplake.dataset('test/test_ds') >>> ds.create_tensor('images', htype='link[image]') >>> ds.images.append(deeplake.link('https://picsum.photos/200/300')) Copy to clipboard See more examples here. deeplake.tiled(sample_shape: Tuple[int, ...], tile_shape: Tuple[int, ...] | None = None, dtype: str | dtype = dtype('uint8'))¶ Allocates an empty sample of shape sample_shape, broken into tiles of shape tile_shape (except for edge tiles). Example >>> with ds: ... ds.create_tensor('image', htype='image', sample_compression='png') ... ds.image.append(deeplake.tiled(sample_shape=(1003, 1103, 3), tile_shape=(10, 10, 3))) ... ds.image[0][-217:, :212, 1:] = np.random.randint(0, 256, (217, 212, 2), dtype=np.uint8) Copy to clipboard Parameters: sample_shape (Tuple[int, ...]) – Full shape of the sample",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys . dest_creds (dict, optional) – creds required to create / overwrite datasets at dest. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). **kwargs (dict) – Additional keyword arguments Returns: New dataset object. Return type: Dataset Raises: DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. UnsupportedParameterException – If a parameter that is no longer supported is specified. deeplake.deepcopy(src: str | Path, dest: str | Path, tensors: List[str] | None = None, overwrite: bool = False, src_creds=None, dest_creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False, verbose: bool = True, **kwargs)¶ Copies dataset at src to dest including version control history. Parameters: src (str, pathlib.Path) – Path to the dataset to be copied. dest (str, pathlib.Path) – Destination path to copy to. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. src_creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. dest_creds (dict, optional) – creds required to create / overwrite datasets at dest. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. 
scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. verbose (bool) – If True, logs will be printed. Defaults to True. **kwargs – Additional keyword arguments Returns: New dataset object",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"deeplake.ingest_kaggle(tag: str, src: str | Path, dest: str | Path, exist_ok: bool = False, images_compression: str = 'auto', dest_creds: Dict | None = None, kaggle_credentials: dict | None = None, progressbar: bool = True, summary: bool = True, **dataset_kwargs) → Dataset¶ Download and ingest a kaggle dataset and store it as a structured dataset at the destination. Parameters: tag (str) – Kaggle dataset tag. Example: 'coloradokb/dandelionimages' points to https://www.kaggle.com/coloradokb/dandelionimages src (str, pathlib.Path) – Local path to where the raw kaggle dataset will be downloaded. dest (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from the command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. exist_ok (bool) – If the kaggle dataset was already downloaded and exist_ok is True, ingestion will proceed without error. images_compression (str) – For image classification datasets, this compression will be used for the images tensor. If images_compression is 'auto', compression will be automatically determined by the most common extension in the directory. dest_creds (Optional[Dict]) – A dictionary containing credentials used to access the destination path of the dataset. kaggle_credentials (dict) – A dictionary containing kaggle credentials {'username':'YOUR_USERNAME', 'key': 'YOUR_KEY'}. If None, environment variables/the kaggle.json file will be used if available. progressbar (bool) – Enables or disables ingestion progress bar. Set to True by default. summary (bool) – Generates ingestion summary. Set to True by default. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. See deeplake.dataset(). Returns: New dataset object with structured dataset. Return type: Dataset Raises: SamePathException – If the source and destination path are the same. Note Currently only local source paths and image classification datasets are supported for automatic ingestion.
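A minimal sketch of the Kaggle ingestion described above, reusing the tag from the example and assuming Kaggle credentials are available in the environment or in kaggle.json (the local paths are placeholders):

>>> import deeplake
>>> ds = deeplake.ingest_kaggle(
...     tag='coloradokb/dandelionimages',
...     src='./kaggle_raw/dandelionimages',   # where the raw Kaggle files will be downloaded
...     dest='./dandelion_deeplake',          # destination Deep Lake dataset
...     images_compression='auto')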
deeplake.ingest_dataframe(src, dest: str | Path | Dataset, dest_creds: Dict | None = None, progressbar: bool = True, **dataset_kwargs)¶ Converts a pandas dataframe to a Deep Lake Dataset. Parameters: src (pd.DataFrame) – The pandas dataframe to be converted. dest (str, pathlib.Path, Dataset) – A Dataset or the full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from the command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"Return type: Dataset Raises: DatasetHandlerError – If a dataset already exists at the destination path and overwrite is False. TypeError – If source is not a path to a dataset. UnsupportedParameterException – If a parameter that is no longer supported is being used. deeplake.connect(src_path: str, creds_key: str, dest_path: str | None = None, org_id: str | None = None, ds_name: str | None = None, token: str | None = None) → Dataset¶ Connects the dataset at src_path to Deep Lake via the provided path. Examples >>> # Connect an s3 dataset >>> ds = deeplake.connect(src_path='s3://bucket/dataset', dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token') >>> # or >>> ds = deeplake.connect(src_path='s3://bucket/dataset', org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Parameters: src_path (str) – Cloud path to the source dataset. Can be: an s3 path like s3://bucket/path/to/dataset. a gcs path like gcs://bucket/path/to/dataset. creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to which the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be inferred from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Returns: The connected Deep Lake dataset. Return type: Dataset Raises: InvalidSourcePathError – If the src_path is not a valid s3 or gcs path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. deeplake.list(workspace: str = '', token: str | None = None) → None¶ List all available Deep Lake cloud datasets. Parameters: workspace (str) – Specify user/organization name. If not given, returns a list of all datasets that can be accessed, regardless of what workspace they are in. Otherwise, lists all datasets in the given workspace. token (str, optional) – Activeloop token, used for fetching credentials for Deep Lake datasets. This is optional, tokens are normally autogenerated. Returns: List of dataset names. Return type: List deeplake.exists(path: str | Path, creds: dict | None = None, token: str | None = None) → bool¶ Checks if a dataset exists at the given path. Parameters: path (str, pathlib.Path) – the path which needs to be checked. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. Returns: A boolean confirming whether the dataset exists or not at the given path
It should support __getitem__ and __len__ operations. Using a scheduler other than 'threaded' with a deeplake dataset whose base storage is memory as data_in will also raise this. InvalidOutputDatasetError: If all the tensors of ds_out passed to transform don't have the same length. Using a scheduler other than 'threaded' with a deeplake dataset whose base storage is memory as ds_out will also raise this. TensorMismatchError: If one or more of the outputs generated during transform contain different tensors than the ones present in ds_out provided to transform. UnsupportedSchedulerError: If the scheduler passed is not recognized. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. TransformError: All other exceptions raised if there are problems while running the pipeline. deeplake.compose(functions: List[ComputeFunction])¶ Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Example: pipeline = deeplake.compose([my_fn(a=3), another_function(b=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) The eval method evaluates the pipeline/transform function. It has the following arguments: data_in: Input passed to the transform to generate the output dataset. It should support __getitem__ and __len__. This can be a Deep Lake dataset. ds_out (Dataset, optional): The dataset object to which the transform will get written. If this is not provided, data_in will be overwritten if it is a Deep Lake dataset, otherwise an error will be raised. It should have all keys being generated in the output already present as tensors. Its initial state should be either: Empty, i.e. all tensors have no samples. In this case all samples are added to the dataset. All tensors are populated and have the same length. In this case new samples are appended to the dataset. num_workers (int): The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str): The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool): Displays a progress bar if True (default). skip_ok (bool): If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. It raises the following errors: InvalidInputDataError: If data_in passed to transform is invalid. It should support __getitem__ and __len__ operations. Using a scheduler other than 'threaded' with a deeplake dataset whose base storage is memory as data_in will also raise this. InvalidOutputDatasetError: If all the tensors of ds_out passed to transform don't have the same length",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.html#module-deeplake,"tile_shape (Optional, Tuple[int, ...]) – The sample will be stored as tiles where each tile will have this shape (except edge tiles). If not specified, it will be computed such that each tile is close to half of the tensor's max_chunk_size (after compression). dtype (Union[str, np.dtype]) – Dtype for the sample array. Default uint8. Returns: A PartialSample instance which can be appended to a Tensor.
Return type: PartialSample deeplake.compute(fn, name: str | None = None) → Callable[[...], ComputeFunction]¶ Compute is a decorator for functions. The functions should have at least 2 arguments; the first two will correspond to sample_in and samples_out. There can be as many other arguments as required. The output should be appended/extended to the second argument in a deeplake-like syntax. Any value returned by the fn will be ignored. Example: @deeplake.compute def my_fn(sample_in: Any, samples_out, my_arg0, my_arg1=0): samples_out.my_tensor.append(my_arg0 * my_arg1) # This transform can be used via the eval method in one of these 2 ways: # Directly evaluating the method # here arg0 and arg1 correspond to the 3rd and 4th argument in my_fn my_fn(arg0, arg1).eval(data_in, ds_out, scheduler='threaded', num_workers=5) # As a part of a Transform pipeline containing other functions pipeline = deeplake.compose([my_fn(a, b), another_function(x=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) The eval method evaluates the pipeline/transform function. It has the following arguments: data_in: Input passed to the transform to generate the output dataset. It should support __getitem__ and __len__. This can be a Deep Lake dataset. ds_out (Dataset, optional): The dataset object to which the transform will get written. If this is not provided, data_in will be overwritten if it is a Deep Lake dataset, otherwise an error will be raised. It should have all keys being generated in the output already present as tensors. Its initial state should be either: Empty, i.e. all tensors have no samples. In this case all samples are added to the dataset. All tensors are populated and have the same length. In this case new samples are appended to the dataset. num_workers (int): The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str): The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool): Displays a progress bar if True (default). skip_ok (bool): If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. check_lengths (bool): If True, checks whether ds_out initially has tensors of the same length. pad_data_in (bool): If True, pads tensors of data_in to match the length of the largest tensor in data_in. Defaults to False
TransformError: All other exceptions raised if there are problems while running the pipeline.",activeloop deeplake.core¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.html#deeplake-core,deeplake.core.sample Sample deeplake.core.linked_sample LinkedSample deeplake.core.partial_sample PartialSample deeplake.core.storage Base Storage Provider LRU Cache S3 Storage Provider Google Cloud Storage Provider Google Drive Storage Provider Local Storage Provider Memory Provider deeplake.core.index IndexEntry Index merge_slices() slice_at_int() slice_length(),activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset," Dataset¶ class deeplake.core.dataset.Dataset¶ add_creds_key(creds_key: str, managed: bool = False)¶ Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.empty('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Parameters: creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from the Activeloop platform. Defaults to False. Raises: ValueError – If the dataset is not connected to the Activeloop platform and managed is True. Note The managed parameter is applicable only for datasets that are connected to the Activeloop platform. append(sample: Dict[str, Any], skip_ok: bool = False, append_empty: bool = False)¶ Append samples to multiple tensors at once. This method expects all tensors being updated to be of the same length. Parameters: sample (dict) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in sample if set to True. append_empty (bool) – Append empty samples to tensors not specified in sample if set to True. If True, skip_ok is ignored. Raises: KeyError – If any tensor in the dataset is not a key in sample and skip_ok is False. TensorDoesNotExistError – If a tensor in sample does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. SampleAppendingError – Error that occurs when someone tries to append a tensor value directly to the dataset without specifying a tensor name. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('data') Tensor(key='data') >>> ds.create_tensor('labels') Tensor(key='labels') >>> ds.append({'data': [1, 2, 3, 4], 'labels':[0, 1, 2, 3]}) property branch: str¶ The current branch of the dataset property branches¶ Lists all the branches of the dataset. Returns: List of branches. change_creds_management(creds_key: str, managed: bool)¶ Changes the management status of the creds key. Parameters: creds_key (str) – The key whose management status is to be changed. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from the activeloop platform. Raises: ValueError – If the dataset is not connected to the activeloop platform. KeyError – If the creds key is not present in the dataset.
Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Change the management status of the key to True",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"If False, an error is raised if the group already exists . Defaults to False. Returns: The created group. Raises: TensorGroupAlreadyExistsError – If the group already exists and exist_ok is False. Examples >>> ds.create_group('images') >>> ds['images'].create_tensor('cats') Copy to clipboard >>> ds.create_groups('images/jpg/cats') >>> ds['images'].create_tensor('png') >>> ds['images/jpg'].create_group('dogs') Copy to clipboard create_tensor(name: str, htype: str = 'unspecified', dtype: str | dtype = 'unspecified', sample_compression: str = 'unspecified', chunk_compression: str = 'unspecified', hidden: bool = False, create_sample_info_tensor: bool = True, create_shape_tensor: bool = True, create_id_tensor: bool = True, verify: bool = True, exist_ok: bool = False, verbose: bool = True, downsampling: Tuple[int, int] | None = None, **kwargs)¶ Creates a new tensor in the dataset. Examples >>> # create dataset >>> ds = deeplake.dataset('path/to/dataset') Copy to clipboard >>> # create tensors >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('data') >>> ds.create_tensor('point_clouds', htype='point_cloud') Copy to clipboard >>> # append data >>> ds.images.append(np.ones((400, 400, 3), dtype='uint8')) >>> ds.videos.append(deeplake.read('videos/sample_video.mp4')) >>> ds.data.append(np.zeros((100, 100, 2))) Copy to clipboard Parameters: name (str) – The name of the tensor to be created. htype (str) – The class of data for the tensor. The defaults for other parameters are determined in terms of this value. For example, htype='image' would have dtype default to uint8. These defaults can be overridden by explicitly passing any of the other parameters to this function. May also modify the defaults for other parameters. dtype (str) – Optionally override this tensor's dtype. All subsequent samples are required to have this dtype. sample_compression (str) – All samples will be compressed in the provided format. If None, samples are uncompressed. For link[] tensors, sample_compression is used only for optimizing dataset views. chunk_compression (str) – All chunks will be compressed in the provided format. If None, chunks are uncompressed. For link[] tensors, chunk_compression is used only for optimizing dataset views. hidden (bool) – If True, the tensor will be hidden from ds.tensors but can still be accessed via ds[tensor_name]. create_sample_info_tensor (bool) – If True, meta data of individual samples will be saved in a hidden tensor. This data can be accessed via tensor[i].sample_info. create_shape_tensor (bool) – If True, an associated tensor containing shapes of each sample will be created. create_id_tensor (bool) – If True, an associated tensor containing unique ids for each sample will be created. This is useful for merge operations. verify (bool) – Valid only for link htypes. If True, all links will be verified before they are added to the tensor. 
verify is always True even if specified as False if create_shape_tensor or create_sample_info_tensor is True. exist_ok (bool) – If True, the group is created if it does not exist. If False, an error is raised if the group already exists. verbose (bool) – Shows warnings if True. downsampling (tuple[int, int]) – If not None, the tensor will be downsampled by the provided factors",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"connect(creds_key: str, dest_path: str | None = None, org_id: str | None = None, ds_name: str | None = None, token: str | None = None)¶ Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Parameters: creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to which the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be inferred from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Raises: InvalidSourcePathError – If the dataset's path is not a valid s3 or gcs path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. copy(dest: str | Path, tensors: List[str] | None = None, overwrite: bool = False, creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False)¶ Copies this dataset or dataset view to dest. Version control history is not included. Parameters: dest (str, pathlib.Path) – Destination dataset or path to copy to. If a Dataset instance is provided, it is expected to be empty. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified, all tensors are copied. overwrite (bool) – If True and a dataset exists at the destination, it will be overwritten. Defaults to False. creds (dict, Optional) – creds required to create / overwrite datasets at dest. token (str, Optional) – token used for fetching credentials to dest. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. Returns: New dataset object. Return type: Dataset Raises: DatasetHandlerError – If a dataset already exists at the destination path and overwrite is False.
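A short usage sketch for copy(), with placeholder paths and assuming write access to the destination:

>>> import deeplake
>>> ds = deeplake.load('./datasets/my_dataset')
>>> ds_copy = ds.copy('./datasets/my_dataset_copy', overwrite=True, num_workers=2)
>>> # version control history is not included; use deeplake.deepcopy to preserve it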
create_group(name: str, exist_ok=False) → Dataset¶ Creates a tensor group. Intermediate groups in the path are also created. Parameters: name – The name of the group to create. exist_ok – If True, the group is created if it does not exist",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"pass Restrictions The new high performance C++ dataloader is part of our Growth and Enterprise Plan. Users of our Community plan can create dataloaders on Activeloop datasets ('hub://activeloop/…' datasets). To run queries on your own datasets, upgrade your organization's plan. delete(large_ok=False)¶ Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted cannot be recovered. Parameters: large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False. Raises: DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. delete_group(name: str, large_ok: bool = False)¶ Delete a tensor group from the dataset. Examples >>> ds.delete_group('images/dogs') Parameters: name (str) – The name of the tensor group to be deleted. large_ok (bool) – Delete tensor groups larger than 1 GB. Disabled by default. Returns: None Raises: TensorGroupDoesNotExistError – If a tensor group of name name does not exist in the dataset. delete_tensor(name: str, large_ok: bool = False)¶ Delete a tensor from the dataset. Examples >>> ds.delete_tensor('images/cats') Parameters: name (str) – The name of the tensor to be deleted. large_ok (bool) – Delete tensors larger than 1 GB. Disabled by default. Returns: None Raises: TensorDoesNotExistError – If a tensor of name name does not exist in the dataset. TensorTooLargeToDelete – If the tensor is larger than 1 GB and large_ok is False. delete_view(id: str)¶ Deletes the view with the given view id. Parameters: id (str) – Id of the view to delete. Raises: KeyError – if a view with the given id does not exist. diff(id_1: str | None = None, id_2: str | None = None, as_dict=False) → Dict | None¶ Returns/displays the differences between commits/branches. For each tensor this contains information about the sample indexes that were added/modified as well as whether the tensor was created. Parameters: id_1 (str, Optional) – The first commit_id or branch name. id_2 (str, Optional) – The second commit_id or branch name. as_dict (bool, Optional) – If True, returns the diff as lists of commit-wise dictionaries. Returns: Optional[Dict] Raises: ValueError – If id_1 is None and id_2 is not None. Note If both id_1 and id_2 are None, the differences between the current state and the previous commit will be calculated. If you're at the head of the branch, this will show the uncommitted changes, if any. If only id_1 is provided, the differences between the current state and id_1 will be calculated. If you're at the head of the branch, this will take into account the uncommitted changes, if any. If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, the differences between id_1 and id_2 will be calculated",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"For example, (2, 5) will downsample the tensor by a factor of 2 in both dimensions and create 5 layers of downsampled tensors. Only supported for image and mask htypes. **kwargs – htype defaults can be overridden by passing any of the compatible parameters. To see all htypes and their corresponding arguments, check out Htypes. Returns: The new tensor, which can be accessed by dataset[name] or dataset.name.
Return type: Tensor Raises: TensorAlreadyExistsError – If the tensor already exists and exist_ok is False. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If name is in dataset attributes. NotImplementedError – If trying to override chunk_compression. TensorMetaInvalidHtype – If an invalid htype is specified. ValueError – If an illegal argument is specified. create_tensor_like(name: str, source: Tensor, unlink: bool = False) → Tensor¶ Copies the source tensor's meta information and creates a new tensor with it. No samples are copied, only the meta/info for the tensor is. Examples >>> ds.create_tensor_like('cats', ds['images']) Parameters: name (str) – Name for the new tensor. source (Tensor) – Tensor whose meta/info will be copied. May or may not be contained in the same dataset. unlink (bool) – Whether to unlink linked tensors. Returns: New Tensor object. Return type: Tensor dataloader()¶ Returns a DeepLakeDataLoader object. To use this, install deeplake with pip install deeplake[enterprise]. Returns: A deeplake.enterprise.DeepLakeDataLoader object. Return type: DeepLakeDataLoader Examples Creating a simple dataloader object which returns a batch of numpy arrays >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> train_loader = ds_train.dataloader().numpy() >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Creating a dataloader with custom transformation and batch size >>> import deeplake >>> import torch >>> from torchvision import datasets, transforms, models >>> >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size, which returns batches of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Creating a dataloader and chaining with a query >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> train_loader = ds_train.dataloader()\ ... .query("(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)")\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ..
Parameters: address (str) – The commit_id or branch to checkout to. create (bool) – If True, creates a new branch with name as address. Returns: The commit_id of the dataset after checkout. Return type: Optional[str] Raises: Exception – If the dataset is a filtered view. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('abc') Tensor(key='abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit() >>> ds.checkout('alt', create=True) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.append([4, 5, 6]) >>> ds.abc.numpy() array([[1, 2, 3], [4, 5, 6]]) >>> ds.checkout(first_commit) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.numpy() array([[1, 2, 3]]) Copy to clipboard Note Checkout from a head node in any branch that contains uncommitted data will lead to an automatic commit before the checkout. clear_cache()¶ Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. This is useful if you have multiple datasets with memory caches open, taking up too much RAM. Also useful when local cache is no longer needed for certain datasets and is taking up storage space. property client¶ Returns the client of the dataset. commit(message: str | None = None, allow_empty=False) → str¶ Stores a snapshot of the current state of the dataset. Parameters: message (str, Optional) – Used to describe the commit. allow_empty (bool) – If True, commit even if there are no changes. Returns: the commit id of the saved commit that can be used to access the snapshot. Return type: str Raises: Exception – If dataset is a filtered view. EmptyCommitError – if there are no changes and user does not forced to commit unchanged data. Note Commiting from a non-head node in any branch, will lead to an automatic checkout to a new branch. This same behaviour will happen if new samples are added or existing samples are updated from a non-head node. property commit_id: str | None¶ The lasted committed commit id of the dataset. If there are no commits, this returns None. property commits: List[Dict]¶ Lists all the commits leading to the current dataset state. Returns: List of dictionaries containing commit information",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"Note A dictionary of the differences between the commits/branches is returned if as_dict is True . The dictionary will always have 2 keys, 'dataset' and 'tensors'. The values corresponding to these keys are detailed below: If id_1 and id_2 are None, both the keys will have a single list as their value. This list will contain a dictionary describing changes compared to the previous commit. If only id_1 is provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from current state and id_1 to most recent common ancestor the commits respectively. If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from id_1 and id_2 to most recent common ancestor the commits respectively. None is returned if as_dict is False. extend(samples: Dict[str, Any], skip_ok: bool = False)¶ Appends multiple rows of samples to mutliple tensors at once. 
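A minimal sketch of the extend() call introduced above, assuming a throwaway local dataset and illustrative tensor names:
>>> import deeplake
>>> import numpy as np
>>> ds = deeplake.dataset('../test/test_ds', overwrite=True)
>>> ds.create_tensor('images', htype='image', sample_compression='jpg')
>>> ds.create_tensor('labels', htype='class_label')
>>> ds.extend({
...     'images': [np.random.randint(0, 255, (28, 28, 3), dtype='uint8') for _ in range(4)],
...     'labels': [0, 1, 1, 0],
... })
>>> len(ds)
4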
This method expects all tensors being updated to be of the same length. Parameters: samples (Dict[str, Any]) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in samples if set to True. Raises: KeyError – If any tensor in the dataset is not a key in samples and skip_ok is False. TensorDoesNotExistError – If tensor in samples does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. filter(function: Callable | str, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, save_result: bool = False, result_path: str | None = None, result_ds_args: dict | None = None)¶ Filters the dataset in accordance of filter function f(x: sample) -> bool Parameters: function (Callable, str) – Filter function that takes sample as argument and returns True / False if sample should be included in result. Also supports simplified expression evaluations. See deeplake.core.query.query.DatasetQuery for more details. num_workers (int) – Level of parallelization of filter evaluations. 0 indicates in-place for-loop evaluation, multiprocessing is used otherwise. scheduler (str) – Scheduler to use for multiprocessing evaluation. 'threaded' is default. progressbar (bool) – Display progress bar while filtering. True is default. save_result (bool) – If True, result of the filter will be saved to a dataset asynchronously. result_path (Optional, str) – Path to save the filter result. Only applicable if save_result is True. result_ds_args (Optional, dict) – Additional args for result dataset. Only applicable if save_result is True. Returns: View of Dataset with elements that satisfy filter function",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"Example Following filters are identical and return dataset view where all the samples have label equals to 2 . >>> dataset.filter(lambda sample: sample.labels.numpy() == 2) >>> dataset.filter('labels == 2') Copy to clipboard flush()¶ Necessary operation after writes if caches are being used. Writes all the dirty data from the cache layers (if any) to the underlying storage. Here dirty data corresponds to data that has been changed/assigned and but hasn't yet been sent to the underlying storage. get_commit_details(commit_id) → Dict¶ Get details of a particular commit. Parameters: commit_id (str) – commit id of the commit. Returns: Dictionary of details with keys - commit, author, time, message. Return type: Dict Raises: KeyError – If given commit_id is was not found in the dataset. get_creds_keys() → List[str]¶ Returns the list of creds keys added to the dataset. These are used to fetch external data in linked tensors get_view(id: str) → ViewEntry¶ Returns the dataset view corresponding to id. Examples >>> # save view >>> ds[:100].save_view(id='first_100') >>> # load view >>> first_100 = ds.get_view('first_100').load() >>> # 100 >>> print(len(first_100)) Copy to clipboard See Dataset.save_view() to learn more about saving views. Parameters: id (str) – id of required view. Returns: ViewEntry Raises: KeyError – If no such view exists. get_views(commit_id: str | None = None) → List[ViewEntry]¶ Returns list of views stored in this Dataset. Parameters: commit_id (str, optional) – Commit from which views should be returned. If not specified, views from current commit is returned. 
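A small sketch of inspecting history with the commits property and get_commit_details() documented above; the local path and commit message are illustrative only:
>>> import deeplake
>>> ds = deeplake.dataset('../test/test_ds', overwrite=True)
>>> ds.create_tensor('labels', htype='class_label')
>>> ds.labels.append(0)
>>> commit_id = ds.commit('add first label')
>>> details = ds.get_commit_details(commit_id)
>>> sorted(details.keys())
['author', 'commit', 'message', 'time']
>>> len(ds.commits) >= 1
True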
If not specified, views from the currently checked out commit will be returned. Returns: List of ViewEntry instances. Return type: List[ViewEntry] property groups: Dict[str, Dataset]¶ All sub groups in this group property has_head_changes¶ Returns True if currently at head node and uncommitted changes are present. property info¶ Returns the information about the dataset. property is_head_node¶ Returns True if the current commit is the head node of the branch and False otherwise. property is_view: bool¶ Returns True if this dataset is a view and False otherwise. load_view(id: str, optimize: bool | None = False, tensors: List[str] | None = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool | None = True)¶ Loads the view and returns the Dataset by id. Equivalent to ds.get_view(id).load(). Parameters: id (str) – id of the view to be loaded. optimize (bool) – If True, the dataset view is optimized by copying and rechunking the required data before loading. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. tensors (Optional, List[str]) – Tensors to be copied if optimize is True. By default all tensors are copied. num_workers (int) – Number of workers to be used for the optimization process. Only applicable if optimize=True. Defaults to 0",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"scheduler (str) – The scheduler to be used for optimization . Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to use progressbar for optimization. Only applicable if optimize=True. Defaults to True. Returns: The loaded view. Return type: Dataset Raises: KeyError – if view with given id does not exist. log()¶ Displays the details of all the past commits. property max_len¶ Return the maximum length of the tensor. property max_view¶ Returns a view of the dataset in which shorter tensors are padded with None s to have the same length as the longest tensor. Example Creating a dataset with 5 images and 4 labels. ds.max_view will return a view with labels tensor padded to have 5 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.max_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] (20, 20, 3) [None] Copy to clipboard merge(target_id: str, conflict_resolution: str | None = None, delete_removed_tensors: bool = False, force: bool = False)¶ Merges the target_id into the current dataset. Parameters: target_id (str) – The commit_id or branch to merge. conflict_resolution (str, Optional) – The strategy to use to resolve merge conflicts. Conflicts are scenarios where both the current dataset and the target id have made changes to the same sample/s since their common ancestor. Must be one of the following None - this is the default value, will raise an exception if there are conflicts. 'ours' - during conflicts, values from the current dataset will be used. 
'theirs' - during conflicts, values from target id will be used. delete_removed_tensors (bool) – If True, deleted tensors will be deleted from the dataset. force (bool) – Forces merge. force=True will have these effects in the following cases of merge conflicts: If tensor is renamed on target but is missing from HEAD, renamed tensor will be registered as a new tensor on current branch. If tensor is renamed on both target and current branch, tensor on target will be registered as a new tensor on current branch. If tensor is renamed on target and a new tensor of the new name was created on the current branch, they will be merged. Raises: Exception – If dataset is a filtered view. ValueError – If the conflict resolution strategy is not one of None, 'ours', or 'theirs'. property meta: DatasetMeta¶ Returns the metadata of the dataset. property min_len¶ Return the minimum length of the tensor",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"property min_view¶ Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Example Creating a dataset with 5 images and 4 labels. ds.min_view will return a view in which tensors are sliced to have 4 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.min_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] property no_view_dataset¶ Returns the same dataset without slicing. property num_samples: int¶ Returns the length of the smallest tensor. Ignores any applied indexing and returns the total length. property parent¶ Returns the parent of this group. Returns None if this is the root dataset. property pending_commit_id: str¶ The commit_id of the next commit that will be made to the dataset. If you're not at the head of the current branch, this will be the same as the commit_id. pop(index: int | None = None)¶ Removes a sample from all the tensors of the dataset. For any tensor, if index >= len(tensor), the sample won't be popped from it. Parameters: index (int, Optional) – The index of the sample to be removed. If it is None, the index becomes the length of the longest tensor - 1. Raises: IndexError – If the index is out of range. populate_creds(creds_key: str, creds: dict)¶ Populates the creds key added in add_creds_key with the given creds. These creds are used to fetch the external data. This needs to be done every time the dataset is reloaded for datasets that contain links to external data.
Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # populate the creds >>> ds.populate_creds('my_s3_key', {'aws_access_key_id': 'my_access_key', 'aws_secret_access_key': 'my_secret_key'}) pytorch(transform: Callable | None = None, tensors: Sequence[str] | None = None, num_workers: int = 1, batch_size: int = 1, drop_last: bool = False, collate_fn: Callable | None = None, pin_memory: bool = False, shuffle: bool = False, buffer_size: int = 2048, use_local_cache: bool = False, progressbar: bool = False, return_index: bool = True, pad_tensors: bool = False, transform_kwargs: Dict[str, Any] | None = None, torch_dataset=None, decode_method: Dict[str, str] | None = None, *args, **kwargs)¶ Converts the dataset into a pytorch Dataloader. Parameters: *args – Additional args to be passed to torch_dataset **kwargs – Additional kwargs to be passed to torch_dataset transform (Callable, Optional) – Transformation function to be applied to each sample. tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"Note Pytorch does not support uint16, uint32, uint64 dtypes. These are implicitly cast to int32, int64 and int64 respectively. This spins up its own workers to fetch data. query(query_string: str)¶ Returns a sliced Dataset with given query results. To use this, install deeplake with pip install deeplake[enterprise]. It allows you to run SQL-like queries on the dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters: query_string (str) – An SQL string, adjusted with new functionalities, to run on the given Dataset object. Returns: A Dataset object. Return type: Dataset Examples Query all the samples from the dataset with labels other than 5 >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds = ds.query('select * where labels != 5') Query the first 1000 samples where the categories contain 'car' and the first 1000 samples where the categories contain 'motorcycle' >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.query("(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)") Restrictions Querying datasets is part of our Growth and Enterprise Plan. Users of our Community plan can only perform queries on Activeloop datasets ('hub://activeloop/…' datasets). To run queries on your own datasets, upgrade your organization's plan. random_split(lengths: Sequence[int | float])¶ Splits the dataset into non-overlapping Dataset objects of given lengths. If a list of fractions that sum up to 1 is given, the lengths will be computed automatically as floor(frac * len(dataset)) for each fraction provided. The split generated is only performant with the enterprise dataloader, which can be installed with pip install deeplake[enterprise]. After computing the lengths, if there are any remainders, 1 count will be distributed in round-robin fashion to the lengths until there are no remainders left.
Example >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('labels', htype='class_label') >>> ds.labels.extend([0, 1, 2, 1, 3]) >>> len(ds) 5 >>> train_ds, val_ds = ds.random_split([0.8, 0.2]) >>> len(train_ds) 4 >>> len(val_ds) 1 >>> train_ds, val_ds = ds.random_split([3, 2]) >>> len(train_ds) 3 >>> len(val_ds) 2 >> train_loader = train_ds.pytorch(batch_size=2, shuffle=True) >> val_loader = val_ds.pytorch(batch_size=2, shuffle=False) Copy to clipboard Parameters: lengths (Sequence[Union[int, float]]) – lengths or fractions of splits to be produced. Returns: a tuple of datasets of the given lengths. Return type: Tuple[Dataset, …] Raises: ValueError – If the sum of the lengths is not equal to the length of the dataset. ValueError – If the dataset has variable length tensors. ValueError – If lengths are floats and one or more of them are not between 0 and 1. property read_only¶ Returns True if dataset is in read-only mode and False otherwise. rechunk(tensors: str | List[str] | None = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True)¶ Rewrites the underlying chunks to make their sizes optimal",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label) . num_workers (int) – The number of workers to use for fetching data in parallel. batch_size (int) – Number of samples per batch to load. Default value is 1. drop_last (bool) – Set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. if False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. Default value is False. Read torch.utils.data.DataLoader docs for more details. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. Read torch.utils.data.DataLoader docs for more details. pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. Default value is False. Read torch.utils.data.DataLoader docs for more details. shuffle (bool) – If True, the data loader will shuffle the data indices. Default value is False. Details about how Deep Lake shuffles data can be found at Shuffling in ds.pytorch() buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. use_local_cache (bool) – If True, the data loader will use a local cache to store data. The default cache location is ~/.activeloop/cache, but it can be changed by setting the LOCAL_CACHE_PREFIX environment variable. This is useful when the dataset can fit on the machine and we don't want to fetch the data multiple times for each iteration. Default value is False progressbar (bool) – If True, tqdm will be wrapped around the returned dataloader. Default value is True. return_index (bool) – If True, the returned dataloader will have a key 'index' that contains the index of the sample(s) in the original dataset. Default value is True. pad_tensors (bool) – If True, shorter tensors will be padded to the length of the longest tensor. Default value is False. 
transform_kwargs (optional, Dict[str, Any]) – Additional kwargs to be passed to transform. torch_dataset (None) – dataset type that going to be used in dataloader decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns: A torch.utils.data.DataLoader object. Raises: EmptyTensorError – If one or more tensors being passed to pytorch are empty",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"This is usually needed in cases where a lot of updates have been made to the data . Parameters: tensors (str, List[str], Optional) – Name/names of the tensors to rechunk. If None, all tensors in the dataset are rechunked. num_workers (int) – The number of workers to use for rechunking. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for rechunking. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar If True (default). rename(path: str | Path)¶ Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters: path (str, pathlib.Path) – New path to the dataset. Raises: RenameError – If path points to a different directory. rename_group(name: str, new_name: str) → None¶ Renames group with name name to new_name Parameters: name (str) – Name of group to be renamed. new_name (str) – New name of group. Raises: TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorGroupNameError – If name is in dataset attributes. RenameError – If new_name points to a group different from name. rename_tensor(name: str, new_name: str) → Tensor¶ Renames tensor with name name to new_name Parameters: name (str) – Name of tensor to be renamed. new_name (str) – New name of tensor. Returns: Renamed tensor. Return type: Tensor Raises: TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If new_name is in dataset attributes. RenameError – If new_name points to a group different from name. reset()¶ Resets the uncommitted changes present in the branch. Note The uncommitted data is deleted from underlying storage, this is not a reversible operation. property root¶ Returns the root dataset of a group. sample_by(weights: str | list | tuple, replace: bool | None = True, size: int | None = None)¶ Returns a sliced Dataset with given weighted sampler applied. To use this, install deeplake with pip install deeplake[enterprise]. Parameters: weights – (Union[str, list, tuple]): If it's string then tql will be run to calculate the weights based on the expression. list and tuple will be treated as the list of the weights per sample. 
replace – Optional[bool] If True, the samples can be repeated in the result view. Defaults to True. size – Optional[int] The length of the result view. Defaults to the length of the dataset. Returns: A deeplake.Dataset object",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"Return type: Dataset Examples Sample the dataset with labels == 5 twice as often as labels == 6 >>> from deeplake.experimental import query >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Sample the dataset treating the labels tensor as weights. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('labels') Sample the dataset with the given weights: >>> ds = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(len(ds)): ... weights.append(i % 5) ... >>> sampled_ds = ds.sample_by(weights, replace=False) Restrictions Sampling datasets is part of our Growth and Enterprise Plan. Users of our Community plan can only use sample_by on Activeloop datasets ('hub://activeloop/…' datasets). To use sampling functionality on your own datasets, upgrade your organization's plan. property sample_indices¶ Returns all the indices pointed to by this dataset view. save_view(message: str | None = None, path: str | Path | None = None, id: str | None = None, optimize: bool = False, tensors: List[str] | None = None, num_workers: int = 0, scheduler: str = 'threaded', verbose: bool = True, **ds_args) → str¶ Saves a dataset view as a virtual dataset (VDS). Examples >>> # Save to specified path >>> vds_path = ds[:10].save_view(path='views/first_10', id='first_10') >>> vds_path views/first_10 >>> # Path unspecified >>> vds_path = ds[:100].save_view(id='first_100', message='first 100 samples') >>> # vds_path = path/to/dataset >>> # Random id >>> vds_path = ds[:100].save_view() >>> # vds_path = path/to/dataset/.queries/92f41922ed0471ec2d27690b7351fc96bea060e6c5ee22b14f7ffa5f291aa068 See Dataset.get_view() to learn how to load views by id. These virtual datasets can also be loaded from their path like normal datasets. Parameters: message (Optional, str) – Custom user message. path (Optional, str, pathlib.Path) – The VDS will be saved as a standalone dataset at the specified path. If not specified, the VDS is saved under the .queries subdirectory of the source dataset's storage. If the user doesn't have write access to the source dataset and the source dataset is a Deep Lake cloud dataset, then the VDS is saved under the user's Deep Lake account and can be accessed using deeplake.load(f'hub://{username}/queries/{query_hash}'). id (Optional, str) – Unique id for this view. A random id will be generated if not specified. optimize (bool) – If True, the dataset view will be optimized by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. You can also choose to optimize the saved view later by calling its ViewEntry.optimize() method. tensors (List, optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied.
num_workers (int) – Number of workers to be used for optimization process. Applicable only if optimize=True",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Copy to clipboard Parameters: creds_key (str) – The key to be added . managed (bool) – If True, the creds corresponding to the key will be fetched from activeloop platform. Note, this is only applicable for datasets that are connected to activeloop platform. Defaults to False. change_creds_management(creds_key: str, managed: bool)¶ Changes the management status of the creds key. Parameters: creds_key (str) – The key whose management status is to be changed. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises: ValueError – If the dataset is not connected to activeloop platform. KeyError – If the creds key is not present in the dataset. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Change the management status of the key to True. Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.change_creds_management('my_s3_key', True) Copy to clipboard property client¶ Returns the client of the dataset. connect(*args, **kwargs)¶ Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token) >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Copy to clipboard Parameters: creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be infered from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Raises: InvalidSourcePathError – If the dataset's path is not a valid s3 or gcs path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. delete(large_ok=False)¶ Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. Parameters: large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"Defaults to 0 . 
scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. verbose (bool) – If True, logs will be printed. Defaults to True. ds_args (dict) – Additional args for creating VDS when path is specified. (See documentation for deeplake.dataset()) Returns: Path to the saved VDS. Return type: str Raises: ReadOnlyModeError – When attempting to save a view inplace and the user doesn't have write access. DatasetViewSavingError – If HEAD node has uncommitted changes. Note Specifying path makes the view external. External views cannot be accessed using the parent dataset's Dataset.get_view(), Dataset.load_view(), Dataset.delete_view() methods. They have to be loaded using deeplake.load(). size_approx()¶ Estimates the size in bytes of the dataset. Includes only content, so will generally return an under-estimate. summary()¶ Prints a summary of the dataset. tensorflow(tensors: Sequence[str] | None = None, tobytes: bool | Sequence[str] = False, fetch_chunks: bool = True)¶ Converts the dataset into a tensorflow compatible format. See https://www.tensorflow.org/api_docs/python/tf/data/Dataset Parameters: tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). tobytes (bool) – If True, samples will not be decompressed and their raw bytes will be returned instead of numpy arrays. Can also be a list of tensors, in which case those tensors alone will not be decompressed. fetch_chunks – See fetch_chunks argument in deeplake.core.tensor.Tensor.numpy() Returns: tf.data.Dataset object that can be used for tensorflow training. property tensors: Dict[str, Tensor]¶ All tensors belonging to this group, including those within sub groups. Always returns the sliced tensors. property token¶ Get attached token of the dataset update_creds_key(old_creds_key: str, new_creds_key: str)¶ Replaces the old creds key with the new creds key. This is used to replace the creds key used for external data. visualize(width: int | str | None = None, height: int | str | None = None)¶ Visualizes the dataset in the Jupyter notebook. Parameters: width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises: Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab. DeepLakeCloudDataset¶ class deeplake.core.dataset.DeepLakeCloudDataset¶ Bases: Dataset Subclass of Dataset. Deep Lake cloud datasets are those datasets which are stored on Activeloop servers, their paths look like: hub://username/dataset_name. add_creds_key(creds_key: str, managed: bool = False)¶ Adds a new creds key to the dataset. 
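For the tensorflow() conversion documented above, a minimal, hedged iteration sketch; it assumes tensorflow is installed and uses the public dataset and tensor names that appear in this reference's other examples:
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/fashion-mnist-train')
>>> tf_ds = ds.tensorflow(tensors=['images', 'labels'])
>>> tf_ds = tf_ds.batch(32)      # standard tf.data chaining applies to the returned object
>>> for batch in tf_ds.take(1):
...     pass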
These keys are used for tensors that are linked to external data",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.change_creds_management('my_s3_key', True) Copy to clipboard checkout(address: str, create: bool = False) → str | None¶ Checks out to a specific commit_id or branch . If create = True, creates a new branch with name address. Parameters: address (str) – The commit_id or branch to checkout to. create (bool) – If True, creates a new branch with name as address. Returns: The commit_id of the dataset after checkout. Return type: Optional[str] Raises: Exception – If the dataset is a filtered view. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('abc') Tensor(key='abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit() >>> ds.checkout('alt', create=True) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.append([4, 5, 6]) >>> ds.abc.numpy() array([[1, 2, 3], [4, 5, 6]]) >>> ds.checkout(first_commit) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.numpy() array([[1, 2, 3]]) Copy to clipboard Note Checkout from a head node in any branch that contains uncommitted data will lead to an automatic commit before the checkout. clear_cache()¶ Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. This is useful if you have multiple datasets with memory caches open, taking up too much RAM. Also useful when local cache is no longer needed for certain datasets and is taking up storage space. property client¶ Returns the client of the dataset. commit(message: str | None = None, allow_empty=False) → str¶ Stores a snapshot of the current state of the dataset. Parameters: message (str, Optional) – Used to describe the commit. allow_empty (bool) – If True, commit even if there are no changes. Returns: the commit id of the saved commit that can be used to access the snapshot. Return type: str Raises: Exception – If dataset is a filtered view. EmptyCommitError – if there are no changes and user does not forced to commit unchanged data. Note Commiting from a non-head node in any branch, will lead to an automatic checkout to a new branch. This same behaviour will happen if new samples are added or existing samples are updated from a non-head node. property commit_id: str | None¶ The lasted committed commit id of the dataset. If there are no commits, this returns None. property commits: List[Dict]¶ Lists all the commits leading to the current dataset state. Returns: List of dictionaries containing commit information",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplake-core-dataset,"Raises: DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False . property is_actually_cloud: bool¶ Datasets that are connected to Deep Lake cloud can still technically be stored anywhere. If a dataset is in Deep Lake cloud but stored without hub:// prefix, it should only be used for testing. rename(path)¶ Renames the dataset to path. 
Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters: path (str, pathlib.Path) – New path to the dataset. Raises: RenameError – If path points to a different directory. property token¶ Get attached token of the dataset update_creds_key(old_creds_key: str, new_creds_key: str)¶ Replaces the old creds key with the new creds key. This is used to replace the creds key used for external data. visualize(width: int | str | None = None, height: int | str | None = None)¶ Visualizes the dataset in the Jupyter notebook. Parameters: width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises: Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab. ViewEntry¶ class deeplake.core.dataset.ViewEntry¶ Represents a view saved inside a dataset. delete()¶ Deletes the view. property id: str¶ Returns id of the view. load(verbose=True)¶ Loads the view and returns the Dataset. Parameters: verbose (bool) – If True, logs will be printed. Defaults to True. Returns: Loaded dataset view. Return type: Dataset property message: str¶ Returns the message with which the view was saved. optimize(tensors: List[str] | None = None, unlink=True, num_workers=0, scheduler='threaded', progressbar=True)¶ Optimizes the dataset view by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. Example >>> # save view >>> ds[:10].save_view(id='first_10') >>> # optimize view >>> ds.get_view('first_10').optimize() >>> # load optimized view >>> ds.load_view('first_10') Copy to clipboard Parameters: tensors (List[str]) – Tensors required in the optimized view. By default all tensors are copied. unlink (bool) – If True, this unlinks linked tensors (if any) by copying data from the links to the view. This does not apply to linked videos. Set deeplake.constants._UNLINK_VIDEOS to True to change this behavior. num_workers (int) – Number of workers to be used for the optimization process. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to display a progressbar. Returns: ViewEntry",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"connect(creds_key: str, dest_path: str | None = None, org_id: str | None = None, ds_name: str | None = None, token: str | None = None)¶ Connect a Deep Lake cloud dataset through a deeplake path . Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token) >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Copy to clipboard Parameters: creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. 
ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be infered from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Raises: InvalidSourcePathError – If the dataset's path is not a valid s3 or gcs path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. copy(dest: str | Path, tensors: List[str] | None = None, overwrite: bool = False, creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False)¶ Copies this dataset or dataset view to dest. Version control history is not included. Parameters: dest (str, pathlib.Path) – Destination dataset or path to copy to. If a Dataset instance is provided, it is expected to be empty. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. creds (dict, Optional) – creds required to create / overwrite datasets at dest. token (str, Optional) – token used to for fetching credentials to dest. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar If True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. Returns: New dataset object. Return type: Dataset Raises: DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. create_group(name: str, exist_ok=False) → Dataset¶ Creates a tensor group. Intermediate groups in the path are also created. Parameters: name – The name of the group to create. exist_ok – If True, the group is created if it does not exist",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"For example, (2, 5) will downsample the tensor by a factor of 2 in both dimensions and create 5 layers of downsampled tensors . Only support for image and mask htypes. **kwargs – htype defaults can be overridden by passing any of the compatible parameters. To see all htypes and their correspondent arguments, check out Htypes. Returns: The new tensor, which can be accessed by dataset[name] or dataset.name. Return type: Tensor Raises: TensorAlreadyExistsError – If the tensor already exists and exist_ok is False. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If name is in dataset attributes. NotImplementedError – If trying to override chunk_compression. TensorMetaInvalidHtype – If invalid htype is specified. ValueError – If an illegal argument is specified. create_tensor_like(name: str, source: Tensor, unlink: bool = False) → Tensor¶ Copies the source tensor's meta information and creates a new tensor with it. No samples are copied, only the meta/info for the tensor is. Examples >>> ds.create_tensor_like('cats', ds['images']) Copy to clipboard Parameters: name (str) – Name for the new tensor. source (Tensor) – Tensor who's meta/info will be copied. May or may not be contained in the same dataset. unlink (bool) – Whether to unlink linked tensors. 
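The copy() method described above can also be applied to a view, which is one way to materialize a slice into a standalone dataset; a hedged sketch with a placeholder destination path:
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/fashion-mnist-train')
>>> subset = ds[:100].copy('./fashion_subset', overwrite=True, num_workers=2)
>>> len(subset)
100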
Returns: New Tensor object. Return type: Tensor dataloader()¶ Returns a DeepLakeDataLoader object. To use this, install deeplake with pip install deeplake[enterprise]. Returns: A deeplake.enterprise.DeepLakeDataLoader object. Return type: DeepLakeDataLoader Examples Creating a simple dataloader object which returns a batch of numpy arrays >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> train_loader = ds_train.dataloader().numpy() >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Creating dataloader with custom transformation and batch size >>> import deeplake >>> import torch >>> from torchvision import datasets, transforms, models >>> >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size and returns batch of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Creating dataloader and chaining with query >>> ds = deeplake.load('hub://activeloop/coco-train') >>> train_loader = ds_train.dataloader()\ ... .query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)')\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ..",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"If False, an error is raised if the group already exists . Defaults to False. Returns: The created group. Raises: TensorGroupAlreadyExistsError – If the group already exists and exist_ok is False. Examples >>> ds.create_group('images') >>> ds['images'].create_tensor('cats') Copy to clipboard >>> ds.create_groups('images/jpg/cats') >>> ds['images'].create_tensor('png') >>> ds['images/jpg'].create_group('dogs') Copy to clipboard create_tensor(name: str, htype: str = 'unspecified', dtype: str | dtype = 'unspecified', sample_compression: str = 'unspecified', chunk_compression: str = 'unspecified', hidden: bool = False, create_sample_info_tensor: bool = True, create_shape_tensor: bool = True, create_id_tensor: bool = True, verify: bool = True, exist_ok: bool = False, verbose: bool = True, downsampling: Tuple[int, int] | None = None, **kwargs)¶ Creates a new tensor in the dataset. Examples >>> # create dataset >>> ds = deeplake.dataset('path/to/dataset') Copy to clipboard >>> # create tensors >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('data') >>> ds.create_tensor('point_clouds', htype='point_cloud') Copy to clipboard >>> # append data >>> ds.images.append(np.ones((400, 400, 3), dtype='uint8')) >>> ds.videos.append(deeplake.read('videos/sample_video.mp4')) >>> ds.data.append(np.zeros((100, 100, 2))) Copy to clipboard Parameters: name (str) – The name of the tensor to be created. 
htype (str) – The class of data for the tensor. The defaults for other parameters are determined in terms of this value. For example, htype='image' would have dtype default to uint8. These defaults can be overridden by explicitly passing any of the other parameters to this function. May also modify the defaults for other parameters. dtype (str) – Optionally override this tensor's dtype. All subsequent samples are required to have this dtype. sample_compression (str) – All samples will be compressed in the provided format. If None, samples are uncompressed. For link[] tensors, sample_compression is used only for optimizing dataset views. chunk_compression (str) – All chunks will be compressed in the provided format. If None, chunks are uncompressed. For link[] tensors, chunk_compression is used only for optimizing dataset views. hidden (bool) – If True, the tensor will be hidden from ds.tensors but can still be accessed via ds[tensor_name]. create_sample_info_tensor (bool) – If True, meta data of individual samples will be saved in a hidden tensor. This data can be accessed via tensor[i].sample_info. create_shape_tensor (bool) – If True, an associated tensor containing shapes of each sample will be created. create_id_tensor (bool) – If True, an associated tensor containing unique ids for each sample will be created. This is useful for merge operations. verify (bool) – Valid only for link htypes. If True, all links will be verified before they are added to the tensor. verify is always True even if specified as False if create_shape_tensor or create_sample_info_tensor is True. exist_ok (bool) – If True, the group is created if it does not exist. if False, an error is raised if the group already exists. verbose (bool) – Shows warnings if True. downsampling (tuple[int, int]) – If not None, the tensor will be downsampled by the provided factors",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"pass Copy to clipboard Restrictions The new high performance C++ dataloader is part of our Growth and Enterprise Plan . Users of our Community plan can create dataloaders on Activeloop datasets ('hub://activeloop/…' datasets). To run queries on your own datasets, upgrade your organization's plan. delete(large_ok=False)¶ Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. Parameters: large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False. Raises: DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. delete_group(name: str, large_ok: bool = False)¶ Delete a tensor group from the dataset. Examples >>> ds.delete_group('images/dogs') Copy to clipboard Parameters: name (str) – The name of tensor group to be deleted. large_ok (bool) – Delete tensor groups larger than 1 GB. Disabled by default. Returns: None Raises: TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. delete_tensor(name: str, large_ok: bool = False)¶ Delete a tensor from the dataset. Examples >>> ds.delete_tensor('images/cats') Copy to clipboard Parameters: name (str) – The name of tensor to be deleted. large_ok (bool) – Delete tensors larger than 1 GB. Disabled by default. Returns: None Raises: TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorTooLargeToDelete – If the tensor is larger than 1 GB and large_ok is False. delete_view(id: str)¶ Deletes the view with given view id. 
Parameters: id (str) – Id of the view to delete. Raises: KeyError – if view with given id does not exist. diff(id_1: str | None = None, id_2: str | None = None, as_dict=False) → Dict | None¶ Returns/displays the differences between commits/branches. For each tensor this contains information about the sample indexes that were added/modified as well as whether the tensor was created. Parameters: id_1 (str, Optional) – The first commit_id or branch name. id_2 (str, Optional) – The second commit_id or branch name. as_dict (bool, Optional) – If True, returns the diff as lists of commit wise dictionaries. Returns: Optional[Dict] Raises: ValueError – If id_1 is None and id_2 is not None. Note If both id_1 and id_2 are None, the differences between the current state and the previous commit will be calculated. If you're at the head of the branch, this will show the uncommitted changes, if any. If only id_1 is provided, the differences between the current state and id_1 will be calculated. If you're at the head of the branch, this will take into account the uncommitted changes, if any. If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, the differences between id_1 and id_2 will be calculated",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"Example Following filters are identical and return dataset view where all the samples have label equals to 2 . >>> dataset.filter(lambda sample: sample.labels.numpy() == 2) >>> dataset.filter('labels == 2') Copy to clipboard flush()¶ Necessary operation after writes if caches are being used. Writes all the dirty data from the cache layers (if any) to the underlying storage. Here dirty data corresponds to data that has been changed/assigned and but hasn't yet been sent to the underlying storage. get_commit_details(commit_id) → Dict¶ Get details of a particular commit. Parameters: commit_id (str) – commit id of the commit. Returns: Dictionary of details with keys - commit, author, time, message. Return type: Dict Raises: KeyError – If given commit_id is was not found in the dataset. get_creds_keys() → List[str]¶ Returns the list of creds keys added to the dataset. These are used to fetch external data in linked tensors get_view(id: str) → ViewEntry¶ Returns the dataset view corresponding to id. Examples >>> # save view >>> ds[:100].save_view(id='first_100') >>> # load view >>> first_100 = ds.get_view('first_100').load() >>> # 100 >>> print(len(first_100)) Copy to clipboard See Dataset.save_view() to learn more about saving views. Parameters: id (str) – id of required view. Returns: ViewEntry Raises: KeyError – If no such view exists. get_views(commit_id: str | None = None) → List[ViewEntry]¶ Returns list of views stored in this Dataset. Parameters: commit_id (str, optional) – Commit from which views should be returned. If not specified, views from current commit is returned. If not specified, views from the currently checked out commit will be returned. Returns: List of ViewEntry instances. Return type: List[ViewEntry] property groups: Dict[str, Dataset]¶ All sub groups in this group property has_head_changes¶ Returns True if currently at head node and uncommitted changes are present. property info¶ Returns the information about the dataset. property is_head_node¶ Returns True if the current commit is the head node of the branch and False otherwise. property is_view: bool¶ Returns True if this dataset is a view and False otherwise. 
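Tying together the view-related methods above (save_view(), get_views(), and load_view()), a minimal end-to-end sketch; the local path, view id, and slice are illustrative:
>>> import deeplake
>>> ds = deeplake.dataset('../test/test_ds', overwrite=True)
>>> ds.create_tensor('labels', htype='class_label')
>>> ds.labels.extend([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> commit_id = ds.commit('initial data')    # save_view expects no uncommitted changes on HEAD
>>> vds_path = ds[:5].save_view(id='first_5', message='first five samples')
>>> [entry.id for entry in ds.get_views()]
['first_5']
>>> view = ds.load_view('first_5')
>>> len(view)
5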
load_view(id: str, optimize: bool | None = False, tensors: List[str] | None = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool | None = True)¶ Loads the view and returns the Dataset by id. Equivalent to ds.get_view(id).load(). Parameters: id (str) – id of the view to be loaded. optimize (bool) – If True, the dataset view is optimized by copying and rechunking the required data before loading. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. tensors (Optional, List[str]) – Tensors to be copied if optimize is True. By default all tensors are copied. num_workers (int) – Number of workers to be used for the optimization process. Only applicable if optimize=True. Defaults to 0",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"Note A dictionary of the differences between the commits/branches is returned if as_dict is True . The dictionary will always have 2 keys, 'dataset' and 'tensors'. The values corresponding to these keys are detailed below: If id_1 and id_2 are None, both the keys will have a single list as their value. This list will contain a dictionary describing changes compared to the previous commit. If only id_1 is provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from current state and id_1 to most recent common ancestor the commits respectively. If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from id_1 and id_2 to most recent common ancestor the commits respectively. None is returned if as_dict is False. extend(samples: Dict[str, Any], skip_ok: bool = False)¶ Appends multiple rows of samples to mutliple tensors at once. This method expects all tensors being updated to be of the same length. Parameters: samples (Dict[str, Any]) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in samples if set to True. Raises: KeyError – If any tensor in the dataset is not a key in samples and skip_ok is False. TensorDoesNotExistError – If tensor in samples does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. filter(function: Callable | str, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, save_result: bool = False, result_path: str | None = None, result_ds_args: dict | None = None)¶ Filters the dataset in accordance of filter function f(x: sample) -> bool Parameters: function (Callable, str) – Filter function that takes sample as argument and returns True / False if sample should be included in result. Also supports simplified expression evaluations. See deeplake.core.query.query.DatasetQuery for more details. num_workers (int) – Level of parallelization of filter evaluations. 0 indicates in-place for-loop evaluation, multiprocessing is used otherwise. scheduler (str) – Scheduler to use for multiprocessing evaluation. 'threaded' is default. progressbar (bool) – Display progress bar while filtering. True is default. 
save_result (bool) – If True, the result of the filter will be saved to a dataset asynchronously. result_path (Optional, str) – Path to save the filter result. Only applicable if save_result is True. result_ds_args (Optional, dict) – Additional args for the result dataset. Only applicable if save_result is True. Returns: View of the Dataset with elements that satisfy the filter function",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"property min_view¶ Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Example Creating a dataset with 5 images and 4 labels. ds.min_view will return a view in which tensors are sliced to have 4 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.min_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] Copy to clipboard property no_view_dataset¶ Returns the same dataset without slicing. property num_samples: int¶ Returns the length of the smallest tensor. Ignores any applied indexing and returns the total length. property parent¶ Returns the parent of this group. Returns None if this is the root dataset. property pending_commit_id: str¶ The commit_id of the next commit that will be made to the dataset. If you're not at the head of the current branch, this will be the same as the commit_id. pop(index: int | None = None)¶ Removes a sample from all the tensors of the dataset. For any tensor, if index >= len(tensor), the sample won't be popped from it. Parameters: index (int, Optional) – The index of the sample to be removed. If it is None, the index becomes the length of the longest tensor - 1. Raises: IndexError – If the index is out of range. populate_creds(creds_key: str, creds: dict)¶ Populates the creds key added in add_creds_key with the given creds. These creds are used to fetch the external data. This needs to be done every time the dataset is reloaded for datasets that contain links to external data. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # populate the creds >>> ds.populate_creds('my_s3_key', {'aws_access_key_id': 'my_access_key', 'aws_secret_access_key': 'my_secret_key'}) Copy to clipboard pytorch(transform: Callable | None = None, tensors: Sequence[str] | None = None, num_workers: int = 1, batch_size: int = 1, drop_last: bool = False, collate_fn: Callable | None = None, pin_memory: bool = False, shuffle: bool = False, buffer_size: int = 2048, use_local_cache: bool = False, progressbar: bool = False, return_index: bool = True, pad_tensors: bool = False, transform_kwargs: Dict[str, Any] | None = None, torch_dataset=None, decode_method: Dict[str, str] | None = None, *args, **kwargs)¶ Converts the dataset into a pytorch Dataloader. Parameters: *args – Additional args to be passed to torch_dataset **kwargs – Additional kwargs to be passed to torch_dataset transform (Callable, Optional) – Transformation function to be applied to each sample.
tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"scheduler (str) – The scheduler to be used for optimization. Supported values include: 'serial', 'threaded', 'processed' and 'ray'. Only applicable if optimize=True. Defaults to 'threaded'. progressbar (bool) – Whether to use a progressbar for optimization. Only applicable if optimize=True. Defaults to True. Returns: The loaded view. Return type: Dataset Raises: KeyError – if view with given id does not exist. log()¶ Displays the details of all the past commits. property max_len¶ Return the maximum length of the tensor. property max_view¶ Returns a view of the dataset in which shorter tensors are padded with Nones to have the same length as the longest tensor. Example Creating a dataset with 5 images and 4 labels. ds.max_view will return a view with the labels tensor padded to have 5 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.max_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] (20, 20, 3) [None] Copy to clipboard merge(target_id: str, conflict_resolution: str | None = None, delete_removed_tensors: bool = False, force: bool = False)¶ Merges the target_id into the current dataset. Parameters: target_id (str) – The commit_id or branch to merge. conflict_resolution (str, Optional) – The strategy to use to resolve merge conflicts. Conflicts are scenarios where both the current dataset and the target id have made changes to the same sample/s since their common ancestor. Must be one of the following: None - this is the default value, will raise an exception if there are conflicts. 'ours' - during conflicts, values from the current dataset will be used. 'theirs' - during conflicts, values from the target id will be used. delete_removed_tensors (bool) – If True, deleted tensors will be deleted from the dataset. force (bool) – Forces merge. force=True will have these effects in the following cases of merge conflicts: If a tensor is renamed on target but is missing from HEAD, the renamed tensor will be registered as a new tensor on the current branch. If a tensor is renamed on both target and current branch, the tensor on target will be registered as a new tensor on the current branch. If a tensor is renamed on target and a new tensor of the new name was created on the current branch, they will be merged. Raises: Exception – if the dataset is a filtered view. ValueError – if the conflict resolution strategy is not one of None, 'ours', or 'theirs'. property meta: DatasetMeta¶ Returns the metadata of the dataset. property min_len¶ Return the minimum length of the tensor",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"Note Pytorch does not support uint16, uint32, uint64 dtypes. These are implicitly cast to int32, int64 and int64 respectively. This spins up its own workers to fetch data. query(query_string: str)¶ Returns a sliced Dataset with given query results.
To use this, install deeplake with pip install deeplake[enterprise]. It allows you to run SQL-like queries on the dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters: query_string (str) – An SQL string, adjusted with new functionalities, to run on the given Dataset object. Returns: A Dataset object. Return type: Dataset Examples Query all the samples from the dataset with labels other than 5 >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds = ds.query('select * where labels != 5') Copy to clipboard Query the first 1000 samples from the dataset where categories contains 'car' and the first 1000 samples where categories contains 'motorcycle' >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.query("(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)") Copy to clipboard Restrictions Querying datasets is part of our Growth and Enterprise Plan. Users of our Community plan can only perform queries on Activeloop datasets ('hub://activeloop/…' datasets). To run queries on your own datasets, upgrade your organization's plan. random_split(lengths: Sequence[int | float])¶ Splits the dataset into non-overlapping Dataset objects of given lengths. If a list of fractions that sum up to 1 is given, the lengths will be computed automatically as floor(frac * len(dataset)) for each fraction provided. The split generated is only performant with the enterprise dataloader, which can be installed with pip install deeplake[enterprise]. After computing the lengths, if there are any remainders, 1 count will be distributed in round-robin fashion to the lengths until there are no remainders left. Example >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('labels', htype='class_label') >>> ds.labels.extend([0, 1, 2, 1, 3]) >>> len(ds) 5 >>> train_ds, val_ds = ds.random_split([0.8, 0.2]) >>> len(train_ds) 4 >>> len(val_ds) 1 >>> train_ds, val_ds = ds.random_split([3, 2]) >>> len(train_ds) 3 >>> len(val_ds) 2 >>> train_loader = train_ds.pytorch(batch_size=2, shuffle=True) >>> val_loader = val_ds.pytorch(batch_size=2, shuffle=False) Copy to clipboard Parameters: lengths (Sequence[Union[int, float]]) – lengths or fractions of splits to be produced. Returns: a tuple of datasets of the given lengths. Return type: Tuple[Dataset, …] Raises: ValueError – If the sum of the lengths is not equal to the length of the dataset. ValueError – If the dataset has variable length tensors. ValueError – If lengths are floats and one or more of them are not between 0 and 1. property read_only¶ Returns True if the dataset is in read-only mode and False otherwise. rechunk(tensors: str | List[str] | None = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True)¶ Rewrites the underlying chunks to make their sizes optimal",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). num_workers (int) – The number of workers to use for fetching data in parallel. batch_size (int) – Number of samples per batch to load. Default value is 1. drop_last (bool) – Set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size.
If False and the size of the dataset is not divisible by the batch size, then the last batch will be smaller. Default value is False. Read torch.utils.data.DataLoader docs for more details. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. Read torch.utils.data.DataLoader docs for more details. pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. Default value is False. Read torch.utils.data.DataLoader docs for more details. shuffle (bool) – If True, the data loader will shuffle the data indices. Default value is False. Details about how Deep Lake shuffles data can be found at Shuffling in ds.pytorch(). buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. use_local_cache (bool) – If True, the data loader will use a local cache to store data. The default cache location is ~/.activeloop/cache, but it can be changed by setting the LOCAL_CACHE_PREFIX environment variable. This is useful when the dataset can fit on the machine and we don't want to fetch the data multiple times for each iteration. Default value is False. progressbar (bool) – If True, tqdm will be wrapped around the returned dataloader. Default value is True. return_index (bool) – If True, the returned dataloader will have a key 'index' that contains the index of the sample(s) in the original dataset. Default value is True. pad_tensors (bool) – If True, shorter tensors will be padded to the length of the longest tensor. Default value is False. transform_kwargs (optional, Dict[str, Any]) – Additional kwargs to be passed to transform. torch_dataset (None) – The dataset type that is going to be used in the dataloader. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformations use torchvision transforms, which require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns: A torch.utils.data.DataLoader object. Raises: EmptyTensorError – If one or more tensors being passed to pytorch are empty",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"This is usually needed in cases where a lot of updates have been made to the data. Parameters: tensors (str, List[str], Optional) – Name/names of the tensors to rechunk. If None, all tensors in the dataset are rechunked. num_workers (int) – The number of workers to use for rechunking. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for rechunking. Supported values include: 'serial', 'threaded', 'processed' and 'ray'. Defaults to 'threaded'. progressbar (bool) – Displays a progress bar if True (default). rename(path: str | Path)¶ Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters: path (str, pathlib.Path) – New path to the dataset. Raises: RenameError – If path points to a different directory.
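Pulling together the ds.pytorch() arguments documented above, a minimal sketch of building a training dataloader could look as follows; the dataset and the tensor names ('images', 'labels') are assumptions for illustration rather than part of the reference.
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/fashion-mnist-train')
>>> dataloader = ds.pytorch(
...     tensors=['images', 'labels'],  # assumed tensor names
...     batch_size=32,
...     shuffle=True,
...     num_workers=2,
...     decode_method={'images': 'numpy'},
... )
>>> for batch in dataloader:
...     # hand the batch to your training step here
...     break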
rename_group(name: str, new_name: str) → None¶ Renames the group with name name to new_name. Parameters: name (str) – Name of the group to be renamed. new_name (str) – New name of the group. Raises: TensorGroupDoesNotExistError – If a tensor group of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorGroupNameError – If name is in dataset attributes. RenameError – If new_name points to a group different from name. rename_tensor(name: str, new_name: str) → Tensor¶ Renames the tensor with name name to new_name. Parameters: name (str) – Name of the tensor to be renamed. new_name (str) – New name of the tensor. Returns: Renamed tensor. Return type: Tensor Raises: TensorDoesNotExistError – If a tensor of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If new_name is in dataset attributes. RenameError – If new_name points to a group different from name. reset()¶ Resets the uncommitted changes present in the branch. Note The uncommitted data is deleted from the underlying storage; this is not a reversible operation. property root¶ Returns the root dataset of a group. sample_by(weights: str | list | tuple, replace: bool | None = True, size: int | None = None)¶ Returns a sliced Dataset with the given weighted sampler applied. To use this, install deeplake with pip install deeplake[enterprise]. Parameters: weights – (Union[str, list, tuple]): If it is a string, a TQL query will be run to calculate the weights based on the expression. A list or tuple will be treated as the list of weights per sample. replace – Optional[bool] If True, the samples can be repeated in the result view. Defaults to True. size – Optional[int] The length of the result view. Defaults to the length of the dataset. Returns: A deeplake.Dataset object",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"Return type: Dataset Examples Sample the dataset with labels == 5 weighted twice as much as labels == 6 >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Copy to clipboard Sample the dataset treating the labels tensor as weights. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('labels') Copy to clipboard Sample the dataset with the given weights: >>> ds = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(len(ds)): ... weights.append(i % 5) ... >>> sampled_ds = ds.sample_by(weights, replace=False) Copy to clipboard Restrictions Querying datasets is part of our Growth and Enterprise Plan. Users of our Community plan can only use sample_by on Activeloop datasets ('hub://activeloop/…' datasets). To use sampling functionality on your own datasets, upgrade your organization's plan. property sample_indices¶ Returns all the indices pointed to by this dataset view.
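For completeness, a small hypothetical sketch of the rename_tensor(), rename_group() and reset() calls documented above; the path, tensor and group names are invented for illustration.
>>> import deeplake
>>> ds = deeplake.dataset('path/to/dataset')  # hypothetical local path
>>> ds.create_tensor('annotations/boxes', htype='bbox')
>>> ds.commit('create annotation tensors')
>>> # rename a tensor within its group, then rename the group itself
>>> ds.rename_tensor('annotations/boxes', 'annotations/bboxes')
>>> ds.rename_group('annotations', 'labels')
>>> # discard the uncommitted renames on the current branch (irreversible)
>>> ds.reset()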
save_view(message: str | None = None, path: str | Path | None = None, id: str | None = None, optimize: bool = False, tensors: List[str] | None = None, num_workers: int = 0, scheduler: str = 'threaded', verbose: bool = True, **ds_args) → str¶ Saves a dataset view as a virtual dataset (VDS). Examples >>> # Save to specified path >>> vds_path = ds[:10].save_view(path='views/first_10', id='first_10') >>> vds_path views/first_10 Copy to clipboard >>> # Path unspecified >>> vds_path = ds[:100].save_view(id='first_100', message='first 100 samples') >>> # vds_path = path/to/dataset Copy to clipboard >>> # Random id >>> vds_path = ds[:100].save_view() >>> # vds_path = path/to/dataset/.queries/92f41922ed0471ec2d27690b7351fc96bea060e6c5ee22b14f7ffa5f291aa068 Copy to clipboard See Dataset.get_view() to learn how to load views by id. These virtual datasets can also be loaded from their path like normal datasets. Parameters: message (Optional, str) – Custom user message. path (Optional, str, pathlib.Path) – The VDS will be saved as a standalone dataset at the specified path. If not specified, the VDS is saved under the .queries subdirectory of the source dataset's storage. If the user doesn't have write access to the source dataset and the source dataset is a Deep Lake cloud dataset, then the VDS is saved under the user's Deep Lake account and can be accessed using deeplake.load(f'hub://{username}/queries/{query_hash}'). id (Optional, str) – Unique id for this view. A random id will be generated if not specified. optimize (bool) – If True, the dataset view will be optimized by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. You can also choose to optimize the saved view later by calling its ViewEntry.optimize() method. tensors (List, optional) – Names of tensors (and groups) to be copied. If not specified, all tensors are copied. num_workers (int) – Number of workers to be used for the optimization process. Applicable only if optimize=True",activeloop DeepLakeCloudDataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplakeclouddataset," class deeplake.core.dataset.DeepLakeCloudDataset¶ Bases: Dataset Subclass of Dataset. Deep Lake cloud datasets are those datasets which are stored on Activeloop servers; their paths look like: hub://username/dataset_name. add_creds_key(creds_key: str, managed: bool = False)¶ Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Copy to clipboard Parameters: creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from activeloop platform. Note, this is only applicable for datasets that are connected to activeloop platform. Defaults to False. change_creds_management(creds_key: str, managed: bool)¶ Changes the management status of the creds key. Parameters: creds_key (str) – The key whose management status is to be changed. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises: ValueError – If the dataset is not connected to activeloop platform. KeyError – If the creds key is not present in the dataset.
Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the key added above with a creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Change the management status of the key to True. Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.change_creds_management('my_s3_key', True) Copy to clipboard property client¶ Returns the client of the dataset. connect(*args, **kwargs)¶ Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key', token='my_activeloop_token') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key', token='my_activeloop_token') Copy to clipboard Parameters: creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to which the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be inferred from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Raises: InvalidSourcePathError – If the dataset's path is not a valid s3 or gcs path",activeloop DeepLakeCloudDataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#deeplakeclouddataset,"InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. delete(large_ok=False)¶ Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted cannot be recovered. Parameters: large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False. Raises: DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. property is_actually_cloud: bool¶ Datasets that are connected to Deep Lake cloud can still technically be stored anywhere. If a dataset is in Deep Lake cloud but stored without the hub:// prefix, it should only be used for testing. rename(path)¶ Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters: path (str, pathlib.Path) – New path to the dataset. Raises: RenameError – If path points to a different directory. property token¶ Get attached token of the dataset update_creds_key(old_creds_key: str, new_creds_key: str)¶ Replaces the old creds key with the new creds key. This is used to replace the creds key used for external data. visualize(width: int | str | None = None, height: int | str | None = None)¶ Visualizes the dataset in the Jupyter notebook. Parameters: width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas.
Raises: Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab.",activeloop ViewEntry¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#viewentry,"class deeplake.core.dataset.ViewEntry¶ Represents a view saved inside a dataset. delete()¶ Deletes the view. property id: str¶ Returns id of the view. load(verbose=True)¶ Loads the view and returns the Dataset. Parameters: verbose (bool) – If True, logs will be printed. Defaults to True. Returns: Loaded dataset view. Return type: Dataset property message: str¶ Returns the message with which the view was saved. optimize(tensors: List[str] | None = None, unlink=True, num_workers=0, scheduler='threaded', progressbar=True)¶ Optimizes the dataset view by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. Example >>> # save view >>> ds[:10].save_view(id='first_10') >>> # optimize view >>> ds.get_view('first_10').optimize() >>> # load optimized view >>> ds.load_view('first_10') Copy to clipboard Parameters: tensors (List[str]) – Tensors required in the optimized view. By default all tensors are copied. unlink (bool) – If True, this unlinks linked tensors (if any) by copying data from the links to the view. This does not apply to linked videos. Set deeplake.constants._UNLINK_VIDEOS to True to change this behavior. num_workers (int) – Number of workers to be used for the optimization process. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to display a progressbar. Returns: ViewEntry",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset,"Defaults to 0 . scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. verbose (bool) – If True, logs will be printed. Defaults to True. ds_args (dict) – Additional args for creating VDS when path is specified. (See documentation for deeplake.dataset()) Returns: Path to the saved VDS. Return type: str Raises: ReadOnlyModeError – When attempting to save a view inplace and the user doesn't have write access. DatasetViewSavingError – If HEAD node has uncommitted changes. Note Specifying path makes the view external. External views cannot be accessed using the parent dataset's Dataset.get_view(), Dataset.load_view(), Dataset.delete_view() methods. They have to be loaded using deeplake.load(). size_approx()¶ Estimates the size in bytes of the dataset. Includes only content, so will generally return an under-estimate. summary()¶ Prints a summary of the dataset. tensorflow(tensors: Sequence[str] | None = None, tobytes: bool | Sequence[str] = False, fetch_chunks: bool = True)¶ Converts the dataset into a tensorflow compatible format. See https://www.tensorflow.org/api_docs/python/tf/data/Dataset Parameters: tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). 
tobytes (bool) – If True, samples will not be decompressed and their raw bytes will be returned instead of numpy arrays. Can also be a list of tensors, in which case those tensors alone will not be decompressed. fetch_chunks – See fetch_chunks argument in deeplake.core.tensor.Tensor.numpy() Returns: tf.data.Dataset object that can be used for tensorflow training. property tensors: Dict[str, Tensor]¶ All tensors belonging to this group, including those within sub groups. Always returns the sliced tensors. property token¶ Get attached token of the dataset update_creds_key(old_creds_key: str, new_creds_key: str)¶ Replaces the old creds key with the new creds key. This is used to replace the creds key used for external data. visualize(width: int | str | None = None, height: int | str | None = None)¶ Visualizes the dataset in the Jupyter notebook. Parameters: width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises: Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab.",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.dataset.html#dataset," class deeplake.core.dataset.Dataset¶ add_creds_key(creds_key: str, managed: bool = False)¶ Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.empty('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Copy to clipboard Parameters: creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from Activeloop platform. Defaults to False. Raises: ValueError – If the dataset is not connected to Activeloop platform and managed is True. Note managed parameter is applicable only for datasets that are connected to Activeloop platform. append(sample: Dict[str, Any], skip_ok: bool = False, append_empty: bool = False)¶ Append samples to mutliple tensors at once. This method expects all tensors being updated to be of the same length. Parameters: sample (dict) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in sample if set to True. append_empty (bool) – Append empty samples to tensors not specified in sample if set to True. If True, skip_ok is ignored. Raises: KeyError – If any tensor in the dataset is not a key in sample and skip_ok is False. TensorDoesNotExistError – If tensor in sample does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. SampleAppendingError – Error that occurs when someone tries to append a tensor value directly to the dataset without specifying tensor name. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('data') Tensor(key='data') >>> ds.create_tensor('labels') Tensor(key='labels') >>> ds.append({'data': [1, 2, 3, 4], 'labels':[0, 1, 2, 3]}) Copy to clipboard property branch: str¶ The current branch of the dataset property branches¶ Lists all the branches of the dataset. Returns: List of branches. change_creds_management(creds_key: str, managed: bool)¶ Changes the management status of the creds key. Parameters: creds_key (str) – The key whose management status is to be changed. managed (bool) – The target management status. 
If True, the creds corresponding to the key will be fetched from activeloop platform. Raises: ValueError – If the dataset is not connected to activeloop platform. KeyError – If the creds key is not present in the dataset. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Change the management status of the key to True",activeloop deeplake.core.tensor¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.tensor.html#deeplake-core-tensor," Tensor¶ class deeplake.core.tensor.Tensor¶ __len__()¶ Returns the length of the primary axis of the tensor. Accounts for indexing into the tensor object. Examples >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 10, 10))) >>> len(tensor) 100 >>> len(tensor[5:10]) 5 Copy to clipboard Returns: The current length of this tensor. Return type: int __setitem__(item: int | slice, value: Any)¶ Update samples with new values. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.shape (1, 10, 10) >>> tensor[0] = np.zeros((3, 3)) >>> tensor.shape (1, 3, 3) Copy to clipboard property _config¶ Returns a summary of the configuration of the tensor. _linked_sample()¶ Returns the linked sample at the given index. This is only applicable for tensors of link[] htype and can only be used for exactly one sample. >>> linked_sample = ds.abc[0]._linked_sample().path 'https://picsum.photos/200/300' Copy to clipboard append(sample: Sample | ndarray | int | float | bool | dict | list | str | integer | floating | bool_)¶ Appends a single sample to the end of the tensor. Can be an array, scalar value, or the return value from deeplake.read(), which can be used to load files. See examples down below. Examples Numpy input: >>> len(tensor) 0 >>> tensor.append(np.zeros((28, 28, 1))) >>> len(tensor) 1 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.append(deeplake.read('path/to/file')) >>> len(tensor) 1 Copy to clipboard Parameters: sample (InputSample) – The data to append to the tensor. Sample is generated by deeplake.read(). See the above examples. property base_htype¶ Base htype of the tensor. Example >>> ds.create_tensor('video_seq', htype='sequence[video]', sample_compression='mp4') >>> ds.video_seq.htype sequence[video] >>> ds.video_seq.base_htype video Copy to clipboard clear()¶ Deletes all samples from the tensor data(aslist: bool = False, fetch_chunks: bool = False) → Any¶ Returns data in the tensor in a format based on the tensor's base htype. If tensor has text base htype Returns dict with dict['value'] = Tensor.text() If tensor has json base htype Returns dict with dict['value'] = Tensor.dict() If tensor has list base htype Returns dict with dict['value'] = Tensor.list() For video tensors, returns a dict with keys 'frames', 'timestamps' and 'sample_info': Value of dict['frames'] will be same as numpy(). Value of dict['timestamps'] will be same as timestamps corresponding to the frames. Value of dict['sample_info'] will be same as sample_info. For class_label tensors, returns a dict with keys 'value' and 'text'. Value of dict['value'] will be same as numpy(). Value of dict['text'] will be list of class labels as strings. For image or dicom tensors, returns dict with keys 'value' and 'sample_info'. Value of dict['value'] will be same as numpy(). 
Value of dict['sample_info'] will be same as sample_info. For all else, returns dict with key 'value' with value same as numpy(). dict(fetch_chunks: bool = False)¶ Return json data. Only applicable for tensors with ‘json' base htype. property dtype: dtype | None¶ Dtype of the tensor",activeloop deeplake.core.tensor¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.tensor.html#deeplake-core-tensor,"extend(samples: ndarray | Sequence[Sample | ndarray | int | float | bool | dict | list | str | integer | floating | bool_] | Tensor, progressbar: bool = False)¶ Extends the end of the tensor by appending multiple elements from a sequence . Accepts a sequence, a single batched numpy array, or a sequence of deeplake.read() outputs, which can be used to load files. See examples down below. Example Numpy input: >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 28, 28, 1))) >>> len(tensor) 100 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.extend([ deeplake.read('path/to/image1'), deeplake.read('path/to/image2'), ]) >>> len(tensor) 2 Copy to clipboard Parameters: samples (np.ndarray, Sequence, Sequence[Sample]) – The data to add to the tensor. The length should be equal to the number of samples to add. progressbar (bool) – Specifies whether a progressbar should be displayed while extending. Raises: TensorDtypeMismatchError – Dtype for array must be equal to or castable to this tensor's dtype. property hidden: bool¶ Whether this tensor is a hidden tensor. property htype¶ Htype of the tensor. property info: Info¶ Returns the information about the tensor. User can set info of tensor. Returns: Information about the tensor. Return type: Info Example >>> # update info >>> ds.images.info.update(large=True, gray=False) >>> # get info >>> ds.images.info {'large': True, 'gray': False} Copy to clipboard >>> ds.images.info = {'complete': True} >>> ds.images.info {'complete': True} Copy to clipboard invalidate_libdeeplake_dataset()¶ Invalidates the libdeeplake dataset object. property is_dynamic: bool¶ Will return True if samples in this tensor have shapes that are unequal. property is_link¶ Whether this tensor is a link tensor. property is_sequence¶ Whether this tensor is a sequence tensor. list(fetch_chunks: bool = False)¶ Return list data. Only applicable for tensors with ‘list' base htype. property meta¶ Metadata of the tensor. modified_samples(target_id: str | None = None, return_indexes: bool | None = False)¶ Returns a slice of the tensor with only those elements that were modified/added. By default the modifications are calculated relative to the previous commit made, but this can be changed by providing a target id. Parameters: target_id (str, optional) – The commit id or branch name to calculate the modifications relative to. Defaults to None. return_indexes (bool, optional) – If True, returns the indexes of the modified elements. Defaults to False. Returns: A new tensor with only the modified elements if return_indexes is False. Tuple[Tensor, List[int]]: A new tensor with only the modified elements and the indexes of the modified elements if return_indexes is True. Return type: Tensor Raises: TensorModifiedError – If a target id is passed which is not an ancestor of the current commit. property ndim: int¶ Number of dimensions of the tensor. property num_samples: int¶ Returns the length of the primary axis of the tensor. 
Ignores any applied indexing and returns the total length",activeloop deeplake.core.tensor¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.tensor.html#deeplake-core-tensor,"numpy(aslist=False, fetch_chunks=False) → ndarray | List[ndarray]¶ Computes the contents of the tensor in numpy format . Parameters: aslist (bool) – If True, a list of np.ndarrays will be returned. Helpful for dynamic tensors. If False, a single np.ndarray will be returned unless the samples are dynamically shaped, in which case an error is raised. fetch_chunks (bool) – If True, full chunks will be retrieved from the storage, otherwise only required bytes will be retrieved. This will always be True even if specified as False in the following cases: The tensor is ChunkCompressed. The chunk which is being accessed has more than 128 samples. Raises: DynamicTensorNumpyError – If reading a dynamically-shaped array slice without aslist=True. ValueError – If the tensor is a link and the credentials are not populated. Returns: A numpy array containing the data represented by this tensor. Note For tensors of htype polygon, aslist is always True. path(fetch_chunks: bool = False)¶ Return path data. Only applicable for linked tensors play()¶ Play video sample. Plays video in Jupyter notebook or plays in web browser. Video is streamed directly from storage. This method will fail for incompatible htypes. Example >>> ds = deeplake.load('./test/my_video_ds') >>> # play second sample >>> ds.videos[2].play() Copy to clipboard Note Video streaming is not yet supported on colab. pop(index: int | None = None)¶ Removes an element at the given index. property sample_indices¶ Returns all the indices pointed to by this tensor in the dataset view. property sample_info: Dict | List[Dict]¶ Returns info about particular samples in a tensor. Returns dict in case of single sample, otherwise list of dicts. Data in returned dict would depend on the tensor's htype and the sample itself. Example >>> ds.videos[0].sample_info {'duration': 400400, 'fps': 29.97002997002997, 'timebase': 3.3333333333333335e-05, 'shape': [400, 360, 640, 3], 'format': 'mp4', 'filename': '../deeplake/tests/dummy_data/video/samplemp4.mp4', 'modified': False} >>> ds.images[:2].sample_info [{'exif': {'Software': 'Google'}, 'shape': [900, 900, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/cat.jpeg', 'modified': False}, {'exif': {}, 'shape': [495, 750, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/car.jpg', 'modified': False}] Copy to clipboard property shape: Tuple[int | None, ...]¶ Get the shape of this tensor. Length is included. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape (2, 10, None) Copy to clipboard Returns: Tuple where each value is either None (if that axis is dynamic) or an int (if that axis is fixed). Return type: tuple Note If you don't want None in the output shape or want the lower/upper bound shapes, use shape_interval instead. property shape_interval: ShapeInterval¶ Returns a ShapeInterval object that describes this tensor's shape more accurately. Length is included. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape_interval ShapeInterval(lower=(2, 10, 10), upper=(2, 10, 15)) >>> str(tensor.shape_interval) (2, 10, 10:15) Copy to clipboard Returns: Object containing lower and upper properties. Return type: ShapeInterval Note If you are expecting a tuple, use shape instead. 
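To illustrate the dynamic-shape behaviour described above (is_dynamic, shape, shape_interval and numpy(aslist=True)), here is a minimal sketch with a hypothetical local path and tensor name.
>>> import numpy as np
>>> import deeplake
>>> ds = deeplake.dataset('path/to/dataset')  # hypothetical local path
>>> ds.create_tensor('images')
>>> ds.images.append(np.zeros((10, 10)))
>>> ds.images.append(np.zeros((10, 15)))
>>> ds.images.is_dynamic
True
>>> ds.images.shape
(2, 10, None)
>>> # dynamically shaped samples must be read as a list of arrays
>>> [arr.shape for arr in ds.images.numpy(aslist=True)]
[(10, 10), (10, 15)]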
summary()¶ Prints a summary of the tensor",activeloop Tensor¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.tensor.html#tensor," class deeplake.core.tensor.Tensor¶ __len__()¶ Returns the length of the primary axis of the tensor. Accounts for indexing into the tensor object. Examples >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 10, 10))) >>> len(tensor) 100 >>> len(tensor[5:10]) 5 Copy to clipboard Returns: The current length of this tensor. Return type: int __setitem__(item: int | slice, value: Any)¶ Update samples with new values. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.shape (1, 10, 10) >>> tensor[0] = np.zeros((3, 3)) >>> tensor.shape (1, 3, 3) Copy to clipboard property _config¶ Returns a summary of the configuration of the tensor. _linked_sample()¶ Returns the linked sample at the given index. This is only applicable for tensors of link[] htype and can only be used for exactly one sample. >>> linked_sample = ds.abc[0]._linked_sample().path 'https://picsum.photos/200/300' Copy to clipboard append(sample: Sample | ndarray | int | float | bool | dict | list | str | integer | floating | bool_)¶ Appends a single sample to the end of the tensor. Can be an array, scalar value, or the return value from deeplake.read(), which can be used to load files. See examples down below. Examples Numpy input: >>> len(tensor) 0 >>> tensor.append(np.zeros((28, 28, 1))) >>> len(tensor) 1 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.append(deeplake.read('path/to/file')) >>> len(tensor) 1 Copy to clipboard Parameters: sample (InputSample) – The data to append to the tensor. Sample is generated by deeplake.read(). See the above examples. property base_htype¶ Base htype of the tensor. Example >>> ds.create_tensor('video_seq', htype='sequence[video]', sample_compression='mp4') >>> ds.video_seq.htype sequence[video] >>> ds.video_seq.base_htype video Copy to clipboard clear()¶ Deletes all samples from the tensor data(aslist: bool = False, fetch_chunks: bool = False) → Any¶ Returns data in the tensor in a format based on the tensor's base htype. If tensor has text base htype Returns dict with dict['value'] = Tensor.text() If tensor has json base htype Returns dict with dict['value'] = Tensor.dict() If tensor has list base htype Returns dict with dict['value'] = Tensor.list() For video tensors, returns a dict with keys 'frames', 'timestamps' and 'sample_info': Value of dict['frames'] will be same as numpy(). Value of dict['timestamps'] will be same as timestamps corresponding to the frames. Value of dict['sample_info'] will be same as sample_info. For class_label tensors, returns a dict with keys 'value' and 'text'. Value of dict['value'] will be same as numpy(). Value of dict['text'] will be list of class labels as strings. For image or dicom tensors, returns dict with keys 'value' and 'sample_info'. Value of dict['value'] will be same as numpy(). Value of dict['sample_info'] will be same as sample_info. For all else, returns dict with key 'value' with value same as numpy(). dict(fetch_chunks: bool = False)¶ Return json data. Only applicable for tensors with ‘json' base htype. property dtype: dtype | None¶ Dtype of the tensor",activeloop deeplake.core.tensor¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.tensor.html#deeplake-core-tensor,"text(fetch_chunks: bool = False)¶ Return text data . Only applicable for tensors with ‘text' base htype. property timestamps: ndarray¶ Returns timestamps (in seconds) for video sample as numpy array. 
Example >>> # Return timestamps for all frames of first video sample >>> ds.videos[0].timestamps.shape (400,) >>> # Return timestamps for 5th to 10th frame of first video sample >>> ds.videos[0, 5:10].timestamps array([0.2002 , 0.23356667, 0.26693332, 0.33366665, 0.4004 ], dtype=float32) Copy to clipboard tobytes() → bytes¶ Returns the bytes of the tensor. Only works for a single sample of tensor. If the tensor is uncompressed, this returns the bytes of the numpy array. If the tensor is sample compressed, this returns the compressed bytes of the sample. If the tensor is chunk compressed, this raises an error. Returns: The bytes of the tensor. Return type: bytes Raises: ValueError – If the tensor has multiple samples. property verify¶ Whether linked data will be verified when samples are added. Applicable only to tensors with htype link[htype].",activeloop Tensor¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.tensor.html#tensor,"extend(samples: ndarray | Sequence[Sample | ndarray | int | float | bool | dict | list | str | integer | floating | bool_] | Tensor, progressbar: bool = False)¶ Extends the end of the tensor by appending multiple elements from a sequence . Accepts a sequence, a single batched numpy array, or a sequence of deeplake.read() outputs, which can be used to load files. See examples down below. Example Numpy input: >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 28, 28, 1))) >>> len(tensor) 100 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.extend([ deeplake.read('path/to/image1'), deeplake.read('path/to/image2'), ]) >>> len(tensor) 2 Copy to clipboard Parameters: samples (np.ndarray, Sequence, Sequence[Sample]) – The data to add to the tensor. The length should be equal to the number of samples to add. progressbar (bool) – Specifies whether a progressbar should be displayed while extending. Raises: TensorDtypeMismatchError – Dtype for array must be equal to or castable to this tensor's dtype. property hidden: bool¶ Whether this tensor is a hidden tensor. property htype¶ Htype of the tensor. property info: Info¶ Returns the information about the tensor. User can set info of tensor. Returns: Information about the tensor. Return type: Info Example >>> # update info >>> ds.images.info.update(large=True, gray=False) >>> # get info >>> ds.images.info {'large': True, 'gray': False} Copy to clipboard >>> ds.images.info = {'complete': True} >>> ds.images.info {'complete': True} Copy to clipboard invalidate_libdeeplake_dataset()¶ Invalidates the libdeeplake dataset object. property is_dynamic: bool¶ Will return True if samples in this tensor have shapes that are unequal. property is_link¶ Whether this tensor is a link tensor. property is_sequence¶ Whether this tensor is a sequence tensor. list(fetch_chunks: bool = False)¶ Return list data. Only applicable for tensors with ‘list' base htype. property meta¶ Metadata of the tensor. modified_samples(target_id: str | None = None, return_indexes: bool | None = False)¶ Returns a slice of the tensor with only those elements that were modified/added. By default the modifications are calculated relative to the previous commit made, but this can be changed by providing a target id. Parameters: target_id (str, optional) – The commit id or branch name to calculate the modifications relative to. Defaults to None. return_indexes (bool, optional) – If True, returns the indexes of the modified elements. Defaults to False. Returns: A new tensor with only the modified elements if return_indexes is False. 
Tuple[Tensor, List[int]]: A new tensor with only the modified elements and the indexes of the modified elements if return_indexes is True. Return type: Tensor Raises: TensorModifiedError – If a target id is passed which is not an ancestor of the current commit. property ndim: int¶ Number of dimensions of the tensor. property num_samples: int¶ Returns the length of the primary axis of the tensor. Ignores any applied indexing and returns the total length",activeloop Tensor¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.tensor.html#tensor,"text(fetch_chunks: bool = False)¶ Return text data . Only applicable for tensors with ‘text' base htype. property timestamps: ndarray¶ Returns timestamps (in seconds) for video sample as numpy array. Example >>> # Return timestamps for all frames of first video sample >>> ds.videos[0].timestamps.shape (400,) >>> # Return timestamps for 5th to 10th frame of first video sample >>> ds.videos[0, 5:10].timestamps array([0.2002 , 0.23356667, 0.26693332, 0.33366665, 0.4004 ], dtype=float32) Copy to clipboard tobytes() → bytes¶ Returns the bytes of the tensor. Only works for a single sample of tensor. If the tensor is uncompressed, this returns the bytes of the numpy array. If the tensor is sample compressed, this returns the compressed bytes of the sample. If the tensor is chunk compressed, this raises an error. Returns: The bytes of the tensor. Return type: bytes Raises: ValueError – If the tensor has multiple samples. property verify¶ Whether linked data will be verified when samples are added. Applicable only to tensors with htype link[htype].",activeloop Tensor¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.tensor.html#tensor,"numpy(aslist=False, fetch_chunks=False) → ndarray | List[ndarray]¶ Computes the contents of the tensor in numpy format . Parameters: aslist (bool) – If True, a list of np.ndarrays will be returned. Helpful for dynamic tensors. If False, a single np.ndarray will be returned unless the samples are dynamically shaped, in which case an error is raised. fetch_chunks (bool) – If True, full chunks will be retrieved from the storage, otherwise only required bytes will be retrieved. This will always be True even if specified as False in the following cases: The tensor is ChunkCompressed. The chunk which is being accessed has more than 128 samples. Raises: DynamicTensorNumpyError – If reading a dynamically-shaped array slice without aslist=True. ValueError – If the tensor is a link and the credentials are not populated. Returns: A numpy array containing the data represented by this tensor. Note For tensors of htype polygon, aslist is always True. path(fetch_chunks: bool = False)¶ Return path data. Only applicable for linked tensors play()¶ Play video sample. Plays video in Jupyter notebook or plays in web browser. Video is streamed directly from storage. This method will fail for incompatible htypes. Example >>> ds = deeplake.load('./test/my_video_ds') >>> # play second sample >>> ds.videos[2].play() Copy to clipboard Note Video streaming is not yet supported on colab. pop(index: int | None = None)¶ Removes an element at the given index. property sample_indices¶ Returns all the indices pointed to by this tensor in the dataset view. property sample_info: Dict | List[Dict]¶ Returns info about particular samples in a tensor. Returns dict in case of single sample, otherwise list of dicts. Data in returned dict would depend on the tensor's htype and the sample itself. 
Example >>> ds.videos[0].sample_info {'duration': 400400, 'fps': 29.97002997002997, 'timebase': 3.3333333333333335e-05, 'shape': [400, 360, 640, 3], 'format': 'mp4', 'filename': '../deeplake/tests/dummy_data/video/samplemp4.mp4', 'modified': False} >>> ds.images[:2].sample_info [{'exif': {'Software': 'Google'}, 'shape': [900, 900, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/cat.jpeg', 'modified': False}, {'exif': {}, 'shape': [495, 750, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/car.jpg', 'modified': False}] Copy to clipboard property shape: Tuple[int | None, ...]¶ Get the shape of this tensor. Length is included. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape (2, 10, None) Copy to clipboard Returns: Tuple where each value is either None (if that axis is dynamic) or an int (if that axis is fixed). Return type: tuple Note If you don't want None in the output shape or want the lower/upper bound shapes, use shape_interval instead. property shape_interval: ShapeInterval¶ Returns a ShapeInterval object that describes this tensor's shape more accurately. Length is included. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape_interval ShapeInterval(lower=(2, 10, 10), upper=(2, 10, 15)) >>> str(tensor.shape_interval) (2, 10, 10:15) Copy to clipboard Returns: Object containing lower and upper properties. Return type: ShapeInterval Note If you are expecting a tuple, use shape instead. summary()¶ Prints a summary of the tensor",activeloop deeplake.api¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.api.html#deeplake-api,deeplake.api.dataset dataset deeplake.api.info Info deeplake.api.link link() deeplake.api.read read() deeplake.api.tiled tiled(),activeloop deeplake.auto¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.auto.html#deeplake-auto,deeplake.auto.structured deeplake.auto.structured.base deeplake.auto.structured.dataframe deeplake.auto.unstructured deeplake.auto.unstructured.base deeplake.auto.unstructured.image_classification deeplake.auto.unstructured.kaggle,activeloop deeplake.util¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.util.html#deeplake-util,deeplake.util.shape_interval ShapeInterval deeplake.util.remove_cache remove_memory_cache() get_base_storage() get_dataset_with_zero_size_cache() create_read_copy_dataset() deeplake.util.notebook is_notebook() is_jupyter() is_colab() deeplake.util.exceptions ExternalCommandError KaggleError KaggleMissingCredentialsError KaggleDatasetAlreadyDownloadedError InvalidPathException AutoCompressionError InvalidFileExtension SamePathException TensorInvalidSampleShapeError TensorMetaMissingKey TensorDoesNotExistError TensorAlreadyExistsError TensorGroupDoesNotExistError TensorGroupAlreadyExistsError InvalidTensorNameError InvalidTensorGroupNameError DynamicTensorNumpyError InvalidShapeIntervalError InvalidKeyTypeError UnsupportedTensorTypeError InvalidBytesRequestedError ProviderListEmptyError DirectoryAtPathException FileAtPathException ProviderSizeListMismatch ModuleNotInstalledException LoginException UserNotLoggedInException InvalidHubPathException PathNotEmptyException AuthenticationException AuthorizationException InvalidPasswordException CouldNotCreateNewDatasetException ResourceNotFoundException BadRequestException OverLimitException ServerException BadGatewayException GatewayTimeoutException WaitTimeoutException LockedException UnexpectedStatusCodeException EmptyTokenException S3Error S3GetError 
S3SetError S3DeletionError S3ListError UnsupportedCompressionError SampleCompressionError SampleDecompressionError InvalidImageDimensions TensorUnsupportedSampleType MetaError MetaDoesNotExistError MetaAlreadyExistsError MetaInvalidKey MetaInvalidRequiredMetaKey TensorMetaInvalidHtype TensorMetaInvalidHtypeOverwriteValue TensorMetaMissingRequiredValue TensorMetaInvalidHtypeOverwriteKey TensorDtypeMismatchError InvalidTensorLinkError TensorMetaMutuallyExclusiveKeysError ReadOnlyModeError TransformError FilterError InvalidInputDataError UnsupportedSchedulerError TensorMismatchError InvalidOutputDatasetError InvalidTransformDataset HubComposeEmptyListError HubComposeIncompatibleFunction DatasetUnsupportedPytorch CorruptedMetaError ChunkEngineError FullChunkError ChunkIdEncoderError ChunkSizeTooSmallError DatasetHandlerError MemoryDatasetCanNotBePickledError CorruptedSampleError VersionControlError MergeError MergeNotSupportedError MergeMismatchError MergeConflictError CheckoutError CommitError EmptyCommitError TensorModifiedError GCSDefaultCredsNotFoundError InvalidOperationError AgreementError AgreementNotAcceptedError NotLoggedInError RenameError BufferError InfoError OutOfChunkCountError OutOfSampleCountError SampleHtypeMismatchError EmptyTensorError DatasetViewSavingError ManagedCredentialsNotFoundError UnableToReadFromUrlError InvalidTokenException TokenPermissionError,activeloop deeplake.client.log¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.client.log.html#deeplake-client-log,Deep Lake does logging using the 'deeplake' logger. Logging level is logging.INFO by default. See the example on how to change this.,activeloop deeplake.core.transform¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.transform.html#deeplake-core-transform,"Example: @deeplake.compute def my_fn(sample_in: Any, samples_out, my_arg0, my_arg1=0): samples_out.my_tensor.append(my_arg0 * my_arg1) # This transform can be used via the eval method in one of these 2 ways: # Directly evaluating the method # here arg0 and arg1 correspond to the 3rd and 4th argument in my_fn my_fn(arg0, arg1).eval(data_in, ds_out, scheduler='threaded', num_workers=5) # As a part of a Transform pipeline containing other functions pipeline = deeplake.compose([my_fn(a, b), another_function(x=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) Copy to clipboard Note pad_data_in is only applicable if data_in is a Deep Lake dataset. ",activeloop deeplake.core.transform¶,https://docs.deeplake.ai/en/v3.1.5/deeplake.core.transform.html#deeplake-core-transform," class deeplake.core.transform.Pipeline(functions: List[ComputeFunction])¶ eval(data_in, ds_out: Dataset | None = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, skip_ok: bool = False, check_lengths: bool = True, pad_data_in: bool = False, read_only_ok: bool = False, **kwargs)¶ Evaluates the pipeline on data_in to produce an output dataset ds_out. Parameters: data_in – Input passed to the transform to generate output dataset. Should support __getitem__ and __len__. Can be a Deep Lake dataset. ds_out (Dataset, optional) – The dataset object to which the transform will get written. If this is not provided, data_in will be overwritten if it is a Deep Lake dataset, otherwise an error will be raised. All keys being generated in the output should already be present in it as tensors. Its initial state should be either: Empty, i.e., all tensors have no samples. In this case, all samples are added to the dataset.
All tensors are populated and have same length. In this case new samples are appended to the dataset. num_workers (int) – The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). skip_ok (bool) – If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. check_lengths (bool) – If True, checks whether ds_out has tensors of same lengths initially. pad_data_in (bool) – If True, pads tensors of data_in to match the length of the largest tensor in data_in. Defaults to False. read_only_ok (bool) – If True and output dataset is same as input dataset, the read-only check is skipped. Defaults to False. **kwargs – Additional arguments. Raises: InvalidInputDataError – If data_in passed to transform is invalid. It should support __getitem__ and __len__ operations. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as data_in will also raise this. InvalidOutputDatasetError – If all the tensors of ds_out passed to transform don't have the same length. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as ds_out will also raise this. TensorMismatchError – If one or more of the outputs generated during transform contain different tensors than the ones present in ‘ds_out' provided to transform. UnsupportedSchedulerError – If the scheduler passed is not recognized. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. 
TransformError – All other exceptions raised if there are problems while running the pipeline",activeloop Index,https://docs.deeplake.ai/en/v3.1.5/genindex.html,_ABCDEFGHIKLMNOPQRSTUVW,activeloop _,https://docs.deeplake.ai/en/v3.1.5/genindex.html,__contains__() (deeplake.core.storage.GCSProvider method) __delitem__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __getitem__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __getstate__() (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) __init__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) (deeplake.core.sample.Sample method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.util.shape_interval.ShapeInterval method) __iter__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __len__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) __repr__() (deeplake.core.index.Index method) __setitem__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) __setstate__() (deeplake.core.storage.LRUCache method) __str__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) __weakref__ (deeplake.core.index.Index attribute) (deeplake.core.index.IndexEntry attribute) (deeplake.core.storage.StorageProvider attribute) _all_keys() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) _check_is_file() (deeplake.core.storage.LocalProvider method) _check_update_creds() (deeplake.core.storage.S3Provider method) _config (deeplake.core.tensor.Tensor property) _flush_if_not_read_only() (deeplake.core.storage.LRUCache method) _forward() (deeplake.core.storage.LRUCache method) 
_forward_value() (deeplake.core.storage.LRUCache method) _free_up_space() (deeplake.core.storage.LRUCache method) _insert_in_cache() (deeplake.core.storage.LRUCache method) _is_hub_path (deeplake.core.storage.StorageProvider attribute) _linked_sample() (deeplake.core.tensor.Tensor method) _pop_from_cache() (deeplake.core.storage.LRUCache method) _set_hub_creds_info() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.S3Provider method) _state_keys() (deeplake.core.storage.S3Provider method),activeloop B,https://docs.deeplake.ai/en/v3.1.5/genindex.html,BadGatewayException (class in deeplake.util.exceptions) BadRequestException (class in deeplake.util.exceptions) base_htype (deeplake.core.tensor.Tensor property) batch() (deeplake.enterprise.DeepLakeDataLoader method) branch (deeplake.core.dataset.Dataset property) branches (deeplake.core.dataset.Dataset property) BufferError (class in deeplake.util.exceptions),activeloop C,https://docs.deeplake.ai/en/v3.1.5/genindex.html,change_creds_management() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) check_readonly() (deeplake.core.storage.StorageProvider method) checkout() (deeplake.core.dataset.Dataset method) CheckoutError (class in deeplake.util.exceptions) ChunkEngineError (class in deeplake.util.exceptions) ChunkIdEncoderError (class in deeplake.util.exceptions) ChunkSizeTooSmallError (class in deeplake.util.exceptions) clear() (deeplake.api.info.Info method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) clear_cache() (deeplake.core.dataset.Dataset method) (deeplake.core.storage.LRUCache method) clear_deeplake_objects() (deeplake.core.storage.LRUCache method) client (deeplake.core.dataset.Dataset property) (deeplake.core.dataset.DeepLakeCloudDataset property) close() (deeplake.enterprise.DeepLakeDataLoader method) commit() (deeplake.core.dataset.Dataset method) commit_id (deeplake.core.dataset.Dataset property) CommitError (class in deeplake.util.exceptions) commits (deeplake.core.dataset.Dataset property) compose() (in module deeplake) compose_at() (deeplake.core.index.Index method) compressed_bytes() (deeplake.core.sample.Sample method) compute() (in module deeplake) connect() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (in module deeplake) copy() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.storage.StorageProvider method) (in module deeplake) CorruptedMetaError (class in deeplake.util.exceptions) CorruptedSampleError (class in deeplake.util.exceptions) CouldNotCreateNewDatasetException (class in deeplake.util.exceptions) create_group() (deeplake.core.dataset.Dataset method) create_read_copy_dataset() (in module deeplake.util.remove_cache) create_tensor() (deeplake.core.dataset.Dataset method) create_tensor_like() (deeplake.core.dataset.Dataset method),activeloop D,https://docs.deeplake.ai/en/v3.1.5/genindex.html,data() (deeplake.core.tensor.Tensor method) DataFrame (class in deeplake.auto.structured.dataframe) dataloader() (deeplake.core.dataset.Dataset method) dataset (class in deeplake.api.dataset) Dataset (class in deeplake.core.dataset) 
dataset() (in module deeplake) DatasetHandlerError (class in deeplake.util.exceptions) DatasetUnsupportedPytorch (class in deeplake.util.exceptions) DatasetViewSavingError (class in deeplake.util.exceptions) deepcopy() (deeplake.api.dataset.dataset static method) (in module deeplake) deeplake module deeplake.api.info module deeplake.integrations.wandb.wandb module DeepLakeCloudDataset (class in deeplake.core.dataset) DeepLakeDataLoader (class in deeplake.enterprise) delete() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (deeplake.core.dataset.ViewEntry method) (in module deeplake) delete_group() (deeplake.core.dataset.Dataset method) delete_tensor() (deeplake.core.dataset.Dataset method) delete_view() (deeplake.core.dataset.Dataset method) dict() (deeplake.core.tensor.Tensor method) diff() (deeplake.core.dataset.Dataset method) DirectoryAtPathException (class in deeplake.util.exceptions) disable_readonly() (deeplake.core.storage.StorageProvider method) download_kaggle_dataset() (in module deeplake.auto.unstructured.kaggle) downsample() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) dtype (deeplake.core.tensor.Tensor property) DynamicTensorNumpyError (class in deeplake.util.exceptions),activeloop F,https://docs.deeplake.ai/en/v3.1.5/genindex.html,FileAtPathException (class in deeplake.util.exceptions) fill_dataset() (deeplake.auto.structured.dataframe.DataFrame method) filter() (deeplake.core.dataset.Dataset method) FilterError (class in deeplake.util.exceptions) find_axis() (deeplake.core.index.Index method) flush() (deeplake.core.dataset.Dataset method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.StorageProvider method) FullChunkError (class in deeplake.util.exceptions),activeloop E,https://docs.deeplake.ai/en/v3.1.5/genindex.html,empty() (deeplake.api.dataset.dataset static method) (in module deeplake) EmptyCommitError (class in deeplake.util.exceptions) EmptyTensorError (class in deeplake.util.exceptions) EmptyTokenException (class in deeplake.util.exceptions) enable_readonly() (deeplake.core.storage.StorageProvider method) eval() (deeplake.core.transform.Pipeline method) exists() (deeplake.api.dataset.dataset static method) (in module deeplake) extend() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) ExternalCommandError (class in deeplake.util.exceptions),activeloop A,https://docs.deeplake.ai/en/v3.1.5/genindex.html,add_creds_key() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) AgreementError (class in deeplake.util.exceptions) AgreementNotAcceptedError (class in deeplake.util.exceptions) append() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) apply() (deeplake.core.index.Index method) apply_squeeze() (deeplake.core.index.Index method) array (deeplake.core.sample.Sample property) AuthenticationException (class in deeplake.util.exceptions) AuthorizationException (class in deeplake.util.exceptions) AutoCompressionError (class in deeplake.util.exceptions),activeloop G,https://docs.deeplake.ai/en/v3.1.5/genindex.html,GatewayTimeoutException (class in deeplake.util.exceptions) GCSDefaultCredsNotFoundError (class in deeplake.util.exceptions) GCSProvider (class in deeplake.core.storage) GDriveProvider (class in deeplake.core.storage) get() (deeplake.api.info.Info method) get_base_storage() (in module deeplake.util.remove_cache) get_bytes() 
(deeplake.core.storage.GCSProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) get_commit_details() (deeplake.core.dataset.Dataset method) get_creds_keys() (deeplake.core.dataset.Dataset method) get_dataset_with_zero_size_cache() (in module deeplake.util.remove_cache) get_deeplake_object() (deeplake.core.storage.LRUCache method) get_view() (deeplake.core.dataset.Dataset method) get_views() (deeplake.core.dataset.Dataset method) groups (deeplake.core.dataset.Dataset property),activeloop I,https://docs.deeplake.ai/en/v3.1.5/genindex.html,id (deeplake.core.dataset.ViewEntry property) ImageClassification (class in deeplake.auto.unstructured.image_classification) Index (class in deeplake.core.index) IndexEntry (class in deeplake.core.index) indices() (deeplake.core.index.IndexEntry method) Info (class in deeplake.api.info) info (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) InfoError (class in deeplake.util.exceptions) ingest() (deeplake.api.dataset.dataset static method) (in module deeplake) ingest_dataframe() (deeplake.api.dataset.dataset static method) (in module deeplake) ingest_huggingface() (in module deeplake) ingest_kaggle() (deeplake.api.dataset.dataset static method) (in module deeplake) invalidate_libdeeplake_dataset() (deeplake.core.tensor.Tensor method) InvalidBytesRequestedError (class in deeplake.util.exceptions) InvalidFileExtension (class in deeplake.util.exceptions) InvalidHubPathException (class in deeplake.util.exceptions) InvalidImageDimensions (class in deeplake.util.exceptions) InvalidInputDataError (class in deeplake.util.exceptions) InvalidKeyTypeError (class in deeplake.util.exceptions) InvalidOperationError (class in deeplake.util.exceptions) InvalidOutputDatasetError (class in deeplake.util.exceptions) InvalidPasswordException (class in deeplake.util.exceptions) InvalidPathException (class in deeplake.util.exceptions) InvalidShapeIntervalError (class in deeplake.util.exceptions) InvalidTensorGroupNameError (class in deeplake.util.exceptions) InvalidTensorLinkError (class in deeplake.util.exceptions) InvalidTensorNameError (class in deeplake.util.exceptions) InvalidTokenException (class in deeplake.util.exceptions) InvalidTransformDataset (class in deeplake.util.exceptions) is_actually_cloud (deeplake.core.dataset.DeepLakeCloudDataset property) is_colab() (in module deeplake.util.notebook) is_dynamic (deeplake.core.tensor.Tensor property) is_head_node (deeplake.core.dataset.Dataset property) is_jupyter() (in module deeplake.util.notebook) is_link (deeplake.core.tensor.Tensor property) is_notebook() (in module deeplake.util.notebook) is_sequence (deeplake.core.tensor.Tensor property) is_trivial() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) is_view (deeplake.core.dataset.Dataset property) items() (deeplake.api.info.Info method),activeloop L,https://docs.deeplake.ai/en/v3.1.5/genindex.html,length() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) like() (deeplake.api.dataset.dataset static method) (in module deeplake) link() (in module deeplake) (in module deeplake.api.link) LinkedSample (class in deeplake.core.linked_sample) list() (deeplake.api.dataset.dataset static method) (deeplake.core.tensor.Tensor method) (in module deeplake) load() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.ViewEntry method) (in module deeplake) 
load_view() (deeplake.core.dataset.Dataset method) LocalProvider (class in deeplake.core.storage) LockedException (class in deeplake.util.exceptions) log() (deeplake.core.dataset.Dataset method) LoginException (class in deeplake.util.exceptions) LRUCache (class in deeplake.core.storage),activeloop M,https://docs.deeplake.ai/en/v3.1.5/genindex.html,ManagedCredentialsNotFoundError (class in deeplake.util.exceptions) max_len (deeplake.core.dataset.Dataset property) max_view (deeplake.core.dataset.Dataset property) maybe_flush() (deeplake.core.storage.StorageProvider method) MemoryDatasetCanNotBePickledError (class in deeplake.util.exceptions) MemoryProvider (class in deeplake.core.storage) merge() (deeplake.core.dataset.Dataset method) merge_slices() (in module deeplake.core.index) MergeConflictError (class in deeplake.util.exceptions) MergeError (class in deeplake.util.exceptions) MergeMismatchError (class in deeplake.util.exceptions) MergeNotSupportedError (class in deeplake.util.exceptions) message (deeplake.core.dataset.ViewEntry property) meta (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) MetaAlreadyExistsError (class in deeplake.util.exceptions) MetaDoesNotExistError (class in deeplake.util.exceptions) MetaError (class in deeplake.util.exceptions) MetaInvalidKey (class in deeplake.util.exceptions) MetaInvalidRequiredMetaKey (class in deeplake.util.exceptions) min_len (deeplake.core.dataset.Dataset property) min_view (deeplake.core.dataset.Dataset property) modified_samples() (deeplake.core.tensor.Tensor method) module deeplake deeplake.api.info deeplake.integrations.wandb.wandb ModuleNotInstalledException (class in deeplake.util.exceptions),activeloop N,https://docs.deeplake.ai/en/v3.1.5/genindex.html,nbytes (deeplake.api.info.Info property) ndim (deeplake.core.tensor.Tensor property) need_to_reload_creds() (deeplake.core.storage.S3Provider method) no_view_dataset (deeplake.core.dataset.Dataset property) NotLoggedInError (class in deeplake.util.exceptions) num_samples (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) numpy() (deeplake.core.tensor.Tensor method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop K,https://docs.deeplake.ai/en/v3.1.5/genindex.html,KaggleDatasetAlreadyDownloadedError (class in deeplake.util.exceptions) KaggleError (class in deeplake.util.exceptions) KaggleMissingCredentialsError (class in deeplake.util.exceptions) keys() (deeplake.api.info.Info method),activeloop H,https://docs.deeplake.ai/en/v3.1.5/genindex.html,has_head_changes (deeplake.core.dataset.Dataset property) hidden (deeplake.core.tensor.Tensor property) htype (deeplake.core.tensor.Tensor property) HubComposeEmptyListError (class in deeplake.util.exceptions) HubComposeIncompatibleFunction (class in deeplake.util.exceptions),activeloop P,https://docs.deeplake.ai/en/v3.1.5/genindex.html,parent (deeplake.core.dataset.Dataset property) PartialSample (class in deeplake.core.partial_sample) path() (deeplake.core.tensor.Tensor method) PathNotEmptyException (class in deeplake.util.exceptions) pending_commit_id (deeplake.core.dataset.Dataset property) pil (deeplake.core.sample.Sample property) Pipeline (class in deeplake.core.transform) play() (deeplake.core.tensor.Tensor method) pop() (deeplake.api.info.Info method) (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) popitem() (deeplake.api.info.Info method) populate_creds() (deeplake.core.dataset.Dataset method) ProviderListEmptyError (class in 
deeplake.util.exceptions) ProviderSizeListMismatch (class in deeplake.util.exceptions) pytorch() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop Q,https://docs.deeplake.ai/en/v3.1.5/genindex.html,query() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop R,https://docs.deeplake.ai/en/v3.1.5/genindex.html,random_split() (deeplake.core.dataset.Dataset method) read() (in module deeplake) (in module deeplake.api.read) read_only (deeplake.core.dataset.Dataset property) ReadOnlyModeError (class in deeplake.util.exceptions) rechunk() (deeplake.core.dataset.Dataset method) register_deeplake_object() (deeplake.core.storage.LRUCache method) remove_deeplake_object() (deeplake.core.storage.LRUCache method) remove_memory_cache() (in module deeplake.util.remove_cache) rename() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.S3Provider method) (in module deeplake) rename_group() (deeplake.core.dataset.Dataset method) rename_tensor() (deeplake.core.dataset.Dataset method) RenameError (class in deeplake.util.exceptions) replace_with() (deeplake.api.info.Info method) reset() (deeplake.core.dataset.Dataset method) ResourceNotFoundException (class in deeplake.util.exceptions) root (deeplake.core.dataset.Dataset property),activeloop S,https://docs.deeplake.ai/en/v3.1.5/genindex.html,S3DeletionError (class in deeplake.util.exceptions) S3Error (class in deeplake.util.exceptions) S3GetError (class in deeplake.util.exceptions) S3ListError (class in deeplake.util.exceptions) S3Provider (class in deeplake.core.storage) S3SetError (class in deeplake.util.exceptions) SamePathException (class in deeplake.util.exceptions) Sample (class in deeplake.core.sample) sample_by() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method) sample_indices (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) sample_info (deeplake.core.tensor.Tensor property) SampleCompressionError (class in deeplake.util.exceptions) SampleDecompressionError (class in deeplake.util.exceptions) SampleHtypeMismatchError (class in deeplake.util.exceptions) save_view() (deeplake.core.dataset.Dataset method) ServerException (class in deeplake.util.exceptions) set_bytes() (deeplake.core.storage.StorageProvider method) setdefault() (deeplake.api.info.Info method) shape (deeplake.core.tensor.Tensor property) shape_interval (deeplake.core.tensor.Tensor property) ShapeInterval (class in deeplake.util.shape_interval) shuffle() (deeplake.enterprise.DeepLakeDataLoader method) size_approx() (deeplake.core.dataset.Dataset method) slice_at_int() (in module deeplake.core.index) slice_length() (in module deeplake.core.index) StorageProvider (class in deeplake.core.storage) structure() (deeplake.auto.unstructured.image_classification.ImageClassification method) StructuredDataset (class in deeplake.auto.structured.base) subscriptable() (deeplake.core.index.IndexEntry method) summary() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) sync() (deeplake.core.storage.GDriveProvider method),activeloop U,https://docs.deeplake.ai/en/v3.1.5/genindex.html,UnableToReadFromUrlError (class in deeplake.util.exceptions) uncompressed_bytes() (deeplake.core.sample.Sample method) UnexpectedStatusCodeException (class 
in deeplake.util.exceptions) UnstructuredDataset (class in deeplake.auto.unstructured.base) UnsupportedCompressionError (class in deeplake.util.exceptions) UnsupportedSchedulerError (class in deeplake.util.exceptions) UnsupportedTensorTypeError (class in deeplake.util.exceptions) update() (deeplake.api.info.Info method) update_creds_key() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) UserNotLoggedInException (class in deeplake.util.exceptions),activeloop T,https://docs.deeplake.ai/en/v3.1.5/genindex.html,Tensor (class in deeplake.core.tensor) TensorAlreadyExistsError (class in deeplake.util.exceptions) TensorDoesNotExistError (class in deeplake.util.exceptions) TensorDtypeMismatchError (class in deeplake.util.exceptions) tensorflow() (deeplake.core.dataset.Dataset method) TensorGroupAlreadyExistsError (class in deeplake.util.exceptions) TensorGroupDoesNotExistError (class in deeplake.util.exceptions) TensorInvalidSampleShapeError (class in deeplake.util.exceptions) TensorMetaInvalidHtype (class in deeplake.util.exceptions) TensorMetaInvalidHtypeOverwriteKey (class in deeplake.util.exceptions) TensorMetaInvalidHtypeOverwriteValue (class in deeplake.util.exceptions) TensorMetaMissingKey (class in deeplake.util.exceptions) TensorMetaMissingRequiredValue (class in deeplake.util.exceptions) TensorMetaMutuallyExclusiveKeysError (class in deeplake.util.exceptions) TensorMismatchError (class in deeplake.util.exceptions) TensorModifiedError (class in deeplake.util.exceptions) tensors (deeplake.core.dataset.Dataset property) TensorUnsupportedSampleType (class in deeplake.util.exceptions) text() (deeplake.core.tensor.Tensor method) tiled() (in module deeplake) (in module deeplake.api.tiled) timestamps (deeplake.core.tensor.Tensor property) tobytes() (deeplake.core.tensor.Tensor method) token (deeplake.core.dataset.Dataset property) (deeplake.core.dataset.DeepLakeCloudDataset property) TokenPermissionError (class in deeplake.util.exceptions) transform() (deeplake.enterprise.DeepLakeDataLoader method) TransformError (class in deeplake.util.exceptions),activeloop W,https://docs.deeplake.ai/en/v3.1.5/genindex.html,WaitTimeoutException (class in deeplake.util.exceptions),activeloop V,https://docs.deeplake.ai/en/v3.1.5/genindex.html,validate() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) values() (deeplake.api.info.Info method) verify (deeplake.core.tensor.Tensor property) VersionControlError (class in deeplake.util.exceptions) ViewEntry (class in deeplake.core.dataset) visualize() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method),activeloop O,https://docs.deeplake.ai/en/v3.1.5/genindex.html,optimize() (deeplake.core.dataset.ViewEntry method) OutOfChunkCountError (class in deeplake.util.exceptions) OutOfSampleCountError (class in deeplake.util.exceptions) OverLimitException (class in deeplake.util.exceptions),activeloop Python Module Index,https://docs.deeplake.ai/en/v3.1.5/py-modindex.html,d d deeplake deeplake.api.info deeplake.integrations.wandb.wandb,activeloop Installation¶,https://docs.deeplake.ai/en/v3.1.0/Installation.html#installation,"Deep Lake can be installed with pip Copy to clipboard Deep Lake has the following extras that you can choose to install according to your needs. 
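For example, the base package and any of the extras are installed with pip's standard extras syntax, and several extras can be combined in a single command (a minimal sketch; the full list of extras is in the table that follows):
pip install deeplake
pip install 'deeplake[av]'        # audio and video support via PyAV
pip install 'deeplake[av,gcp]'    # extras can be combined in one install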
Installation commands¶ Install command Description Dependencies installed pip install 'deeplake[av]' Audio and video support via PyAV av pip install 'deeplake[visualizer]' Visualize Deep Lake datasets within notebooks. This is required for Dataset.visualize to work. IPython, flask pip install 'deeplake[gcp]' GCS support google-cloud-storage, google-auth, google-auth-oauthlib pip install 'deeplake[dicom]' DICOM data support pydicom pip install 'deeplake[gdrive]' Google Drive support google-api-python-client, oauth2client, google-auth, google-auth-oauthlib pip install 'deeplake[point_cloud]' Support for LiDAR point cloud data laspy pip install 'deeplake[enterprise]' Installs Deep Lake's enterprise features libdeeplake pip install 'deeplake[all]' Installs all of the above",activeloop Datasets¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#datasets,"Dataset.commit_id The lasted committed commit id of the dataset . Dataset.branch The current branch of the dataset Dataset.pending_commit_id The commit_id of the next commit that will be made to the dataset. Dataset.has_head_changes Returns True if currently at head node and uncommitted changes are present. Dataset.commits Lists all the commits leading to the current dataset state. Dataset.branches Lists all the branches of the dataset. Dataset Views¶ A dataset view is a subset of a dataset that points to specific samples (indices) in an existing dataset. Dataset views can be created by indexing a dataset, filtering a dataset with Dataset.filter(), querying a dataset with Dataset.query() or by sampling a dataset with Dataset.sample_by(). Filtering is done with user-defined functions or simplified expressions whereas query can perform SQL-like queries with our Tensor Query Language. See the full TQL spec here. Dataset views can only be saved when a dataset has been committed and has no changes on the HEAD node, in order to preserve data lineage and prevent the underlying data from changing after the query or filter conditions have been evaluated. Example >>> import deeplake >>> # load dataset >>> ds = deeplake.load('hub://activeloop/mnist-train') >>> # filter dataset >>> zeros = ds.filter('labels == 0') >>> # save view >>> zeros.save_view(id='zeros') >>> # load_view >>> zeros = ds.load_view(id='zeros') >>> len(zeros) 5923 Copy to clipboard Dataset.query Returns a sliced Dataset with given query results. Dataset.sample_by Returns a sliced Dataset with given weighted sampler applied. Dataset.filter Filters the dataset in accordance of filter function f(x: sample) -> bool Dataset.save_view Saves a dataset view as a virtual dataset (VDS) Dataset.get_view Returns the dataset view corresponding to id. Dataset.load_view Loads the view and returns the Dataset by id. Dataset.delete_view Deletes the view with given view id. Dataset.get_views Returns list of views stored in this Dataset. Dataset.is_view Returns True if this dataset is a view and False otherwise. Dataset.min_view Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Dataset.max_view Returns a view of the dataset in which shorter tensors are padded with None s to have the same length as the longest tensor.",activeloop Datasets¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#datasets," Creating Datasets¶ deeplake.dataset Returns a Dataset object referencing either a new or existing dataset. deeplake.empty Creates an empty dataset deeplake.like Creates a new dataset by copying the source dataset's structure to a new location. 
deeplake.ingest Ingests a dataset from a source and stores it as a structured dataset to destination. deeplake.ingest_kaggle Download and ingest a kaggle dataset and store it as a structured dataset to destination. deeplake.ingest_dataframe Convert pandas dataframe to a Deep Lake Dataset. deeplake.ingest_huggingface Converts Hugging Face datasets to Deep Lake format. Loading Datasets¶ deeplake.load Loads an existing dataset Deleting and Renaming Datasets¶ deeplake.delete Deletes a dataset at a given path. deeplake.rename Renames dataset at old_path to new_path. Copying Datasets¶ deeplake.copy Copies dataset at src to dest. deeplake.deepcopy Copies dataset at src to dest including version control history. Dataset Operations¶ Dataset.summary Prints a summary of the dataset. Dataset.append Append samples to mutliple tensors at once. Dataset.extend Appends multiple rows of samples to mutliple tensors at once. Dataset.query Returns a sliced Dataset with given query results. Dataset.copy Copies this dataset or dataset view to dest. Dataset.delete Deletes the entire dataset from the cache layers (if any) and the underlying storage. Dataset.rename Renames the dataset to path. Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset.pop Removes a sample from all the tensors of the dataset. Dataset.rechunk Rewrites the underlying chunks to make their sizes optimal. Dataset.flush Necessary operation after writes if caches are being used. Dataset.clear_cache Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. Dataset.size_approx Estimates the size in bytes of the dataset. Dataset Visualization¶ Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset Credentials¶ Dataset.add_creds_key Adds a new creds key to the dataset. Dataset.populate_creds Populates the creds key added in add_creds_key with the given creds. Dataset.update_creds_key Replaces the old creds key with the new creds key. Dataset.change_creds_management Changes the management status of the creds key. Dataset.get_creds_keys Returns the list of creds keys added to the dataset. Dataset Properties¶ Dataset.tensors All tensors belonging to this group, including those within sub groups. Dataset.groups All sub groups in this group Dataset.num_samples Returns the length of the smallest tensor. Dataset.read_only Returns True if dataset is in read-only mode and False otherwise. Dataset.info Returns the information about the dataset. Dataset.max_len Return the maximum length of the tensor. Dataset.min_len Return the minimum length of the tensor. Dataset Version Control¶ Dataset.commit Stores a snapshot of the current state of the dataset. Dataset.diff Returns/displays the differences between commits/branches. Dataset.checkout Checks out to a specific commit_id or branch. Dataset.merge Merges the target_id into the current dataset. Dataset.log Displays the details of all the past commits. Dataset.reset Resets the uncommitted changes present in the branch. Dataset.get_commit_details Get details of a particular commit",activeloop Deleting and Renaming Datasets¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#deleting-and-renaming-datasets,deeplake.delete Deletes a dataset at a given path. deeplake.rename Renames dataset at old_path to new_path.,activeloop Creating Datasets¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#creating-datasets,deeplake.dataset Returns a Dataset object referencing either a new or existing dataset. 
deeplake.empty Creates an empty dataset deeplake.like Creates a new dataset by copying the source dataset's structure to a new location. deeplake.ingest Ingests a dataset from a source and stores it as a structured dataset to destination. deeplake.ingest_kaggle Download and ingest a kaggle dataset and store it as a structured dataset to destination. deeplake.ingest_dataframe Convert pandas dataframe to a Deep Lake Dataset. deeplake.ingest_huggingface Converts Hugging Face datasets to Deep Lake format.,activeloop Copying Datasets¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#copying-datasets,deeplake.copy Copies dataset at src to dest. deeplake.deepcopy Copies dataset at src to dest including version control history.,activeloop Dataset Properties¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#dataset-properties,"Dataset.tensors All tensors belonging to this group, including those within sub groups. Dataset.groups All sub groups in this group Dataset.num_samples Returns the length of the smallest tensor. Dataset.read_only Returns True if dataset is in read-only mode and False otherwise. Dataset.info Returns the information about the dataset. Dataset.max_len Return the maximum length of the tensor. Dataset.min_len Return the minimum length of the tensor.",activeloop Loading Datasets¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#loading-datasets,deeplake.load Loads an existing dataset,activeloop Dataset Visualization¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#dataset-visualization,Dataset.visualize Visualizes the dataset in the Jupyter notebook.,activeloop Dataset Operations¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#dataset-operations,Dataset.summary Prints a summary of the dataset. Dataset.append Append samples to mutliple tensors at once. Dataset.extend Appends multiple rows of samples to mutliple tensors at once. Dataset.query Returns a sliced Dataset with given query results. Dataset.copy Copies this dataset or dataset view to dest. Dataset.delete Deletes the entire dataset from the cache layers (if any) and the underlying storage. Dataset.rename Renames the dataset to path. Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset.pop Removes a sample from all the tensors of the dataset. Dataset.rechunk Rewrites the underlying chunks to make their sizes optimal. Dataset.flush Necessary operation after writes if caches are being used. Dataset.clear_cache Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. Dataset.size_approx Estimates the size in bytes of the dataset.,activeloop Dataset Views¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#dataset-views,"A dataset view is a subset of a dataset that points to specific samples (indices) in an existing dataset. Dataset views can be created by indexing a dataset, filtering a dataset with Dataset.filter(), querying a dataset with Dataset.query() or by sampling a dataset with Dataset.sample_by(). Filtering is done with user-defined functions or simplified expressions whereas query can perform SQL-like queries with our Tensor Query Language. See the full TQL spec here.Dataset views can only be saved when a dataset has been committed and has no changes on the HEAD node, in order to preserve data lineage and prevent the underlying data from changing after the query or filter conditions have been evaluated.Example Copy to clipboard Dataset.query Returns a sliced Dataset with given query results. 
Dataset.sample_by Returns a sliced Dataset with given weighted sampler applied. Dataset.filter Filters the dataset in accordance of filter function f(x: sample) -> bool Dataset.save_view Saves a dataset view as a virtual dataset (VDS) Dataset.get_view Returns the dataset view corresponding to id. Dataset.load_view Loads the view and returns the Dataset by id. Dataset.delete_view Deletes the view with given view id. Dataset.get_views Returns list of views stored in this Dataset. Dataset.is_view Returns True if this dataset is a view and False otherwise. Dataset.min_view Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Dataset.max_view Returns a view of the dataset in which shorter tensors are padded with None s to have the same length as the longest tensor.",activeloop Dataset Credentials¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#dataset-credentials,Dataset.add_creds_key Adds a new creds key to the dataset. Dataset.populate_creds Populates the creds key added in add_creds_key with the given creds. Dataset.update_creds_key Replaces the old creds key with the new creds key. Dataset.change_creds_management Changes the management status of the creds key. Dataset.get_creds_keys Returns the list of creds keys added to the dataset.,activeloop Dataset Version Control¶,https://docs.deeplake.ai/en/v3.1.0/Datasets.html#dataset-version-control,Dataset.commit Stores a snapshot of the current state of the dataset. Dataset.diff Returns/displays the differences between commits/branches. Dataset.checkout Checks out to a specific commit_id or branch. Dataset.merge Merges the target_id into the current dataset. Dataset.log Displays the details of all the past commits. Dataset.reset Resets the uncommitted changes present in the branch. Dataset.get_commit_details Get details of a particular commit. Dataset.commit_id The lasted committed commit id of the dataset. Dataset.branch The current branch of the dataset Dataset.pending_commit_id The commit_id of the next commit that will be made to the dataset. Dataset.has_head_changes Returns True if currently at head node and uncommitted changes are present. Dataset.commits Lists all the commits leading to the current dataset state. Dataset.branches Lists all the branches of the dataset.,activeloop Tensors¶,https://docs.deeplake.ai/en/v3.1.0/Tensors.html#tensors,Creating Tensors¶ Dataset.create_tensor Creates a new tensor in the dataset. Dataset.create_group Creates a tensor group. Dataset.create_tensor_like Copies the source tensor's meta information and creates a new tensor with it. Deleting and Renaming Tensors¶ Dataset.delete_tensor Delete a tensor from the dataset. Dataset.delete_group Delete a tensor group from the dataset. Dataset.rename_tensor Renames tensor with name name to new_name Dataset.rename_group Renames group with name name to new_name Adding and deleting samples¶ Tensor.append Appends a single sample to the end of the tensor. Tensor.extend Extends the end of the tensor by appending multiple elements from a sequence. Tensor.pop Removes an element at the given index. Tensor.clear Deletes all samples from the tensor Tensor.__setitem__ Update samples with new values. Retrieving samples¶ Tensor.numpy Computes the contents of the tensor in numpy format. Tensor.data Returns data in the tensor in a format based on the tensor's base htype. Tensor.tobytes Returns the bytes of the tensor. Tensor.text Return text data. Tensor.dict Return json data. Tensor.list Return list data. 
Tensor._linked_sample Returns the linked sample at the given index. Tensor Properties¶ Tensor.htype Htype of the tensor. Tensor.base_htype Base htype of the tensor. Tensor.dtype Dtype of the tensor. Tensor.shape Get the shape of this tensor. Tensor.shape_interval Returns a ShapeInterval object that describes this tensor's shape more accurately. Tensor.ndim Number of dimensions of the tensor. Tensor.num_samples Returns the length of the primary axis of the tensor. Tensor.__len__ Returns the length of the primary axis of the tensor. Tensor.is_dynamic Will return True if samples in this tensor have shapes that are unequal. Tensor.is_sequence Whether this tensor is a sequence tensor. Tensor.is_link Whether this tensor is a link tensor. Tensor.verify Whether linked data will be verified when samples are added. Info¶ Tensor.info Returns the information about the tensor. Tensor.sample_info Returns info about particular samples in a tensor. Video features¶ Tensor.play Play video sample. Tensor.timestamps Returns timestamps (in seconds) for video sample as numpy array.,activeloop Adding and deleting samples¶,https://docs.deeplake.ai/en/v3.1.0/Tensors.html#adding-and-deleting-samples,Tensor.append Appends a single sample to the end of the tensor. Tensor.extend Extends the end of the tensor by appending multiple elements from a sequence. Tensor.pop Removes an element at the given index. Tensor.clear Deletes all samples from the tensor Tensor.__setitem__ Update samples with new values.,activeloop Retrieving samples¶,https://docs.deeplake.ai/en/v3.1.0/Tensors.html#retrieving-samples,Tensor.numpy Computes the contents of the tensor in numpy format. Tensor.data Returns data in the tensor in a format based on the tensor's base htype. Tensor.tobytes Returns the bytes of the tensor. Tensor.text Return text data. Tensor.dict Return json data. Tensor.list Return list data. Tensor._linked_sample Returns the linked sample at the given index.,activeloop Tensor Properties¶,https://docs.deeplake.ai/en/v3.1.0/Tensors.html#tensor-properties,Tensor.htype Htype of the tensor. Tensor.base_htype Base htype of the tensor. Tensor.dtype Dtype of the tensor. Tensor.shape Get the shape of this tensor. Tensor.shape_interval Returns a ShapeInterval object that describes this tensor's shape more accurately. Tensor.ndim Number of dimensions of the tensor. Tensor.num_samples Returns the length of the primary axis of the tensor. Tensor.__len__ Returns the length of the primary axis of the tensor. Tensor.is_dynamic Will return True if samples in this tensor have shapes that are unequal. Tensor.is_sequence Whether this tensor is a sequence tensor. Tensor.is_link Whether this tensor is a link tensor. Tensor.verify Whether linked data will be verified when samples are added.,activeloop Deleting and Renaming Tensors¶,https://docs.deeplake.ai/en/v3.1.0/Tensors.html#deleting-and-renaming-tensors,Dataset.delete_tensor Delete a tensor from the dataset. Dataset.delete_group Delete a tensor group from the dataset. Dataset.rename_tensor Renames tensor with name name to new_name Dataset.rename_group Renames group with name name to new_name,activeloop Video features¶,https://docs.deeplake.ai/en/v3.1.0/Tensors.html#video-features,Tensor.play Play video sample. Tensor.timestamps Returns timestamps (in seconds) for video sample as numpy array.,activeloop Info¶,https://docs.deeplake.ai/en/v3.1.0/Tensors.html#info,Tensor.info Returns the information about the tensor. 
Tensor.sample_info Returns info about particular samples in a tensor.,activeloop Creating Tensors¶,https://docs.deeplake.ai/en/v3.1.0/Tensors.html#creating-tensors,Dataset.create_tensor Creates a new tensor in the dataset. Dataset.create_group Creates a tensor group. Dataset.create_tensor_like Copies the source tensor's meta information and creates a new tensor with it.,activeloop Compressions¶,https://docs.deeplake.ai/en/v3.1.0/Compressions.html#compressions,"Deep Lake can read, compress, decompress and recompress data to different formats. The supported htype-compression configurations are given below. Sample Type Htype Compressions Image image bmp, dib, gif, ico, jpeg, jpeg2000, pcx, png, ppm, sgi, tga, tiff, webp, wmf, xbm, eps, fli, im, msp, mpo, apng Video video mp4, mkv, avi Audio audio flac, mp3, wav Dicom dicom dcm Point Cloud point_cloud las Other bbox, text, list, json, generic, etc. lz4 Sample Compression¶ If sample compression is specified when creating tensors, samples will be compressed to the given format if possible. If given data is already compressed and matches the provided sample_compression, it will be stored as is. If left as None, given samples are uncompressed. Note For audio and video, we don't support compressing raw frames but only reading compressed audio and video data. Examples: >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Copy to clipboard Structure of sample-wise compressed tensor.¶ Chunk Compression¶ If chunk compression is specified when creating tensors, addded samples will be clubbed together and compressed to the given format chunk-wise. If given data is already compressed, it will be uncompressed and then recompressed chunk-wise. Note Chunk-wise compression is not supported for audio, video and point_cloud htypes. Examples: >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') >>> ds.create_tensor('boxes', htype='bbox', chunk_compression='lz4') Copy to clipboard Structure of chunk-wise compressed tensor.¶ Note See deeplake.read() to learn how to read data from files and populate these tensors.",activeloop Chunk Compression¶,https://docs.deeplake.ai/en/v3.1.0/Compressions.html#chunk-compression,"If chunk compression is specified when creating tensors, addded samples will be clubbed together and compressed to the given format chunk-wise. If given data is already compressed, it will be uncompressed and then recompressed chunk-wise.NoteChunk-wise compression is not supported for audio, video and point_cloud htypes.Examples: Copy to clipboard Structure of chunk-wise compressed tensor.¶ NoteSee deeplake.read() to learn how to read data from files and populate these tensors.",activeloop Sample Compression¶,https://docs.deeplake.ai/en/v3.1.0/Compressions.html#sample-compression,"If sample compression is specified when creating tensors, samples will be compressed to the given format if possible. If given data is already compressed and matches the provided sample_compression, it will be stored as is. 
If left as None, given samples are uncompressed.NoteFor audio and video, we don't support compressing raw frames but only reading compressed audio and video data.Examples: Copy to clipboard Structure of sample-wise compressed tensor.¶",activeloop PyTorch and Tensorflow Support¶,https://docs.deeplake.ai/en/v3.1.0/Pytorch-and-Tensorflow-Support.html#pytorch-and-tensorflow-support,Deep Lake datasets can be easily converted to Torch dataloaders or Tensorflow datasets for training. Dataset.pytorch Converts the dataset into a pytorch Dataloader. Dataset.tensorflow Converts the dataset into a tensorflow compatible format.,activeloop Making Deep Lake Samples¶,https://docs.deeplake.ai/en/v3.1.0/Utility-Functions.html#making-deep-lake-samples,read Utility that reads raw data from supported files into Deep Lake format. link Utility that stores a link to raw data.,activeloop General Functions¶,https://docs.deeplake.ai/en/v3.1.0/Utility-Functions.html#general-functions,list List all available Deep Lake cloud datasets. exists Checks if a dataset exists at the given path.,activeloop Utility Functions¶,https://docs.deeplake.ai/en/v3.1.0/Utility-Functions.html#utility-functions,General Functions¶ list List all available Deep Lake cloud datasets. exists Checks if a dataset exists at the given path. Making Deep Lake Samples¶ read Utility that reads raw data from supported files into Deep Lake format. link Utility that stores a link to raw data. Parallelism¶ compute Compute is a decorator for functions. compose Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Transform pipelines returned by compute() and compose() are evaluated using eval: eval Evaluates the pipeline on data_in to produce an output dataset ds_out.,activeloop Parallelism¶,https://docs.deeplake.ai/en/v3.1.0/Utility-Functions.html#parallelism,compute Compute is a decorator for functions. compose Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Transform pipelines returned by compute() and compose() are evaluated using eval: eval Evaluates the pipeline on data_in to produce an output dataset ds_out.,activeloop Weights and Biases¶,https://docs.deeplake.ai/en/v3.1.0/Weights-and-Biases.html#module-deeplake.integrations.wandb.wandb,"Deep Lake's Weights and Biases integration allows you to track and improve reproducibility of your machine learning experiments. Deep Lake will automatically push all information required to reproduce the snapshot of the data like your dataset's URI, commit ID, and view IDs of any views that you have used in your training workflow.Learn more about Weights and Biases here. Logging Dataset Creation¶ If you create a Deep Lake dataset using any of the functions mentioned in Creating Datasets, just perform a commit on the dataset to log its creation on W&B. >>> run = wandb.init(project='deeplake_wandb', job_type='dataset_upload') >>> ds = deeplake.empty('hub://fayazrahman4u/my_dataset') # create dataset >>> ds.create_tensor('images', htype='image', sample_compression='jpg') # create a tensor >>> ds.images.append(deeplake.read('files/images/dog.jpg')) # add a sample >>> ds.commit('creation') # commit -> trigger logging >>> run.finish() Copy to clipboard Note If you created your dataset using deeplake.deepcopy(), perform the commit only if you have head changes. Note If you make changes to an existing dataset, commit the changes with an active Weights and Biases run to log it's state. 
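As a minimal sketch of the PyTorch and Tensorflow Support conversions described above (Dataset.pytorch() and Dataset.tensorflow()), assuming an existing public dataset and that PyTorch/TensorFlow are installed; only default arguments are used here:
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/mnist-train')
>>> train_loader = ds.pytorch()    # converts the dataset into a PyTorch dataloader
>>> tf_dataset = ds.tensorflow()   # converts the dataset into a tensorflow-compatible format
>>> for sample in train_loader:    # iterating reads data from the dataset (and is logged by an active W&B run)
...     break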
Logging Dataset Read¶ A dataset read will be logged if you iterate over a dataset or call Dataset.pytorch() or Tensor.numpy() on its tensors. >>> run = wandb.init(project='deeplake_wandb', job_type='torch dataloader') >>> train_loader = ds.pytorch() >>> run.finish() Copy to clipboard >>> run = wandb.init(project='deeplake_wandb', job_type='iteration') >>> for sample in ds: >>> print(sample['images'].shape) >>> run.finish() Copy to clipboard",activeloop Logging Dataset Creation¶,https://docs.deeplake.ai/en/v3.1.0/Weights-and-Biases.html#logging-dataset-creation,"If you create a Deep Lake dataset using any of the functions mentioned in Creating Datasets, just perform a commit on the dataset to log its creation on W&B. Copy to clipboard NoteIf you created your dataset using deeplake.deepcopy(), perform the commit only if you have head changes.NoteIf you make changes to an existing dataset, commit the changes with an active Weights and Biases run to log it's state.",activeloop Logging Dataset Read¶,https://docs.deeplake.ai/en/v3.1.0/Weights-and-Biases.html#logging-dataset-read,A dataset read will be logged if you iterate over a dataset or call Dataset.pytorch() or Tensor.numpy() on its tensors. Copy to clipboard Copy to clipboard,activeloop Dataloader¶,https://docs.deeplake.ai/en/v3.1.0/Dataloader.html#dataloader," Train your models using the new high performance C++ dataloader. See the dataloader method on how to create dataloaders from your datasets: Dataset.dataloader Returns a DeepLakeDataLoader object. DeepLakeDataLoader¶ class deeplake.enterprise.DeepLakeDataLoader¶ batch(batch_size: int, drop_last: bool = False)¶ Returns a batched DeepLakeDataLoader object. Parameters: batch_size (int) – Number of samples in each batch. drop_last (bool) – If True, the last batch will be dropped if its size is less than batch_size. Defaults to False. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .batch() has already been called. numpy(num_workers: int = 0, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, decode_method: Optional[Dict[str, str]] = None)¶ Returns a DeepLakeDataLoader object. Parameters: num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are:- ‘numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or ‘png'. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .pytorch() or .numpy() has already been called. 
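Putting the DeepLakeDataLoader methods documented in this section together, a typical pipeline chains Dataset.dataloader() with batch(), shuffle() and pytorch(); a minimal sketch, reusing the public fashion-mnist-train dataset from the examples and assuming the enterprise dataloader dependencies and PyTorch are installed:
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/fashion-mnist-train')
>>> loader = ds.dataloader().batch(32, drop_last=True).shuffle().pytorch(num_workers=2, decode_method={'images': 'numpy'})
>>> for batch in loader:    # each batch holds the loaded tensors, e.g. images and labels
...     break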
pytorch(num_workers: int = 0, collate_fn: Optional[Callable] = None, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, distributed: bool = False, return_index: bool = True, decode_method: Optional[Dict[str, str]] = None)¶ Returns a DeepLakeDataLoader object. Parameters: num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. distributed (bool) – Used for DDP training. Distributes different sections of the dataset to different ranks. Defaults to False. return_index (bool) – Used to idnetify where loader needs to retur sample index or not. Defaults to True",activeloop Dataloader¶,https://docs.deeplake.ai/en/v3.1.0/Dataloader.html#dataloader,"decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor . Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or ‘png'. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .pytorch() or .numpy() has already been called. query(query_string: str)¶ Returns a sliced DeepLakeDataLoader object with given query results. It allows to run SQL like queries on dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters: query_string (str) – An SQL string adjusted with new functionalities to run on the dataset object Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Examples >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds_train = ds_train.dataloader().query('select * where labels != 5') Copy to clipboard >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.dataloader().query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)') Copy to clipboard sample_by(weights: Union[str, list, tuple, ndarray], replace: Optional[bool] = True, size: Optional[int] = None)¶ Returns a sliced DeepLakeDataLoader with given weighted sampler applied Parameters: weights – (Union[str, list, tuple, np.ndarray]): If it's string then tql will be run to calculate the weights based on the expression. list, tuple and ndarray will be treated as the list of the weights per sample replace – Optional[bool] If true the samples can be repeated in the result view. (default: True). size – Optional[int] The length of the result view. (default: len(dataset)) Returns: A DeepLakeDataLoader object. 
Return type: DeepLakeDataLoader Examples Sample the dataloader with labels == 5 twice as often as labels == 6: >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Sample the dataloader treating the labels tensor as weights: >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('labels') Sample the dataloader with the given weights: >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(0, len(ds_train)): ... weights.append(i % 5) ... >>> sampled_ds = ds_train.dataloader().sample_by(weights, replace=False) shuffle(shuffle: bool = True, buffer_size: int = 2048)¶ Returns a shuffled DeepLakeDataLoader object. Parameters: shuffle (bool) – shows whether we need to shuffle elements or not. Defaults to True. buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .shuffle() has already been called. ValueError – If the dataset is a view and shuffle is True transform(transform: Union[Callable, Dict[str, Optional[Callable]]], **kwargs: Dict)¶ Returns a transformed DeepLakeDataLoader object",activeloop Dataloader¶,https://docs.deeplake.ai/en/v3.1.0/Dataloader.html#dataloader,Parameters: transform (Callable or Dict[Callable]) – A function or dictionary of functions to apply to the data. kwargs – Additional arguments to be passed to transform. Only applicable if transform is a callable. Ignored if transform is a dictionary. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .transform() has already been called.,activeloop DeepLakeDataLoader¶,https://docs.deeplake.ai/en/v3.1.0/Dataloader.html#deeplakedataloader," class deeplake.enterprise.DeepLakeDataLoader¶ batch(batch_size: int, drop_last: bool = False)¶ Returns a batched DeepLakeDataLoader object. Parameters: batch_size (int) – Number of samples in each batch. drop_last (bool) – If True, the last batch will be dropped if its size is less than batch_size. Defaults to False. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .batch() has already been called. numpy(num_workers: int = 0, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, decode_method: Optional[Dict[str, str]] = None)¶ Returns a DeepLakeDataLoader object. Parameters: num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformations use torchvision transforms that require PIL images as input. 
Only supported for tensors with sample_compression='jpeg' or 'png'. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .pytorch() or .numpy() has already been called. pytorch(num_workers: int = 0, collate_fn: Optional[Callable] = None, tensors: Optional[List[str]] = None, num_threads: Optional[int] = None, prefetch_factor: int = 2, distributed: bool = False, return_index: bool = True, decode_method: Optional[Dict[str, str]] = None)¶ Returns a DeepLakeDataLoader object. Parameters: num_workers (int) – Number of workers to use for transforming and processing the data. Defaults to 0. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). tensors (List[str], Optional) – List of tensors to load. If None, all tensors are loaded. Defaults to None. num_threads (int, Optional) – Number of threads to use for fetching and decompressing the data. If None, the number of threads is automatically determined. Defaults to None. prefetch_factor (int) – Number of batches to transform and collate in advance per worker. Defaults to 2. distributed (bool) – Used for DDP training. Distributes different sections of the dataset to different ranks. Defaults to False. return_index (bool) – Used to identify whether the loader needs to return the sample index or not. Defaults to True. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples",activeloop DeepLakeDataLoader¶,https://docs.deeplake.ai/en/v3.1.0/Dataloader.html#deeplakedataloader,"'pil': Returns samples as PIL images. Especially useful when transformations use torchvision transforms that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .pytorch() or .numpy() has already been called. query(query_string: str)¶ Returns a sliced DeepLakeDataLoader object with given query results. It allows you to run SQL-like queries on the dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters: query_string (str) – An SQL string adjusted with new functionalities to run on the dataset object Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Examples >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds_train = ds_train.dataloader().query('select * where labels != 5') >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.dataloader().query("(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)") sample_by(weights: Union[str, list, tuple, ndarray], replace: Optional[bool] = True, size: Optional[int] = None)¶ Returns a sliced DeepLakeDataLoader with the given weighted sampler applied. Parameters: weights – (Union[str, list, tuple, np.ndarray]): If it is a string, then TQL will be run to calculate the weights based on the expression. list, tuple and ndarray will be treated as the list of weights per sample. replace – Optional[bool] If True, the samples can be repeated in the result view. (default: True). size – Optional[int] The length of the result view. (default: len(dataset)) Returns: A DeepLakeDataLoader object. 
Return type: DeepLakeDataLoader Examples Sample the dataloader with labels == 5 twice as often as labels == 6: >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Sample the dataloader treating the labels tensor as weights: >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.dataloader().sample_by('labels') Sample the dataloader with the given weights: >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(0, len(ds_train)): ... weights.append(i % 5) ... >>> sampled_ds = ds_train.dataloader().sample_by(weights, replace=False) shuffle(shuffle: bool = True, buffer_size: int = 2048)¶ Returns a shuffled DeepLakeDataLoader object. Parameters: shuffle (bool) – shows whether we need to shuffle elements or not. Defaults to True. buffer_size (int) – The size of the buffer used to shuffle the data in MBs. Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. Returns: A DeepLakeDataLoader object. Return type: DeepLakeDataLoader Raises: ValueError – If .shuffle() has already been called. ValueError – If the dataset is a view and shuffle is True transform(transform: Union[Callable, Dict[str, Optional[Callable]]], **kwargs: Dict)¶ Returns a transformed DeepLakeDataLoader object. Parameters: transform (Callable or Dict[Callable]) – A function or dictionary of functions to apply to the data. kwargs – Additional arguments to be passed to transform. Only applicable if transform is a callable. Ignored if transform is a dictionary. Returns: A DeepLakeDataLoader object",activeloop DeepLakeDataLoader¶,https://docs.deeplake.ai/en/v3.1.0/Dataloader.html#deeplakedataloader,Return type: DeepLakeDataLoader Raises: ValueError – If .transform() has already been called. ,activeloop Sampler¶,https://docs.deeplake.ai/en/v3.1.0/Sampler.html#sampler,"The sampler applies weighted sampling on the dataset and returns the sampled view. It creates a discrete distribution with the given weights and randomly picks samples based on it. The resulting view is generated in such a way that when creating a dataloader from the view and training on it, the performance impact is minimal. See the sample_by method on how to use this feature: Dataset.sample_by Returns a sliced Dataset with the given weighted sampler applied.",activeloop Tensor Query Language¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#tensor-query-language," This page describes the Tensor Query Language (TQL), an SQL-like language used for querying in the Activeloop Platform as well as in ds.query in our Python API. To use queries, install deeplake with pip install deeplake[enterprise]. Querying datasets is part of our Growth and Enterprise Plan. Users of our Community plan can only perform queries on Activeloop datasets ('hub://activeloop/…' datasets). To run queries on your own datasets, upgrade your organization's plan. Language¶ SELECT¶ TQL supports only the SELECT statement. Every TQL expression starts with SELECT *. TQL supports only *, which means to select all tensors. The common syntax for the SELECT statement is the following: SELECT * [FROM string] [WHERE expression] [LIMIT number [OFFSET number]] [ORDER BY expression [ASC/DESC]] Each part of the SELECT statement can be omitted.
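As a concrete illustration of this syntax, the sketch below runs a query with a WHERE filter and a LIMIT clause through ds.query() on a public Activeloop dataset, in the same style as the examples further down this page; the remaining clauses can be appended following the grammar above.
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/mnist-train')
>>> view = ds.query('select * where labels == 0 limit 100')  # a dataset view with at most 100 samples whose label is 0
>>> len(view)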
FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context WHERE¶ WHERE expression is used to filter the samples in the dataset by conditions. The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error. ORDER BY¶ ORDER BY expression orders the output of the query by the given criteria. The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string. ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. It is ascending by default. LIMIT OFFSET¶ LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL. Expressions¶ TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value. The value can be numeric scalar or array as well as string value. String literal should be provided within single quotes (') and can be used on class_label, json and text tensors. For class labels it will get corresponding numeric value from the class_names list and do numeric comparison. For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array. Numeric multidimensional tensors can be indexed by numbers, e.g. categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]",activeloop Tensor Query Language¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#tensor-query-language,"This last expression returns array containing the third elements of the initial two dimensional array boxes . TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions. For example, labels == 0 OR labels == 1 Copy to clipboard From SQL we also support the following two keywords: BETWEEN labels BETWEEN 0 and 5 Copy to clipboard IN labels in ARRAY[0, 2, 4, 6, 8] Copy to clipboard Functions¶ There are predefined functions which can be used in WHERE expression as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains given value - CONTAINS(categories, 'person') RANDOM - returns random number. May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference. ALL returns True on empty array, while ALL_STRICT return False ANY - takes an array of booleans and returns single boolean, True if any of the elements int the input array is True LOGICAL_AND - takes two boolean arrays, does element wise logical and, returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element wise logical or, returns the result array. This will return False if the input arrays have different sizes. 
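The functions above can be combined with the logical operators from the previous section inside a single WHERE clause, and RANDOM() can be used in ORDER BY to shuffle the result. A hedged sketch on a public dataset follows; note the double-quoted Python string so the TQL string literals can keep their single quotes.
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/coco-train')
>>> # samples that contain a person but no car, returned in random order
>>> view = ds.query("select * where contains(categories, 'person') and not contains(categories, 'car') order by random()")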
UNION, INTERSECT, EXCEPT¶ Query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT. Examples¶ Querying for images containing 0 in MNIST Train Dataset with ds.query. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/mnist-train') >>> result = ds.query('select * where labels == 0') >>> len(result) 5923 Copy to clipboard Querying for samples with car or motorcycle in categories of COCO Train Dataset. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/coco-train') >>> result = ds.query('(select * where contains(categories, 'car')) union (select * where contains(categories, 'motorcycle'))') >>> len(result) 14376 Copy to clipboard",activeloop Language¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#language," SELECT¶ TQL supports only SELECT statement. Every TQL expression starts with SELECT *. TQL supports only * which means to select all tensors. The common syntax for select statement is the following: SELECT * [FROM string] [WHERE expression] [LIMIT number [OFFSET number]] [ORDER BY expression [ASC/DESC]] Copy to clipboard Each part of the SELECT statement can be omitted. FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context WHERE¶ WHERE expression is used to filter the samples in the dataset by conditions. The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error. ORDER BY¶ ORDER BY expression orders the output of the query by the given criteria. The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string. ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. It is ascending by default. LIMIT OFFSET¶ LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL. Expressions¶ TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value. The value can be numeric scalar or array as well as string value. String literal should be provided within single quotes (') and can be used on class_label, json and text tensors. For class labels it will get corresponding numeric value from the class_names list and do numeric comparison. For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array. Numeric multidimensional tensors can be indexed by numbers, e.g. categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]. This last expression returns array containing the third elements of the initial two dimensional array boxes. TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions",activeloop ORDER BY¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#order-by,"ORDER BY expression orders the output of the query by the given criteria. 
The criteria can be any expression output of which can be ordered. The ordered outputs are either scalar numbers or strings. In addition it can also be json, which contains number or string.ORDER BY statement optionally accepts ASC/DESC keywords specifying whether the ordering should be ascending or descending. It is ascending by default.",activeloop LIMIT OFFSET¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#limit-offset,"LIMIT and OFFSET expressions are used to limit the output of the query by index, as in SQL.",activeloop SELECT¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#select,"TQL supports only SELECT statement. Every TQL expression starts with SELECT *. TQL supports only * which means to select all tensors. The common syntax for select statement is the following: Copy to clipboard Each part of the SELECT statement can be omitted.FROM expression is allowed, but it does not have any effect on the query, because for now TQL queries are run on a specific dataset, so the FROM is known from the context",activeloop Expressions¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#expressions,"TQL supports any comparison operator (==, !=, <, <=, >=) where the left side is a tensor and the right side is a known value.The value can be numeric scalar or array as well as string value.String literal should be provided within single quotes (') and can be used on class_label, json and text tensors.For class labels it will get corresponding numeric value from the class_names list and do numeric comparison.For json and text it will do string comparison. The left side of the expression can be indexed (subscripted) if the tensor is multidimensional array or json. Jsons support indexing by string, e.g. index_meta['id'] == 'some_id'. Jsons can also be indexed by number if the underlying data is array.Numeric multidimensional tensors can be indexed by numbers, e.g. categories[0] == 1 as well as Python style slicing and multidimensional indexing, such as boxes[:2]. This last expression returns array containing the third elements of the initial two dimensional array boxes.TQL supports logical operators - AND, OR and NOT. These operators can be used to combine boolean expressions. For example, Copy to clipboard From SQL we also support the following two keywords: BETWEEN Copy to clipboard IN Copy to clipboard",activeloop Functions¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#functions,"There are predefined functions which can be used in WHERE expression as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains given value - CONTAINS(categories, 'person') RANDOM - returns random number. May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference. ALL returns True on empty array, while ALL_STRICT return False ANY - takes an array of booleans and returns single boolean, True if any of the elements int the input array is True LOGICAL_AND - takes two boolean arrays, does element wise logical and, returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element wise logical or, returns the result array. 
This will return False if the input arrays have different sizes.",activeloop "UNION, INTERSECT, EXCEPT¶",https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#union-intersect-except,"Query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT.",activeloop Examples¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#examples,Querying for images containing 0 in MNIST Train Dataset with ds.query. Copy to clipboard Querying for samples with car or motorcycle in categories of COCO Train Dataset. Copy to clipboard,activeloop Language¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#language,"For example, labels == 0 OR labels == 1 Copy to clipboard From SQL we also support the following two keywords: BETWEEN labels BETWEEN 0 and 5 Copy to clipboard IN labels in ARRAY[0, 2, 4, 6, 8] Copy to clipboard Functions¶ There are predefined functions which can be used in WHERE expression as well as in ORDER BY expressions: CONTAINS - checks if the given tensor contains given value - CONTAINS(categories, 'person') RANDOM - returns random number . May be used in ORDER BY to shuffle the output - ORDER BY RANDOM() SHAPE - returns the shape array of the given tensor - SHAPE(boxes) ALL - takes an array of booleans and returns single boolean, True if all elements of the input array are True ALL_STRICT - same as ALL with one difference. ALL returns True on empty array, while ALL_STRICT return False ANY - takes an array of booleans and returns single boolean, True if any of the elements int the input array is True LOGICAL_AND - takes two boolean arrays, does element wise logical and, returns the result array. This will return False if the input arrays have different sizes. LOGICAL_OR - takes two boolean arrays, does element wise logical or, returns the result array. This will return False if the input arrays have different sizes. UNION, INTERSECT, EXCEPT¶ Query can contain multiple SELECT statements, combined by one of the set operations - UNION, INTERSECT and EXCEPT.",activeloop WHERE¶,https://docs.deeplake.ai/en/v3.1.0/Tensor-Query-Language.html#where,"WHERE expression is used to filter the samples in the dataset by conditions. The conditions should be convertible to boolean. Any expression which outputs a number will be converted to boolean with non-zero values taken as True. If the expression is not convertible to boolean, such as strings, json objects and arrays, the query will print the corresponding error.",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake," The deeplake package provides a database which stores data as compressed chunked arrays that can be stored anywhere and later streamed to deep learning models. deeplake.dataset(path: Union[str, Path], read_only: Optional[bool] = None, overwrite: bool = False, public: bool = False, memory_cache_size: int = 256, local_cache_size: int = 0, creds: Optional[Union[str, Dict]] = None, token: Optional[str] = None, verbose: bool = True, access_method: str = 'stream')¶ Returns a Dataset object referencing either a new or existing dataset. Examples >>> ds = deeplake.dataset('hub://username/dataset') >>> ds = deeplake.dataset('s3://mybucket/my_dataset') >>> ds = deeplake.dataset('./datasets/my_dataset', overwrite=True) Copy to clipboard Parameters: path (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. 
To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. read_only (bool, optional) – Opens dataset in read only mode if this is passed as True. Defaults to False. Datasets stored on Deep Lake cloud that your account does not have write access to will automatically open in read mode. overwrite (bool) – If set to True this overwrites the dataset if it already exists. Defaults to False. public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to True. memory_cache_size (int) – The size of the memory cache to be used in MB. local_cache_size (int) – The size of the local filesystem cache to be used in MB. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. verbose (bool) – If True, logs will be printed. Defaults to True. access_method (str) – The access method to use for the dataset. Can be: 'stream' Streams the data from the dataset i.e. only fetches data when required. This is the default value. 'download' Downloads the data to the local filesystem to the path specified in environment variable DEEPLAKE_DOWNLOAD_PATH. This will overwrite DEEPLAKE_DOWNLOAD_PATH",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set or if the dataset does not exist . The ‘download' access method can be modified to specify num_workers and/or scheduler. For example: ‘download:2:processed' will use 2 workers and use processed scheduler, while ‘download:3' will use 3 workers and default scheduler (threaded), and ‘download:processed' will use a single worker and use processed scheduler. 'local' Downloads the dataset if it doesn't already exist, otherwise loads from local storage. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set. The ‘local' access method can be modified to specify num_workers and/or scheduler to be used in case dataset needs to be downloaded. If dataset needs to be downloaded, ‘local:2:processed' will use 2 workers and use processed scheduler, while ‘local:3' will use 3 workers and default scheduler (threaded), and ‘local:processed' will use a single worker and use processed scheduler. Returns: Dataset created using the arguments provided. 
Return type: Dataset Raises: AgreementError – When agreement is rejected UserNotLoggedInException – When user is not logged in InvalidTokenException – If the specified token is invalid TokenPermissionError – When there are permission or other errors related to token Danger Setting overwrite to True will delete all of your data if it exists! Be very careful when setting this parameter. Warning Setting access_method to download will overwrite the local copy of the dataset if it was previously downloaded. Note Any changes made to the dataset in download / local mode will only be made to the local copy and will not be reflected in the original dataset. deeplake.empty(path: Union[str, Path], overwrite: bool = False, public: bool = False, memory_cache_size: int = 256, local_cache_size: int = 0, creds: Optional[dict] = None, token: Optional[str] = None, verbose: bool = True) → Dataset¶ Creates an empty dataset Parameters: path (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. overwrite (bool) – If set to True this overwrites the dataset if it already exists. Defaults to False. public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. memory_cache_size (int) – The size of the memory cache to be used in MB. local_cache_size (int) – The size of the local filesystem cache to be used in MB",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path . If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. verbose (bool) – If True, logs will be printed. Defaults to True. Returns: Dataset created using the arguments provided. Return type: Dataset Raises: DatasetHandlerError – If a Dataset already exists at the given path and overwrite is False. UserNotLoggedInException – When user is not logged in InvalidTokenException – If the specified toke is invalid TokenPermissionError – When there are permission or other errors related to token Danger Setting overwrite to True will delete all of your data if it exists! Be very careful when setting this parameter. 
deeplake.like(dest: Union[str, Path], src: Union[str, Dataset, Path], tensors: Optional[List[str]] = None, overwrite: bool = False, creds: Optional[dict] = None, token: Optional[str] = None, public: bool = False) → Dataset¶ Creates a new dataset by copying the source dataset's structure to a new location. No samples are copied, only the meta/info for the dataset and it's tensors. Parameters: dest – Empty Dataset or Path where the new dataset will be created. src (Union[str, Dataset]) – Path or dataset object that will be used as the template for the new dataset. tensors (List[str], optional) – Names of tensors (and groups) to be replicated. If not specified all tensors in source dataset are considered. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. Returns: New dataset object. Return type: Dataset deeplake.ingest(src: Union[str, Path], dest: Union[str, Path], images_compression: str = 'auto', dest_creds: Optional[Dict] = None, progressbar: bool = True, summary: bool = True, **dataset_kwargs) → Dataset¶ Ingests a dataset from a source and stores it as a structured dataset to destination",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"Parameters: src (str, pathlib.Path) – Local path to where the unstructured dataset is stored or path to csv file . dest (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. images_compression (str) – For image classification datasets, this compression will be used for the images tensor. If images_compression is 'auto', compression will be automatically determined by the most common extension in the directory. dest_creds (Optional[Dict]) – A dictionary containing credentials used to access the destination path of the dataset. progressbar (bool) – Enables or disables ingestion progress bar. Defaults to True. summary (bool) – If True, a summary of skipped files will be printed after completion. Defaults to True. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. Returns: New dataset object with structured dataset. 
Return type: Dataset Raises: InvalidPathException – If the source directory does not exist. SamePathException – If the source and destination paths are the same. AutoCompressionError – If the source directory is empty or does not contain a valid extension. InvalidFileExtension – If the most frequent file extension is found to be ‘None' during auto-compression. Note Currently only local source paths and image classification datasets / csv files are supported for automatic ingestion. Supported filetypes: png/jpeg/jpg/csv. All files and sub-directories with unsupported filetypes are ignored. Valid source directory structures for image classification look like: data/ img0.jpg img1.jpg ... or: data/ class0/ cat0.jpg ... class1/ dog0.jpg ... ... or: data/ train/ class0/ img0.jpg ... ... val/ class0/ img0.jpg ... ... ... Classes defined as sub-directories can be accessed at ds['test/labels'].info.class_names. Support for train and test sub-directories is present under ds['train/images'], ds['train/labels'] and ds['test/images'], ds['test/labels']. Mapping filenames to classes from an external file is currently not supported. deeplake.ingest_kaggle(tag: str, src: Union[str, Path], dest: Union[str, Path], exist_ok: bool = False, images_compression: str = 'auto', dest_creds: Optional[Dict] = None, kaggle_credentials: Optional[dict] = None, progressbar: bool = True, summary: bool = True, **dataset_kwargs) → Dataset¶ Download and ingest a Kaggle dataset and store it as a structured dataset to destination. Parameters: tag (str) – Kaggle dataset tag",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"Example: 'coloradokb/dandelionimages' points to https://www.kaggle.com/coloradokb/dandelionimages src (str, pathlib.Path) – Local path to where the raw Kaggle dataset will be downloaded to. dest (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. exist_ok (bool) – If the Kaggle dataset was already downloaded and exist_ok is True, ingestion will proceed without error. images_compression (str) – For image classification datasets, this compression will be used for the images tensor. If images_compression is 'auto', compression will be automatically determined by the most common extension in the directory. dest_creds (Optional[Dict]) – A dictionary containing credentials used to access the destination path of the dataset. kaggle_credentials (dict) – A dictionary containing Kaggle credentials {'username':'YOUR_USERNAME', 'key': 'YOUR_KEY'}. If None, environment variables/the kaggle.json file will be used if available. progressbar (bool) – Enables or disables ingestion progress bar. Set to True by default. summary (bool) – Generates ingestion summary. Set to True by default. **dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function. See deeplake.dataset(). 
Returns: New dataset object with structured dataset. Return type: Dataset Raises: SamePathException – If the source and destination path are same. Note Currently only local source paths and image classification datasets are supported for automatic ingestion. deeplake.ingest_dataframe(src, dest: Union[str, Path, Dataset], dest_creds: Optional[Dict] = None, progressbar: bool = True, **dataset_kwargs)¶ Convert pandas dataframe to a Deep Lake Dataset. Parameters: src (pd.DataFrame) – The pandas dataframe to be converted. dest (str, pathlib.Path, Dataset) – A Dataset or The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. dest_creds (Optional[Dict]) – A dictionary containing credentials used to access the destination path of the dataset. progressbar (bool) – Enables or disables ingestion progress bar. Set to True by default",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"**dataset_kwargs – Any arguments passed here will be forwarded to the dataset creator function . See deeplake.dataset(). Returns: New dataset created from the dataframe. Return type: Dataset Raises: Exception – If src is not a valid pandas dataframe object. deeplake.ingest_huggingface(src, dest, use_progressbar=True) → Dataset¶ Converts Hugging Face datasets to Deep Lake format. Parameters: src (hfDataset, DatasetDict) – Hugging Face Dataset or DatasetDict to be converted. Data in different splits of a DatasetDict will be stored under respective tensor groups. dest (Dataset, str, pathlib.Path) – Destination dataset or path to it. use_progressbar (bool) – Defines if progress bar should be used to show conversion progress. Returns: The destination Deep Lake dataset. Return type: Dataset Note if DatasetDict looks like: >>> { ... train: Dataset({ ... features: ['data'] ... }), ... validation: Dataset({ ... features: ['data'] ... }), ... test: Dataset({ ... features: ['data'] ... }), ... } Copy to clipboard it will be converted to a Deep Lake Dataset with tensors ['train/data', 'validation/data', 'test/data']. Features of the type Sequence(feature=Value(dtype='string')) are not supported. Columns of such type are skipped. deeplake.load(path: Union[str, Path], read_only: Optional[bool] = None, memory_cache_size: int = 256, local_cache_size: int = 0, creds: Optional[dict] = None, token: Optional[str] = None, verbose: bool = True, access_method: str = 'stream') → Dataset¶ Loads an existing dataset Parameters: path (str, pathlib.Path) – The full path to the dataset. Can be: a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login' from command line) an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument. a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. 
a memory path of the form mem://path/to/dataset which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist. read_only (bool, optional) – Opens dataset in read only mode if this is passed as True. Defaults to False. Datasets stored on Deep Lake cloud that your account does not have write access to will automatically open in read mode. memory_cache_size (int) – The size of the memory cache to be used in MB. local_cache_size (int) – The size of the local filesystem cache to be used in MB. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"This is optional, tokens are normally autogenerated . verbose (bool) – If True, logs will be printed. Defaults to True. access_method (str) – The access method to use for the dataset. Can be: 'stream' Streams the data from the dataset i.e. only fetches data when required. This is the default value. 'download' Downloads the data to the local filesystem to the path specified in environment variable DEEPLAKE_DOWNLOAD_PATH. This will overwrite DEEPLAKE_DOWNLOAD_PATH. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set or if the dataset does not exist. The ‘download' access method can be modified to specify num_workers and/or scheduler. For example: ‘download:2:processed' will use 2 workers and use processed scheduler, while ‘download:3' will use 3 workers and default scheduler (threaded), and ‘download:processed' will use a single worker and use processed scheduler. 'local' Downloads the dataset if it doesn't already exist, otherwise loads from local storage. Raises an exception if DEEPLAKE_DOWNLOAD_PATH environment variable is not set. The ‘local' access method can be modified to specify num_workers and/or scheduler to be used in case dataset needs to be downloaded. If dataset needs to be downloaded, ‘local:2:processed' will use 2 workers and use processed scheduler, while ‘local:3' will use 3 workers and default scheduler (threaded), and ‘local:processed' will use a single worker and use processed scheduler. Returns: Dataset loaded using the arguments provided. Return type: Dataset Raises: DatasetHandlerError – If a Dataset does not exist at the given path. AgreementError – When agreement is rejected UserNotLoggedInException – When user is not logged in InvalidTokenException – If the specified toke is invalid TokenPermissionError – When there are permission or other errors related to token Warning Setting access_method to download will overwrite the local copy of the dataset if it was previously downloaded. Note Any changes made to the dataset in download / local mode will only be made to the local copy and will not be reflected in the original dataset. deeplake.delete(path: Union[str, Path], force: bool = False, large_ok: bool = False, creds: Optional[dict] = None, token: Optional[str] = None, verbose: bool = False) → None¶ Deletes a dataset at a given path. 
Parameters: path (str, pathlib.Path) – The path to the dataset to be deleted. force (bool) – Delete data regardless of whether it looks like a deeplake dataset. All data at the path will be removed if set to True. large_ok (bool) – Delete datasets larger than 1GB. Disabled by default. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"When set to 0, it will always use serial processing, irrespective of the scheduler . scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). Returns: New dataset object. Return type: Dataset Raises: DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. deeplake.deepcopy(src: Union[str, Path], dest: Union[str, Path], tensors: Optional[List[str]] = None, overwrite: bool = False, src_creds=None, src_token=None, dest_creds=None, dest_token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False, verbose: bool = True)¶ Copies dataset at src to dest including version control history. Parameters: src (str, pathlib.Path) – Path to the dataset to be copied. dest (str, pathlib.Path) – Destination path to copy to. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. src_creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. src_token (str, optional) – Activeloop token, used for fetching credentials to the dataset at src if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. dest_creds (dict, optional) – creds required to create / overwrite datasets at dest. dest_token (str, optional) – token used to for fetching credentials to dest. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. verbose (bool) – If True, logs will be printed. Defaults to True. Returns: New dataset object. Return type: Dataset Raises: DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. 
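As a usage sketch of the deeplake.deepcopy() signature above (the paths, tensor names and credential values are placeholders, not part of the original docs):
>>> import deeplake
>>> new_ds = deeplake.deepcopy(
...     src='./datasets/my_dataset',            # placeholder source dataset
...     dest='s3://my-bucket/my_dataset_copy',  # placeholder S3 destination
...     tensors=['images', 'labels'],           # copy only these tensors, with their version history
...     dest_creds={'aws_access_key_id': '...', 'aws_secret_access_key': '...'},
...     num_workers=4,
...     scheduler='threaded',
... )
>>> # deeplake.copy(), documented further below, takes the same arguments but does not include version control history.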
deeplake.connect(src_path: str, creds_key: str, dest_path: Optional[str] = None, org_id: Optional[str] = None, ds_name: Optional[str] = None, token: Optional[str] = None) → Dataset¶ Connects dataset at src_path to Deep Lake via the provided path. Examples >>> # Connect an s3 dataset >>> ds = deeplake.connect(src_path='s3://bucket/dataset', dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key') >>> # or >>> ds = deeplake.connect(src_path='s3://bucket/dataset', org_id='my_org', creds_key='my_managed_credentials_key') Copy to clipboard Parameters: src_path (str) – Cloud path to the source dataset",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset . This is optional, tokens are normally autogenerated. verbose (bool) – If True, logs will be printed. Defaults to True. Raises: DatasetHandlerError – If a Dataset does not exist at the given path and force = False. NotImplementedError – When attempting to delete a managed view. Warning This is an irreversible operation. Data once deleted cannot be recovered. deeplake.rename(old_path: Union[str, Path], new_path: Union[str, Path], creds: Optional[dict] = None, token: Optional[str] = None) → Dataset¶ Renames dataset at old_path to new_path. Examples >>> deeplake.rename('hub://username/image_ds', 'hub://username/new_ds') >>> deeplake.rename('s3://mybucket/my_ds', 's3://mybucket/renamed_ds') Copy to clipboard Parameters: old_path (str, pathlib.Path) – The path to the dataset to be renamed. new_path (str, pathlib.Path) – Path to the dataset after renaming. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. This takes precedence over credentials present in the environment. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url' and ‘aws_region' as keys. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. Returns: The renamed Dataset. Return type: Dataset Raises: DatasetHandlerError – If a Dataset does not exist at the given path or if new path is to a different directory. deeplake.copy(src: Union[str, Path, Dataset], dest: Union[str, Path], tensors: Optional[List[str]] = None, overwrite: bool = False, src_creds=None, src_token=None, dest_creds=None, dest_token=None, num_workers: int = 0, scheduler='threaded', progressbar=True)¶ Copies dataset at src to dest. Version control history is not included. Parameters: src (Union[str, Dataset, pathlib.Path]) – The Dataset or the path to the dataset to be copied. dest (str, pathlib.Path) – Destination path to copy to. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at dest, it will be overwritten. Defaults to False. src_creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. If ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token' are present, these take precedence over credentials present in the environment or in credentials file. Currently only works with s3 paths. It supports ‘aws_access_key_id', ‘aws_secret_access_key', ‘aws_session_token', ‘endpoint_url', ‘aws_region', ‘profile_name' as keys. 
src_token (str, optional) – Activeloop token, used for fetching credentials to the dataset at src if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. dest_creds (dict, optional) – creds required to create / overwrite datasets at dest. dest_token (str, optional) – token used for fetching credentials to dest. num_workers (int) – The number of workers to use for copying. Defaults to 0",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"Can be: an s3 path like s3://bucket/path/to/dataset. a gcs path like gcs://bucket/path/to/dataset. creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be inferred from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Returns: The connected Deep Lake dataset. Return type: Dataset Raises: InvalidSourcePathError – If the src_path is not a valid s3 or gcs path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. deeplake.list(workspace: str = '', token: Optional[str] = None) → None¶ List all available Deep Lake cloud datasets. Parameters: workspace (str) – Specify user/organization name. If not given, returns a list of all datasets that can be accessed, regardless of what workspace they are in. Otherwise, lists all datasets in the given workspace. token (str, optional) – Activeloop token, used for fetching credentials for Deep Lake datasets. This is optional, tokens are normally autogenerated. Returns: List of dataset names. Return type: List deeplake.exists(path: Union[str, Path], creds: Optional[dict] = None, token: Optional[str] = None) → bool¶ Checks if a dataset exists at the given path. Parameters: path (str, pathlib.Path) – the path which needs to be checked. creds (dict, optional) – A dictionary containing credentials used to access the dataset at the path. token (str, optional) – Activeloop token, used for fetching credentials to the dataset at path if it is a Deep Lake dataset. This is optional, tokens are normally autogenerated. Returns: A boolean confirming whether the dataset exists or not at the given path. deeplake.read(path: Union[str, Path], verify: bool = False, creds: Optional[Dict] = None, compression: Optional[str] = None, storage: Optional[StorageProvider] = None) → Sample¶ Utility that reads raw data from supported files into Deep Lake format. Recompresses data into the format required by the tensor if permitted by the tensor htype. Simply copies the data in the file if the file format matches the sample_compression of the tensor, thus maximizing upload speeds. 
Examples >>> ds.create_tensor('images', htype='image', sample_compression='jpeg') >>> ds.images.append(deeplake.read('path/to/cat.jpg')) >>> ds.images.shape (1, 399, 640, 3) Copy to clipboard >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.videos.append(deeplake.read('path/to/video.mp4')) >>> ds.videos.shape (1, 136, 720, 1080, 3) Copy to clipboard >>> ds.create_tensor('images', htype='image', sample_compression='jpeg') >>> ds.images.append(deeplake.read('https://picsum.photos/200/300')) >>> ds.images[0].shape (300, 200, 3) Copy to clipboard Supported file types: Image: 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', 'webp', 'wmf', 'xbm' Audio: 'flac', 'mp3', 'wav' Video: 'mp4', 'mkv', 'avi' Dicom: 'dcm' Copy to clipboard Parameters: path (str) – Path to a supported file",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"Example: @deeplake.compute def my_fn(sample_in: Any, samples_out, my_arg0, my_arg1=0): samples_out.my_tensor.append(my_arg0 * my_arg1) # This transform can be used using the eval method in one of these 2 ways:- # Directly evaluating the method # here arg0 and arg1 correspond to the 3rd and 4th argument in my_fn my_fn(arg0, arg1).eval(data_in, ds_out, scheduler='threaded', num_workers=5) # As a part of a Transform pipeline containing other functions pipeline = deeplake.compose([my_fn(a, b), another_function(x=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) Copy to clipboard The eval method evaluates the pipeline/transform function . It has the following arguments: data_in: Input passed to the transform to generate output dataset. It should support __getitem__ and __len__. This can be a Deep Lake dataset. ds_out (Dataset, optional): The dataset object to which the transform will get written. If this is not provided, data_in will be overwritten if it is a Deep Lake dataset, otherwise error will be raised. It should have all keys being generated in output already present as tensors. It's initial state should be either: Empty i.e. all tensors have no samples. In this case all samples are added to the dataset. All tensors are populated and have same length. In this case new samples are appended to the dataset. num_workers (int): The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str): The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool): Displays a progress bar if True (default). skip_ok (bool): If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. check_lengths (bool): If True, checks whether ds_out has tensors of same lengths initially. pad_data_in (bool): If True, pads tensors of data_in to match the length of the largest tensor in data_in. Defaults to False. Note pad_data_in is only applicable if data_in is a Deep Lake dataset. It raises the following errors: InvalidInputDataError: If data_in passed to transform is invalid. It should support __getitem__ and __len__ operations. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as data_in will also raise this. 
InvalidOutputDatasetError: If all the tensors of ds_out passed to transform don't have the same length. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as ds_out will also raise this. TensorMismatchError: If one or more of the outputs generated during transform contain different tensors than the ones present in ds_out provided to transform. UnsupportedSchedulerError: If the scheduler passed is not recognized. Supported values include: 'serial', 'threaded', 'processed' and 'ray'",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"verify (bool) – If True, contents of the file are verified. creds (optional, Dict) – Credentials for s3, gcp and http urls. compression (optional, str) – Format of the file. Only required if path does not have an extension. storage (optional, StorageProvider) – Storage provider to use to retrieve remote files. Useful if multiple files are being read from the same storage to minimize the overhead of creating a new provider. Returns: Sample object. Call sample.array to get the np.ndarray. Return type: Sample Note No data is actually loaded until you try to get a property of the returned Sample. This is useful for passing along to Tensor.append and Tensor.extend. deeplake.link(path: str, creds_key: Optional[str] = None) → LinkedSample¶ Utility that stores a link to raw data. Used to add data to a Deep Lake Dataset without copying it. See Link htype. Supported file types: Image: 'bmp', 'dib', 'gif', 'ico', 'jpeg', 'jpeg2000', 'pcx', 'png', 'ppm', 'sgi', 'tga', 'tiff', 'webp', 'wmf', 'xbm' Audio: 'flac', 'mp3', 'wav' Video: 'mp4', 'mkv', 'avi' Dicom: 'dcm' Parameters: path (str) – Path to a supported file. creds_key (optional, str) – The credential key to use to read data for this sample. The actual credentials are fetched from the dataset. Returns: LinkedSample object that stores path and creds. Return type: LinkedSample Examples >>> ds = deeplake.dataset('test/test_ds') >>> ds.create_tensor('images', htype='link[image]') >>> ds.images.append(deeplake.link('https://picsum.photos/200/300')) See more examples here. deeplake.tiled(sample_shape: Tuple[int, ...], tile_shape: Optional[Tuple[int, ...]] = None, dtype: Union[str, dtype] = dtype('uint8'))¶ Allocates an empty sample of shape sample_shape, broken into tiles of shape tile_shape (except for edge tiles). Example >>> with ds: ... ds.create_tensor('image', htype='image', sample_compression='png') ... ds.image.append(deeplake.tiled(sample_shape=(1003, 1103, 3), tile_shape=(10, 10, 3))) ... ds.image[0][-217:, :212, 1:] = np.random.randint(0, 256, (217, 212, 2), dtype=np.uint8) Parameters: sample_shape (Tuple[int, ...]) – Full shape of the sample. tile_shape (Optional, Tuple[int, ...]) – The sample will be stored as tiles where each tile will have this shape (except edge tiles). If not specified, it will be computed such that each tile is close to half of the tensor's max_chunk_size (after compression). dtype (Union[str, np.dtype]) – Dtype for the sample array. Default uint8. Returns: A PartialSample instance which can be appended to a Tensor. Return type: PartialSample deeplake.compute(fn, name: Optional[str] = None) → Callable[[...], ComputeFunction]¶ Compute is a decorator for functions. The functions should have at least 2 arguments; the first two will correspond to sample_in and samples_out. There can be as many other arguments as required.
The output should be appended/extended to the second argument in a deeplake like syntax. Any value returned by the fn will be ignored",activeloop deeplake¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.html#module-deeplake,"TransformError: All other exceptions raised if there are problems while running the pipeline . deeplake.compose(functions: List[ComputeFunction])¶ Takes a list of functions decorated using deeplake.compute() and creates a pipeline that can be evaluated using .eval Example: pipeline = deeplake.compose([my_fn(a=3), another_function(b=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) Copy to clipboard The eval method evaluates the pipeline/transform function. It has the following arguments: data_in: Input passed to the transform to generate output dataset. It should support __getitem__ and __len__. This can be a Deep Lake dataset. ds_out (Dataset, optional): The dataset object to which the transform will get written. If this is not provided, data_in will be overwritten if it is a Deep Lake dataset, otherwise error will be raised. It should have all keys being generated in output already present as tensors. It's initial state should be either: Empty i.e. all tensors have no samples. In this case all samples are added to the dataset. All tensors are populated and have same length. In this case new samples are appended to the dataset. num_workers (int): The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str): The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool): Displays a progress bar if True (default). skip_ok (bool): If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. It raises the following errors: InvalidInputDataError: If data_in passed to transform is invalid. It should support __getitem__ and __len__ operations. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as data_in will also raise this. InvalidOutputDatasetError: If all the tensors of ds_out passed to transform don't have the same length. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as ds_out will also raise this. TensorMismatchError: If one or more of the outputs generated during transform contain different tensors than the ones present in ‘ds_out' provided to transform. UnsupportedSchedulerError: If the scheduler passed is not recognized. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. 
TransformError: All other exceptions raised if there are problems while running the pipeline. A minimal end-to-end sketch of this transform workflow is shown below.",activeloop deeplake.core¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.html#deeplake-core,deeplake.core.sample Sample deeplake.core.linked_sample LinkedSample deeplake.core.partial_sample PartialSample deeplake.core.storage Base Storage Provider LRU Cache S3 Storage Provider Google Cloud Storage Provider Google Drive Storage Provider Local Storage Provider Memory Provider deeplake.core.index IndexEntry Index merge_slices() slice_at_int() slice_length(),activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.change_creds_management('my_s3_key', True) checkout(address: str, create: bool = False) → Optional[str]¶ Checks out to a specific commit_id or branch. If create = True, creates a new branch with name address. Parameters: address (str) – The commit_id or branch to checkout to. create (bool) – If True, creates a new branch with name as address. Returns: The commit_id of the dataset after checkout. Return type: Optional[str] Raises: Exception – If the dataset is a filtered view. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('abc') Tensor(key='abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit() >>> ds.checkout('alt', create=True) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.append([4, 5, 6]) >>> ds.abc.numpy() array([[1, 2, 3], [4, 5, 6]]) >>> ds.checkout(first_commit) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.numpy() array([[1, 2, 3]]) Note Checkout from a head node in any branch that contains uncommitted data will lead to an automatic commit before the checkout. clear_cache()¶ Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. This is useful if you have multiple datasets with memory caches open, taking up too much RAM. Also useful when local cache is no longer needed for certain datasets and is taking up storage space. property client¶ Returns the client of the dataset. commit(message: Optional[str] = None, allow_empty=False) → str¶ Stores a snapshot of the current state of the dataset. Parameters: message (str, Optional) – Used to describe the commit. allow_empty (bool) – If True, commit even if there are no changes. Returns: the commit id of the saved commit that can be used to access the snapshot. Return type: str Raises: Exception – If dataset is a filtered view. EmptyCommitError – if there are no changes and the user has not forced a commit of unchanged data. Note Committing from a non-head node in any branch will lead to an automatic checkout to a new branch. This same behaviour will happen if new samples are added or existing samples are updated from a non-head node. property commit_id: Optional[str]¶ The latest committed commit id of the dataset. If there are no commits, this returns None. property commits: List[Dict]¶ Lists all the commits leading to the current dataset state. Returns: List of dictionaries containing commit information.
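To make the deeplake.compute / deeplake.compose / eval workflow documented above concrete, here is a minimal, hedged sketch. The tensor name 'length', the local path and the input strings are invented for illustration, and it keeps the defaults the reference describes (scheduler='threaded', num_workers=0, i.e. serial processing):

import deeplake

@deeplake.compute
def text_length(sample_in, samples_out):
    # First two arguments are sample_in and samples_out, as required by deeplake.compute.
    samples_out.length.append(len(sample_in))

# The output dataset must already contain every tensor the transform writes to.
ds_out = deeplake.empty('./transform_demo_ds', overwrite=True)  # illustrative path
ds_out.create_tensor('length')

# data_in only needs __getitem__ and __len__, so a plain Python list works here.
data_in = ['short', 'a slightly longer sentence', 'mid-sized text']
text_length().eval(data_in, ds_out, scheduler='threaded', num_workers=0)

print(ds_out.length.numpy())  # one length value per input string

# Several decorated functions can be chained the same way:
# deeplake.compose([text_length(), other_fn(arg=1)]).eval(data_in, ds_out)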
connect(creds_key: str, dest_path: Optional[str] = None, org_id: Optional[str] = None, ds_name: Optional[str] = None, token: Optional[str] = None)¶ Connect a Deep Lake cloud dataset through a deeplake path",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset," Dataset¶ class deeplake.core.dataset.Dataset¶ add_creds_key(creds_key: str, managed: bool = False)¶ Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.empty('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Parameters: creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from Activeloop platform. Defaults to False. Raises: ValueError – If the dataset is not connected to Activeloop platform and managed is True. Note managed parameter is applicable only for datasets that are connected to Activeloop platform. append(sample: Dict[str, Any], skip_ok: bool = False, append_empty: bool = False)¶ Append samples to multiple tensors at once. This method expects all tensors being updated to be of the same length. Parameters: sample (dict) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in sample if set to True. append_empty (bool) – Append empty samples to tensors not specified in sample if set to True. If True, skip_ok is ignored. Raises: KeyError – If any tensor in the dataset is not a key in sample and skip_ok is False. TensorDoesNotExistError – If tensor in sample does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. SampleAppendingError – Error that occurs when someone tries to append a tensor value directly to the dataset without specifying tensor name. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('data') Tensor(key='data') >>> ds.create_tensor('labels') Tensor(key='labels') >>> ds.append({'data': [1, 2, 3, 4], 'labels':[0, 1, 2, 3]}) property branch: str¶ The current branch of the dataset. property branches¶ Lists all the branches of the dataset. Returns: List of branches. change_creds_management(creds_key: str, managed: bool)¶ Changes the management status of the creds key. Parameters: creds_key (str) – The key whose management status is to be changed. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises: ValueError – If the dataset is not connected to activeloop platform. KeyError – If the creds key is not present in the dataset.
Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Change the management status of the key to True",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key') Copy to clipboard Parameters: creds_key (str) – The managed credentials to be used for accessing the source path . dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be infered from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Returns: The connected Deep Lake dataset. Return type: Dataset Raises: InvalidSourcePathError – If the dataset's path is not a valid s3 or gcs path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. copy(dest: Union[str, Path], tensors: Optional[List[str]] = None, overwrite: bool = False, creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False)¶ Copies this dataset or dataset view to dest. Version control history is not included. Parameters: dest (str, pathlib.Path) – Destination dataset or path to copy to. If a Dataset instance is provided, it is expected to be empty. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. creds (dict, Optional) – creds required to create / overwrite datasets at dest. token (str, Optional) – token used to for fetching credentials to dest. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar If True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. Returns: New dataset object. Return type: Dataset Raises: DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. create_group(name: str, exist_ok=False) → Dataset¶ Creates a tensor group. Intermediate groups in the path are also created. Parameters: name – The name of the group to create. exist_ok – If True, the group is created if it does not exist. If False, an error is raised if the group already exists. Defaults to False. Returns: The created group. 
Raises: TensorGroupAlreadyExistsError – If the group already exists and exist_ok is False",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"Examples >>> ds.create_group('images') >>> ds['images'].create_tensor('cats') Copy to clipboard >>> ds.create_groups('images/jpg/cats') >>> ds['images'].create_tensor('png') >>> ds['images/jpg'].create_group('dogs') Copy to clipboard create_tensor(name: str, htype: str = 'unspecified', dtype: Union[str, dtype] = 'unspecified', sample_compression: str = 'unspecified', chunk_compression: str = 'unspecified', hidden: bool = False, create_sample_info_tensor: bool = True, create_shape_tensor: bool = True, create_id_tensor: bool = True, verify: bool = True, exist_ok: bool = False, **kwargs)¶ Creates a new tensor in the dataset . Examples >>> # create dataset >>> ds = deeplake.dataset('path/to/dataset') Copy to clipboard >>> # create tensors >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('data') >>> ds.create_tensor('point_clouds', htype='point_cloud') Copy to clipboard >>> # append data >>> ds.images.append(np.ones((400, 400, 3), dtype='uint8')) >>> ds.videos.append(deeplake.read('videos/sample_video.mp4')) >>> ds.data.append(np.zeros((100, 100, 2))) Copy to clipboard Parameters: name (str) – The name of the tensor to be created. htype (str) – The class of data for the tensor. The defaults for other parameters are determined in terms of this value. For example, htype='image' would have dtype default to uint8. These defaults can be overridden by explicitly passing any of the other parameters to this function. May also modify the defaults for other parameters. dtype (str) – Optionally override this tensor's dtype. All subsequent samples are required to have this dtype. sample_compression (str) – All samples will be compressed in the provided format. If None, samples are uncompressed. For link[] tensors, sample_compression is used only for optimizing dataset views. chunk_compression (str) – All chunks will be compressed in the provided format. If None, chunks are uncompressed. For link[] tensors, chunk_compression is used only for optimizing dataset views. hidden (bool) – If True, the tensor will be hidden from ds.tensors but can still be accessed via ds[tensor_name]. create_sample_info_tensor (bool) – If True, meta data of individual samples will be saved in a hidden tensor. This data can be accessed via tensor[i].sample_info. create_shape_tensor (bool) – If True, an associated tensor containing shapes of each sample will be created. create_id_tensor (bool) – If True, an associated tensor containing unique ids for each sample will be created. This is useful for merge operations. verify (bool) – Valid only for link htypes. If True, all links will be verified before they are added to the tensor. verify is always True even if specified as False if create_shape_tensor or create_sample_info_tensor is True. exist_ok (bool) – If True, the group is created if it does not exist. if False, an error is raised if the group already exists. **kwargs – htype defaults can be overridden by passing any of the compatible parameters. To see all htypes and their correspondent arguments, check out Htypes. Returns: The new tensor, which can be accessed by dataset[name] or dataset.name. Return type: Tensor Raises: TensorAlreadyExistsError – If the tensor already exists and exist_ok is False. 
TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"InvalidTensorNameError – If name is in dataset attributes . NotImplementedError – If trying to override chunk_compression. TensorMetaInvalidHtype – If invalid htype is specified. ValueError – If an illegal argument is specified. create_tensor_like(name: str, source: Tensor, unlink: bool = False) → Tensor¶ Copies the source tensor's meta information and creates a new tensor with it. No samples are copied, only the meta/info for the tensor is. Examples >>> ds.create_tensor_like('cats', ds['images']) Copy to clipboard Parameters: name (str) – Name for the new tensor. source (Tensor) – Tensor who's meta/info will be copied. May or may not be contained in the same dataset. unlink (bool) – Whether to unlink linked tensors. Returns: New Tensor object. Return type: Tensor dataloader()¶ Returns a DeepLakeDataLoader object. To use this, install deeplake with pip install deeplake[enterprise]. Returns: A deeplake.enterprise.DeepLakeDataLoader object. Return type: DeepLakeDataLoader Examples Creating a simple dataloader object which returns a batch of numpy arrays >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> train_loader = ds_train.dataloader().numpy() >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Creating dataloader with custom transformation and batch size >>> import deeplake >>> import torch >>> from torchvision import datasets, transforms, models >>> >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size and returns batch of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Creating dataloader and chaining with query >>> ds = deeplake.load('hub://activeloop/coco-train') >>> train_loader = ds_train.dataloader()\ ... .query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)')\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Restrictions The new high performance C++ dataloader is part of our Growth and Enterprise Plan . Users of our Community plan can create dataloaders on Activeloop datasets ('hub://activeloop/…' datasets). To run queries on your own datasets, upgrade your organization's plan. delete(large_ok=False)¶ Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. Parameters: large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False. 
Raises: DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"delete_group(name: str, large_ok: bool = False)¶ Delete a tensor group from the dataset . Examples >>> ds.delete_group('images/dogs') Copy to clipboard Parameters: name (str) – The name of tensor group to be deleted. large_ok (bool) – Delete tensor groups larger than 1 GB. Disabled by default. Returns: None Raises: TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. delete_tensor(name: str, large_ok: bool = False)¶ Delete a tensor from the dataset. Examples >>> ds.delete_tensor('images/cats') Copy to clipboard Parameters: name (str) – The name of tensor to be deleted. large_ok (bool) – Delete tensors larger than 1 GB. Disabled by default. Returns: None Raises: TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorTooLargeToDelete – If the tensor is larger than 1 GB and large_ok is False. delete_view(id: str)¶ Deletes the view with given view id. Parameters: id (str) – Id of the view to delete. Raises: KeyError – if view with given id does not exist. diff(id_1: Optional[str] = None, id_2: Optional[str] = None, as_dict=False) → Optional[Dict]¶ Returns/displays the differences between commits/branches. For each tensor this contains information about the sample indexes that were added/modified as well as whether the tensor was created. Parameters: id_1 (str, Optional) – The first commit_id or branch name. id_2 (str, Optional) – The second commit_id or branch name. as_dict (bool, Optional) – If True, returns the diff as lists of commit wise dictionaries. Returns: Optional[Dict] Raises: ValueError – If id_1 is None and id_2 is not None. Note If both id_1 and id_2 are None, the differences between the current state and the previous commit will be calculated. If you're at the head of the branch, this will show the uncommitted changes, if any. If only id_1 is provided, the differences between the current state and id_1 will be calculated. If you're at the head of the branch, this will take into account the uncommitted changes, if any. If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, the differences between id_1 and id_2 will be calculated. Note A dictionary of the differences between the commits/branches is returned if as_dict is True. The dictionary will always have 2 keys, 'dataset' and 'tensors'. The values corresponding to these keys are detailed below: If id_1 and id_2 are None, both the keys will have a single list as their value. This list will contain a dictionary describing changes compared to the previous commit. If only id_1 is provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from current state and id_1 to most recent common ancestor the commits respectively. If only id_2 is provided, a ValueError will be raised",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"If both id_1 and id_2 are provided, both keys will have a tuple of 2 lists as their value . The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from id_1 and id_2 to most recent common ancestor the commits respectively. None is returned if as_dict is False. 
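As a rough illustration of the diff() behaviour described above, here is a hedged sketch; the local path, tensor name and commit message are made up, and the returned structure follows the 'dataset'/'tensors' keys the reference mentions:

import deeplake

ds = deeplake.empty('./diff_demo_ds', overwrite=True)  # illustrative local path
ds.create_tensor('labels')
ds.labels.append([0])
first_commit = ds.commit('add first label')

ds.labels.append([1])  # uncommitted change on the head of the branch

# No ids: differences between the current state and the previous commit are displayed.
ds.diff()

# One id with as_dict=True: commit-wise dictionaries are returned instead of printed.
changes = ds.diff(first_commit, as_dict=True)
print(changes['tensors'])  # per-tensor added/modified sample indexes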
extend(samples: Dict[str, Any], skip_ok: bool = False)¶ Appends multiple rows of samples to multiple tensors at once. This method expects all tensors being updated to be of the same length. Parameters: samples (Dict[str, Any]) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in samples if set to True. Raises: KeyError – If any tensor in the dataset is not a key in samples and skip_ok is False. TensorDoesNotExistError – If tensor in samples does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. filter(function: Union[Callable, str], num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, save_result: bool = False, result_path: Optional[str] = None, result_ds_args: Optional[dict] = None)¶ Filters the dataset according to the filter function f(x: sample) -> bool. Parameters: function (Callable, str) – Filter function that takes sample as argument and returns True / False if sample should be included in result. Also supports simplified expression evaluations. See deeplake.core.query.query.DatasetQuery for more details. num_workers (int) – Level of parallelization of filter evaluations. 0 indicates in-place for-loop evaluation, multiprocessing is used otherwise. scheduler (str) – Scheduler to use for multiprocessing evaluation. 'threaded' is default. progressbar (bool) – Display progress bar while filtering. True is default. save_result (bool) – If True, result of the filter will be saved to a dataset asynchronously. result_path (Optional, str) – Path to save the filter result. Only applicable if save_result is True. result_ds_args (Optional, dict) – Additional args for result dataset. Only applicable if save_result is True. Returns: View of Dataset with elements that satisfy filter function. Example The following filters are identical and return a dataset view where all the samples have a label equal to 2. >>> dataset.filter(lambda sample: sample.labels.numpy() == 2) >>> dataset.filter('labels == 2') flush()¶ Necessary operation after writes if caches are being used. Writes all the dirty data from the cache layers (if any) to the underlying storage. Here dirty data corresponds to data that has been changed/assigned but hasn't yet been sent to the underlying storage. get_commit_details(commit_id) → Dict¶ Get details of a particular commit. Parameters: commit_id (str) – commit id of the commit. Returns: Dictionary of details with keys - commit, author, time, message. Return type: Dict Raises: KeyError – If the given commit_id was not found in the dataset",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"get_creds_keys() → List[str]¶ Returns the list of creds keys added to the dataset. These are used to fetch external data in linked tensors. get_view(id: str) → ViewEntry¶ Returns the dataset view corresponding to id. Examples >>> # save view >>> ds[:100].save_view(id='first_100') >>> # load view >>> first_100 = ds.get_view('first_100').load() >>> # 100 >>> print(len(first_100)) See Dataset.save_view() to learn more about saving views. Parameters: id (str) – id of required view. Returns: ViewEntry Raises: KeyError – If no such view exists. get_views(commit_id: Optional[str] = None) → List[ViewEntry]¶ Returns list of views stored in this Dataset.
Parameters: commit_id (str, optional) – Commit from which views should be returned. If not specified, views from the currently checked out commit will be returned. Returns: List of ViewEntry instances. Return type: List[ViewEntry] property groups: Dict[str, Dataset]¶ All sub groups in this group. property has_head_changes¶ Returns True if currently at head node and uncommitted changes are present. property info¶ Returns the information about the dataset. property is_view: bool¶ Returns True if this dataset is a view and False otherwise. load_view(id: str, optimize: Optional[bool] = False, tensors: Optional[List[str]] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: Optional[bool] = True)¶ Loads the view and returns the Dataset by id. Equivalent to ds.get_view(id).load(). Parameters: id (str) – id of the view to be loaded. optimize (bool) – If True, the dataset view is optimized by copying and rechunking the required data before loading. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. tensors (Optional, List[str]) – Tensors to be copied if optimize is True. By default all tensors are copied. num_workers (int) – Number of workers to be used for the optimization process. Only applicable if optimize=True. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: 'serial', 'threaded', 'processed' and 'ray'. Only applicable if optimize=True. Defaults to 'threaded'. progressbar (bool) – Whether to use progressbar for optimization. Only applicable if optimize=True. Defaults to True. Returns: The loaded view. Return type: Dataset Raises: KeyError – if view with given id does not exist. log()¶ Displays the details of all the past commits. property max_len¶ Return the maximum length of the tensor. property max_view¶ Returns a view of the dataset in which shorter tensors are padded with Nones to have the same length as the longest tensor. Example Creating a dataset with 5 images and 4 labels. ds.max_view will return a view with the labels tensor padded to have 5 samples",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,">>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.max_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] (20, 20, 3) [None] merge(target_id: str, conflict_resolution: Optional[str] = None, delete_removed_tensors: bool = False, force: bool = False)¶ Merges the target_id into the current dataset. Parameters: target_id (str) – The commit_id or branch to merge. conflict_resolution (str, Optional) – The strategy to use to resolve merge conflicts. Conflicts are scenarios where both the current dataset and the target id have made changes to the same sample/s since their common ancestor. Must be one of the following: None - this is the default value, will raise an exception if there are conflicts.
'ours' - during conflicts, values from the current dataset will be used. 'theirs' - during conflicts, values from target id will be used. delete_removed_tensors (bool) – If True, deleted tensors will be deleted from the dataset. force (bool) – Forces merge. force=True will have these effects in the following cases of merge conflicts: If tensor is renamed on target but is missing from HEAD, renamed tensor will be registered as a new tensor on current branch. If tensor is renamed on both target and current branch, tensor on target will be registered as a new tensor on current branch. If tensor is renamed on target and a new tensor of the new name was created on the current branch, they will be merged. Raises: Exception – if dataset is a filtered view. ValueError – if the conflict resolution strategy is not one of None, 'ours', or 'theirs'. A short branch-and-merge sketch is shown below. property meta: DatasetMeta¶ Returns the metadata of the dataset. property min_len¶ Return the minimum length of the tensor. property min_view¶ Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Example Creating a dataset with 5 images and 4 labels. ds.min_view will return a view in which tensors are sliced to have 4 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.min_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] property num_samples: int¶ Returns the length of the smallest tensor. Ignores any applied indexing and returns the total length. property parent¶ Returns the parent of this group. Returns None if this is the root dataset",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"To run queries on your own datasets, upgrade your organization's plan. property read_only¶ Returns True if dataset is in read-only mode and False otherwise. rechunk(tensors: Optional[Union[str, List[str]]] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True)¶ Rewrites the underlying chunks to make their sizes optimal. This is usually needed in cases where a lot of updates have been made to the data. Parameters: tensors (str, List[str], Optional) – Name/names of the tensors to rechunk. If None, all tensors in the dataset are rechunked. num_workers (int) – The number of workers to use for rechunking. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for rechunking. Supported values include: 'serial', 'threaded', 'processed' and 'ray'. Defaults to 'threaded'. progressbar (bool) – Displays a progress bar if True (default). rename(path: Union[str, Path])¶ Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Parameters: path (str, pathlib.Path) – New path to the dataset. Raises: RenameError – If path points to a different directory.
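The branch-and-merge sketch promised above, hedged: the branch name, values and commit messages are invented, it assumes the default branch is named 'main', and the in-place sample update uses the same indexing style as the tiled example earlier in this reference:

import deeplake

ds = deeplake.empty('./merge_demo_ds', overwrite=True)  # illustrative local path
ds.create_tensor('labels')
ds.labels.append([0])
ds.commit('initial label')

ds.checkout('experiment', create=True)  # new branch from the current head
ds.labels[0] = [1]                      # change the same sample on the branch
ds.commit('relabel sample 0 on experiment')

ds.checkout('main')                     # assumes the default branch name
ds.labels[0] = [2]                      # conflicting change on the original branch
ds.commit('relabel sample 0 on main')

# Both branches changed the same sample, so resolve conflicts in favour of 'experiment'.
ds.merge('experiment', conflict_resolution='theirs')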
rename_group(name: str, new_name: str) → None¶ Renames group with name name to new_name. Parameters: name (str) – Name of group to be renamed. new_name (str) – New name of group. Raises: TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorGroupNameError – If name is in dataset attributes. RenameError – If new_name points to a group different from name. rename_tensor(name: str, new_name: str) → Tensor¶ Renames tensor with name name to new_name. Parameters: name (str) – Name of tensor to be renamed. new_name (str) – New name of tensor. Returns: Renamed tensor. Return type: Tensor Raises: TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If new_name is in dataset attributes. RenameError – If new_name points to a group different from name. reset()¶ Resets the uncommitted changes present in the branch. Note The uncommitted data is deleted from underlying storage, this is not a reversible operation. property root¶ Returns the root dataset of a group. sample_by(weights: Union[str, list, tuple], replace: Optional[bool] = True, size: Optional[int] = None)¶ Returns a sliced Dataset with the given weighted sampler applied. To use this, install deeplake with pip install deeplake[enterprise]. Parameters: weights – (Union[str, list, tuple]): If it is a string, a TQL query is run to calculate the weights based on the expression. A list or tuple is treated as the list of per-sample weights. replace – Optional[bool] If True, the samples can be repeated in the result view",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"Defaults to True size – Optional[int] The length of the result view. Defaults to length of the dataset. Returns: A deeplake.Dataset object. Return type: Dataset Examples Sample the dataset with labels == 5 twice as often as labels == 6 >>> from deeplake.experimental import query >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Sample the dataset treating the labels tensor as weights. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('labels') Sample the dataset with the given weights: >>> ds = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(len(ds)): ... weights.append(i % 5) ... >>> sampled_ds = ds.sample_by(weights, replace=False) Restrictions Querying datasets is part of our Growth and Enterprise Plan. Users of our Community plan can only use sample_by on Activeloop datasets ('hub://activeloop/…' datasets). To use sampling functionality on your own datasets, upgrade your organization's plan. property sample_indices¶ Returns all the indices pointed to by this dataset view.
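A small sketch of the rename_tensor() / rename_group() / reset() housekeeping methods in this entry; the dataset path and names are illustrative, and the final reset() assumes the renames are still uncommitted changes on the branch:

import deeplake

ds = deeplake.empty('./rename_demo_ds', overwrite=True)  # illustrative local path
ds.create_tensor('boxes')
ds.create_group('images')
ds.commit('initial schema')

ds.rename_tensor('boxes', 'bboxes')  # tensor is now reachable as ds.bboxes
ds.rename_group('images', 'frames')

# The renames above are uncommitted; either commit them ...
# ds.commit('rename tensor and group')
# ... or discard them, which reset() does for uncommitted changes on the branch.
ds.reset()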
save_view(message: Optional[str] = None, path: Optional[Union[str, Path]] = None, id: Optional[str] = None, optimize: bool = False, tensors: Optional[List[str]] = None, num_workers: int = 0, scheduler: str = 'threaded', verbose: bool = True, **ds_args) → str¶ Saves a dataset view as a virtual dataset (VDS). Examples >>> # Save to specified path >>> vds_path = ds[:10].save_view(path='views/first_10', id='first_10') >>> vds_path views/first_10 >>> # Path unspecified >>> vds_path = ds[:100].save_view(id='first_100', message='first 100 samples') >>> # vds_path = path/to/dataset >>> # Random id >>> vds_path = ds[:100].save_view() >>> # vds_path = path/to/dataset/.queries/92f41922ed0471ec2d27690b7351fc96bea060e6c5ee22b14f7ffa5f291aa068 See Dataset.get_view() to learn how to load views by id. These virtual datasets can also be loaded from their path like normal datasets. Parameters: message (Optional, str) – Custom user message. path (Optional, str, pathlib.Path) – The VDS will be saved as a standalone dataset at the specified path. If not specified, the VDS is saved under the .queries subdirectory of the source dataset's storage. If the user doesn't have write access to the source dataset and the source dataset is a Deep Lake cloud dataset, then the VDS is saved under the user's Deep Lake account and can be accessed using deeplake.load(f'hub://{username}/queries/{query_hash}'). id (Optional, str) – Unique id for this view. Random id will be generated if not specified. optimize (bool) – If True, the dataset view will be optimized by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. You can also choose to optimize the saved view later by calling its ViewEntry.optimize() method. tensors (List, optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied
See https://www.tensorflow.org/api_docs/python/tf/data/Dataset Parameters: tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). tobytes (bool) – If True, samples will not be decompressed and their raw bytes will be returned instead of numpy arrays. Can also be a list of tensors, in which case those tensors alone will not be decompressed. fetch_chunks – See fetch_chunks argument in deeplake.core.tensor.Tensor.numpy() Returns: tf.data.Dataset object that can be used for tensorflow training. property tensors: Dict[str, Tensor]¶ All tensors belonging to this group, including those within sub groups. Always returns the sliced tensors. property token¶ Get attached token of the dataset update_creds_key(old_creds_key: str, new_creds_key: str)¶ Replaces the old creds key with the new creds key. This is used to replace the creds key used for external data. visualize(width: Optional[Union[int, str]] = None, height: Optional[Union[int, str]] = None)¶ Visualizes the dataset in the Jupyter notebook. Parameters: width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises: Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab. DeepLakeCloudDataset¶ class deeplake.core.dataset.DeepLakeCloudDataset¶ Bases: Dataset Subclass of Dataset. Deep Lake cloud datasets are those datasets which are stored on Activeloop servers, their paths look like: hub://username/dataset_name. add_creds_key(creds_key: str, managed: bool = False)¶ Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Copy to clipboard Parameters: creds_key (str) – The key to be added . managed (bool) – If True, the creds corresponding to the key will be fetched from activeloop platform. Note, this is only applicable for datasets that are connected to activeloop platform. Defaults to False. change_creds_management(creds_key: str, managed: bool)¶ Changes the management status of the creds key. Parameters: creds_key (str) – The key whose management status is to be changed. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises: ValueError – If the dataset is not connected to activeloop platform. KeyError – If the creds key is not present in the dataset. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Change the management status of the key to True. 
Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.change_creds_management('my_s3_key', True) Copy to clipboard property client¶ Returns the client of the dataset. connect(*args, **kwargs)¶ Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key') Copy to clipboard Parameters: creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be infered from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Returns: The connected Deep Lake dataset. Return type: Dataset Raises: InvalidSourcePathError – If the dataset's path is not a valid s3 or gcs path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. delete(large_ok=False)¶ Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. Parameters: large_ok (bool) – Delete datasets larger than 1 GB",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"Defaults to False . Raises: DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. property is_actually_cloud: bool¶ Datasets that are connected to Deep Lake cloud can still technically be stored anywhere. If a dataset is in Deep Lake cloud but stored without hub:// prefix, it should only be used for testing. rename(path)¶ Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters: path (str, pathlib.Path) – New path to the dataset. Raises: RenameError – If path points to a different directory. property token¶ Get attached token of the dataset update_creds_key(old_creds_key: str, new_creds_key: str)¶ Replaces the old creds key with the new creds key. This is used to replace the creds key used for external data. visualize(width: Optional[Union[int, str]] = None, height: Optional[Union[int, str]] = None)¶ Visualizes the dataset in the Jupyter notebook. Parameters: width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises: Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab. ViewEntry¶ class deeplake.core.dataset.ViewEntry¶ Represents a view saved inside a dataset. delete()¶ Deletes the view. property id: str¶ Returns id of the view. load(verbose=True)¶ Loads the view and returns the Dataset. 
Parameters: verbose (bool) – If True, logs will be printed. Defaults to True. Returns: Loaded dataset view. Return type: Dataset property message: str¶ Returns the message with which the view was saved. optimize(tensors: Optional[List[str]] = None, unlink=True, num_workers=0, scheduler='threaded', progressbar=True)¶ Optimizes the dataset view by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. Example >>> # save view >>> ds[:10].save_view(id='first_10') >>> # optimize view >>> ds.get_view('first_10').optimize() >>> # load optimized view >>> ds.load_view('first_10') Copy to clipboard Parameters: tensors (List[str]) – Tensors required in the optimized view. By default all tensors are copied. unlink (bool) – If True, this unlinks linked tensors (if any) by copying data from the links to the view. This does not apply to linked videos. Set deeplake.constants._UNLINK_VIDEOS to True to change this behavior. num_workers (int) – Number of workers to be used for the optimization process. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to display a progressbar. Returns: ViewEntry",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset," class deeplake.core.dataset.Dataset¶ add_creds_key(creds_key: str, managed: bool = False)¶ Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.empty('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Copy to clipboard Parameters: creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from Activeloop platform. Defaults to False. Raises: ValueError – If the dataset is not connected to Activeloop platform and managed is True. Note managed parameter is applicable only for datasets that are connected to Activeloop platform. append(sample: Dict[str, Any], skip_ok: bool = False, append_empty: bool = False)¶ Append samples to mutliple tensors at once. This method expects all tensors being updated to be of the same length. Parameters: sample (dict) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in sample if set to True. append_empty (bool) – Append empty samples to tensors not specified in sample if set to True. If True, skip_ok is ignored. Raises: KeyError – If any tensor in the dataset is not a key in sample and skip_ok is False. TensorDoesNotExistError – If tensor in sample does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. SampleAppendingError – Error that occurs when someone tries to append a tensor value directly to the dataset without specifying tensor name. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('data') Tensor(key='data') >>> ds.create_tensor('labels') Tensor(key='labels') >>> ds.append({'data': [1, 2, 3, 4], 'labels':[0, 1, 2, 3]}) Copy to clipboard property branch: str¶ The current branch of the dataset property branches¶ Lists all the branches of the dataset. 
Returns: List of branches. change_creds_management(creds_key: str, managed: bool)¶ Changes the management status of the creds key. Parameters: creds_key (str) – The key whose management status is to be changed. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises: ValueError – If the dataset is not connected to activeloop platform. KeyError – If the creds key is not present in the dataset. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Change the management status of the key to True",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.change_creds_management('my_s3_key', True) Copy to clipboard checkout(address: str, create: bool = False) → Optional[str]¶ Checks out to a specific commit_id or branch . If create = True, creates a new branch with name address. Parameters: address (str) – The commit_id or branch to checkout to. create (bool) – If True, creates a new branch with name as address. Returns: The commit_id of the dataset after checkout. Return type: Optional[str] Raises: Exception – If the dataset is a filtered view. Examples >>> ds = deeplake.empty('../test/test_ds') >>> ds.create_tensor('abc') Tensor(key='abc') >>> ds.abc.append([1, 2, 3]) >>> first_commit = ds.commit() >>> ds.checkout('alt', create=True) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.append([4, 5, 6]) >>> ds.abc.numpy() array([[1, 2, 3], [4, 5, 6]]) >>> ds.checkout(first_commit) 'firstdbf9474d461a19e9333c2fd19b46115348f' >>> ds.abc.numpy() array([[1, 2, 3]]) Copy to clipboard Note Checkout from a head node in any branch that contains uncommitted data will lead to an automatic commit before the checkout. clear_cache()¶ Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. This is useful if you have multiple datasets with memory caches open, taking up too much RAM. Also useful when local cache is no longer needed for certain datasets and is taking up storage space. property client¶ Returns the client of the dataset. commit(message: Optional[str] = None, allow_empty=False) → str¶ Stores a snapshot of the current state of the dataset. Parameters: message (str, Optional) – Used to describe the commit. allow_empty (bool) – If True, commit even if there are no changes. Returns: the commit id of the saved commit that can be used to access the snapshot. Return type: str Raises: Exception – If dataset is a filtered view. EmptyCommitError – if there are no changes and user does not forced to commit unchanged data. Note Commiting from a non-head node in any branch, will lead to an automatic checkout to a new branch. This same behaviour will happen if new samples are added or existing samples are updated from a non-head node. 
property commit_id: Optional[str]¶ The latest committed commit id of the dataset. If there are no commits, this returns None. property commits: List[Dict]¶ Lists all the commits leading to the current dataset state. Returns: List of dictionaries containing commit information. connect(creds_key: str, dest_path: Optional[str] = None, org_id: Optional[str] = None, ds_name: Optional[str] = None, token: Optional[str] = None)¶ Connect a Deep Lake cloud dataset through a deeplake path",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"Examples >>> ds.create_group('images') >>> ds['images'].create_tensor('cats') >>> ds.create_groups('images/jpg/cats') >>> ds['images'].create_tensor('png') >>> ds['images/jpg'].create_group('dogs') create_tensor(name: str, htype: str = 'unspecified', dtype: Union[str, dtype] = 'unspecified', sample_compression: str = 'unspecified', chunk_compression: str = 'unspecified', hidden: bool = False, create_sample_info_tensor: bool = True, create_shape_tensor: bool = True, create_id_tensor: bool = True, verify: bool = True, exist_ok: bool = False, **kwargs)¶ Creates a new tensor in the dataset. Examples >>> # create dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # create tensors >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('data') >>> ds.create_tensor('point_clouds', htype='point_cloud') >>> # append data >>> ds.images.append(np.ones((400, 400, 3), dtype='uint8')) >>> ds.videos.append(deeplake.read('videos/sample_video.mp4')) >>> ds.data.append(np.zeros((100, 100, 2))) Parameters: name (str) – The name of the tensor to be created. htype (str) – The class of data for the tensor. The defaults for other parameters are determined in terms of this value. For example, htype='image' would have dtype default to uint8. These defaults can be overridden by explicitly passing any of the other parameters to this function. May also modify the defaults for other parameters. dtype (str) – Optionally override this tensor's dtype. All subsequent samples are required to have this dtype. sample_compression (str) – All samples will be compressed in the provided format. If None, samples are uncompressed. For link[] tensors, sample_compression is used only for optimizing dataset views. chunk_compression (str) – All chunks will be compressed in the provided format. If None, chunks are uncompressed. For link[] tensors, chunk_compression is used only for optimizing dataset views. hidden (bool) – If True, the tensor will be hidden from ds.tensors but can still be accessed via ds[tensor_name]. create_sample_info_tensor (bool) – If True, metadata of individual samples will be saved in a hidden tensor. This data can be accessed via tensor[i].sample_info. create_shape_tensor (bool) – If True, an associated tensor containing shapes of each sample will be created. create_id_tensor (bool) – If True, an associated tensor containing unique ids for each sample will be created. This is useful for merge operations. verify (bool) – Valid only for link htypes. If True, all links will be verified before they are added to the tensor. verify is always True even if specified as False if create_shape_tensor or create_sample_info_tensor is True. exist_ok (bool) – If True, the group is created if it does not exist.
if False, an error is raised if the group already exists. **kwargs – htype defaults can be overridden by passing any of the compatible parameters. To see all htypes and their correspondent arguments, check out Htypes. Returns: The new tensor, which can be accessed by dataset[name] or dataset.name. Return type: Tensor Raises: TensorAlreadyExistsError – If the tensor already exists and exist_ok is False. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key') Copy to clipboard Parameters: creds_key (str) – The managed credentials to be used for accessing the source path . dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be infered from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Returns: The connected Deep Lake dataset. Return type: Dataset Raises: InvalidSourcePathError – If the dataset's path is not a valid s3 or gcs path. InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path. copy(dest: Union[str, Path], tensors: Optional[List[str]] = None, overwrite: bool = False, creds=None, token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False)¶ Copies this dataset or dataset view to dest. Version control history is not included. Parameters: dest (str, pathlib.Path) – Destination dataset or path to copy to. If a Dataset instance is provided, it is expected to be empty. tensors (List[str], optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied. overwrite (bool) – If True and a dataset exists at destination, it will be overwritten. Defaults to False. creds (dict, Optional) – creds required to create / overwrite datasets at dest. token (str, Optional) – token used to for fetching credentials to dest. num_workers (int) – The number of workers to use for copying. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for copying. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar If True (default). public (bool) – Defines if the dataset will have public access. Applicable only if Deep Lake cloud storage is used and a new Dataset is being created. Defaults to False. Returns: New dataset object. Return type: Dataset Raises: DatasetHandlerError – If a dataset already exists at destination path and overwrite is False. create_group(name: str, exist_ok=False) → Dataset¶ Creates a tensor group. Intermediate groups in the path are also created. Parameters: name – The name of the group to create. exist_ok – If True, the group is created if it does not exist. If False, an error is raised if the group already exists. Defaults to False. Returns: The created group. 
Raises: TensorGroupAlreadyExistsError – If the group already exists and exist_ok is False",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"InvalidTensorNameError – If name is in dataset attributes . NotImplementedError – If trying to override chunk_compression. TensorMetaInvalidHtype – If invalid htype is specified. ValueError – If an illegal argument is specified. create_tensor_like(name: str, source: Tensor, unlink: bool = False) → Tensor¶ Copies the source tensor's meta information and creates a new tensor with it. No samples are copied, only the meta/info for the tensor is. Examples >>> ds.create_tensor_like('cats', ds['images']) Copy to clipboard Parameters: name (str) – Name for the new tensor. source (Tensor) – Tensor who's meta/info will be copied. May or may not be contained in the same dataset. unlink (bool) – Whether to unlink linked tensors. Returns: New Tensor object. Return type: Tensor dataloader()¶ Returns a DeepLakeDataLoader object. To use this, install deeplake with pip install deeplake[enterprise]. Returns: A deeplake.enterprise.DeepLakeDataLoader object. Return type: DeepLakeDataLoader Examples Creating a simple dataloader object which returns a batch of numpy arrays >>> import deeplake >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> train_loader = ds_train.dataloader().numpy() >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Creating dataloader with custom transformation and batch size >>> import deeplake >>> import torch >>> from torchvision import datasets, transforms, models >>> >>> ds_train = deeplake.load('hub://activeloop/fashion-mnist-train') >>> tform = transforms.Compose([ ... transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run ... transforms.RandomRotation(20), # Image augmentation ... transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run ... transforms.Normalize([0.5], [0.5]), ... ]) ... >>> batch_size = 32 >>> # create dataloader by chaining with transform function and batch size and returns batch of pytorch tensors >>> train_loader = ds_train.dataloader()\ ... .transform({'images': tform, 'labels': None})\ ... .batch(batch_size)\ ... .shuffle()\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Creating dataloader and chaining with query >>> ds = deeplake.load('hub://activeloop/coco-train') >>> train_loader = ds_train.dataloader()\ ... .query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)')\ ... .pytorch() ... >>> # loop over the elements >>> for i, data in enumerate(train_loader): ... # custom logic on data ... pass Copy to clipboard Restrictions The new high performance C++ dataloader is part of our Growth and Enterprise Plan . Users of our Community plan can create dataloaders on Activeloop datasets ('hub://activeloop/…' datasets). To run queries on your own datasets, upgrade your organization's plan. delete(large_ok=False)¶ Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. Parameters: large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False. 
Raises: DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"delete_group(name: str, large_ok: bool = False)¶ Delete a tensor group from the dataset . Examples >>> ds.delete_group('images/dogs') Copy to clipboard Parameters: name (str) – The name of tensor group to be deleted. large_ok (bool) – Delete tensor groups larger than 1 GB. Disabled by default. Returns: None Raises: TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. delete_tensor(name: str, large_ok: bool = False)¶ Delete a tensor from the dataset. Examples >>> ds.delete_tensor('images/cats') Copy to clipboard Parameters: name (str) – The name of tensor to be deleted. large_ok (bool) – Delete tensors larger than 1 GB. Disabled by default. Returns: None Raises: TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorTooLargeToDelete – If the tensor is larger than 1 GB and large_ok is False. delete_view(id: str)¶ Deletes the view with given view id. Parameters: id (str) – Id of the view to delete. Raises: KeyError – if view with given id does not exist. diff(id_1: Optional[str] = None, id_2: Optional[str] = None, as_dict=False) → Optional[Dict]¶ Returns/displays the differences between commits/branches. For each tensor this contains information about the sample indexes that were added/modified as well as whether the tensor was created. Parameters: id_1 (str, Optional) – The first commit_id or branch name. id_2 (str, Optional) – The second commit_id or branch name. as_dict (bool, Optional) – If True, returns the diff as lists of commit wise dictionaries. Returns: Optional[Dict] Raises: ValueError – If id_1 is None and id_2 is not None. Note If both id_1 and id_2 are None, the differences between the current state and the previous commit will be calculated. If you're at the head of the branch, this will show the uncommitted changes, if any. If only id_1 is provided, the differences between the current state and id_1 will be calculated. If you're at the head of the branch, this will take into account the uncommitted changes, if any. If only id_2 is provided, a ValueError will be raised. If both id_1 and id_2 are provided, the differences between id_1 and id_2 will be calculated. Note A dictionary of the differences between the commits/branches is returned if as_dict is True. The dictionary will always have 2 keys, 'dataset' and 'tensors'. The values corresponding to these keys are detailed below: If id_1 and id_2 are None, both the keys will have a single list as their value. This list will contain a dictionary describing changes compared to the previous commit. If only id_1 is provided, both keys will have a tuple of 2 lists as their value. The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from current state and id_1 to most recent common ancestor the commits respectively. If only id_2 is provided, a ValueError will be raised",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"If both id_1 and id_2 are provided, both keys will have a tuple of 2 lists as their value . The lists will contain dictionaries describing commitwise differences between commits. The 2 lists will range from id_1 and id_2 to most recent common ancestor the commits respectively. None is returned if as_dict is False. 
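A short sketch of the diff() behavior described above, under the assumption of a throwaway local dataset and an illustrative tensor name:
>>> import deeplake
>>> ds = deeplake.empty('./diff_demo_ds', overwrite=True)
>>> ds.create_tensor('labels')
>>> ds.labels.append(1)
>>> first_commit = ds.commit('first label')
>>> ds.labels.append(2)  # uncommitted change at the head
>>> # display uncommitted changes relative to the previous commit
>>> ds.diff()
>>> # or get them programmatically; the dict always has 'dataset' and 'tensors' keys
>>> changes = ds.diff(first_commit, as_dict=True)
>>> changes['tensors']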
extend(samples: Dict[str, Any], skip_ok: bool = False)¶ Appends multiple rows of samples to multiple tensors at once. This method expects all tensors being updated to be of the same length. Parameters: samples (Dict[str, Any]) – Dictionary with tensor names as keys and samples as values. skip_ok (bool) – Skip tensors not in samples if set to True. Raises: KeyError – If any tensor in the dataset is not a key in samples and skip_ok is False. TensorDoesNotExistError – If tensor in samples does not exist. ValueError – If all tensors being updated are not of the same length. NotImplementedError – If an error occurs while writing tiles. Exception – Error while attempting to rollback appends. filter(function: Union[Callable, str], num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, save_result: bool = False, result_path: Optional[str] = None, result_ds_args: Optional[dict] = None)¶ Filters the dataset in accordance with the filter function f(x: sample) -> bool Parameters: function (Callable, str) – Filter function that takes sample as argument and returns True / False if sample should be included in result. Also supports simplified expression evaluations. See deeplake.core.query.query.DatasetQuery for more details. num_workers (int) – Level of parallelization of filter evaluations. 0 indicates in-place for-loop evaluation, multiprocessing is used otherwise. scheduler (str) – Scheduler to use for multiprocessing evaluation. 'threaded' is default. progressbar (bool) – Display progress bar while filtering. True is default. save_result (bool) – If True, result of the filter will be saved to a dataset asynchronously. result_path (Optional, str) – Path to save the filter result. Only applicable if save_result is True. result_ds_args (Optional, dict) – Additional args for result dataset. Only applicable if save_result is True. Returns: View of Dataset with elements that satisfy filter function. Example The following filters are identical and return a dataset view where all the samples have a label equal to 2. >>> dataset.filter(lambda sample: sample.labels.numpy() == 2) >>> dataset.filter('labels == 2') flush()¶ Necessary operation after writes if caches are being used. Writes all the dirty data from the cache layers (if any) to the underlying storage. Here dirty data corresponds to data that has been changed/assigned but hasn't yet been sent to the underlying storage. get_commit_details(commit_id) → Dict¶ Get details of a particular commit. Parameters: commit_id (str) – commit id of the commit. Returns: Dictionary of details with keys - commit, author, time, message. Return type: Dict Raises: KeyError – If the given commit_id was not found in the dataset",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"property pending_commit_id: str¶ The commit_id of the next commit that will be made to the dataset. If you're not at the head of the current branch, this will be the same as the commit_id. pop(index: Optional[int] = None)¶ Removes a sample from all the tensors of the dataset. For any tensor, if index >= len(tensor), the sample won't be popped from it. Parameters: index (int, Optional) – The index of the sample to be removed. If it is None, the index becomes the length of the longest tensor - 1. Raises: IndexError – If the index is out of range. populate_creds(creds_key: str, creds: dict)¶ Populates the creds key added in add_creds_key with the given creds. These creds are used to fetch the external data.
This needs to be done everytime the dataset is reloaded for datasets that contain links to external data. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # populate the creds >>> ds.populate_creds('my_s3_key', {'aws_access_key_id': 'my_access_key', 'aws_secret_access_key': 'my_secret_key'}) Copy to clipboard pytorch(transform: Optional[Callable] = None, tensors: Optional[Sequence[str]] = None, num_workers: int = 1, batch_size: int = 1, drop_last: bool = False, collate_fn: Optional[Callable] = None, pin_memory: bool = False, shuffle: bool = False, buffer_size: int = 2048, use_local_cache: bool = False, use_progress_bar: bool = False, return_index: bool = True, pad_tensors: bool = False, transform_kwargs: Optional[Dict[str, Any]] = None, decode_method: Optional[Dict[str, str]] = None)¶ Converts the dataset into a pytorch Dataloader. Parameters: transform (Callable, Optional) – Transformation function to be applied to each sample. tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). num_workers (int) – The number of workers to use for fetching data in parallel. batch_size (int) – Number of samples per batch to load. Default value is 1. drop_last (bool) – Set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. if False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. Default value is False. Read torch.utils.data.DataLoader docs for more details. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. Read torch.utils.data.DataLoader docs for more details. pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. Default value is False. Read torch.utils.data.DataLoader docs for more details. shuffle (bool) – If True, the data loader will shuffle the data indices. Default value is False",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"Details about how Deep Lake shuffles data can be found at Shuffling in ds.pytorch() buffer_size (int) – The size of the buffer used to shuffle the data in MBs . Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. use_local_cache (bool) – If True, the data loader will use a local cache to store data. The default cache location is ~/.activeloop/cache, but it can be changed by setting the LOCAL_CACHE_PREFIX environment variable. This is useful when the dataset can fit on the machine and we don't want to fetch the data multiple times for each iteration. Default value is False use_progress_bar (bool) – If True, tqdm will be wrapped around the returned dataloader. Default value is True. return_index (bool) – If True, the returned dataloader will have a key 'index' that contains the index of the sample(s) in the original dataset. Default value is True. pad_tensors (bool) – If True, shorter tensors will be padded to the length of the longest tensor. Default value is False. transform_kwargs (optional, Dict[str, Any]) – Additional kwargs to be passed to transform. 
decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns: A torch.utils.data.DataLoader object. Raises: EmptyTensorError – If one or more tensors being passed to pytorch are empty. Note Pytorch does not support uint16, uint32, uint64 dtypes. These are implicitly type casted to int32, int64 and int64 respectively. This spins up it's own workers to fetch data. query(query_string: str)¶ Returns a sliced Dataset with given query results. To use this, install deeplake with pip install deeplake[enterprise]. It allows to run SQL like queries on dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters: query_string (str) – An SQL string adjusted with new functionalities to run on the given Dataset object Returns: A Dataset object. Return type: Dataset Examples Query from dataset all the samples with lables other than 5 >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds = ds.query('select * where labels != 5') Copy to clipboard Query from dataset first appeard 1000 samples where the categories is car and 1000 samples where the categories is motorcycle >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)') Copy to clipboard Restrictions Querying datasets is part of our Growth and Enterprise Plan . Users of our Community plan can only perform queries on Activeloop datasets ('hub://activeloop/…' datasets)",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"get_creds_keys() → List[str]¶ Returns the list of creds keys added to the dataset . These are used to fetch external data in linked tensors get_view(id: str) → ViewEntry¶ Returns the dataset view corresponding to id. Examples >>> # save view >>> ds[:100].save_view(id='first_100') >>> # load view >>> first_100 = ds.get_view('first_100').load() >>> # 100 >>> print(len(first_100)) Copy to clipboard See Dataset.save_view() to learn more about saving views. Parameters: id (str) – id of required view. Returns: ViewEntry Raises: KeyError – If no such view exists. get_views(commit_id: Optional[str] = None) → List[ViewEntry]¶ Returns list of views stored in this Dataset. Parameters: commit_id (str, optional) – Commit from which views should be returned. If not specified, views from current commit is returned. If not specified, views from the currently checked out commit will be returned. Returns: List of ViewEntry instances. Return type: List[ViewEntry] property groups: Dict[str, Dataset]¶ All sub groups in this group property has_head_changes¶ Returns True if currently at head node and uncommitted changes are present. property info¶ Returns the information about the dataset. property is_view: bool¶ Returns True if this dataset is a view and False otherwise. 
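The pytorch() conversion documented above has no usage snippet of its own, so here is a minimal sketch; it assumes the default dict-style batches keyed by tensor name and borrows the 'images'/'labels' tensor names from the public fashion-mnist dataset used in the other examples:
>>> import deeplake
>>> ds = deeplake.load('hub://activeloop/fashion-mnist-train')
>>> # plain PyTorch DataLoader; 'numpy' decoding is the default and is passed here only for illustration
>>> dataloader = ds.pytorch(batch_size=32, shuffle=True, num_workers=2, decode_method={'images': 'numpy'})
>>> for batch in dataloader:
...     images, labels = batch['images'], batch['labels']
...     # training step goes here
...     break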
load_view(id: str, optimize: Optional[bool] = False, tensors: Optional[List[str]] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: Optional[bool] = True)¶ Loads the view and returns the Dataset by id. Equivalent to ds.get_view(id).load(). Parameters: id (str) – id of the view to be loaded. optimize (bool) – If True, the dataset view is optimized by copying and rechunking the required data before loading. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. tensors (Optional, List[str]) – Tensors to be copied if optimize is True. By default all tensors are copied. num_workers (int) – Number of workers to be used for the optimization process. Only applicable if optimize=True. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to use progressbar for optimization. Only applicable if optimize=True. Defaults to True. Returns: The loaded view. Return type: Dataset Raises: KeyError – if view with given id does not exist. log()¶ Displays the details of all the past commits. property max_len¶ Return the maximum length of the tensor. property max_view¶ Returns a view of the dataset in which shorter tensors are padded with None s to have the same length as the longest tensor. Example Creating a dataset with 5 images and 4 labels. ds.max_view will return a view with labels tensor padded to have 5 samples",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,">>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.max_view): .. . print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] (20, 20, 3) [None] Copy to clipboard merge(target_id: str, conflict_resolution: Optional[str] = None, delete_removed_tensors: bool = False, force: bool = False)¶ Merges the target_id into the current dataset. Parameters: target_id (str) – The commit_id or branch to merge. conflict_resolution (str, Optional) – The strategy to use to resolve merge conflicts. Conflicts are scenarios where both the current dataset and the target id have made changes to the same sample/s since their common ancestor. Must be one of the following None - this is the default value, will raise an exception if there are conflicts. 'ours' - during conflicts, values from the current dataset will be used. 'theirs' - during conflicts, values from target id will be used. delete_removed_tensors (bool) – If True, deleted tensors will be deleted from the dataset. force (bool) – Forces merge. force=True will have these effects in the following cases of merge conflicts: If tensor is renamed on target but is missing from HEAD, renamed tensor will be registered as a new tensor on current branch. If tensor is renamed on both target and current branch, tensor on target will be registered as a new tensor on current branch. 
If tensor is renamed on target and a new tensor of the new name was created on the current branch, they will be merged. Raises: Exception – if dataset is a filtered view. ValueError – if the conflict resolution strategy is not one of the None, 'ours', or 'theirs'. property meta: DatasetMeta¶ Returns the metadata of the dataset. property min_len¶ Return the minimum length of the tensor. property min_view¶ Returns a view of the dataset in which all tensors are sliced to have the same length as the shortest tensor. Example Creating a dataset with 5 images and 4 labels. ds.min_view will return a view in which tensors are sliced to have 4 samples. >>> import deeplake >>> ds = deeplake.dataset('../test/test_ds', overwrite=True) >>> ds.create_tensor('images', htype='link[image]', sample_compression='jpg') >>> ds.create_tensor('labels', htype='class_label') >>> ds.images.extend([deeplake.link('https://picsum.photos/20/20') for _ in range(5)]) >>> ds.labels.extend([0, 1, 2, 1]) >>> len(ds.images) 5 >>> len(ds.labels) 4 >>> for i, sample in enumerate(ds.max_view): ... print(sample['images'].shape, sample['labels'].numpy()) ... (20, 20, 3) [0] (20, 20, 3) [1] (20, 20, 3) [2] (20, 20, 3) [1] Copy to clipboard property num_samples: int¶ Returns the length of the smallest tensor. Ignores any applied indexing and returns the total length. property parent¶ Returns the parent of this group. Returns None if this is the root dataset",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"num_workers (int) – Number of workers to be used for optimization process . Applicable only if optimize=True. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. verbose (bool) – If True, logs will be printed. Defaults to True. ds_args (dict) – Additional args for creating VDS when path is specified. (See documentation for deeplake.dataset()) Returns: Path to the saved VDS. Return type: str Raises: ReadOnlyModeError – When attempting to save a view inplace and the user doesn't have write access. DatasetViewSavingError – If HEAD node has uncommitted changes. Note Specifying path makes the view external. External views cannot be accessed using the parent dataset's Dataset.get_view(), Dataset.load_view(), Dataset.delete_view() methods. They have to be loaded using deeplake.load(). size_approx()¶ Estimates the size in bytes of the dataset. Includes only content, so will generally return an under-estimate. summary()¶ Prints a summary of the dataset. tensorflow(tensors: Optional[Sequence[str]] = None, tobytes: Union[bool, Sequence[str]] = False, fetch_chunks: bool = True)¶ Converts the dataset into a tensorflow compatible format. See https://www.tensorflow.org/api_docs/python/tf/data/Dataset Parameters: tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). tobytes (bool) – If True, samples will not be decompressed and their raw bytes will be returned instead of numpy arrays. Can also be a list of tensors, in which case those tensors alone will not be decompressed. 
fetch_chunks – See fetch_chunks argument in deeplake.core.tensor.Tensor.numpy() Returns: tf.data.Dataset object that can be used for tensorflow training. property tensors: Dict[str, Tensor]¶ All tensors belonging to this group, including those within sub groups. Always returns the sliced tensors. property token¶ Get attached token of the dataset update_creds_key(old_creds_key: str, new_creds_key: str)¶ Replaces the old creds key with the new creds key. This is used to replace the creds key used for external data. visualize(width: Optional[Union[int, str]] = None, height: Optional[Union[int, str]] = None)¶ Visualizes the dataset in the Jupyter notebook. Parameters: width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises: Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab.",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"To run queries on your own datasets, upgrade your organization's plan . property read_only¶ Returns True if dataset is in read-only mode and False otherwise. rechunk(tensors: Optional[Union[str, List[str]]] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True)¶ Rewrites the underlying chunks to make their sizes optimal. This is usually needed in cases where a lot of updates have been made to the data. Parameters: tensors (str, List[str], Optional) – Name/names of the tensors to rechunk. If None, all tensors in the dataset are rechunked. num_workers (int) – The number of workers to use for rechunking. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used for rechunking. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar If True (default). rename(path: Union[str, Path])¶ Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters: path (str, pathlib.Path) – New path to the dataset. Raises: RenameError – If path points to a different directory. rename_group(name: str, new_name: str) → None¶ Renames group with name name to new_name Parameters: name (str) – Name of group to be renamed. new_name (str) – New name of group. Raises: TensorGroupDoesNotExistError – If tensor group of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorGroupNameError – If name is in dataset attributes. RenameError – If new_name points to a group different from name. rename_tensor(name: str, new_name: str) → Tensor¶ Renames tensor with name name to new_name Parameters: name (str) – Name of tensor to be renamed. new_name (str) – New name of tensor. Returns: Renamed tensor. Return type: Tensor Raises: TensorDoesNotExistError – If tensor of name name does not exist in the dataset. TensorAlreadyExistsError – Duplicate tensors are not allowed. TensorGroupAlreadyExistsError – Duplicate tensor groups are not allowed. InvalidTensorNameError – If new_name is in dataset attributes. RenameError – If new_name points to a group different from name. reset()¶ Resets the uncommitted changes present in the branch. 
Note The uncommitted data is deleted from underlying storage, this is not a reversible operation. property root¶ Returns the root dataset of a group. sample_by(weights: Union[str, list, tuple], replace: Optional[bool] = True, size: Optional[int] = None)¶ Returns a sliced Dataset with given weighted sampler applied. To use this, install deeplake with pip install deeplake[enterprise]. Parameters: weights – (Union[str, list, tuple]): If it's string then tql will be run to calculate the weights based on the expression. list and tuple will be treated as the list of the weights per sample. replace – Optional[bool] If true the samples can be repeated in the result view",activeloop Dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#dataset,"Defaults to True size – Optional[int] The length of the result view . Defaults to length of the dataset. Returns: A deeplake.Dataset object. Return type: Dataset Examples Sample the dataset with labels == 5 twice more than labels == 6 >>> from deeplake.experimental import query >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('max_weight(labels == 5: 10, labels == 6: 5)') Copy to clipboard Sample the dataset treating labels tensor as weights. >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> sampled_ds = ds.sample_by('max_weight(labels == 5: 10, labels == 6: 5')) Copy to clipboard Sample the dataset with the given weights; >>> ds = deeplake.load('hub://activeloop/coco-train') >>> weights = list() >>> for i in range(len(ds)): ... weights.append(i % 5) ... >>> sampled_ds = ds.sample_by(weights, replace=False) Copy to clipboard Restrictions Querying datasets is part of our Growth and Enterprise Plan . Users of our Community plan can only use sample_by on Activeloop datasets ('hub://activeloop/…' datasets). To use sampling functionality on your own datasets, upgrade your organization's plan. property sample_indices¶ Returns all the indices pointed to by this dataset view. save_view(message: Optional[str] = None, path: Optional[Union[str, Path]] = None, id: Optional[str] = None, optimize: bool = False, tensors: Optional[List[str]] = None, num_workers: int = 0, scheduler: str = 'threaded', verbose: bool = True, **ds_args) → str¶ Saves a dataset view as a virtual dataset (VDS) Examples >>> # Save to specified path >>> vds_path = ds[:10].save_view(path='views/first_10', id='first_10') >>> vds_path views/first_10 Copy to clipboard >>> # Path unspecified >>> vds_path = ds[:100].save_view(id='first_100', message='first 100 samples') >>> # vds_path = path/to/dataset Copy to clipboard >>> # Random id >>> vds_path = ds[:100].save_view() >>> # vds_path = path/to/dataset/.queries/92f41922ed0471ec2d27690b7351fc96bea060e6c5ee22b14f7ffa5f291aa068 Copy to clipboard See Dataset.get_view() to learn how to load views by id. These virtual datasets can also be loaded from their path like normal datasets. Parameters: message (Optional, str) – Custom user message. path (Optional, str, pathlib.Path) – The VDS will be saved as a standalone dataset at the specified path. If not specified, the VDS is saved under .queries subdirectory of the source dataset's storage. If the user doesn't have write access to the source dataset and the source dataset is a Deep Lake cloud dataset, then the VDS is saved is saved under the user's Deep Lake account and can be accessed using deeplake.load(f'hub://{username}/queries/{query_hash}'). id (Optional, str) – Unique id for this view. 
Random id will be generated if not specified. optimize (bool) – If True, the dataset view will be optimized by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. You can also choose to optimize the saved view later by calling its ViewEntry.optimize() method. tensors (List, optional) – Names of tensors (and groups) to be copied. If not specified all tensors are copied",activeloop DeepLakeCloudDataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplakeclouddataset," class deeplake.core.dataset.DeepLakeCloudDataset¶ Bases: Dataset Subclass of Dataset. Deep Lake cloud datasets are those datasets which are stored on Activeloop servers, their paths look like: hub://username/dataset_name. add_creds_key(creds_key: str, managed: bool = False)¶ Adds a new creds key to the dataset. These keys are used for tensors that are linked to external data. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') Copy to clipboard Parameters: creds_key (str) – The key to be added. managed (bool) – If True, the creds corresponding to the key will be fetched from activeloop platform. Note, this is only applicable for datasets that are connected to activeloop platform. Defaults to False. change_creds_management(creds_key: str, managed: bool)¶ Changes the management status of the creds key. Parameters: creds_key (str) – The key whose management status is to be changed. managed (bool) – The target management status. If True, the creds corresponding to the key will be fetched from activeloop platform. Raises: ValueError – If the dataset is not connected to activeloop platform. KeyError – If the creds key is not present in the dataset. Examples >>> # create/load a dataset >>> ds = deeplake.dataset('hub://username/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # Populate the name added with creds dictionary >>> # These creds are only present temporarily and will have to be repopulated on every reload >>> ds.populate_creds('my_s3_key', {}) >>> # Change the management status of the key to True. Before doing this, ensure that the creds have been created on activeloop platform >>> # Now, this key will no longer use the credentials populated in the previous step but will instead fetch them from activeloop platform >>> # These creds don't have to be populated again on every reload and will be fetched every time the dataset is loaded >>> ds.change_creds_management('my_s3_key', True) Copy to clipboard property client¶ Returns the client of the dataset. connect(*args, **kwargs)¶ Connect a Deep Lake cloud dataset through a deeplake path. Examples >>> # create/load an s3 dataset >>> s3_ds = deeplake.dataset('s3://bucket/dataset') >>> ds = s3_ds.connect(dest_path='hub://my_org/dataset', creds_key='my_managed_credentials_key') >>> # or >>> ds = s3_ds.connect(org_id='my_org', creds_key='my_managed_credentials_key') Copy to clipboard Parameters: creds_key (str) – The managed credentials to be used for accessing the source path. dest_path (str, optional) – The full path to where the connected Deep Lake dataset will reside. Can be: a Deep Lake path like hub://organization/dataset org_id (str, optional) – The organization to where the connected Deep Lake dataset will be added. 
ds_name (str, optional) – The name of the connected Deep Lake dataset. Will be inferred from dest_path or src_path if not provided. token (str, optional) – Activeloop token used to fetch the managed credentials. Returns: The connected Deep Lake dataset. Return type: Dataset Raises: InvalidSourcePathError – If the dataset's path is not a valid s3 or gcs path",activeloop ViewEntry¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#viewentry,"class deeplake.core.dataset.ViewEntry¶ Represents a view saved inside a dataset. delete()¶ Deletes the view. property id: str¶ Returns the id of the view. load(verbose=True)¶ Loads the view and returns the Dataset. Parameters: verbose (bool) – If True, logs will be printed. Defaults to True. Returns: Loaded dataset view. Return type: Dataset property message: str¶ Returns the message with which the view was saved. optimize(tensors: Optional[List[str]] = None, unlink=True, num_workers=0, scheduler='threaded', progressbar=True)¶ Optimizes the dataset view by copying and rechunking the required data. This is necessary to achieve fast streaming speeds when training models using the dataset view. The optimization process will take some time, depending on the size of the data. Example >>> # save view >>> ds[:10].save_view(id='first_10') >>> # optimize view >>> ds.get_view('first_10').optimize() >>> # load optimized view >>> ds.load_view('first_10') Parameters: tensors (List[str]) – Tensors required in the optimized view. By default all tensors are copied. unlink (bool) – If True, this unlinks linked tensors (if any) by copying data from the links to the view. This does not apply to linked videos. Set deeplake.constants._UNLINK_VIDEOS to True to change this behavior. num_workers (int) – Number of workers to be used for the optimization process. Defaults to 0. scheduler (str) – The scheduler to be used for optimization. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Only applicable if optimize=True. Defaults to ‘threaded'. progressbar (bool) – Whether to display a progressbar. Returns: ViewEntry",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"property pending_commit_id: str¶ The commit_id of the next commit that will be made to the dataset. If you're not at the head of the current branch, this will be the same as the commit_id. pop(index: Optional[int] = None)¶ Removes a sample from all the tensors of the dataset. For any tensor, if index >= len(tensor), the sample won't be popped from it. Parameters: index (int, Optional) – The index of the sample to be removed. If it is None, the index becomes the length of the longest tensor - 1. Raises: IndexError – If the index is out of range. populate_creds(creds_key: str, creds: dict)¶ Populates the creds key added in add_creds_key with the given creds. These creds are used to fetch the external data. This needs to be done every time the dataset is reloaded for datasets that contain links to external data.
Examples >>> # create/load a dataset >>> ds = deeplake.dataset('path/to/dataset') >>> # add a new creds key >>> ds.add_creds_key('my_s3_key') >>> # populate the creds >>> ds.populate_creds('my_s3_key', {'aws_access_key_id': 'my_access_key', 'aws_secret_access_key': 'my_secret_key'}) Copy to clipboard pytorch(transform: Optional[Callable] = None, tensors: Optional[Sequence[str]] = None, num_workers: int = 1, batch_size: int = 1, drop_last: bool = False, collate_fn: Optional[Callable] = None, pin_memory: bool = False, shuffle: bool = False, buffer_size: int = 2048, use_local_cache: bool = False, use_progress_bar: bool = False, return_index: bool = True, pad_tensors: bool = False, transform_kwargs: Optional[Dict[str, Any]] = None, decode_method: Optional[Dict[str, str]] = None)¶ Converts the dataset into a pytorch Dataloader. Parameters: transform (Callable, Optional) – Transformation function to be applied to each sample. tensors (List, Optional) – Optionally provide a list of tensor names in the ordering that your training script expects. For example, if you have a dataset that has 'image' and 'label' tensors, if tensors=['image', 'label'], your training script should expect each batch will be provided as a tuple of (image, label). num_workers (int) – The number of workers to use for fetching data in parallel. batch_size (int) – Number of samples per batch to load. Default value is 1. drop_last (bool) – Set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. if False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. Default value is False. Read torch.utils.data.DataLoader docs for more details. collate_fn (Callable, Optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. Read torch.utils.data.DataLoader docs for more details. pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. Default value is False. Read torch.utils.data.DataLoader docs for more details. shuffle (bool) – If True, the data loader will shuffle the data indices. Default value is False",activeloop DeepLakeCloudDataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplakeclouddataset,"InvalidDestinationPathError – If dest_path, or org_id and ds_name do not form a valid Deep Lake path . delete(large_ok=False)¶ Deletes the entire dataset from the cache layers (if any) and the underlying storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. Parameters: large_ok (bool) – Delete datasets larger than 1 GB. Defaults to False. Raises: DatasetTooLargeToDelete – If the dataset is larger than 1 GB and large_ok is False. property is_actually_cloud: bool¶ Datasets that are connected to Deep Lake cloud can still technically be stored anywhere. If a dataset is in Deep Lake cloud but stored without hub:// prefix, it should only be used for testing. rename(path)¶ Renames the dataset to path. Example >>> ds = deeplake.load('hub://username/dataset') >>> ds.rename('hub://username/renamed_dataset') Copy to clipboard Parameters: path (str, pathlib.Path) – New path to the dataset. Raises: RenameError – If path points to a different directory. property token¶ Get attached token of the dataset update_creds_key(old_creds_key: str, new_creds_key: str)¶ Replaces the old creds key with the new creds key. This is used to replace the creds key used for external data. 
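Putting the credential methods above together, a hedged sketch of the managed-credentials workflow; the organization, dataset path, and key names are placeholders and the dataset is assumed to be connected to the Activeloop platform:
>>> import deeplake
>>> ds = deeplake.load('hub://my_org/my_dataset')
>>> # register a creds key for tensors linked to external (e.g. S3) data
>>> ds.add_creds_key('my_s3_key')
>>> # temporary creds: these must be re-populated on every reload
>>> ds.populate_creds('my_s3_key', {'aws_access_key_id': 'my_access_key', 'aws_secret_access_key': 'my_secret_key'})
>>> # once the creds exist on the Activeloop platform, make the key managed so they are fetched automatically
>>> ds.change_creds_management('my_s3_key', True)
>>> # point the external data at a different key later on
>>> ds.update_creds_key('my_s3_key', 'my_new_s3_key')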
visualize(width: Optional[Union[int, str]] = None, height: Optional[Union[int, str]] = None)¶ Visualizes the dataset in the Jupyter notebook. Parameters: width – Union[int, str, None] Optional width of the visualizer canvas. height – Union[int, str, None] Optional height of the visualizer canvas. Raises: Exception – If the dataset is not a Deep Lake cloud dataset and the visualization is attempted in colab.",activeloop deeplake.core.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.dataset.html#deeplake-core-dataset,"Details about how Deep Lake shuffles data can be found at Shuffling in ds.pytorch() buffer_size (int) – The size of the buffer used to shuffle the data in MBs . Defaults to 2048 MB. Increasing the buffer_size will increase the extent of shuffling. use_local_cache (bool) – If True, the data loader will use a local cache to store data. The default cache location is ~/.activeloop/cache, but it can be changed by setting the LOCAL_CACHE_PREFIX environment variable. This is useful when the dataset can fit on the machine and we don't want to fetch the data multiple times for each iteration. Default value is False use_progress_bar (bool) – If True, tqdm will be wrapped around the returned dataloader. Default value is True. return_index (bool) – If True, the returned dataloader will have a key 'index' that contains the index of the sample(s) in the original dataset. Default value is True. pad_tensors (bool) – If True, shorter tensors will be padded to the length of the longest tensor. Default value is False. transform_kwargs (optional, Dict[str, Any]) – Additional kwargs to be passed to transform. decode_method (Dict[str, str], Optional) – A dictionary of decode methods for each tensor. Defaults to None. Supported decode methods are: 'numpy': Default behaviour. Returns samples as numpy arrays. 'tobytes': Returns raw bytes of the samples. 'pil': Returns samples as PIL images. Especially useful when transformation use torchvision transforms, that require PIL images as input. Only supported for tensors with sample_compression='jpeg' or 'png'. Returns: A torch.utils.data.DataLoader object. Raises: EmptyTensorError – If one or more tensors being passed to pytorch are empty. Note Pytorch does not support uint16, uint32, uint64 dtypes. These are implicitly type casted to int32, int64 and int64 respectively. This spins up it's own workers to fetch data. query(query_string: str)¶ Returns a sliced Dataset with given query results. To use this, install deeplake with pip install deeplake[enterprise]. It allows to run SQL like queries on dataset and extract results. See supported keywords and the Tensor Query Language documentation here. Parameters: query_string (str) – An SQL string adjusted with new functionalities to run on the given Dataset object Returns: A Dataset object. Return type: Dataset Examples Query from dataset all the samples with lables other than 5 >>> import deeplake >>> ds = deeplake.load('hub://activeloop/fashion-mnist-train') >>> query_ds = ds.query('select * where labels != 5') Copy to clipboard Query from dataset first appeard 1000 samples where the categories is car and 1000 samples where the categories is motorcycle >>> ds_train = deeplake.load('hub://activeloop/coco-train') >>> query_ds_train = ds_train.query('(select * where contains(categories, 'car') limit 1000) union (select * where contains(categories, 'motorcycle') limit 1000)') Copy to clipboard Restrictions Querying datasets is part of our Growth and Enterprise Plan . 
Users of our Community plan can only perform queries on Activeloop datasets ('hub://activeloop/…' datasets)",activeloop deeplake.core.tensor¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.tensor.html#deeplake-core-tensor," Tensor¶ class deeplake.core.tensor.Tensor¶ __len__()¶ Returns the length of the primary axis of the tensor. Accounts for indexing into the tensor object. Examples >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 10, 10))) >>> len(tensor) 100 >>> len(tensor[5:10]) 5 Copy to clipboard Returns: The current length of this tensor. Return type: int __setitem__(item: Union[int, slice], value: Any)¶ Update samples with new values. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.shape (1, 10, 10) >>> tensor[0] = np.zeros((3, 3)) >>> tensor.shape (1, 3, 3) Copy to clipboard property _config¶ Returns a summary of the configuration of the tensor. _linked_sample()¶ Returns the linked sample at the given index. This is only applicable for tensors of link[] htype and can only be used for exactly one sample. >>> linked_sample = ds.abc[0]._linked_sample().path 'https://picsum.photos/200/300' Copy to clipboard append(sample: Union[Sample, ndarray, int, float, bool, dict, list, str, integer, floating, bool_])¶ Appends a single sample to the end of the tensor. Can be an array, scalar value, or the return value from deeplake.read(), which can be used to load files. See examples down below. Examples Numpy input: >>> len(tensor) 0 >>> tensor.append(np.zeros((28, 28, 1))) >>> len(tensor) 1 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.append(deeplake.read('path/to/file')) >>> len(tensor) 1 Copy to clipboard Parameters: sample (InputSample) – The data to append to the tensor. Sample is generated by deeplake.read(). See the above examples. property base_htype¶ Base htype of the tensor. Example >>> ds.create_tensor('video_seq', htype='sequence[video]', sample_compression='mp4') >>> ds.video_seq.htype sequence[video] >>> ds.video_seq.base_htype video Copy to clipboard clear()¶ Deletes all samples from the tensor data(aslist: bool = False, fetch_chunks: bool = False) → Any¶ Returns data in the tensor in a format based on the tensor's base htype. If tensor has text base htype Returns dict with dict['value'] = Tensor.text() If tensor has json base htype Returns dict with dict['value'] = Tensor.dict() If tensor has list base htype Returns dict with dict['value'] = Tensor.list() For video tensors, returns a dict with keys 'frames', 'timestamps' and 'sample_info': Value of dict['frames'] will be same as numpy(). Value of dict['timestamps'] will be same as timestamps corresponding to the frames. Value of dict['sample_info'] will be same as sample_info. For class_label tensors, returns a dict with keys 'value' and 'text'. Value of dict['value'] will be same as numpy(). Value of dict['text'] will be list of class labels as strings. For image or dicom tensors, returns dict with keys 'value' and 'sample_info'. Value of dict['value'] will be same as numpy(). Value of dict['sample_info'] will be same as sample_info. For all else, returns dict with key 'value' with value same as numpy(). dict(fetch_chunks: bool = False)¶ Return json data. Only applicable for tensors with ‘json' base htype. property dtype: Optional[dtype]¶ Dtype of the tensor. 
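A small sketch of how Tensor.data() packages values for different htypes, using a throwaway local dataset and illustrative tensor names:
>>> import numpy as np
>>> import deeplake
>>> ds = deeplake.empty('./tensor_demo_ds', overwrite=True)
>>> ds.create_tensor('images', htype='image', sample_compression='jpg')
>>> ds.create_tensor('labels', htype='class_label')
>>> ds.images.append(np.zeros((32, 32, 3), dtype='uint8'))
>>> ds.labels.append('cat')
>>> # image tensors return a dict with 'value' and 'sample_info'
>>> ds.images[0].data()['value'].shape
(32, 32, 3)
>>> # class_label tensors return a dict with 'value' (numeric) and 'text' (label strings)
>>> ds.labels[0].data()['text']
['cat']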
extend(samples: Union[ndarray, Sequence[Union[Sample, ndarray, int, float, bool, dict, list, str, integer, floating, bool_]], Tensor], progressbar: bool = False)¶ Extends the end of the tensor by appending multiple elements from a sequence",activeloop deeplake.core.tensor¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.tensor.html#deeplake-core-tensor,"Accepts a sequence, a single batched numpy array, or a sequence of deeplake.read() outputs, which can be used to load files . See examples down below. Example Numpy input: >>> len(tensor) 0 >>> tensor.extend(np.zeros((100, 28, 28, 1))) >>> len(tensor) 100 Copy to clipboard File input: >>> len(tensor) 0 >>> tensor.extend([ deeplake.read('path/to/image1'), deeplake.read('path/to/image2'), ]) >>> len(tensor) 2 Copy to clipboard Parameters: samples (np.ndarray, Sequence, Sequence[Sample]) – The data to add to the tensor. The length should be equal to the number of samples to add. progressbar (bool) – Specifies whether a progressbar should be displayed while extending. Raises: TensorDtypeMismatchError – Dtype for array must be equal to or castable to this tensor's dtype. property hidden: bool¶ Whether this tensor is a hidden tensor. property htype¶ Htype of the tensor. property info: Info¶ Returns the information about the tensor. User can set info of tensor. Returns: Information about the tensor. Return type: Info Example >>> # update info >>> ds.images.info.update(large=True, gray=False) >>> # get info >>> ds.images.info {'large': True, 'gray': False} Copy to clipboard >>> ds.images.info = {'complete': True} >>> ds.images.info {'complete': True} Copy to clipboard property is_dynamic: bool¶ Will return True if samples in this tensor have shapes that are unequal. property is_link¶ Whether this tensor is a link tensor. property is_sequence¶ Whether this tensor is a sequence tensor. list(fetch_chunks: bool = False)¶ Return list data. Only applicable for tensors with ‘list' base htype. property meta¶ Metadata of the tensor. modified_samples(target_id: Optional[str] = None, return_indexes: Optional[bool] = False)¶ Returns a slice of the tensor with only those elements that were modified/added. By default the modifications are calculated relative to the previous commit made, but this can be changed by providing a target id. Parameters: target_id (str, optional) – The commit id or branch name to calculate the modifications relative to. Defaults to None. return_indexes (bool, optional) – If True, returns the indexes of the modified elements. Defaults to False. Returns: A new tensor with only the modified elements if return_indexes is False. Tuple[Tensor, List[int]]: A new tensor with only the modified elements and the indexes of the modified elements if return_indexes is True. Return type: Tensor Raises: TensorModifiedError – If a target id is passed which is not an ancestor of the current commit. property ndim: int¶ Number of dimensions of the tensor. property num_samples: int¶ Returns the length of the primary axis of the tensor. Ignores any applied indexing and returns the total length. numpy(aslist=False, fetch_chunks=False) → Union[ndarray, List[ndarray]]¶ Computes the contents of the tensor in numpy format. Parameters: aslist (bool) – If True, a list of np.ndarrays will be returned. Helpful for dynamic tensors. 
If False, a single np.ndarray will be returned unless the samples are dynamically shaped, in which case an error is raised",activeloop deeplake.core.tensor¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.tensor.html#deeplake-core-tensor,"fetch_chunks (bool) – If True, full chunks will be retrieved from the storage, otherwise only required bytes will be retrieved . This will always be True even if specified as False in the following cases: The tensor is ChunkCompressed. The chunk which is being accessed has more than 128 samples. Raises: DynamicTensorNumpyError – If reading a dynamically-shaped array slice without aslist=True. ValueError – If the tensor is a link and the credentials are not populated. Returns: A numpy array containing the data represented by this tensor. Note For tensors of htype polygon, aslist is always True. path(fetch_chunks: bool = False)¶ Return path data. Only applicable for linked tensors play()¶ Play video sample. Plays video in Jupyter notebook or plays in web browser. Video is streamed directly from storage. This method will fail for incompatible htypes. Example >>> ds = deeplake.load('./test/my_video_ds') >>> # play second sample >>> ds.videos[2].play() Copy to clipboard Note Video streaming is not yet supported on colab. pop(index: Optional[int] = None)¶ Removes an element at the given index. property sample_indices¶ Returns all the indices pointed to by this tensor in the dataset view. property sample_info: Union[Dict, List[Dict]]¶ Returns info about particular samples in a tensor. Returns dict in case of single sample, otherwise list of dicts. Data in returned dict would depend on the tensor's htype and the sample itself. Example >>> ds.videos[0].sample_info {'duration': 400400, 'fps': 29.97002997002997, 'timebase': 3.3333333333333335e-05, 'shape': [400, 360, 640, 3], 'format': 'mp4', 'filename': '../deeplake/tests/dummy_data/video/samplemp4.mp4', 'modified': False} >>> ds.images[:2].sample_info [{'exif': {'Software': 'Google'}, 'shape': [900, 900, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/cat.jpeg', 'modified': False}, {'exif': {}, 'shape': [495, 750, 3], 'format': 'jpeg', 'filename': '../deeplake/tests/dummy_data/images/car.jpg', 'modified': False}] Copy to clipboard property shape: Tuple[Optional[int], ...]¶ Get the shape of this tensor. Length is included. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape (2, 10, None) Copy to clipboard Returns: Tuple where each value is either None (if that axis is dynamic) or an int (if that axis is fixed). Return type: tuple Note If you don't want None in the output shape or want the lower/upper bound shapes, use shape_interval instead. property shape_interval: ShapeInterval¶ Returns a ShapeInterval object that describes this tensor's shape more accurately. Length is included. Example >>> tensor.append(np.zeros((10, 10))) >>> tensor.append(np.zeros((10, 15))) >>> tensor.shape_interval ShapeInterval(lower=(2, 10, 10), upper=(2, 10, 15)) >>> str(tensor.shape_interval) (2, 10, 10:15) Copy to clipboard Returns: Object containing lower and upper properties. Return type: ShapeInterval Note If you are expecting a tuple, use shape instead. summary()¶ Prints a summary of the tensor. text(fetch_chunks: bool = False)¶ Return text data. Only applicable for tensors with ‘text' base htype. 
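The shape vs. shape_interval distinction above can be reproduced with two appends; this sketch simply restates the documented example on a hypothetical local dataset:
import numpy as np
import deeplake

ds = deeplake.empty('./shape_demo', overwrite=True)   # hypothetical local path
ds.create_tensor('masks')

ds.masks.append(np.zeros((10, 10), dtype=np.uint8))
ds.masks.append(np.zeros((10, 15), dtype=np.uint8))

print(ds.masks.shape)            # (2, 10, None); None marks the dynamic axis
print(ds.masks.shape_interval)   # ShapeInterval(lower=(2, 10, 10), upper=(2, 10, 15))
ds.masks.summary()               # human-readable overview of the tensor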
property timestamps: ndarray¶ Returns timestamps (in seconds) for video sample as numpy array",activeloop deeplake.core.tensor¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.tensor.html#deeplake-core-tensor,"Example >>> # Return timestamps for all frames of first video sample >>> ds.videos[0].timestamps.shape (400,) >>> # Return timestamps for 5th to 10th frame of first video sample >>> ds.videos[0, 5:10].timestamps array([0.2002 , 0.23356667, 0.26693332, 0.33366665, 0.4004 ], dtype=float32) tobytes() → bytes¶ Returns the bytes of the tensor. Only works for a single sample of tensor. If the tensor is uncompressed, this returns the bytes of the numpy array. If the tensor is sample compressed, this returns the compressed bytes of the sample. If the tensor is chunk compressed, this raises an error. Returns: The bytes of the tensor. Return type: bytes Raises: ValueError – If the tensor has multiple samples. property verify¶ Whether linked data will be verified when samples are added.
Applicable only to tensors with htype link[htype].",activeloop deeplake.api¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.api.html#deeplake-api,deeplake.api.dataset dataset deeplake.api.info Info deeplake.api.link link() deeplake.api.read read() deeplake.api.tiled tiled(),activeloop deeplake.auto¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.auto.html#deeplake-auto,deeplake.auto.structured deeplake.auto.structured.base deeplake.auto.structured.dataframe deeplake.auto.unstructured deeplake.auto.unstructured.base deeplake.auto.unstructured.image_classification deeplake.auto.unstructured.kaggle,activeloop deeplake.util¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.util.html#deeplake-util,deeplake.util.shape_interval ShapeInterval deeplake.util.remove_cache remove_memory_cache() get_base_storage() get_dataset_with_zero_size_cache() create_read_copy_dataset() deeplake.util.notebook is_notebook() is_jupyter() is_colab() deeplake.util.exceptions ExternalCommandError KaggleError KaggleMissingCredentialsError KaggleDatasetAlreadyDownloadedError InvalidPathException AutoCompressionError InvalidFileExtension SamePathException TensorInvalidSampleShapeError TensorMetaMissingKey TensorDoesNotExistError TensorAlreadyExistsError TensorGroupDoesNotExistError TensorGroupAlreadyExistsError InvalidTensorNameError InvalidTensorGroupNameError DynamicTensorNumpyError InvalidShapeIntervalError InvalidKeyTypeError UnsupportedTensorTypeError InvalidBytesRequestedError ProviderListEmptyError DirectoryAtPathException FileAtPathException ProviderSizeListMismatch ModuleNotInstalledException LoginException UserNotLoggedInException InvalidHubPathException PathNotEmptyException AuthenticationException AuthorizationException InvalidPasswordException CouldNotCreateNewDatasetException ResourceNotFoundException BadRequestException OverLimitException ServerException BadGatewayException GatewayTimeoutException WaitTimeoutException LockedException UnexpectedStatusCodeException EmptyTokenException S3Error S3GetError S3SetError S3DeletionError S3ListError UnsupportedCompressionError SampleCompressionError SampleDecompressionError InvalidImageDimensions TensorUnsupportedSampleType MetaError MetaDoesNotExistError MetaAlreadyExistsError MetaInvalidKey MetaInvalidRequiredMetaKey TensorMetaInvalidHtype TensorMetaInvalidHtypeOverwriteValue TensorMetaMissingRequiredValue TensorMetaInvalidHtypeOverwriteKey TensorDtypeMismatchError InvalidTensorLinkError TensorMetaMutuallyExclusiveKeysError ReadOnlyModeError TransformError FilterError InvalidInputDataError UnsupportedSchedulerError TensorMismatchError InvalidOutputDatasetError InvalidTransformDataset HubComposeEmptyListError HubComposeIncompatibleFunction DatasetUnsupportedPytorch CorruptedMetaError ChunkEngineError FullChunkError ChunkIdEncoderError ChunkSizeTooSmallError DatasetHandlerError MemoryDatasetCanNotBePickledError CorruptedSampleError VersionControlError MergeError MergeNotSupportedError MergeMismatchError MergeConflictError CheckoutError CommitError EmptyCommitError TensorModifiedError GCSDefaultCredsNotFoundError InvalidOperationError AgreementError AgreementNotAcceptedError NotLoggedInError RenameError BufferError InfoError OutOfChunkCountError OutOfSampleCountError SampleHtypeMismatchError EmptyTensorError DatasetViewSavingError ManagedCredentialsNotFoundError UnableToReadFromUrlError InvalidTokenException TokenPermissionError,activeloop deeplake.client.log¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.client.log.html#deeplake-client-log,Deep Lake does logging using the 
'deeplake' logger. Logging level is logging.INFO by default. See example on how to change this. Copy to clipboard,activeloop deeplake.core.transform¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.transform.html#deeplake-core-transform,"Example: @deeplake.compute def my_fn(sample_in: Any, samples_out, my_arg0, my_arg1=0): samples_out.my_tensor.append(my_arg0 * my_arg1) # This transform can be used using the eval method in one of these 2 ways:- # Directly evaluating the method # here arg0 and arg1 correspond to the 3rd and 4th argument in my_fn my_fn(arg0, arg1).eval(data_in, ds_out, scheduler='threaded', num_workers=5) # As a part of a Transform pipeline containing other functions pipeline = deeplake.compose([my_fn(a, b), another_function(x=2)]) pipeline.eval(data_in, ds_out, scheduler='processed', num_workers=2) Copy to clipboard Note pad_data_in is only applicable if data_in is a Deep Lake dataset. ",activeloop deeplake.core.transform¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.transform.html#deeplake-core-transform," class deeplake.core.transform.Pipeline(functions: List[ComputeFunction])¶ eval(data_in, ds_out: Optional[Dataset] = None, num_workers: int = 0, scheduler: str = 'threaded', progressbar: bool = True, skip_ok: bool = False, check_lengths: bool = True, pad_data_in: bool = False, **kwargs)¶ Evaluates the pipeline on data_in to produce an output dataset ds_out. Parameters: data_in – Input passed to the transform to generate output dataset. Should support __getitem__ and __len__. Can be a Deep Lake dataset. ds_out (Dataset, optional) – The dataset object to which the transform will get written. If this is not provided, data_in will be overwritten if it is a Deep Lake dataset, otherwise error will be raised. It should have all keys being generated in output already present as tensors. It's initial state should be either: Empty, i.e., all tensors have no samples. In this case all samples are added to the dataset. All tensors are populated and have same length. In this case new samples are appended to the dataset. num_workers (int) – The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler. scheduler (str) – The scheduler to be used to compute the transformation. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. Defaults to ‘threaded'. progressbar (bool) – Displays a progress bar if True (default). skip_ok (bool) – If True, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition. This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to False. check_lengths (bool) – If True, checks whether ds_out has tensors of same lengths initially. pad_data_in (bool) – If True, pads tensors of data_in to match the length of the largest tensor in data_in. Defaults to False. **kwargs – Additional arguments. Raises: InvalidInputDataError – If data_in passed to transform is invalid. It should support __getitem__ and __len__ operations. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as data_in will also raise this. InvalidOutputDatasetError – If all the tensors of ds_out passed to transform don't have the same length. Using scheduler other than 'threaded' with deeplake dataset having base storage as memory as ds_out will also raise this. 
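The deeplake.client.log entry above notes that Deep Lake logs through the standard 'deeplake' logger at logging.INFO by default, but the referenced example did not survive extraction; a minimal sketch using only Python's built-in logging module (the logger name is the only fact taken from the reference):
import logging

# Deep Lake logs through the standard-library logger named 'deeplake' (INFO by default).
logging.getLogger('deeplake').setLevel(logging.WARNING)    # quieter
# logging.getLogger('deeplake').setLevel(logging.DEBUG)    # or more verbose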
TensorMismatchError – If one or more of the outputs generated during transform contain different tensors than the ones present in ‘ds_out' provided to transform. UnsupportedSchedulerError – If the scheduler passed is not recognized. Supported values include: ‘serial', ‘threaded', ‘processed' and ‘ray'. TransformError – All other exceptions raised if there are problems while running the pipeline",activeloop _,https://docs.deeplake.ai/en/v3.1.0/genindex.html,__contains__() (deeplake.core.storage.GCSProvider method) __delitem__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __getitem__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __getstate__() (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) __init__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) (deeplake.core.sample.Sample method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.util.shape_interval.ShapeInterval method) __iter__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) __len__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) __repr__() (deeplake.core.index.Index method) __setitem__() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) __setstate__() (deeplake.core.storage.LRUCache method) __str__() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) __weakref__ (deeplake.core.index.Index attribute) (deeplake.core.index.IndexEntry attribute) (deeplake.core.storage.StorageProvider attribute) _all_keys() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) _check_is_file() (deeplake.core.storage.LocalProvider method) _check_update_creds() 
(deeplake.core.storage.S3Provider method) _config (deeplake.core.tensor.Tensor property) _flush_if_not_read_only() (deeplake.core.storage.LRUCache method) _forward() (deeplake.core.storage.LRUCache method) _forward_value() (deeplake.core.storage.LRUCache method) _free_up_space() (deeplake.core.storage.LRUCache method) _insert_in_cache() (deeplake.core.storage.LRUCache method) _is_hub_path (deeplake.core.storage.StorageProvider attribute) _linked_sample() (deeplake.core.tensor.Tensor method) _pop_from_cache() (deeplake.core.storage.LRUCache method) _set_hub_creds_info() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.S3Provider method) _state_keys() (deeplake.core.storage.S3Provider method),activeloop Index,https://docs.deeplake.ai/en/v3.1.0/genindex.html,_ABCDEFGHIKLMNOPQRSTUVW,activeloop C,https://docs.deeplake.ai/en/v3.1.0/genindex.html,change_creds_management() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) check_readonly() (deeplake.core.storage.StorageProvider method) checkout() (deeplake.core.dataset.Dataset method) CheckoutError (class in deeplake.util.exceptions) ChunkEngineError (class in deeplake.util.exceptions) ChunkIdEncoderError (class in deeplake.util.exceptions) ChunkSizeTooSmallError (class in deeplake.util.exceptions) clear() (deeplake.api.info.Info method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.GDriveProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.MemoryProvider method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) (deeplake.core.tensor.Tensor method) clear_cache() (deeplake.core.dataset.Dataset method) (deeplake.core.storage.LRUCache method) clear_deeplake_objects() (deeplake.core.storage.LRUCache method) client (deeplake.core.dataset.Dataset property) (deeplake.core.dataset.DeepLakeCloudDataset property) commit() (deeplake.core.dataset.Dataset method) commit_id (deeplake.core.dataset.Dataset property) CommitError (class in deeplake.util.exceptions) commits (deeplake.core.dataset.Dataset property) compose() (in module deeplake) compose_at() (deeplake.core.index.Index method) compressed_bytes() (deeplake.core.sample.Sample method) compute() (in module deeplake) connect() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (in module deeplake) copy() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.storage.StorageProvider method) (in module deeplake) CorruptedMetaError (class in deeplake.util.exceptions) CorruptedSampleError (class in deeplake.util.exceptions) CouldNotCreateNewDatasetException (class in deeplake.util.exceptions) create_group() (deeplake.core.dataset.Dataset method) create_read_copy_dataset() (in module deeplake.util.remove_cache) create_tensor() (deeplake.core.dataset.Dataset method) create_tensor_like() (deeplake.core.dataset.Dataset method),activeloop B,https://docs.deeplake.ai/en/v3.1.0/genindex.html,BadGatewayException (class in deeplake.util.exceptions) BadRequestException (class in deeplake.util.exceptions) base_htype (deeplake.core.tensor.Tensor property) batch() (deeplake.enterprise.DeepLakeDataLoader method) branch (deeplake.core.dataset.Dataset property) branches (deeplake.core.dataset.Dataset property) BufferError (class in deeplake.util.exceptions),activeloop 
D,https://docs.deeplake.ai/en/v3.1.0/genindex.html,data() (deeplake.core.tensor.Tensor method) DataFrame (class in deeplake.auto.structured.dataframe) dataloader() (deeplake.core.dataset.Dataset method) dataset (class in deeplake.api.dataset) Dataset (class in deeplake.core.dataset) dataset() (in module deeplake) DatasetHandlerError (class in deeplake.util.exceptions) DatasetUnsupportedPytorch (class in deeplake.util.exceptions) DatasetViewSavingError (class in deeplake.util.exceptions) deepcopy() (deeplake.api.dataset.dataset static method) (in module deeplake) deeplake module deeplake.api.info module deeplake.integrations.wandb.wandb module DeepLakeCloudDataset (class in deeplake.core.dataset) DeepLakeDataLoader (class in deeplake.enterprise) delete() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (deeplake.core.dataset.ViewEntry method) (in module deeplake) delete_group() (deeplake.core.dataset.Dataset method) delete_tensor() (deeplake.core.dataset.Dataset method) delete_view() (deeplake.core.dataset.Dataset method) dict() (deeplake.core.tensor.Tensor method) diff() (deeplake.core.dataset.Dataset method) DirectoryAtPathException (class in deeplake.util.exceptions) disable_readonly() (deeplake.core.storage.StorageProvider method) download_kaggle_dataset() (in module deeplake.auto.unstructured.kaggle) dtype (deeplake.core.tensor.Tensor property) DynamicTensorNumpyError (class in deeplake.util.exceptions),activeloop A,https://docs.deeplake.ai/en/v3.1.0/genindex.html,add_creds_key() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) AgreementError (class in deeplake.util.exceptions) AgreementNotAcceptedError (class in deeplake.util.exceptions) append() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) apply() (deeplake.core.index.Index method) apply_squeeze() (deeplake.core.index.Index method) array (deeplake.core.sample.Sample property) AuthenticationException (class in deeplake.util.exceptions) AuthorizationException (class in deeplake.util.exceptions) AutoCompressionError (class in deeplake.util.exceptions),activeloop G,https://docs.deeplake.ai/en/v3.1.0/genindex.html,GatewayTimeoutException (class in deeplake.util.exceptions) GCSDefaultCredsNotFoundError (class in deeplake.util.exceptions) GCSProvider (class in deeplake.core.storage) GDriveProvider (class in deeplake.core.storage) get() (deeplake.api.info.Info method) get_base_storage() (in module deeplake.util.remove_cache) get_bytes() (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.S3Provider method) (deeplake.core.storage.StorageProvider method) get_commit_details() (deeplake.core.dataset.Dataset method) get_creds_keys() (deeplake.core.dataset.Dataset method) get_dataset_with_zero_size_cache() (in module deeplake.util.remove_cache) get_deeplake_object() (deeplake.core.storage.LRUCache method) get_view() (deeplake.core.dataset.Dataset method) get_views() (deeplake.core.dataset.Dataset method) groups (deeplake.core.dataset.Dataset property),activeloop F,https://docs.deeplake.ai/en/v3.1.0/genindex.html,FileAtPathException (class in deeplake.util.exceptions) fill_dataset() (deeplake.auto.structured.dataframe.DataFrame method) filter() (deeplake.core.dataset.Dataset method) FilterError (class in deeplake.util.exceptions) find_axis() (deeplake.core.index.Index method) flush() 
(deeplake.core.dataset.Dataset method) (deeplake.core.storage.LRUCache method) (deeplake.core.storage.StorageProvider method) FullChunkError (class in deeplake.util.exceptions),activeloop H,https://docs.deeplake.ai/en/v3.1.0/genindex.html,has_head_changes (deeplake.core.dataset.Dataset property) hidden (deeplake.core.tensor.Tensor property) htype (deeplake.core.tensor.Tensor property) HubComposeEmptyListError (class in deeplake.util.exceptions) HubComposeIncompatibleFunction (class in deeplake.util.exceptions),activeloop E,https://docs.deeplake.ai/en/v3.1.0/genindex.html,empty() (deeplake.api.dataset.dataset static method) (in module deeplake) EmptyCommitError (class in deeplake.util.exceptions) EmptyTensorError (class in deeplake.util.exceptions) EmptyTokenException (class in deeplake.util.exceptions) enable_readonly() (deeplake.core.storage.StorageProvider method) eval() (deeplake.core.transform.Pipeline method) exists() (deeplake.api.dataset.dataset static method) (in module deeplake) extend() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) ExternalCommandError (class in deeplake.util.exceptions),activeloop I,https://docs.deeplake.ai/en/v3.1.0/genindex.html,id (deeplake.core.dataset.ViewEntry property) ImageClassification (class in deeplake.auto.unstructured.image_classification) Index (class in deeplake.core.index) IndexEntry (class in deeplake.core.index) indices() (deeplake.core.index.IndexEntry method) Info (class in deeplake.api.info) info (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) InfoError (class in deeplake.util.exceptions) ingest() (deeplake.api.dataset.dataset static method) (in module deeplake) ingest_dataframe() (deeplake.api.dataset.dataset static method) (in module deeplake) ingest_huggingface() (in module deeplake) ingest_kaggle() (deeplake.api.dataset.dataset static method) (in module deeplake) InvalidBytesRequestedError (class in deeplake.util.exceptions) InvalidFileExtension (class in deeplake.util.exceptions) InvalidHubPathException (class in deeplake.util.exceptions) InvalidImageDimensions (class in deeplake.util.exceptions) InvalidInputDataError (class in deeplake.util.exceptions) InvalidKeyTypeError (class in deeplake.util.exceptions) InvalidOperationError (class in deeplake.util.exceptions) InvalidOutputDatasetError (class in deeplake.util.exceptions) InvalidPasswordException (class in deeplake.util.exceptions) InvalidPathException (class in deeplake.util.exceptions) InvalidShapeIntervalError (class in deeplake.util.exceptions) InvalidTensorGroupNameError (class in deeplake.util.exceptions) InvalidTensorLinkError (class in deeplake.util.exceptions) InvalidTensorNameError (class in deeplake.util.exceptions) InvalidTokenException (class in deeplake.util.exceptions) InvalidTransformDataset (class in deeplake.util.exceptions) is_actually_cloud (deeplake.core.dataset.DeepLakeCloudDataset property) is_colab() (in module deeplake.util.notebook) is_dynamic (deeplake.core.tensor.Tensor property) is_jupyter() (in module deeplake.util.notebook) is_link (deeplake.core.tensor.Tensor property) is_notebook() (in module deeplake.util.notebook) is_sequence (deeplake.core.tensor.Tensor property) is_trivial() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) is_view (deeplake.core.dataset.Dataset property) items() (deeplake.api.info.Info method),activeloop K,https://docs.deeplake.ai/en/v3.1.0/genindex.html,KaggleDatasetAlreadyDownloadedError (class in deeplake.util.exceptions) KaggleError 
(class in deeplake.util.exceptions) KaggleMissingCredentialsError (class in deeplake.util.exceptions) keys() (deeplake.api.info.Info method),activeloop L,https://docs.deeplake.ai/en/v3.1.0/genindex.html,length() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) like() (deeplake.api.dataset.dataset static method) (in module deeplake) link() (in module deeplake) (in module deeplake.api.link) LinkedSample (class in deeplake.core.linked_sample) list() (deeplake.api.dataset.dataset static method) (deeplake.core.tensor.Tensor method) (in module deeplake) load() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.ViewEntry method) (in module deeplake) load_view() (deeplake.core.dataset.Dataset method) LocalProvider (class in deeplake.core.storage) LockedException (class in deeplake.util.exceptions) log() (deeplake.core.dataset.Dataset method) LoginException (class in deeplake.util.exceptions) LRUCache (class in deeplake.core.storage),activeloop M,https://docs.deeplake.ai/en/v3.1.0/genindex.html,ManagedCredentialsNotFoundError (class in deeplake.util.exceptions) max_len (deeplake.core.dataset.Dataset property) max_view (deeplake.core.dataset.Dataset property) maybe_flush() (deeplake.core.storage.StorageProvider method) MemoryDatasetCanNotBePickledError (class in deeplake.util.exceptions) MemoryProvider (class in deeplake.core.storage) merge() (deeplake.core.dataset.Dataset method) merge_slices() (in module deeplake.core.index) MergeConflictError (class in deeplake.util.exceptions) MergeError (class in deeplake.util.exceptions) MergeMismatchError (class in deeplake.util.exceptions) MergeNotSupportedError (class in deeplake.util.exceptions) message (deeplake.core.dataset.ViewEntry property) meta (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) MetaAlreadyExistsError (class in deeplake.util.exceptions) MetaDoesNotExistError (class in deeplake.util.exceptions) MetaError (class in deeplake.util.exceptions) MetaInvalidKey (class in deeplake.util.exceptions) MetaInvalidRequiredMetaKey (class in deeplake.util.exceptions) min_len (deeplake.core.dataset.Dataset property) min_view (deeplake.core.dataset.Dataset property) modified_samples() (deeplake.core.tensor.Tensor method) module deeplake deeplake.api.info deeplake.integrations.wandb.wandb ModuleNotInstalledException (class in deeplake.util.exceptions),activeloop O,https://docs.deeplake.ai/en/v3.1.0/genindex.html,optimize() (deeplake.core.dataset.ViewEntry method) OutOfChunkCountError (class in deeplake.util.exceptions) OutOfSampleCountError (class in deeplake.util.exceptions) OverLimitException (class in deeplake.util.exceptions),activeloop N,https://docs.deeplake.ai/en/v3.1.0/genindex.html,nbytes (deeplake.api.info.Info property) ndim (deeplake.core.tensor.Tensor property) need_to_reload_creds() (deeplake.core.storage.S3Provider method) NotLoggedInError (class in deeplake.util.exceptions) num_samples (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) numpy() (deeplake.core.tensor.Tensor method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop P,https://docs.deeplake.ai/en/v3.1.0/genindex.html,parent (deeplake.core.dataset.Dataset property) PartialSample (class in deeplake.core.partial_sample) path() (deeplake.core.tensor.Tensor method) PathNotEmptyException (class in deeplake.util.exceptions) pending_commit_id (deeplake.core.dataset.Dataset property) pil (deeplake.core.sample.Sample property) Pipeline (class in deeplake.core.transform) play() 
(deeplake.core.tensor.Tensor method) pop() (deeplake.api.info.Info method) (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) popitem() (deeplake.api.info.Info method) populate_creds() (deeplake.core.dataset.Dataset method) ProviderListEmptyError (class in deeplake.util.exceptions) ProviderSizeListMismatch (class in deeplake.util.exceptions) pytorch() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop S,https://docs.deeplake.ai/en/v3.1.0/genindex.html,S3DeletionError (class in deeplake.util.exceptions) S3Error (class in deeplake.util.exceptions) S3GetError (class in deeplake.util.exceptions) S3ListError (class in deeplake.util.exceptions) S3Provider (class in deeplake.core.storage) S3SetError (class in deeplake.util.exceptions) SamePathException (class in deeplake.util.exceptions) Sample (class in deeplake.core.sample) sample_by() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method) sample_indices (deeplake.core.dataset.Dataset property) (deeplake.core.tensor.Tensor property) sample_info (deeplake.core.tensor.Tensor property) SampleCompressionError (class in deeplake.util.exceptions) SampleDecompressionError (class in deeplake.util.exceptions) SampleHtypeMismatchError (class in deeplake.util.exceptions) save_view() (deeplake.core.dataset.Dataset method) ServerException (class in deeplake.util.exceptions) set_bytes() (deeplake.core.storage.StorageProvider method) setdefault() (deeplake.api.info.Info method) shape (deeplake.core.tensor.Tensor property) shape_interval (deeplake.core.tensor.Tensor property) ShapeInterval (class in deeplake.util.shape_interval) shuffle() (deeplake.enterprise.DeepLakeDataLoader method) size_approx() (deeplake.core.dataset.Dataset method) slice_at_int() (in module deeplake.core.index) slice_length() (in module deeplake.core.index) StorageProvider (class in deeplake.core.storage) structure() (deeplake.auto.unstructured.image_classification.ImageClassification method) StructuredDataset (class in deeplake.auto.structured.base) subscriptable() (deeplake.core.index.IndexEntry method) summary() (deeplake.core.dataset.Dataset method) (deeplake.core.tensor.Tensor method) sync() (deeplake.core.storage.GDriveProvider method),activeloop T,https://docs.deeplake.ai/en/v3.1.0/genindex.html,Tensor (class in deeplake.core.tensor) TensorAlreadyExistsError (class in deeplake.util.exceptions) TensorDoesNotExistError (class in deeplake.util.exceptions) TensorDtypeMismatchError (class in deeplake.util.exceptions) tensorflow() (deeplake.core.dataset.Dataset method) TensorGroupAlreadyExistsError (class in deeplake.util.exceptions) TensorGroupDoesNotExistError (class in deeplake.util.exceptions) TensorInvalidSampleShapeError (class in deeplake.util.exceptions) TensorMetaInvalidHtype (class in deeplake.util.exceptions) TensorMetaInvalidHtypeOverwriteKey (class in deeplake.util.exceptions) TensorMetaInvalidHtypeOverwriteValue (class in deeplake.util.exceptions) TensorMetaMissingKey (class in deeplake.util.exceptions) TensorMetaMissingRequiredValue (class in deeplake.util.exceptions) TensorMetaMutuallyExclusiveKeysError (class in deeplake.util.exceptions) TensorMismatchError (class in deeplake.util.exceptions) TensorModifiedError (class in deeplake.util.exceptions) tensors (deeplake.core.dataset.Dataset property) TensorUnsupportedSampleType (class in deeplake.util.exceptions) text() (deeplake.core.tensor.Tensor method) tiled() (in module deeplake) (in module deeplake.api.tiled) 
timestamps (deeplake.core.tensor.Tensor property) tobytes() (deeplake.core.tensor.Tensor method) token (deeplake.core.dataset.Dataset property) (deeplake.core.dataset.DeepLakeCloudDataset property) TokenPermissionError (class in deeplake.util.exceptions) transform() (deeplake.enterprise.DeepLakeDataLoader method) TransformError (class in deeplake.util.exceptions),activeloop U,https://docs.deeplake.ai/en/v3.1.0/genindex.html,UnableToReadFromUrlError (class in deeplake.util.exceptions) uncompressed_bytes() (deeplake.core.sample.Sample method) UnexpectedStatusCodeException (class in deeplake.util.exceptions) UnstructuredDataset (class in deeplake.auto.unstructured.base) UnsupportedCompressionError (class in deeplake.util.exceptions) UnsupportedSchedulerError (class in deeplake.util.exceptions) UnsupportedTensorTypeError (class in deeplake.util.exceptions) update() (deeplake.api.info.Info method) update_creds_key() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) UserNotLoggedInException (class in deeplake.util.exceptions),activeloop R,https://docs.deeplake.ai/en/v3.1.0/genindex.html,read() (in module deeplake) (in module deeplake.api.read) read_only (deeplake.core.dataset.Dataset property) ReadOnlyModeError (class in deeplake.util.exceptions) rechunk() (deeplake.core.dataset.Dataset method) register_deeplake_object() (deeplake.core.storage.LRUCache method) remove_deeplake_object() (deeplake.core.storage.LRUCache method) remove_memory_cache() (in module deeplake.util.remove_cache) rename() (deeplake.api.dataset.dataset static method) (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method) (deeplake.core.storage.GCSProvider method) (deeplake.core.storage.LocalProvider method) (deeplake.core.storage.S3Provider method) (in module deeplake) rename_group() (deeplake.core.dataset.Dataset method) rename_tensor() (deeplake.core.dataset.Dataset method) RenameError (class in deeplake.util.exceptions) replace_with() (deeplake.api.info.Info method) reset() (deeplake.core.dataset.Dataset method) ResourceNotFoundException (class in deeplake.util.exceptions) root (deeplake.core.dataset.Dataset property),activeloop Q,https://docs.deeplake.ai/en/v3.1.0/genindex.html,query() (deeplake.core.dataset.Dataset method) (deeplake.enterprise.DeepLakeDataLoader method),activeloop V,https://docs.deeplake.ai/en/v3.1.0/genindex.html,validate() (deeplake.core.index.Index method) (deeplake.core.index.IndexEntry method) values() (deeplake.api.info.Info method) verify (deeplake.core.tensor.Tensor property) VersionControlError (class in deeplake.util.exceptions) ViewEntry (class in deeplake.core.dataset) visualize() (deeplake.core.dataset.Dataset method) (deeplake.core.dataset.DeepLakeCloudDataset method),activeloop W,https://docs.deeplake.ai/en/v3.1.0/genindex.html,WaitTimeoutException (class in deeplake.util.exceptions),activeloop Python Module Index,https://docs.deeplake.ai/en/v3.1.0/py-modindex.html,d d deeplake deeplake.api.info deeplake.integrations.wandb.wandb,activeloop Installation¶,https://docs.deeplake.ai/en/v3.0.16/Installation.html#installation,"Deep Lake can be installed with pip Copy to clipboard Deep Lake has the following extras that you can choose to install according to your needs. Htype configs¶ Install command Description Dependencies installed pip install 'deeplake[av]' Audio and video support via PyAV av pip install 'deeplake[visualizer]' Visualize Deep Lake datasets within notebooks. 
This is required for Dataset.visualize to work. IPython, flask pip install 'deeplake[gcp]' GCS support google-cloud-storage, google-auth, google-auth-oauthlib pip install 'deeplake[dicom]' DICOM data support pydicom pip install 'deeplake[gdrive]' Google Drive support google-api-python-client, oauth2client, google-auth, google-auth-oauthlib pip install 'deeplake[point_cloud]' Support for LiDAR point cloud data laspy pip install 'deeplake[all]' Installs all of the above",activeloop Datasets¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#datasets," Creating Datasets¶ deeplake.dataset Returns a Dataset object referencing either a new or existing dataset. deeplake.empty Creates an empty dataset deeplake.like Creates a new dataset by copying the source dataset's structure to a new location. deeplake.ingest Ingests a dataset from a source and stores it as a structured dataset to destination. deeplake.ingest_kaggle Download and ingest a kaggle dataset and store it as a structured dataset to destination. deeplake.ingest_dataframe Convert pandas dataframe to a Deep Lake Dataset. deeplake.ingest_huggingface Converts Hugging Face datasets to Deep Lake format. Loading Datasets¶ deeplake.load Loads an existing dataset Deleting and Renaming Datasets¶ deeplake.delete Deletes a dataset at a given path. deeplake.rename Renames dataset at old_path to new_path. Copying Datasets¶ deeplake.copy Copies dataset at src to dest. deeplake.deepcopy Copies dataset at src to dest including version control history. Dataset Operations¶ Dataset.summary Prints a summary of the dataset. Dataset.append Append samples to mutliple tensors at once. Dataset.extend Appends multiple rows of samples to mutliple tensors at once. Dataset.query Returns a sliced Dataset with given query results. Dataset.copy Copies this dataset or dataset view to dest. Dataset.delete Deletes the entire dataset from the cache layers (if any) and the underlying storage. Dataset.rename Renames the dataset to path. Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset.pop Removes a sample from all the tensors of the dataset. Dataset.rechunk Rewrites the underlying chunks to make their sizes optimal. Dataset.flush Necessary operation after writes if caches are being used. Dataset.clear_cache Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. Dataset.size_approx Estimates the size in bytes of the dataset. Dataset Visualization¶ Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset Credentials¶ Dataset.add_creds_key Adds a new creds key to the dataset. Dataset.populate_creds Populates the creds key added in add_creds_key with the given creds. Dataset.update_creds_key Replaces the old creds key with the new creds key. Dataset.change_creds_management Changes the management status of the creds key. Dataset.get_creds_keys Returns the list of creds keys added to the dataset. Dataset Properties¶ Dataset.tensors All tensors belonging to this group, including those within sub groups. Dataset.groups All sub groups in this group Dataset.num_samples Returns the length of the smallest tensor. Dataset.read_only Returns True if dataset is in read-only mode and False otherwise. Dataset.info Returns the information about the dataset. Dataset Version Control¶ Dataset.commit Stores a snapshot of the current state of the dataset. Dataset.diff Returns/displays the differences between commits/branches. Dataset.checkout Checks out to a specific commit_id or branch. 
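A brief sketch of the version-control calls listed here (commit, diff, log, checkout); the dataset path and tensor name are illustrative:
import deeplake

ds = deeplake.empty('./vc_demo', overwrite=True)        # hypothetical local path
ds.create_tensor('labels', htype='class_label')

ds.labels.append(0)
first_commit = ds.commit('add first label')             # snapshot the current state

ds.labels.append(1)
ds.diff()                                               # uncommitted changes vs. HEAD
ds.commit('add second label')
ds.log()                                                # list all commits

ds.checkout(first_commit)                               # back to the first snapshot
print(len(ds.labels))                                   # 1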
Dataset.merge Merges the target_id into the current dataset. Dataset.log Displays the details of all the past commits. Dataset.reset Resets the uncommitted changes present in the branch. Dataset.get_commit_details Get details of a particular commit. Dataset.commit_id The lasted committed commit id of the dataset",activeloop Datasets¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#datasets,"Dataset.branch The current branch of the dataset Dataset.pending_commit_id The commit_id of the next commit that will be made to the dataset . Dataset.has_head_changes Returns True if currently at head node and uncommitted changes are present. Dataset.commits Lists all the commits leading to the current dataset state. Dataset.branches Lists all the branches of the dataset. Dataset Views¶ A dataset view is a subset of a dataset that points to specific samples (indices) in an existing dataset. Dataset views can be created by indexing a dataset, filtering a dataset with Dataset.filter() or querying a dataset with Dataset.query(). Filtering is done with user-defined functions or simplified expressions whereas query can perform SQL-like queries with our Tensor Query Language. See the full TQL spec here. Dataset views can only be saved when a dataset has been committed and has no changes on the HEAD node, in order to preserve data lineage and prevent the underlying data from changing after the query or filter conditions have been evaluated. Example >>> import deeplake >>> # load dataset >>> ds = deeplake.load('hub://activeloop/mnist-train') >>> # filter dataset >>> zeros = ds.filter('labels == 0') >>> # save view >>> zeros.save_view(id='zeros') >>> # load_view >>> zeros = ds.load_view(id='zeros') >>> len(zeros) 5923 Copy to clipboard Dataset.query Returns a sliced Dataset with given query results. Dataset.filter Filters the dataset in accordance of filter function f(x: sample) -> bool Dataset.save_view Saves a dataset view as a virtual dataset (VDS) Dataset.get_view Returns the dataset view corresponding to id. Dataset.load_view Loads the view and returns the Dataset by id. Dataset.delete_view Deletes the view with given view id. Dataset.get_views Returns list of views stored in this Dataset. Dataset.is_view Returns True if this dataset is a view and False otherwise.",activeloop Creating Datasets¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#creating-datasets,deeplake.dataset Returns a Dataset object referencing either a new or existing dataset. deeplake.empty Creates an empty dataset deeplake.like Creates a new dataset by copying the source dataset's structure to a new location. deeplake.ingest Ingests a dataset from a source and stores it as a structured dataset to destination. deeplake.ingest_kaggle Download and ingest a kaggle dataset and store it as a structured dataset to destination. deeplake.ingest_dataframe Convert pandas dataframe to a Deep Lake Dataset. deeplake.ingest_huggingface Converts Hugging Face datasets to Deep Lake format.,activeloop Loading Datasets¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#loading-datasets,deeplake.load Loads an existing dataset,activeloop Copying Datasets¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#copying-datasets,deeplake.copy Copies dataset at src to dest. deeplake.deepcopy Copies dataset at src to dest including version control history.,activeloop Dataset Operations¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#dataset-operations,Dataset.summary Prints a summary of the dataset. Dataset.append Append samples to mutliple tensors at once. 
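Dataset.append can write one row across several tensors at once, as described above; a minimal sketch with hypothetical tensor names and path:
import numpy as np
import deeplake

ds = deeplake.empty('./ops_demo', overwrite=True)       # hypothetical local path
ds.create_tensor('images', htype='image', sample_compression='png')
ds.create_tensor('labels', htype='class_label')

# One row, written to several tensors at once
ds.append({'images': np.zeros((32, 32, 3), dtype=np.uint8), 'labels': 0})

ds.summary()    # per-tensor htypes, shapes and sample counts
ds.flush()      # persist cached writes (see Dataset.flush above)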
Dataset.extend Appends multiple rows of samples to mutliple tensors at once. Dataset.query Returns a sliced Dataset with given query results. Dataset.copy Copies this dataset or dataset view to dest. Dataset.delete Deletes the entire dataset from the cache layers (if any) and the underlying storage. Dataset.rename Renames the dataset to path. Dataset.visualize Visualizes the dataset in the Jupyter notebook. Dataset.pop Removes a sample from all the tensors of the dataset. Dataset.rechunk Rewrites the underlying chunks to make their sizes optimal. Dataset.flush Necessary operation after writes if caches are being used. Dataset.clear_cache Flushes (see Dataset.flush()) the contents of the cache layers (if any) and then deletes contents of all the layers of it. Dataset.size_approx Estimates the size in bytes of the dataset.,activeloop Dataset Credentials¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#dataset-credentials,Dataset.add_creds_key Adds a new creds key to the dataset. Dataset.populate_creds Populates the creds key added in add_creds_key with the given creds. Dataset.update_creds_key Replaces the old creds key with the new creds key. Dataset.change_creds_management Changes the management status of the creds key. Dataset.get_creds_keys Returns the list of creds keys added to the dataset.,activeloop Dataset Version Control¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#dataset-version-control,Dataset.commit Stores a snapshot of the current state of the dataset. Dataset.diff Returns/displays the differences between commits/branches. Dataset.checkout Checks out to a specific commit_id or branch. Dataset.merge Merges the target_id into the current dataset. Dataset.log Displays the details of all the past commits. Dataset.reset Resets the uncommitted changes present in the branch. Dataset.get_commit_details Get details of a particular commit. Dataset.commit_id The lasted committed commit id of the dataset. Dataset.branch The current branch of the dataset Dataset.pending_commit_id The commit_id of the next commit that will be made to the dataset. Dataset.has_head_changes Returns True if currently at head node and uncommitted changes are present. Dataset.commits Lists all the commits leading to the current dataset state. Dataset.branches Lists all the branches of the dataset.,activeloop Dataset Visualization¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#dataset-visualization,Dataset.visualize Visualizes the dataset in the Jupyter notebook.,activeloop Dataset Properties¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#dataset-properties,"Dataset.tensors All tensors belonging to this group, including those within sub groups. Dataset.groups All sub groups in this group Dataset.num_samples Returns the length of the smallest tensor. Dataset.read_only Returns True if dataset is in read-only mode and False otherwise. Dataset.info Returns the information about the dataset.",activeloop Dataset Views¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#dataset-views,"A dataset view is a subset of a dataset that points to specific samples (indices) in an existing dataset. Dataset views can be created by indexing a dataset, filtering a dataset with Dataset.filter() or querying a dataset with Dataset.query(). Filtering is done with user-defined functions or simplified expressions whereas query can perform SQL-like queries with our Tensor Query Language. 
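A hedged sketch contrasting filter and query on the public mnist-train dataset used in the example above; the TQL string follows the documented 'select * where …' pattern but is an assumption, and query availability depends on your plan as noted earlier in this reference:
import deeplake

ds = deeplake.load('hub://activeloop/mnist-train', read_only=True)

# Simplified-expression filter, evaluated sample by sample on the client
zeros_filtered = ds.filter('labels == 0')

# SQL-like query through the Tensor Query Language; returns a sliced dataset view.
# (TQL string assumed from the 'select * where ...' pattern; query availability
# depends on your plan, per the Community-plan note earlier in this reference.)
zeros_queried = ds.query('select * where labels == 0')

print(len(zeros_filtered), len(zeros_queried))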
See the full TQL spec here. Dataset views can only be saved when a dataset has been committed and has no changes on the HEAD node, in order to preserve data lineage and prevent the underlying data from changing after the query or filter conditions have been evaluated. Dataset.query Returns a sliced Dataset with given query results. Dataset.filter Filters the dataset in accordance with the filter function f(x: sample) -> bool Dataset.save_view Saves a dataset view as a virtual dataset (VDS) Dataset.get_view Returns the dataset view corresponding to id. Dataset.load_view Loads the view and returns the Dataset by id. Dataset.delete_view Deletes the view with given view id. Dataset.get_views Returns list of views stored in this Dataset. Dataset.is_view Returns True if this dataset is a view and False otherwise.",activeloop Deleting and Renaming Datasets¶,https://docs.deeplake.ai/en/v3.0.16/Datasets.html#deleting-and-renaming-datasets,deeplake.delete Deletes a dataset at a given path. deeplake.rename Renames dataset at old_path to new_path.,activeloop Tensors¶,https://docs.deeplake.ai/en/v3.0.16/Tensors.html#tensors,Creating Tensors¶ Dataset.create_tensor Creates a new tensor in the dataset. Dataset.create_group Creates a tensor group. Dataset.create_tensor_like Copies the source tensor's meta information and creates a new tensor with it. Deleting and Renaming Tensors¶ Dataset.delete_tensor Delete a tensor from the dataset. Dataset.delete_group Delete a tensor group from the dataset. Dataset.rename_tensor Renames tensor with name name to new_name Dataset.rename_group Renames group with name name to new_name Adding and deleting samples¶ Tensor.append Appends a single sample to the end of the tensor. Tensor.extend Extends the end of the tensor by appending multiple elements from a sequence. Tensor.pop Removes an element at the given index. Tensor.clear Deletes all samples from the tensor. Tensor.__setitem__ Update samples with new values. Retrieving samples¶ Tensor.numpy Computes the contents of the tensor in numpy format. Tensor.data Returns data in the tensor in a format based on the tensor's base htype. Tensor.tobytes Returns the bytes of the tensor. Tensor.text Return text data. Tensor.dict Return json data. Tensor.list Return list data. Tensor._linked_sample Returns the linked sample at the given index. Tensor Properties¶ Tensor.htype Htype of the tensor. Tensor.base_htype Base htype of the tensor. Tensor.dtype Dtype of the tensor. Tensor.shape Get the shape of this tensor. Tensor.shape_interval Returns a ShapeInterval object that describes this tensor's shape more accurately. Tensor.ndim Number of dimensions of the tensor. Tensor.num_samples Returns the length of the primary axis of the tensor. Tensor.__len__ Returns the length of the primary axis of the tensor. Tensor.is_dynamic Will return True if samples in this tensor have shapes that are unequal. Tensor.is_sequence Whether this tensor is a sequence tensor. Tensor.is_link Whether this tensor is a link tensor. Tensor.verify Whether linked data will be verified when samples are added. Info¶ Tensor.info Returns the information about the tensor. Tensor.sample_info Returns info about particular samples in a tensor. Video features¶ Tensor.play Play video sample. Tensor.timestamps Returns timestamps (in seconds) for video sample as numpy array.,activeloop Creating Tensors¶,https://docs.deeplake.ai/en/v3.0.16/Tensors.html#creating-tensors,Dataset.create_tensor Creates a new tensor in the dataset. 
Dataset.create_group Creates a tensor group. Dataset.create_tensor_like Copies the source tensor's meta information and creates a new tensor with it.,activeloop Adding and deleting samples¶,https://docs.deeplake.ai/en/v3.0.16/Tensors.html#adding-and-deleting-samples,Tensor.append Appends a single sample to the end of the tensor. Tensor.extend Extends the end of the tensor by appending multiple elements from a sequence. Tensor.pop Removes an element at the given index. Tensor.clear Deletes all samples from the tensor Tensor.__setitem__ Update samples with new values.,activeloop Retrieving samples¶,https://docs.deeplake.ai/en/v3.0.16/Tensors.html#retrieving-samples,Tensor.numpy Computes the contents of the tensor in numpy format. Tensor.data Returns data in the tensor in a format based on the tensor's base htype. Tensor.tobytes Returns the bytes of the tensor. Tensor.text Return text data. Tensor.dict Return json data. Tensor.list Return list data. Tensor._linked_sample Returns the linked sample at the given index.,activeloop Deleting and Renaming Tensors¶,https://docs.deeplake.ai/en/v3.0.16/Tensors.html#deleting-and-renaming-tensors,Dataset.delete_tensor Delete a tensor from the dataset. Dataset.delete_group Delete a tensor group from the dataset. Dataset.rename_tensor Renames tensor with name name to new_name Dataset.rename_group Renames group with name name to new_name,activeloop Tensor Properties¶,https://docs.deeplake.ai/en/v3.0.16/Tensors.html#tensor-properties,Tensor.htype Htype of the tensor. Tensor.base_htype Base htype of the tensor. Tensor.dtype Dtype of the tensor. Tensor.shape Get the shape of this tensor. Tensor.shape_interval Returns a ShapeInterval object that describes this tensor's shape more accurately. Tensor.ndim Number of dimensions of the tensor. Tensor.num_samples Returns the length of the primary axis of the tensor. Tensor.__len__ Returns the length of the primary axis of the tensor. Tensor.is_dynamic Will return True if samples in this tensor have shapes that are unequal. Tensor.is_sequence Whether this tensor is a sequence tensor. Tensor.is_link Whether this tensor is a link tensor. Tensor.verify Whether linked data will be verified when samples are added.,activeloop Video features¶,https://docs.deeplake.ai/en/v3.0.16/Tensors.html#video-features,Tensor.play Play video sample. Tensor.timestamps Returns timestamps (in seconds) for video sample as numpy array.,activeloop Info¶,https://docs.deeplake.ai/en/v3.0.16/Tensors.html#info,Tensor.info Returns the information about the tensor. Tensor.sample_info Returns info about particular samples in a tensor.,activeloop Compressions¶,https://docs.deeplake.ai/en/v3.0.16/Compressions.html#compressions,"Deep Lake can read, compress, decompress and recompress data to different formats. The supported htype-compression configurations are given below. Sample Type Htype Compressions Image image bmp, dib, gif, ico, jpeg, jpeg2000, pcx, png, ppm, sgi, tga, tiff, webp, wmf, xbm, eps, fli, im, msp, mpo, apng Video video mp4, mkv, avi Audio audio flac, mp3, wav Dicom dicom dcm Point Cloud point_cloud las Other bbox, text, list, json, generic, etc. lz4 Sample Compression¶ If sample compression is specified when creating tensors, samples will be compressed to the given format if possible. If given data is already compressed and matches the provided sample_compression, it will be stored as is. If left as None, given samples are uncompressed. 
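To connect the tensor APIs above with sample compression, here is a brief, hedged sketch (the local dataset path and image file are placeholders):
>>> import deeplake
>>> ds = deeplake.empty('./compression_demo')  # placeholder local path
>>> ds.create_tensor('images', htype='image', sample_compression='jpg')
>>> # deeplake.read() keeps the original jpeg bytes, so the compressed sample is stored as-is
>>> ds.images.append(deeplake.read('./cat.jpg'))
>>> ds.images[0].numpy()  # decompressed into an array on read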
Note For audio and video, we don't support compressing raw frames but only reading compressed audio and video data. Examples: >>> ds.create_tensor('images', htype='image', sample_compression='jpg') >>> ds.create_tensor('videos', htype='video', sample_compression='mp4') >>> ds.create_tensor('point_clouds', htype='point_cloud', sample_compression='las') Structure of sample-wise compressed tensor.¶ Chunk Compression¶ If chunk compression is specified when creating tensors, added samples will be clubbed together and compressed to the given format chunk-wise. If given data is already compressed, it will be uncompressed and then recompressed chunk-wise. Note Chunk-wise compression is not supported for audio, video and point_cloud htypes. Examples: >>> ds.create_tensor('images', htype='image', chunk_compression='jpg') >>> ds.create_tensor('boxes', htype='bbox', chunk_compression='lz4') Structure of chunk-wise compressed tensor.¶ Note See deeplake.read() to learn how to read data from files and populate these tensors.",activeloop Sample Compression¶,https://docs.deeplake.ai/en/v3.0.16/Compressions.html#sample-compression,"If sample compression is specified when creating tensors, samples will be compressed to the given format if possible. If given data is already compressed and matches the provided sample_compression, it will be stored as is. If left as None, given samples are uncompressed. Note For audio and video, we don't support compressing raw frames but only reading compressed audio and video data. Examples: Structure of sample-wise compressed tensor.¶",activeloop Chunk Compression¶,https://docs.deeplake.ai/en/v3.0.16/Compressions.html#chunk-compression,"If chunk compression is specified when creating tensors, added samples will be clubbed together and compressed to the given format chunk-wise. If given data is already compressed, it will be uncompressed and then recompressed chunk-wise. Note Chunk-wise compression is not supported for audio, video and point_cloud htypes. Examples: Structure of chunk-wise compressed tensor.¶ Note See deeplake.read() to learn how to read data from files and populate these tensors.",activeloop PyTorch and Tensorflow Support¶,https://docs.deeplake.ai/en/v3.0.16/Pytorch-and-Tensorflow-Support.html#pytorch-and-tensorflow-support,Deep Lake datasets can be easily converted to Torch dataloaders or Tensorflow datasets for training. Dataset.pytorch Converts the dataset into a pytorch Dataloader. 
Dataset.tensorflow Converts the dataset into a tensorflow compatible format.,activeloop deeplake.client.log¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.client.log.html#deeplake-client-log,Deep Lake does logging using the 'deeplake' logger. Logging level is logging.INFO by default. See example on how to change this.,activeloop 
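The example referenced above was not captured on this page; as a minimal sketch, the 'deeplake' logger level can be adjusted with the standard logging module, and a dataset can be wrapped into a PyTorch dataloader via Dataset.pytorch (the batch_size, shuffle, and num_workers keywords below follow the usual dataloader-style arguments and are assumptions for illustration):
import logging
import deeplake

# Quiet INFO-level messages from the 'deeplake' logger
logging.getLogger('deeplake').setLevel(logging.WARNING)

ds = deeplake.load('hub://activeloop/mnist-train')
dataloader = ds.pytorch(batch_size=16, shuffle=True, num_workers=0)
for batch in dataloader:
    # each batch contains the dataset's tensors (e.g. images and labels)
    break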
How to Install Deep Lake and Get Started,https://docs.activeloop.ai/getting-started/vector-store/step-1-hello-world#how-to-install-deep-lake-and-get-started,"Deep Lake can be installed through pip. By default, Deep Lake does not install dependencies for audio, video, google-cloud, and other features. Details on all installation options are available here. !
pip install deeplake",activeloop How to Create a Deep Lake Vector Store,https://docs.activeloop.ai/getting-started/vector-store/step-2-creating-deep-lake-vector-stores#how-to-create-a-deep-lake-vector-store,Let's create a Vector Store in LangChain for storing and searching information about the Twitter OSS recommendation algorithm.,activeloop Downloading and Preprocessing the Data,https://docs.activeloop.ai/getting-started/vector-store/step-2-creating-deep-lake-vector-stores#downloading-and-preprocessing-the-data,"First, let's import necessary packages and make sure the Activeloop and OpenAI keys are in the environmental variables ACTIVELOOP_TOKEN, OPENAI_API_KEY.from deeplake.core.vectorstore import VectorStoreimport openaiimport osNext, let's clone the Twitter OSS recommendation algorithm and define paths for source data and the Vector Store.!git clone https://github.com/twitter/the-algorithmvector_store_path = '/vector_store_getting_started'repo_path = '/the-algorithm'Next, let's load all the files from the repo into a list of data that will be added to the Vector Store (chunked_text and metadata). We use simple text chunking based on a constant number of characters. CHUNK_SIZE = 1000chunked_text = []metadata = []for dirpath, dirnames, filenames in os.walk(repo_path): for file in filenames: try: full_path = os.path.join(dirpath,file) with open(full_path, 'r') as f: text = f.read() new_chunked_text = [text[i:i+CHUNK_SIZE] for i in range(0,len(text), CHUNK_SIZE)] chunked_text += new_chunked_text metadata += [{'filepath': full_path} for i in range(len(new_chunked_text))] except Exception as e: print(e) passNext, let's define an embedding function using OpenAI. It must work for a single string and a list of strings, so that it can both be used to embed a prompt and a batch of texts. def embedding_function(texts, model='text-embedding-ada-002'): if isinstance(texts, str): texts = [texts] texts = [t.replace('\n', ' ') for t in texts] return [data['embedding']for data in openai.Embedding.create(input = texts, model=model)['data']]Finally, let's create the Deep Lake Vector Store and populate it with data. We use a default tensor configuration, which creates tensors with text (str), metadata (json), id (str, auto-populated), embedding (float32). Learn more about tensor customizability here. 
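For illustration only, a customized layout might look roughly like the sketch below; the tensor_params argument and the extra 'year' tensor are assumptions used to show the idea, and the rest of this walkthrough keeps the defaults.
# Hypothetical custom layout (not used in the steps that follow)
custom_store = VectorStore(
    path = '/vector_store_custom',   # placeholder path
    tensor_params = [
        {'name': 'text', 'htype': 'text'},
        {'name': 'embedding', 'htype': 'embedding'},
        {'name': 'metadata', 'htype': 'json'},
        {'name': 'year', 'htype': 'text'},   # an additional, user-defined tensor
    ],
)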
vector_store = VectorStore( path = vector_store_path,)vector_store.add(text = chunked_text, embedding_function = embedding_function, embedding_data = chunked_text, metadata = metadata)The Vector Store's data structure can be summarized using vector_store.summary(), which shows 4 tensors with 21055 samples: tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (21055, 1536) float32 None id text (21055, 1) str None metadata json (21055, 1) str None text text (21055, 1) str None To create a vector store using pre-compute embeddings, instead of embedding_data and embedding_function, you may run:# vector_store.add(text = chunked_text, # embedding = <list_of_embeddings>, # metadata = [{'source': source_text}]*len(chunked_text))",activeloop How to Search the Deep Lake Vector Store,https://docs.activeloop.ai/getting-started/vector-store/step-3-performing-search-in-the-vector-store#how-to-search-the-deep-lake-vector-store,Deep Lake offers highly-flexible vector search and hybrid search options discussed in detail in this tutorial.,activeloop Performing Vector Search,https://docs.activeloop.ai/getting-started/vector-store/step-3-performing-search-in-the-vector-store#performing-vector-search,"First, let's show a simple example of vector search using default options, which performs simple cosine similarity search in Python on the client (your machine). prompt = 'What do trust and safety models do?'search_results = vector_store.search(embedding_data=prompt, embedding_function=embedding_function)The search_results is a dictionary with keys for the text, score, id, and metadata, with data ordered by score. By default, the search returns the top 4 results which can be verified using: len(search_results['text']) # Returns 4If we examine the first returned text, it appears to contain the text about trust and safety models that is relevant to the prompt.search_results['text'][0]Returns:Trust and Safety Models=======================We decided to open source the training code of the following models:- pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content.- pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics.- pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service.- pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior.We have several more models and rules that we are not going to open source at this time because of the adversarial nature of this area. 
The team is considering open sourcing more models going forward and will keep the community posted accordingly.We can also retrieve the corresponding filename from the metadata, which shows the top result came from the README.search_results['metadata'][0]# Returns: {'filepath': '/the-algorithm/trust_and_safety_models/README.md'}",activeloop Customization of Vector Search,https://docs.activeloop.ai/getting-started/vector-store/step-3-performing-search-in-the-vector-store#customization-of-vector-search,"You can customize your vector search with simple parameters, such as selecting the distance_metric and top k results:search_results = vector_store.search(embedding_data=prompt, embedding_function=embedding_function, k=10, distance_metric='l2')The search now returns 10 search results:len(search_results['text']) # Returns: 10The first search result with the L2 distance metric returns the same text as the previous cosine similarity search:search_results['text'][0]Returns:Trust and Safety Models=======================We decided to open source the training code of the following models:- pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content.- pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics.- pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service.- pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior.We have several more models and rules that we are not going to open source at this time because of the adversarial nature of this area. The team is considering open sourcing more models going forward and will keep the community posted accordingly.",activeloop Full Customization of Vector Search,https://docs.activeloop.ai/getting-started/vector-store/step-3-performing-search-in-the-vector-store#full-customization-of-vector-search,"Deep Lake's Compute Engine can be used to rapidly execute a variety of different search logic. It is available with !pip install 'deeplake[enterprise]' (Make sure to restart your kernel after installation), and it is only available for data stored in or connected to Deep Lake. Let's load a representative Vector Store that is already stored in the Deep Lake Tensor Database. If data is not being written, it is advisable to use read_only = True.vector_store = VectorStore( path = 'hub://activeloop/twitter-algorithm', read_only=True)The query should be constructed using the Tensor Query Language (TQL) syntax.prompt = 'What do trust and safety models do?'embedding = embedding_function(prompt)[0]# Format the embedding array or list as a string, so it can be passed into the TQL query.embedding_string = ','.join([str(item) for item in embedding])tql_query = f'select * from (select text, cosine_similarity(embedding, ARRAY[{embedding_string}]) as score) order by score desc limit 5'Let's run the query, noting that the query execution happens in the Managed Tensor Database, and not on the client.search_results = vector_store.search(query=tql_query)If we examine the first returned text, it appears to contain the same text about trust and safety models that is relevant to the prompt.search_results['text'][0]Returns:Trust and Safety Models=======================We decided to open source the training code of the following models:- pNSFWMedia: Model to detect tweets with NSFW images. 
This includes adult and porn content.- pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics.- pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service.- pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior.We have several more models and rules that we are not going to open source at this time because of the adversarial nature of this area. The team is considering open sourcing more models going forward and will keep the community posted accordingly.We can also retrieve the corresponding filename from the metadata, which shows the top result came from the README.print(search_results['metadata'][0])# Returns {'filepath': '/Users/istranic/ActiveloopCode/the-algorithm/trust_and_safety_models/README.md', 'extension': '.md'}",activeloop Search Options for Deep Lake Vector Stores in the Deep Lake API,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/deep-lake-vector-store-api#search-options-for-deep-lake-vector-stores-in-the-deep-lake-api,This tutorial requires installation of:!pip3 install 'deeplake[enterprise]' langchain openai tiktoken,activeloop Vector Search on the Client,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/deep-lake-vector-store-api#vector-search-on-the-client,"Let's load the same vector store used in the Quickstart and run embeddings search based on a user prompt using the Deep Lake Vector Store module. (Note: the DeepLakeVectorStore class is deprecated, but you can still use it. The new API for calling Deep Lake's Vector Store is: VectorStore)from deeplake.core.vectorstore import VectorStoreimport openaiimport osos.environ['OPENAI_API_KEY'] = <OPENAI_API_KEY>vector_store_path = 'hub://activeloop/paul_graham_essay'vector_store = VectorStore( path = vector_store_path, read_only = True)Next, let's define an embedding function using OpenAI. It must work for a single string and a list of strings so that it can be used to embed a prompt and a batch of texts. def embedding_function(texts, model = 'text-embedding-ada-002'): if isinstance(texts, str): texts = [texts] texts = [t.replace('\n', ' ') for t in texts] return [data['embedding']for data in openai.Embedding.create(input = texts, model=model)['data']]",activeloop Simple Vector Search,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/deep-lake-vector-store-api#simple-vector-search,"Let's run a simple vector search using default options, which performs a simple cosine similarity search in Python on the client. prompt = 'What are the first programs he tried writing?'search_results = vector_store.search(embedding_data=prompt, embedding_function=embedding_function)The search_results is a dictionary with keys for the text, score, id, and metadata, with data ordered by score. By default, it returns 4 samples ordered by similarity score, and if we examine the first returned text, it appears to contain text that is relevant to the prompt.search_results['text'][0]Returns:What I Worked OnFebruary 2021Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. 
They had hardly any plot, just characters with strong feelings, which I imagined made them deep.The first programs I tried writing were on the IBM 1401 that our school district used for what was then called 'data processing.' This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.",activeloop Hybrid Search Using UDFs,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/deep-lake-vector-store-api#hybrid-search-using-udfs,"Vector search can be combined with other search logic for performing more advanced queries. Let's define a function compatible with deeplake.filter for filtering data before the vector search. The function below will filter samples that contain the word 'program' in the text tensor.def filter_fn(x): # x is a single row in Deep Lake, 'text' is the tensor name, .data()['value'] is the method for fetching the data return 'program' in x['text'].data()['value'].lower()Let's run the vector search with the filter above, returning more samples (k = 10) and performing similarity search using the L2 metric (distance_metric = 'l2'):prompt = 'What are the first programs he tried writing?'search_results_filter = vector_store.search(embedding_data = prompt, embedding_function = embedding_function, filter = filter_fn, k = 10, distance_metric = 'l2', exec_option = 'python')We can verify that the word 'program' is present in all of the results:all(['program' in result for result in search_results_filter['text']])# Returns TrueUDFs are only supported with query execution using the Python engine, so in the search above, exec_option = 'python' should be specified.",activeloop Hybrid Search Using Metadata Filters,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/deep-lake-vector-store-api#hybrid-search-using-metadata-filters,"Instead of using UDFs, a filter can be specified using dictionary syntax. For json tensors, the syntax is filter = {'tensor_name': {'key': 'value'}}. For text tensors, it is filter = {'tensor': 'value'}. In all cases, an exact match is performed.search_results_filter = vector_store.search(embedding_data = prompt, embedding_function = embedding_function, filter = {'metadata': {'source': 'paul_graham_essay.txt'}})",activeloop Hybrid Search using TQL Vector Search,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/deep-lake-vector-store-api#hybrid-search-using-tql-vector-search,"Deep Lake offers advanced search that executes queries with higher performance in C++, and offers querying using Deep Lake's Tensor Query Language (TQL). In order to use Compute Engine, Deep Lake data must be stored in Deep Lake Storage, or in the user's cloud while being connected to Deep Lake using Managed Credentials. Let's load a larger Vector Store for running more interesting queries:vector_store_path = 'hub://activeloop/twitter-algorithm'vector_store = VectorStore( path = vector_store_path, read_only = True)NOTE: this Vector Store is stored in us-east, and query performance may vary significantly depending on your location. In real-world use-cases, users would store their Vector Stores in regions optimized for their use case.Now let's run a search that includes filtering of text, metadata, and embedding tensors. 
We do this using TQL by combining embedding search syntax (cosine_similarity(embedding, ...)) and filtering syntax (where ...). We are interested in answering a prompt based on the question:prompt = 'What does the python code do?'Therefore, we apply a filter to only search for text that contains the word 'python' and metadata where the source key contains '.py'.embedding = embedding_function(prompt)[0]# Format the embedding array or list as a string, so it can be passed into the TQL query.embedding_string = ','.join([str(item) for item in embedding])tql_query = f"select * from (select text, metadata, cosine_similarity(embedding, ARRAY[{embedding_string}]) as score where contains(text, 'python') or contains(metadata['source'], '.py')) order by score desc limit 5"search_results = vector_store.search(query = tql_query)",activeloop Vector Search Using the Managed Tensor Database (Server-Side),https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/deep-lake-vector-store-api#vector-search-using-the-managed-tensor-database-server-side,"For Vector Stores in the Managed Tensor Database, queries will automatically execute on the database (instead of the client). Vector Stores are created in the Managed Tensor Database by specifying vector_store_path = hub://org_id/dataset_name and runtime = {'tensor_db': True} during Vector Store creation.# vector_store = VectorStore(# path = 'hub://<org_id>/<dataset_name>',# runtime = {'tensor_db': True}# )search_results = vector_store.search(embedding_data=prompt, embedding_function=embedding_function)If Vector Stores are not in the Managed Tensor Database, they can be migrated using these steps:",activeloop How to Run Vector Search in the Deep Lake Tensor Database using the REST API,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/rest-api#how-to-run-vector-search-in-the-deep-lake-tensor-database-using-the-rest-api,"The REST API is currently in Alpha, and the syntax may change without announcement.To use the REST API, Deep Lake data must be stored in the Managed Tensor Database by specifying the deeplake_path = hub://org_id/dataset_name and runtime = {'tensor_db': True}. Full details on path and storage management are available here.",activeloop Performing Vector Search Using the REST API,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/rest-api#performing-vector-search-using-the-rest-api,"Let's query this Vector Store stored in the Managed Tensor Database using the REST API. The steps are:1.Define the authentication tokens and search terms2.Embed the search term using OpenAI3.Reformat the embedding to an embedding_search string that can be passed to the REST API request.4.Create the query string using Deep Lake TQL. The dataset_path and embedding_search are a part of the query string. 
5.Submit the request and print the response dataimport requestsimport openaiimport os# Tokens should be set in environmental variables.ACTIVELOOP_TOKEN = os.environ['ACTIVELOOP_TOKEN']DATASET_PATH = 'hub://activeloop/twitter-algorithm'ENDPOINT_URL = 'https://app.activeloop.ai/api/query/v1'SEARCH_TERM = 'What do the trust and safety models do?'# The OpenAI token (OPENAI_API_KEY) should also exist in the env variables# The headers contains the user tokenheaders = { 'Authorization': f'Bearer {ACTIVELOOP_TOKEN}',}# Embed the search termembedding = openai.Embedding.create(input=SEARCH_TERM, model='text-embedding-ada-002')['data'][0]['embedding']# Format the embedding array or list as a string, so it can be passed in the REST API request.embedding_string = ','.join([str(item) for item in embedding])# Create the query using TQLquery = f'select * from (select text, cosine_similarity(embedding, ARRAY[{embedding_string}]) as score from \'{DATASET_PATH}\') order by score desc limit 5' # Submit the request response = requests.post(ENDPOINT_URL, json={'query': query}, headers=headers)data = response.json()print(data)const axios = require('axios');OPENAI_API_KEY = process.env.OPENAI_API_KEY;ACTIVELOOP_TOKEN = process.env.ACTIVELOOP_TOKEN;const QUERY = 'What do the trust and safety models do?';const DATASET_PATH = 'hub://activeloop/twitter-algorithm';const ENDPOINT_URL = 'https://app.activeloop.ai/api/query/v1';// Function to get the embeddings of a text from Open AI APIasync function getEmbedding(text) { const response = await axios.post('https://api.openai.com/v1/embeddings', { input: text, model: 'text-embedding-ada-002' }, { headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${OPENAI_API_KEY}` } }); return response.data;}// Function to search the dataset using the given query on Activeloopasync function searchDataset(query) { const response = await axios.post(ENDPOINT_URL, { query: query, }, { headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${ACTIVELOOP_TOKEN}` } }); return response.data;}// Main function to search for similar texts in the dataset based on the query_termasync function searchSimilarTexts(query, dataset_path) { // Get the embedding of the query_term const embedding = await getEmbedding(query); const embedding_search = embedding.data[0].embedding.join(','); // Construct the search query const TQL = `SELECT * FROM ( SELECT text, l2_norm(embedding - ARRAY[${embedding_search}]) AS score from '${dataset_path}' ) ORDER BY score ASC LIMIT 5`; // Search the dataset using the constructed query const response = await searchDataset(TQL); // Log the search results console.log(response);}searchSimilarTexts(QUERY, DATASET_PATH)Congrats! You performed a vector search using the Deep Lake Managed Database! 🎉",activeloop How to Execute Vector Search Using Deep Lake in LangChain,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/langchain-api#how-to-execute-vector-search-using-deep-lake-in-langchain,"This tutorial requires installation of:!pip3 install langchain deeplake openai tiktokenLet's load the same vector store used in the Quickstart and run embeddings search based on a user prompt using the LangChain API. 
from langchain.vectorstores import DeepLakefrom langchain.chains import RetrievalQAfrom langchain.llms import OpenAIChatfrom langchain.embeddings.openai import OpenAIEmbeddingsimport osos.environ['OPENAI_API_KEY'] = <OPENAI_API_KEY>vector_store_path = 'hub://activeloop/paul_graham_essay'embedding_function = OpenAIEmbeddings(model = 'text-embedding-ada-002')# Re-load the vector storedb = DeepLake(dataset_path = vector_store_path, embedding = embedding_function, read_only = True)qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model = 'gpt-3.5-turbo'), chain_type = 'stuff', retriever = db.as_retriever())",activeloop Vector Similarity Search,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/langchain-api#vector-similarity-search,"Let's run a similarity search on Paul Graham's essay based on a query we want to answer. The query is embedded and a similarity search is performed against the stored embeddings, with execution taking place on the client.prompt = 'What are the first programs he tried writing?'query_docs = db.similarity_search(query = prompt)If we print the first document using query_docs[0].page_content, it appears to be relevant to the query:What I Worked OnFebruary 2021Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.The first programs I tried writing were on the IBM 1401 that our school district used for what was then called 'data processing.' This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.",activeloop Vector Search Using the Managed Tensor Database,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/langchain-api#vector-search-using-the-managed-tensor-database,"For Vector Stores in the Managed Tensor Database, queries will automatically execute on the database (instead of the client). Vector Stores are created in the Managed Tensor Database by specifying vector_store_path = hub://org_id/dataset_name and runtime = {'tensor_db': True} during Vector Store creation.# db = DeepLake(dataset_path = 'hub://<org_id>/<dataset_name>', # runtime = {'tensor_db': True},# embedding = embedding_function# )If Vector Stores are not in the Managed Tensor Database, they can be migrated using these steps:",activeloop Vector Search in an LLM Context,https://docs.activeloop.ai/tutorials/vector-store/vector-search-options/langchain-api#vector-search-in-an-llm-context,"We can directly use LangChain to run a Q&A using an LLM and answer the question about Paul Graham's essay. 
Internally, this API performs an embedding search to find the most relevant data and feeds it into the LLM context.qa = RetrievalQA.from_chain_type(llm = OpenAIChat(model = 'gpt-3.5-turbo'), chain_type = 'stuff', retriever = db.as_retriever())qa.run(prompt)",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/data-processing-using-parallel-computing,Step 8 in the Getting Started Guide highlights how deeplake.compute can be used to rapidly upload datasets. This tutorial expands further and highlights the power of parallel computing for dataset processing.,activeloop Transformations on New Datasets,https://docs.activeloop.ai/tutorials/deep-learning/data-processing-using-parallel-computing#transformations-on-new-datasets,"Computer vision applications often require users to process and transform their data. For example, you may perform perspective transforms, resize images, adjust their coloring, or many others. In this example, a flipped version of the MNIST dataset is created, which may be useful for training a model that identifies text in scenes where the camera orientation is unknown. First, let's define a function that will flip the dataset images.import deeplakefrom PIL import Imageimport numpy as np@deeplake.computedef flip_vertical(sample_in, sample_out): ## First two arguments are always default arguments containing: # 1st argument is an element of the input iterable (list, dataset, array,...) # 2nd argument is a dataset sample # Append the label and image to the output sample sample_out.append({'labels': sample_in.labels.numpy(), 'images': np.flip(sample_in.images.numpy(), axis = 0)}) return sample_outNext, the existing MNIST dataset is loaded, and deeplake.like is used to create an empty dataset with the same tensor structure.ds_mnist = deeplake.load('hub://activeloop/mnist-train')# We use overwrite=True to make this code re-runnableds_mnist_flipped = deeplake.like('./mnist_flipped', ds_mnist, overwrite = True)Finally, the flipping operation is evaluated for the first 100 elements in the input dataset ds_mnist, and the result is automatically stored in ds_mnist_flipped.flip_vertical().eval(ds_mnist[0:100], ds_mnist_flipped, num_workers = 2)Let's check out the flipped images:Image.fromarray(ds_mnist.images[0].numpy())Image.fromarray(ds_mnist_flipped.images[0].numpy())",activeloop Dataset Processing Pipelines,https://docs.activeloop.ai/tutorials/deep-learning/data-processing-using-parallel-computing#dataset-processing-pipelines,"In order to modularize your dataset processing, it is helpful to create functions for specific data processing tasks and combine them in pipelines. In this example, you can create a pipeline using the flip_vertical function from the first example and the resize function below.@deeplake.computedef resize(sample_in, sample_out, new_size): ## First two arguments are always default arguments containing: # 1st argument is an element of the input iterable (list, dataset, array,...) # 2nd argument is a dataset sample ## Third argument is the required size for the output images # Append the label and image to the output sample sample_out.labels.append(sample_in.labels.numpy()) sample_out.images.append(np.array(Image.fromarray(sample_in.images.numpy()).resize(new_size))) return sample_outFunctions decorated using deeplake.compute can be combined into pipelines using deeplake.compose. 
Required arguments for the functions must be passed into the pipeline in this step:pipeline = deeplake.compose([flip_vertical(), resize(new_size = (64,64))])Just like for the single-function example above, the input and output datasets are created first, and the pipeline is evaluated for the first 100 elements in the input dataset ds_mnist. The result is automatically stored in ds_mnist_pipe.# We use overwrite=True to make this code re-runnableds_mnist_pipe = deeplake.like('./mnist_pipeline', ds_mnist, overwrite = True)pipeline.eval(ds_mnist[0:100], ds_mnist_pipe, num_workers = 2)",activeloop Transformations on Existing Datasets,https://docs.activeloop.ai/tutorials/deep-learning/data-processing-using-parallel-computing#transformations-on-existing-datasets,"In the previous example, a new dataset was created while performing a transformation. In this example, a transformation is used to modify an existing dataset. First, download and unzip the small classification dataset below called animals. Next, use deeplake.ingest_classification to automatically convert this image classification dataset into Deep Lake format and save it in ./animals_deeplake.ds = deeplake.ingest_classification('./animals', './animals_deeplake') # Creates the datasetThe first image in the dataset is a picture of a cat:Image.fromarray(ds.images[0].numpy())The images in the dataset can now be flipped by evaluating the flip_vertical() transformation function from the previous example. If a second dataset is not specified as an input to .eval(), the transformation is applied to the input dataset. flip_vertical().eval(ds, num_workers = 2)The picture of the cat is now flipped:Image.fromarray(ds.images[0].numpy())",activeloop Recovering From Errors,https://docs.activeloop.ai/tutorials/deep-learning/data-processing-using-parallel-computing#recovering-from-errors,"If an error occurs related to a specific sample_in, deeplake.compute will throw a TransformError and the error-causing index or sample can be caught using:# from deeplake.util.exceptions import TransformError# try:# compute_fn.eval(...)# except TransformError as e:# failed_idx = e.index# failed_sample = e.sampleThe traceback also typically shows information such as the filename of the data that was causing issues. Once the problematic sample has been identified, it should be removed from the list of input samples and the deeplake.compute function should be re-executed. Congrats! You just learned how to parallelize your computations using Deep Lake! 🎉",activeloop Installing Deep Lake,https://docs.activeloop.ai/getting-started/deep-learning/hello-world#installing-deep-lake,"Deep Lake can be installed through pip. By default, Deep Lake does not install dependencies for audio, video, google-cloud, and other features. Details on all installation options are available here. ! pip install deeplake",activeloop Fetching Your First Deep Lake Dataset,https://docs.activeloop.ai/getting-started/deep-learning/hello-world#fetching-your-first-deep-lake-dataset,"Let's load MNIST, the hello world dataset of machine learning. First, instantiate a Dataset by pointing to its storage location. 
Datasets hosted on Activeloop Platform are typically identified by the namespace of the organization followed by the dataset name: activeloop/mnist-train.import deeplakedataset_path = 'hub://activeloop/mnist-train'ds = deeplake.load(dataset_path) # Returns a Deep Lake Dataset but does not download data locally",activeloop Reading Samples From a Deep Lake Dataset,https://docs.activeloop.ai/getting-started/deep-learning/hello-world#reading-samples-from-a-deep-lake-dataset,"Data is not immediately read into memory because Deep Lake operates lazily. You can fetch data by calling the .numpy() method, which reads data into a NumPy array.# Indexingimg = ds.images[0].numpy() # Fetch the 1st image and return a NumPy arraylabel = ds.labels[0].numpy(aslist=True) # Fetch the 1st label and store it as a # as a list text_labels = ds.labels[0].data()['text'] # Fetch the first labels and return them as text# Slicingimgs = ds.images[0:100].numpy() # Fetch 100 images and return a NumPy array # The method above produces an exception if # the images are not all the same sizelabels = ds.labels[0:100].numpy(aslist=True) # Fetch 100 labels and store # them as a list of NumPy arraysCongratulations, you've got Deep Lake working on your local machine🤓",activeloop "Data in Deep Lake can be stored in raw uncompressed format. However, compression is highly recommended for achieving optimal performance in terms of speed and storage.",https://docs.activeloop.ai/getting-started/deep-learning/understanding-compression#data-in-deep-lake-can-be-stored-in-raw-uncompressed-format.-however-compression-is-highly-recommende,"Compression is specified separately for each tensor, and it can occur at the sample or chunk level. For example, when creating a tensor for storing images, you can choose the compression technique for the image samples using the sample_compression input:ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')In this example, every image added in subsequent .append(...) calls is compressed using the specified sample_compression method. The full list of available compressions is shown in the API Reference.",activeloop Choosing the Right Compression,https://docs.activeloop.ai/getting-started/deep-learning/understanding-compression#choosing-the-right-compression,"There is no single answer for choosing the right compression, and the tradeoffs are described in detail in the next section. However, good rules of thumb are:1.For data that has application-specific compressors (image, audio, video,...), choose the sample_compression technique that is native to the application such as jpg, mp3, mp4,...2.For other data containing large samples (i.e. large arrays with >100 values), lz4 is a generic compressor that works well in most applications.1.lz4 can be used as a sample_compression or chunk_compression . In most cases, sample_compression is sufficient, but in theory, chunk_compression produces slightly smaller data.3.For other data containing small samples (i.e. labels with <100 values), it is not necessary to use compression.",activeloop Compression Tradeoffs,https://docs.activeloop.ai/getting-started/deep-learning/understanding-compression#compression-tradeoffs,"Lossiness - Certain compression techniques are lossy, meaning that there is irreversible information loss when compressing the data. 
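For instance, jpeg is a lossy sample_compression, while png and lz4 are lossless. Below is a minimal sketch of tensor creation that follows the rules of thumb above; the tensor names and local path are illustrative and not part of the original tutorial:
import deeplake
ds = deeplake.empty('./compression_example', overwrite = True)
ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')   # lossy codec that is native to images
ds.create_tensor('embeddings', sample_compression = 'lz4')                 # generic lossless compressor for large numeric arrays
ds.create_tensor('labels', htype = 'class_label')                          # small samples, so no compression is needed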
Lossless compression is less important for data such as images and videos, but it is critical for label data such as numerical labels, binary masks, and segmentation data.Memory - Different compression techniques have substantially different memory footprints. For instance, png vs jpeg compression may result in a 10X difference in the size of a Hub dataset. Runtime - The primary variables affecting download and upload speeds for generating usable data are the network speed and available compute power for processing the data. In most cases, the network speed is the limiting factor. Therefore, the highest end-to-end throughput for non-local applications is achieved by maximizing compression and utilizing compute power to decompress/convert the data to formats that are consumed by deep learning models (i.e. arrays). Upload Considerations - When applicable, the highest uploads speeds can be achieved when the sample_compression input matches the compression of the source data, such as:# sample_compression and my_image are 'jpeg'ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')ds.images.append(deeplake.read('my_image.jpeg'))In this case, the input data is a .jpg, and the Deep Lake sample_compression is jpg. However, a mismatch between the compression of the source data and sample_compression in Deep Lake results in significantly slower upload speeds, because Deep Lake must decompress the source data and recompress it using the specified sample_compression before saving.Therefore, due to the computational costs associated with decompressing and recompressing data, it is important that you consider the runtime implications of uploading source data that is compressed differently than the specified sample_compression.",activeloop Loading Datasets,https://docs.activeloop.ai/getting-started/deep-learning/accessing-datasets#loading-datasets,"Deep Lake Datasets can be loaded from a variety of storage locations using:import deeplake# Local Filepathds = deeplake.load('./my_dataset_path') # Similar functionality to deeplake.dataset(path)# S3ds = deeplake.load('s3://my_dataset_bucket', creds={...})# Public Dataset hosted by Activeloop## Activeloop Storage - See Step 6ds = deeplake.load('hub://activeloop/public_dataset_name')# Dataset in another organization on Activeloop Platformds = deeplake.load('hub://org_name/dataset_name')Since ds = deeplake.dataset(path)can be used to both create and load datasets, you may accidentally create a new dataset if there is a typo in the path you provided while intending to load a dataset. If that occurs, simply use ds.delete() to remove the unintended dataset permanently.",activeloop Updating Data,https://docs.activeloop.ai/getting-started/deep-learning/accessing-datasets#updating-data,"Existing data in a Deep Lake dataset can be updated using:ds.images[1] = deeplake.read('https://i.postimg.cc/Yq2SNz9J/photo-1534567110243-8875d64ca8ff.jpg') # If the URI is not public, credentials should be specified using deeplake.read(URI, creds = {...})ds.labels[1] = 'giraffe' # Tensors of htype = 'class_label' can be updated with either numeric values or textImage.fromarray(ds.images[1].numpy())",activeloop Accessing Data,https://docs.activeloop.ai/getting-started/deep-learning/accessing-datasets#accessing-data,"Data within the tensors is loaded and accessed using the .numpy() , .data() , and .tobytes() commands. 
When the underlying data can be converted to a numpy array, .data() and .numpy() return equivalent objects.# Indexingimg = ds.images[0].numpy() # Fetch the 1st image and return a NumPy arraylabel = ds.labels[0].numpy(aslist=True) # Fetch the 1st label and store it as a # as a list # frame = ds.videos[0][4].numpy() # Fetch the 5th frame in the 1st video # and return a NumPy array text_labels = ds.labels[0].data()['value'] # Fetch the first labels and return them as text# Slicingimgs = ds.images[0:100].numpy() # Fetch 100 images and return a NumPy array # The method above produces an exception if # the images are not all the same sizelabels = ds.labels[0:100].numpy(aslist=True) # Fetch 100 labels and store # them as a list of NumPy arraysThe .numpy()method produces an exception if all samples in the requested tensor do not have a uniform shape. If that's the case, running .numpy(aslist=True)returns a list of NumPy arrays, where the indices of the list correspond to different samples.",activeloop Referencing Tensors,https://docs.activeloop.ai/getting-started/deep-learning/accessing-datasets#referencing-tensors,Deep Lake allows you to reference specific tensors using keys or via the '.' notation outlined below. Note: data is still not loaded by these commands.### NO HIERARCHY ###ds.images # is equivalent tods['images']ds.labels # is equivalent tods['labels']### WITH HIERARCHY ###ds.localization.boxes # is equivalent tods['localization/boxes']ds.localization.labels # is equivalent tods['localization/labels'],activeloop Register,https://docs.activeloop.ai/getting-started/deep-learning/using-activeloop-storage#register,You can store your Deep Lake Datasets with Activeloop by first creating an account in the Deep Lake App or in the CLI using:activeloop register,activeloop Login,https://docs.activeloop.ai/getting-started/deep-learning/using-activeloop-storage#login,"In order for the Python API to authenticate with your account, you can use API tokens (see below), or log in from the CLI using:!activeloop login# Alternatively, you can directly input your username and password in the same line:# activeloop login -u <your_username> -p <your_password>You can then access or create Deep Lake Datasets by passing the Deep Lake path to deeplake.dataset()import deeplakedeeplake_path = 'hub://organization_name/dataset_name' #'hub://jane_smith/my_awesome_dataset' ds = deeplake.dataset(deeplake_path)When you create an account in Deep Lake, a default organization is created that has the same name as your username. You can also create other organizations that represent companies, teams, or other collections of multiple users. 
Public datasets such as 'hub://activeloop/mnist-train' can be accessed without logging in.",activeloop API Tokens,https://docs.activeloop.ai/getting-started/deep-learning/using-activeloop-storage#api-tokens,"Once you have an Activeloop account, you can create tokens in the Deep Lake App (Organization Details -> API Tokens) and authenticate by setting the environmental variable: os.environ['ACTIVELOOP_TOKEN'] = <your_token>Or login in the CLI using the token:!activeloop login --token <your_token>If you are not logged in through the CLI, you may also pass the token to python commands that require authentication:ds = deeplake.load(deeplake_path, token = 'xyz')",activeloop How to Accelerate Deep Lake Workflows with Parallel Computing,https://docs.activeloop.ai/getting-started/deep-learning/parallel-computing#how-to-accelerate-deep-lake-workflows-with-parallel-computing,Deep Lake enables you to easily run computations in parallel and significantly accelerate your data processing workflows. This example primarily focuses on parallel dataset uploading.Parallel computing use cases such as dataset transformations can be found in this tutorial.Parallel compute using Deep Lake has two core steps: 1.Define a function or pipeline that will run in parallel and2.Evaluate the function using the appropriate inputs and outputs.,activeloop Defining the parallel computing function,https://docs.activeloop.ai/getting-started/deep-learning/parallel-computing#defining-the-parallel-computing-function,"The first step is to define a function that will run in parallel by decorating it using @deeplake.compute. In the example below, file_to_deeplake converts data from files into Deep Lake format, just like in Step 2: Creating Hub Datasets Manually. If you have not completed Step 2, please download and unzip the example image classification dataset below:animals datasetimport deeplakefrom PIL import Imageimport numpy as npimport os@deeplake.computedef file_to_deeplake(file_name, sample_out, class_names): ## First two arguments are always default arguments containing: # 1st argument is an element of the input iterable (list, dataset, array,...) # 2nd argument is a dataset sample # Other arguments are optional # Find the label number corresponding to the file label_text = os.path.basename(os.path.dirname(file_name)) label_num = class_names.index(label_text) # Append the label and image to the output sample sample_out.append({'labels': np.uint32(label_num), 'images': deeplake.read(file_name)}) return sample_outIn all functions decorated using @deeplake.compute, the first argument must be a single element of any input iterable that is being processed in parallel. In this case, that is a filename file_name, because file_to_deeplake reads image files and populates data in the dataset's tensors. The second argument is a dataset sample sample_out, which can be operated on using similar syntax to dataset objects, such as sample_out.append(...), sample_out.extend(...), etc.The function decorated using @deeplake.compute must return sample_out, which represents the data that is added or modified by that function.",activeloop Executing the parallel computation,https://docs.activeloop.ai/getting-started/deep-learning/parallel-computing#executing-the-parallel-computation,"To execute the parallel computation, you must define the dataset that will be modified.ds = deeplake.empty('./animals_deeplake_transform') # Creates the datasetNext, you define the input iterable that describes the information that will be operated on in parallel. 
In this case, that is a list of files files_list:# Find the class_names and list of files that need to be uploadeddataset_folder = './animals'class_names = os.listdir(dataset_folder)files_list = []for dirpath, dirnames, filenames in os.walk(dataset_folder): for filename in filenames: files_list.append(os.path.join(dirpath, filename))You can now create the tensors for the dataset and run the parallel computation using the .eval syntax. Pass the optional input arguments to file_to_deeplake and skip the first two default arguments file_name and sample_out. The input iterable files_list and output dataset ds is passed to the .eval method as the first and second argument respectively.with ds: ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') ds.create_tensor('labels', htype = 'class_label', class_names = class_names) file_to_deeplake(class_names=class_names).eval(files_list, ds, num_workers = 2)Additional parallel computing use cases such as dataset transformations can be found in this tutorial.Image.fromarray(ds.images[0].numpy())Congrats! You just created a dataset using parallel computing! 🎈",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-image-classification-model-in-pytorch,Deep Lake enables users to manage their data more easily so they can train better ML models. This tutorial shows you how to train a simple image classification model while streaming data from a Deep Lake dataset stored in the cloud.,activeloop Data Preprocessing,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-image-classification-model-in-pytorch#data-preprocessing,"The first step is to select a dataset for training. This tutorial uses the Fashion MNIST dataset that has already been converted into Deep Lake format. It is a simple image classification dataset that categorizes images by clothing type (trouser, shirt, etc.)import deeplakefrom PIL import Imageimport numpy as npimport os, timeimport torchfrom torchvision import transforms, models# Connect to the training and testing datasetsds_train = deeplake.load('hub://activeloop/fashion-mnist-train')ds_test = deeplake.load('hub://activeloop/fashion-mnist-test')The next step is to define a transformation function that will process the data and convert it into a format that can be passed into a deep learning model. In this particular example, torchvision.transforms is used as a part of the transformation pipeline that performs operations such as normalization and image augmentation (rotation).tform = transforms.Compose([ transforms.RandomRotation(20), # Image augmentation transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run transforms.Normalize([0.5], [0.5]),])You can now create a pytorch dataloader that connects the Deep Lake dataset to the PyTorch model using the provided method ds.pytorch(). This method automatically applies the transformation function, takes care of random shuffling (if desired), and converts Deep Lake data to PyTorch tensors. The num_workers parameter can be used to parallelize data preprocessing, which is critical for ensuring that preprocessing does not bottleneck the overall training workflow.The transform input is a dictionary where the key is the tensor name and the value is the transformation function that should be applied to that tensor. If a specific tensor's data does not need to be returned, it should be omitted from the keys. 
If the transformation function is set as None, the input tensor is converted to a torch tensor without additional modification.# Since torchvision transforms expect PIL images, we use the 'pil' decode_method for the 'images' tensor. This is much faster than running ToPILImage inside the transformtrain_loader = ds_train.pytorch(num_workers = 0, shuffle = True, transform = {'images': tform, 'labels': None}, batch_size = batch_size, decode_method = {'images': 'pil'})test_loader = ds_test.pytorch(num_workers = 0, transform = {'images': tform, 'labels': None}, batch_size = batch_size, decode_method = {'images': 'pil'})",activeloop Model Definition,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-image-classification-model-in-pytorch#model-definition,"This tutorial uses a pre-trained ResNet18 neural network from the torchvision.models module, converted to a single-channel network for grayscale images.Training is run on a GPU if possible. Otherwise, run on a CPU.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')print(device)# Use a pre-trained ResNet18model = models.resnet18(pretrained=True)# Convert model to grayscalemodel.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)# Update the fully connected layer based on the number of classes in the datasetmodel.fc = torch.nn.Linear(model.fc.in_features, len(ds_train.labels.info.class_names))model.to(device)# Specity the loss function and optimizercriterion = torch.nn.CrossEntropyLoss()optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.1)",activeloop Training the Model,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-image-classification-model-in-pytorch#training-the-model,"Helper functions for training and testing the model are defined. 
Note that the output from Deep Lake's PyTorch dataloader is fed into the model just like data from ordinary PyTorch dataloaders.def train_one_epoch(model, optimizer, data_loader, device): model.train() # Zero the performance stats for each epoch running_loss = 0.0 start_time = time.time() total = 0 correct = 0 for i, data in enumerate(data_loader): # get the inputs; data is a list of [inputs, labels] inputs = data['images'] labels = torch.squeeze(data['labels']) inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = model(inputs.float()) loss = criterion(outputs, labels) loss.backward() optimizer.step() _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() accuracy = 100 * correct / total # Print performance statistics running_loss += loss.item() if i % 10 == 0: # print every 10 batches batch_time = time.time() speed = (i+1)/(batch_time-start_time) print('[%5d] loss: %.3f, speed: %.2f, accuracy: %.2f %%' % (i, running_loss, speed, accuracy)) running_loss = 0.0 total = 0 correct = 0 def test_model(model, data_loader): model.eval() start_time = time.time() total = 0 correct = 0 with torch.no_grad(): for i, data in enumerate(data_loader): # get the inputs; data is a list of [inputs, labels] inputs = data['images'] labels = torch.squeeze(data['labels']) inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = model(inputs.float()) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() accuracy = 100 * correct / total print('Finished Testing') print('Testing accuracy: %.1f %%' %(accuracy))The model and data are ready for training🚀!num_epochs = 3for epoch in range(num_epochs): # loop over the dataset multiple times print('------------------ Training Epoch {} ------------------'.format(epoch+1)) train_one_epoch(model, optimizer, train_loader, device) test_model(model, test_loader)print('Finished Training')Congrats! You successfully trained an image classification model while streaming data directly from the cloud! 🎉",activeloop How to Train Deep Learning models using Deep Lake and MMDetection,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#how-to-train-deep-learning-models-using-deep-lake-and-mmdetection,"This tutorial assumes the reader has experience training models using MMDET and has installed it successfully. At the bottom of the page, we provide a high-level overview of MMDetection fundamentals.Deep Lake offers an integration with MMDetection, a popular open-source object detection toolbox based on PyTorch. The integration enables users to train models while streaming Deep Lake dataset using the transformation, training, and evaluation tools built by MMDet.",activeloop Integration Interface,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#integration-interface,"Training using MMDET is typically executed using wrapper scripts like the one provided here in their repo. In the example below, we write a similar simplified wrapper script for training using a Deep Lake dataset.The integrations with MMDET occurs in the deeplake.integrations.mmdet module. 
At a high level, Deep Lake is responsible for the pytorch dataloader that streams data to the training framework, while MMDET is used for the training, transformation, and evaluation logic.In the example script below, the user should apply the build_detector and train_detector provided by Deep Lake. The build_detector is mostly boilerplate, and the Deep Lake-related features primarily exist in train_detector.import osfrom mmcv import Configimport mmcvfrom deeplake.integrations import mmdet as mmdet_deeplakeimport argparsedef parse_args(): parser = argparse.ArgumentParser(description='Deep Lake Training Using MMDET') parser.add_argument( '--cfg_file', type=str, required=True, help='Path for loading the config file', ) parser.add_argument( '--validate', action='store_true', default=True, help='Whether to run dataset validation', ) parser.add_argument( '--distributed', action='store_true', default=False, help='Whether to run distributed training', ) parser.add_argument( '--num_classes', type=int, default=None, help='Number of classes in the model', ) args = parser.parse_args() return argsif __name__ == '__main__': args = parse_args() # Read the config file cfg = Config.fromfile(args.cfg_file) cfg.model.bbox_head.num_classes = args.num_classes # Build the detector model = mmdet_deeplake.build_detector(cfg.model) # Create work_dir mmcv.mkdir_or_exist(os.path.abspath(cfg.work_dir)) # Run the training mmdet_deeplake.train_detector(model, cfg, distributed=args.distributed, validate=args.validate)",activeloop Inputs to train_detector,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#inputs-to-train_detector,"Inputs to the Deep Lake train_detector are a modified MMDET config file, optional dataset objects (see below), and flags for specifying whether to perform distributed training and validation.",activeloop Modifications to the cfg file,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#modifications-to-the-cfg-file,"The Deep Lake train_detector takes in a standard MMDET config file, but it also expects the inputs highlighted in the ----Deep Lake Inputs---- section in the config file below:#--------------------------------------DEEPLAKE INPUTS------------------------------------------------------------#TOKEN = 'INSERT_YOUR_DEEPLAKE_TOKEN'data = dict( # samples_per_gpu=4, # Is used instead of batch_size if deeplake_dataloader is not specified below # workers_per_gpu=8, # Is used instead of num_workers if deeplake_dataloader is not specified below train=dict( pipeline=train_pipeline, # Credentials for authentication. See documentation for deeplake.load() for details deeplake_path='hub://activeloop/coco-train', deeplake_credentials={ 'username': None, 'password': None, 'token': TOKEN, 'creds': None, }, # OPTIONAL - Check out the specified commit_id before training deeplake_commit_id='', # OPTIONAL - Loads a dataset view for training based on view_id deeplake_view_id='', # OPTIONAL - {'mmdet_key': 'deep_lake_tensor',...} - Maps Deep Lake tensors to MMDET dictionary keys. # If not specified, Deep Lake will auto-infer the mapping, but it might make mistakes if datasets have many tensors deeplake_tensors = {'img': 'images', 'gt_bboxes': 'boxes', 'gt_labels': 'categories'}, # OPTIONAL - Parameters to use for the Deep Lake dataloader. If unspecified, the integration uses # the parameters in other parts of the cfg file such as samples_per_gpu, and others. 
deeplake_dataloader = {'shuffle': True, 'batch_size': 4, 'num_workers': 8} ), # Parameters as the same as for train val=dict( pipeline=test_pipeline, deeplake_path='hub://activeloop/coco-val', deeplake_credentials={ 'username': None, 'password': None, 'token': TOKEN, 'creds': None, }, deeplake_tensors = {'img': 'images', 'gt_bboxes': 'boxes', 'gt_labels': 'categories'}, deeplake_dataloader = {'shuffle': False, 'batch_size': 1, 'num_workers': 8} ),)# Which dataloader to usedeeplake_dataloader_type = 'c++' # 'c++' is available to enterprise users. Otherwise use 'python'# Which metrics to use for evaulation. In MMDET (without Deeplake), this is inferred from the dataset type.# In the Deep Lake integration, since the format is standardized, a variety of metrics can be used for a given dataset.deeplake_metrics_format = 'COCO'#----------------------------------END DEEPLAKE INPUTS------------------------------------------------------------#",activeloop Passing Deep Lake dataset objects to the train_detector (Optional),https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#passing-deep-lake-dataset-objects-to-the-train_detector-optional,"The Deep Lake dataset object or dataset view can be passed to the train_detector directly, thus overwriting any dataset information in the config file. Below are the respective modifications that should be made to the training script above:ds_train = deeplake.load(dataset_path, token, ...)ds_train.checkout(commit_id)ds_train_view = ds_train.query('Add query string')mmdet_deeplake.train_detector(model, cfg, ds_train = ds_train_view, ds_val = ..., distributed = args.distributed, validate = args.validate)Congrats! You're now able to train models using MMDET while streaming Deep Lake Datasets! 🎉",activeloop What is MMDetection?,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#what-is-mmdetection,"MMDetection is a powerful open-source object detection toolbox that provides a flexible and extensible platform for computer vision tasks. Developed by the Multimedia Laboratory (MMLab) as part of the OpenMMLab project, MMDetection is built upon the PyTorch framework and offers a composable and modular API design. This unique feature enables developers to easily construct custom object detection and segmentation pipelines. This article will delve deeper into how to use MMDetection with Activeloop Deep Lake.",activeloop MMDetection Features,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#mmdetection-features,"MMDetection's Modular and Composable API Design MMDetection's API design follows a modular approach, enabling seamless integration with frameworks like Deep Lake and easy component customization. 
This flexibility allows users to adapt the object detection pipeline to meet specific project requirements.",activeloop Custom Object Detection and Segmentation Pipelines,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#custom-object-detection-and-segmentation-pipelines,"MMDetection streamlines custom pipeline creation, allowing users to construct tailored models by selecting and combining different backbones, necks, and heads for more accurate and efficient computer vision pipelines.",activeloop Comprehensive Training & Inference Support,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#comprehensive-training-and-inference-support,"MMDetection's toolbox supports various data augmentation techniques, distributed training, mixed-precision training, and detailed evaluation metrics to help users assess their model's performance and identify areas for improvement.",activeloop Extensive Model Zoo & Configurations,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#extensive-model-zoo-and-configurations,"MMDetection offers a vast model zoo with numerous pre-trained models and configuration files for diverse computer vision tasks, such as object detection, instance segmentation, and panoptic segmentation.",activeloop MMDetection Backbone,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#mmdetection-backbone,"Backbones pre-trained convolutional neural networks (CNNs) to extract feature maps. Popular backbones include ResNet, VGG, and MobileNet.",activeloop MMDetection Head,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#mmdetection-head,"These components are meant for specific tasks, e.g. to generate the final predictions, such as bounding boxes, class labels, or masks. Examples include RPN (Region Proposal Network), FCOS (Fully Convolutional One-Stage Object Detector), and Mask Head. Neck: Components, like FPN (Feature Pyramid Network) and PAN (Path Aggregation Network), refine and consolidate features extracted by backbones, connecting them to the head.",activeloop MMDetection ROI Extractor,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#mmdetection-roi-extractor,"Region of Interest Extractor is a critical MMDetection component extracting RoI features from the feature maps generated by the backbone and neck components, improving the accuracy of final predictions (e.g., bounding boxes and class labels). One of the most popular methods for RoI feature extraction is RoI Align (a technique that addresses the issue of misalignment between RoI features and the input image due to quantization in RoI Pooling).",activeloop Loss,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-mmdetection#loss,"The loss component calculates loss values during training, estimating the difference between model predictions and ground truth labels. 
Users can choose suitable loss functions (e.g., Focal Loss, GHMLoss, L1 Loss) for specific use cases to evaluate and improve the model's performance.",activeloop How to Train models using Deep Lake and PyTorch Lightning,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-pytorch-lightning#how-to-train-models-using-deep-lake-and-pytorch-lightning,"This tutorial is also available as a Colab Notebook.Deep Lake's integration with PyTorch can also be used to train models using an integration with PyTorch Lightning, a popular open-source high-level interface for PyTorch.",activeloop Overview,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-pytorch-lightning#overview,"At a high level, Deep Lake is connected to PyTorch Lightning by passing Deep Lake's PyTorch dataloader to any PyTorch Lightning API that expects a dataloader parameter, such as trainer.fit(..., train_dataloaders = deeplake_dataloader). The only caveat is that Deep Lake handles distributed training via its distributed parameter in the .pytorch() method. Therefore, the PyTorch Lightning Trainer class should be initialized with replace_sampler_ddp = False.",activeloop Example Code,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-pytorch-lightning#example-code,This tutorial uses PyTorch Lightning to execute the identical training workflow that is shown here in PyTorch.,activeloop Data Preprocessing,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-pytorch-lightning#data-preprocessing,"The first step is to load the dataset for training. This tutorial uses the Fashion MNIST dataset that has already been converted into Deep Lake format. It is a simple image classification dataset that categorizes images by clothing type (trouser, shirt, etc.)import deeplakefrom PIL import Imageimport torchfrom torchvision import transforms, modelsimport pytorch_lightning as pl# Connect to the training and testing datasetsds_train = deeplake.load('hub://activeloop/fashion-mnist-train')ds_val = deeplake.load('hub://activeloop/fashion-mnist-test')The next step is to define a transformation function that will process the data and convert it into a format that can be passed into a deep learning model. In this particular example, torchvision.transforms is used as a part of the transformation pipeline that performs operations such as normalization and image augmentation (rotation).tform = transforms.Compose([ transforms.RandomRotation(20), # Image augmentation transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run transforms.Normalize([0.5], [0.5]),])You can now create a PyTorch dataloader that connects the Deep Lake dataset to the PyTorch model using the provided method ds.pytorch(). This method automatically applies the transformation function and takes care of random shuffling (if desired). The num_workers parameter can be used to parallelize data preprocessing, which is critical for ensuring that preprocessing does not bottleneck the overall training workflow.The transform input is a dictionary where the key is the tensor name and the value is the transformation function that should be applied to that tensor. If a specific tensor's data does not need to be returned, it should be omitted from the keys. 
If the transformation function is set as None, the input tensor is converted to a torch tensor without additional modification.batch_size = 32# Since torchvision transforms expect PIL images, we use the 'pil' decode_method for the 'images' tensor. This is much faster than running ToPILImage inside the transformtrain_loader = ds_train.pytorch(num_workers = 0, shuffle = True, transform = {'images': tform, 'labels': None}, batch_size = batch_size, decode_method = {'images': 'pil'})val_loader = ds_val.pytorch(num_workers = 0, transform = {'images': tform, 'labels': None}, batch_size = batch_size, decode_method = {'images': 'pil'})",activeloop Model and LightningModule Definition,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-pytorch-lightning#model-and-lightningmodule-definition,"This tutorial uses a pre-trained ResNet18 neural network from the torchvision.models module, converted to a single-channel network for grayscale images. The LightningModule organizes the training code.# Use a pre-trained ResNet18def get_model(num_classes): model = models.resnet18(pretrained=True) # Convert model to grayscale model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False) # Update the fully connected layer based on the number of classes in the dataset model.fc = torch.nn.Linear(model.fc.in_features, num_classes) return modelclass FashionMnistModule(pl.LightningModule): def __init__(self, num_classes): ''' Inputs: num_classes: Number of classes in the dataset and model ''' super().__init__() # Create the model self.model = get_model(num_classes) # Create loss module self.loss_module = torch.nn.CrossEntropyLoss() def forward(self, imgs): return self.model(imgs) def configure_optimizers(self): return torch.optim.SGD(self.model.parameters(), lr=0.001, momentum=0.1) def training_step(self, batch, batch_idx): images = batch['images'] labels = torch.squeeze(batch['labels']) preds = self.model(images) loss = self.loss_module(preds, labels) acc = (preds.argmax(dim=-1) == labels).float().mean() self.log('train_acc', acc, on_step=True, on_epoch=True) self.log('train_loss', loss) return loss def validation_step(self, batch, batch_idx): images = batch['images'] labels = torch.squeeze(batch['labels']) preds = self.model(images).argmax(dim=-1) acc = (labels == preds).float().mean() # Log the valdation accuracy to the progress bar at the end of each epoch self.log('val_acc', acc, on_epoch=True, prog_bar=True)",activeloop Training the Model,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-models-using-pytorch-lightning#training-the-model,"PyTorchLightning takes care of the training loop, so the remaining steps are to initialize the Trainer and call the .fit() method using the training and validation dataloaders.trainer = pl.Trainer(max_epochs = 3)trainer.fit(model=FashionMnistModule(len(ds_train.labels.info.class_names)), train_dataloaders = train_loader, val_dataloaders = val_loader)Congrats! You successfully trained an image classification model using PyTorch Lightning while streaming data directly from the cloud! 🎉",activeloop How to Train an PyTorch Image Classification Model on AWS SageMaker Using Deep Lake Datasets,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-on-aws-sagemaker#how-to-train-an-pytorch-image-classification-model-on-aws-sagemaker-using-deep-lake-datasets,"AWS SageMaker provides scalable infrastructure for developing, training, and deploying deep learning models. 
In this tutorial, we demonstrate how to run SageMaker training jobs for training a PyTorch image classification model using a Deep Lake dataset. This tutorial will focus on the SageMaker integration, and less so on the details of the training (see other training tutorials for details)",activeloop Dataset,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-on-aws-sagemaker#dataset,"In this tutorial we will use the Stanford Cars Dataset, which classifies the make+model+year of various vehicles. Though the dataset contains bounding boxes, we ignore those and only use the data for classification purposes.",activeloop Training Script,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-on-aws-sagemaker#training-script,"ds_train = deeplake.load(args.train_dataset, read_only = True, token = args.token, creds = args.creds) ds_val = deeplake.load(args.val_dataset, read_only = True, token = args.token, creds = args.creds) train_loader = ds_train.dataloader()\ .batch(args.batch_size)\ .shuffle(args.shuffle)\ .transform(transform = {'images': tform_train, 'car_models': None})\ .pytorch(num_workers = args.num_workers, decode_method = {'images': 'pil'}) val_loader = ds_val.dataloader()\ .batch(args.batch_size)\ .transform(transform = {'images': tform_val, 'car_models': None})\ .pytorch(num_workers = args.num_workers, decode_method = {'images': 'pil'}) # Load the model model = get_model_classification(len(ds_train.car_models.info.class_names)) model = model.to(device) # Define the optimizer, loss, and learning rate scheduler optimizer = optim.Adam(model.parameters(), lr=args.lr) criterion = nn.CrossEntropyLoss() lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.2) # Run the training for epoch in range(args.num_epochs): logger.debug('Training Epoch: {}'.format(epoch)) train_one_epoch(model, optimizer, criterion, train_loader, device, args.log_interval) lr_scheduler.step() accuracy = test_model(model, val_loader, device) logger.debug('Validation Accuracy: {}'.format(accuracy)) logger.debug('Finished Training') save_model(model, args.model_dir)if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument( '--train-dataset', type=str, required=True, help='path to deeplake training dataset', ) parser.add_argument( '--val-dataset', type=str, required=True, help='path to deeplake validation dataset', ) parser.add_argument( '--batch-size', type=int, default=64, help='input batch size for training (default: 64)', ) parser.add_argument( '--num-workers', type=int, default=8, help='number of workers for the dataloaders (default: 8)', ) parser.add_argument( '--shuffle', type=bool, default=True, help='shuffling for the training dataloader (default: True)', ) parser.add_argument( '--num-epochs', type=int, default=10, help='number of epochs to train (default: 10)', ) parser.add_argument( '--lr', type=float, default=0.001, help='learning rate (default: 0.001)' ) parser.add_argument( '--log-interval', type=int, default=10, metavar='N', help='how many batches to wait before logging training status (default: 10)', ) parser.add_argument( '--token', type=str, default=None, help='token for accessing the Deep Lake dataset (default: None)' ) parser.add_argument( '--creds', type=dict, default=None, help='creds dictionary for accessing the Deep Lake dataset (default: None)' ) parser.add_argument( '--model_dir', type=str, default=os.environ['SM_MODEL_DIR']) train(parser.parse_args())Congrats! 
You're now able to train models using AWS SageMaker Jobs while streaming Deep Lake Datasets! 🎉 ",activeloop Running the Sagemaker Job,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-on-aws-sagemaker#running-the-sagemaker-job,"We run the SageMaker job using the docker container below that can be found among these deep learning containers provided by AWS. The SageMaker job is initiated using the script below. By also running this script in a SageMaker notebook, the permissions and role access are automatically taken care of within the AWS environment.import sagemakersess = sagemaker.Session()role = sagemaker.get_execution_role()The training script (entry_point) and the directory (source_dir) containing the training script and requirements.txt file is passed to the Estimator. The argparse parameters for the training script are passed via the hyperparameters dictinary in the Estimator. Note that we also pass the Deep Lake paths to the training and validation datasets via this input. estimator = sagemaker.estimator.Estimator( source_dir = './train_code', # Directory of the training script entry_point = 'train_cars.py', # File for the training script image_uri = image_name, role = role, instance_count = 1, instance_type = instance_type, output_path = output_path, sagemaker_session = sess, max_run = 2*60*60, hyperparameters = {'train-dataset': 'hub://activeloop/stanford-cars-train', 'val-dataset': 'hub://activeloop/stanford-cars-test', 'batch-size': 64, 'num-epochs': 40, })The training job is triggered using the command below. Typically, the .fit() function accepts as inputs the S3 bucket containing the training data, which is then downloaded onto the local storage of the SageMaker job. Since we've passed the Deep Lake dataset paths via the hyperparameters, and since Deep Lake does not require data to be downloaded prior to training, we skip these inputs. estimator.fit()SageMaker offers a variety of method for advanced data logging. In this example, we can monitor the training performance in real-time in the training notebook where the jobs are triggered, or in the CloudWatch logs for each job. We observe that the validation accuracy after 40 epochs is 75%.",activeloop Training Script,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-on-aws-sagemaker#training-script," The contents of the train_code folder, as well as the train_cars.py file, are shown below. The training script follow the same workflow as other PyTorch training workflows using Deep Lake. As mentioned above, the inputs to the argparse function are those from the hyperparameters inputs in the estimator. 
import deeplakeimport argparseimport loggingimport osimport sysimport time import torchimport torch.nn as nnimport torch.optim as optimimport torch.utils.dataimport torch.utils.data.distributedfrom torchvision import transforms, modelslogger = logging.getLogger(__name__)logger.setLevel(logging.DEBUG)logger.addHandler(logging.StreamHandler(sys.stdout))#----------- Define transformations and their parameters -----------#WIDTH = 320HEIGHT = 320tform_train = transforms.Compose([# transforms.ToPILImage(), # Not needed because decode_method is set to PIL in the dataloader transforms.RandomResizedCrop((WIDTH, HEIGHT), scale=(0.75, 1.0), ratio=(0.75, 1.25)), transforms.RandomRotation(25), transforms.ColorJitter(brightness=(0.8,1.2), contrast=(0.8,1.2), saturation=(0.8,1.2), hue=(-0.1,0.1)), transforms.ToTensor(), transforms.Lambda(lambda x: x.repeat(int(3 / x.shape[0]), 1, 1)), # Adjust tensor if the image is grayscale transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),]) tform_val = transforms.Compose([# transforms.ToPILImage(), # Not needed because decode_method is set to PIL in the dataloader transforms.Resize((WIDTH, HEIGHT)), transforms.ToTensor(), transforms.Lambda(lambda x: x.repeat(int(3 / x.shape[0]), 1, 1)), # Adjust tensor if the image is grayscale transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),]) #----------- Define helper functions -----------## Helper function for loading the modeldef get_model_classification(num_classes): # Load a pre-trained classification model model = models.resnet34(pretrained=True) # Adjust the fully connected layer based on the number of classes in the dataset model.fc = torch.nn.Linear(model.fc.in_features, num_classes) return model# Helper function for training for 1 epochdef train_one_epoch(model, optimizer, criterion, data_loader, device, log_interval): # Set the model to train mode model.train() # Zero the performance stats for each epoch running_loss = 0.0 start_time = time.time() total = 0 correct = 0 for i, data in enumerate(data_loader): # Parse the inputs inputs = data['images'] labels = data['car_models'][:, 0] # Get rid of the extra axis inputs = inputs.to(device) labels = labels.to(device) # Zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = model(inputs.float()) loss = criterion(outputs, labels.long()) loss.backward() optimizer.step() # Update the accuracy for the epoch _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() accuracy = 100 * correct / total # Print statistics running_loss += loss.item() batch_time = time.time() if i % log_interval == 0: # print every 100 mini-batches speed_cumulative = (i+1)/(batch_time-start_time) logger.debug('[%5d] running loss: %.3f, epoch accuracy: %.3f, cumulative speed: %.2f ' % (i, running_loss, accuracy, speed_cumulative)) running_loss = 0.0 # Helper function for testing the model def test_model(model, data_loader, device): # Set the model to eval mode model.eval() total = 0 correct = 0 with torch.no_grad(): for i, data in enumerate(data_loader): # Parse the inputs inputs = data['images'] labels = data['car_models'][:, 0] inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs.float()) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() accuracy = 100 * correct / total return accuracy # Helper function for saving the model def save_model(model, model_dir): logger.info('Saving the model') path 
= os.path.join(model_dir, 'model.pth') torch.save(model.state_dict(), path) def train(args): device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') # Load dataset and create dataloaders",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-object-detection-and-segmentation-model-in-pytorch,"The primary objective for Deep Lake is to enable users to manage their data more easily so they can train better ML models. This tutorial shows you how to train an object detection and instance segmentation model while streaming data from a Deep Lake dataset stored in the cloud.Since these models are often complex, this tutorial will focus on data-preprocessing for connecting the data to the model. The user should take additional steps to scale up the code for logging, collecting additional metrics, model testing, and running on GPUs.This tutorial is inspired by this PyTorch tutorial on training object detection and segmentation models.",activeloop Data Preprocessing,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-object-detection-and-segmentation-model-in-pytorch#data-preprocessing," The first step is to select a dataset for training. This tutorial uses the COCO dataset that has already been converted into Deep Lake format. It is a multi-modal image dataset that contains bounding boxes, segmentation masks, keypoints, and other data.import deeplakeimport numpy as npimport mathimport sysimport timeimport torchvisionimport albumentations as Afrom albumentations.pytorch import ToTensorV2import torchfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictorfrom torchvision.models.detection.mask_rcnn import MaskRCNNPredictorimport torchvision.models.detection.mask_rcnn# Connect to the training datasetds_train = deeplake.load('hub://activeloop/coco-train')Note that the dataset can be visualized at the link printed by the deeplake.load command above.We extract the number of classes for use later:num_classes = len(ds_train.categories.info.class_names)For a complex dataset like this one, it's critical to carefully define the pre-processing function that returns the torch tensors that are used for training.
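Before the tutorial's full Albumentations-based transform (shown next), here is a minimal, purely illustrative sketch of the contract such a pre-processing function must satisfy: ds.pytorch(transform=...) passes each Deep Lake sample as a dictionary of numpy arrays keyed by the tensor names used in this tutorial, and the function returns whatever the model expects. The helper name and the simplified handling below are assumptions for illustration only; the real transform (with augmentation, mask handling, and box-format conversion) follows.

# Minimal, illustrative sketch of the transform contract -- NOT the tutorial's transform
import numpy as np
import torch

def minimal_transform(sample_in):
    # 'images' arrives as an H x W x C uint8 array; convert it to a C x H x W float tensor
    image = torch.from_numpy(np.ascontiguousarray(sample_in['images'])).permute(2, 0, 1).float() / 255.0
    # torchvision detection models expect the annotations packaged into a target dictionary
    target = {
        'boxes': torch.as_tensor(sample_in['boxes'], dtype=torch.float32),    # still in COCO format here
        'labels': torch.as_tensor(sample_in['categories'], dtype=torch.int64).flatten(),
    }
    return image, target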
Here we use an Albumentations augmentation pipeline combined with additional pre-processing steps that are necessary for this particular model.Note: This tutorial assumes that the number of masks and bounding boxes for each image is equal# Augmentation pipeline using Albumentationstform_train = A.Compose([ A.RandomSizedBBoxSafeCrop(width=128, height=128, erosion_rate = 0.2), A.HorizontalFlip(p=0.5), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), ToTensorV2(), # transpose_mask = True], bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels', 'bbox_ids'], min_area=25, min_visibility=0.6)) # 'label_fields' ('class_labels', 'bbox_ids') are the fields that are trimmed whenever a bounding box is cut.# Transformation function for pre-processing the Deep Lake sample before sending it to the modeldef transform(sample_in): # Convert boxes to Pascal VOC format boxes = coco_2_pascal(sample_in['boxes']) # Convert any grayscale images to RGB images = sample_in['images'] if images.shape[2] == 1: images = np.repeat(images, int(3/images.shape[2]), axis = 2) # Pass all data to the Albumentations transformation # Mask must be converted to a list masks = sample_in['masks'] mask_shape = masks.shape # This if-else statement was not necessary in Albumentations <1.3.x, because the empty mask scenario was handled gracefully inside of Albumentations. In Albumentations >1.3.x, an empty list of masks fails if mask_shape[2]>0: transformed = tform_train(image = images, masks = [masks[:,:,i].astype(np.uint8) for i in range(mask_shape[2])], bboxes = boxes, bbox_ids = np.arange(boxes.shape[0]), class_labels = sample_in['categories'], ) else: transformed = tform_train(image = images, bboxes = boxes, bbox_ids = np.arange(boxes.shape[0]), class_labels = sample_in['categories'], ) # Convert boxes and labels from lists to torch tensors, because Albumentations does not do that automatically",activeloop Data Preprocessing,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-object-detection-and-segmentation-model-in-pytorch#data-preprocessing,"# Be very careful with rounding and casting to integers, because that can create bounding boxes with invalid dimensions labels_torch = torch.tensor(transformed['class_labels'], dtype = torch.int64) boxes_torch = torch.zeros((len(transformed['bboxes']), 4), dtype = torch.int64) for b, box in enumerate(transformed['bboxes']): boxes_torch[b,:] = torch.tensor(np.round(box)) # Filter out the masks that were dropped by filtering of bounding box area and visibility masks_torch = torch.zeros((len(transformed['bbox_ids']), transformed['image'].shape[1], transformed['image'].shape[2]), dtype = torch.int64) if len(transformed['bbox_ids'])>0: masks_torch = torch.tensor(np.stack([transformed['masks'][i] for i in transformed['bbox_ids']], axis = 0), dtype = torch.uint8) # Put annotations in a separate object target = {'masks': masks_torch, 'labels': labels_torch, 'boxes': boxes_torch} return transformed['image'], target# Conversion script for bounding boxes from coco to Pascal VOC formatdef coco_2_pascal(boxes): # Convert bounding boxes to Pascal VOC format and clip bounding boxes to make sure they have non-negative width and height return np.stack((boxes[:,0], boxes[:,1], boxes[:,0]+np.clip(boxes[:,2], 1, None), boxes[:,1]+np.clip(boxes[:,3], 1, None)), axis = 1)def collate_fn(batch): return tuple(zip(*batch))You can now create a PyTorch dataloader that connects the Deep Lake dataset to the PyTorch model using the provided method ds.pytorch() .
This method automatically applies the transformation function and takes care of random shuffling (if desired). The num_workers parameter can be used to parallelize data preprocessing, which is critical for ensuring that preprocessing does not bottleneck the overall training workflow.Since the dataset contains many tensors that are not used for training, a list of tensors for loading is specified in order to avoid streaming of unused data.batch_size = 8train_loader = ds_train.pytorch(num_workers = 2, shuffle = False, tensors = ['images', 'masks', 'categories', 'boxes'], # Specify the tensors that are needed, so we don't load unused data transform = transform, batch_size = batch_size, collate_fn = collate_fn)",activeloop Model Definition,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-object-detection-and-segmentation-model-in-pytorch#model-definition,"This tutorial uses a pre-trained torchvision neural network from the torchvision.models module.Training is performed on a GPU if one is available; otherwise, it runs on the CPU.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')print(device)# Helper function for loading the modeldef get_model_instance_segmentation(num_classes): # Load an instance segmentation model pre-trained on COCO model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True) # Get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) # Get the number of input features for the mask classifier in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels hidden_layer = 256 # Replace the mask predictor with a new one model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, hidden_layer, num_classes) return modelLet's initialize the model and optimizer.model = get_model_instance_segmentation(num_classes)model.to(device)# Specify the optimizerparams = [p for p in model.parameters() if p.requires_grad]optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)",activeloop Training the Model,https://docs.activeloop.ai/tutorials/deep-learning/training-models/training-an-object-detection-and-segmentation-model-in-pytorch#training-the-model,"Helper functions for training and testing the model are defined.
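The tutorial defines the training helper just below and leaves model testing as an exercise (see the '--- Insert Testing Code Here ---' placeholder in the training loop). As a hedged sketch only, a simple evaluation pass might look like the following, assuming a hypothetical val_loader built with ds.pytorch() the same way as train_loader; proper COCO mAP evaluation would instead use pycocotools or torchmetrics.

# Hedged sketch of a simple evaluation pass (not part of the original tutorial)
import torch

def evaluate_simple(model, data_loader, device, score_threshold=0.5):
    model.eval()  # in eval mode, torchvision detection models return predictions instead of losses
    num_images = 0
    num_detections = 0
    with torch.no_grad():
        for images, _ in data_loader:  # collate_fn yields (images, targets) tuples
            images = [image.to(device) for image in images]
            predictions = model(images)  # one dict per image with 'boxes', 'labels', 'scores', 'masks'
            for pred in predictions:
                num_images += 1
                num_detections += int((pred['scores'] > score_threshold).sum().item())
    print('Average confident detections per image: %.1f' % (num_detections / max(num_images, 1)))

# Hypothetical usage, e.g. at the '--- Insert Testing Code Here ---' placeholder:
# evaluate_simple(model, val_loader, device)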
Note that the output from Deep Lake's PyTorch dataloader is fed into the model just like data from ordinary PyTorch dataloaders.# Helper function for training for 1 epochdef train_one_epoch(model, optimizer, data_loader, device): model.train() start_time = time.time() for i, data in enumerate(data_loader): images = list(image.to(device) for image in data[0]) targets = [{k: v.to(device) for k, v in t.items()} for t in data[1]] loss_dict = model(images, targets) losses = sum(loss for loss in loss_dict.values()) loss_value = losses.item() # Print performance statistics batch_time = time.time() speed = (i+1)/(batch_time-start_time) print('[%5d] loss: %.3f, speed: %.2f' % (i, loss_value, speed)) if not math.isfinite(loss_value): print(f'Loss is {loss_value}, stopping training') print(loss_dict) break optimizer.zero_grad() losses.backward() optimizer.step()The model and data are ready for training 🚀!# Train the model for 1 epochnum_epochs = 1for epoch in range(num_epochs): # loop over the dataset multiple times print('------------------ Training Epoch {} ------------------'.format(epoch+1)) train_one_epoch(model, optimizer, train_loader, device) # --- Insert Testing Code Here --- print('Finished Training')Congrats! You successfully trained an object detection and instance segmentation model while streaming data directly from the cloud! 🎉",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/updating-datasets,"After creating a Deep Lake dataset, you may need to edit it by adding, deleting, and modifying the data. In this tutorial, we show best practices for updating datasets.",activeloop Create a Representative Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/updating-datasets#create-a-representative-deep-lake-dataset,"First, let's download and unzip representative source data and create a Deep Lake dataset for this tutorial:This dataset includes segmentation and object detection of vehicle damage, but for this tutorial, we will only upload the images and labels (damage location).import deeplakeimport pandas as pdimport osfrom PIL import Imageimages_directory = '/damaged_cars_tutorial' # Path to the COCO images directoryannotation_file = '/damaged_cars_tutorial/COCO_mul_val_annos.json' # Path to the COCO annotations filedeeplake_path = '/damaged_cars_dataset' # Path to the Deep Lake datasetds = deeplake.ingest_coco(images_directory, annotation_file, deeplake_path, key_to_tensor_mapping={'category_id': 'labels'}, # Rename category_id to labels ignore_keys=['area', 'image_id', 'id', 'segmentation', 'bbox', 'iscrowd'])Running ds.summary() shows the dataset has two tensors with 11 samples: tensor htype shape dtype compression ------- ------- ------- ------- ------- images image (11, 1024, 1024, 3) uint8 jpeg labels class_label (11, 2:7) uint32 None We can explore the damage in the first sample using ds.labels[0].data(), which prints:{'value': array([0, 1, 2], dtype=uint32), 'text': ['rear_bumper', 'door', 'headlamp']}",activeloop Add Data to a New Tensor,https://docs.activeloop.ai/tutorials/deep-learning/updating-datasets#add-data-to-a-new-tensor,"Suppose you have another data source with supplemental data about the color of the vehicles.
Let's create a Pandas DataFrame with this data.color_data = {'filename': ['1.jpg', '9.jpg', '62.jpg', '24.jpg'], 'color': ['gray', 'blue', 'green', 'gray']} df_color = pd.DataFrame(color_data)There are two approaches for adding this new data to the Deep Lake dataset:",activeloop Iterate through the supplemental data and add data at the corresponding Deep Lake index,https://docs.activeloop.ai/tutorials/deep-learning/updating-datasets#iterate-through-the-supplemental-data-and-add-data-at-the-corresponding-deep-lake-index,"This approach is recommended when the data updates are sparse. First, let's create a color2 tensor, and then load all the existing Deep Lake filenames into memory. We then iterate through the supplemental data and find the corresponding Deep Lake index to insert the color information.with ds: ds.create_tensor('color2', htype = 'class_label') filenames = [os.path.basename(sample_info['filename']) for sample_info in ds.images.sample_info] for fn in df_color['filename'].values: index = filenames.index(fn) ds.color2[index] = df_color[df_color['filename'] == fn]['color'].values[0]Now we see that ds.summary() shows 4 tensors, each with 11 samples (though the color and color2 tensors have several empty samples): tensor htype shape dtype compression ------- ------- ------- ------- ------- images image (11, 1024, 1024, 3) uint8 jpeg labels class_label (11, 2:7) uint32 None color class_label (11, 0:1) uint32 None color2 class_label (11, 0:1) uint32 None",activeloop 1. Iterate through the Deep Lake samples and append data,https://docs.activeloop.ai/tutorials/deep-learning/updating-datasets#1.-iterate-through-the-deep-lake-samples-and-append-data,"This approach is recommended when most Deep Lake samples are being updated using the supplemental data (dense update).First, we create a color tensor and iterate through the samples. For each sample, we look up the color from the df_color DataFrame and append it to the color tensor. If no color exists for a filename, it is appended as None. We use the filename as the key to perform the lookup, which is available in the ds.images[index].sample_info dictionary.with ds: ds.create_tensor('color', htype = 'class_label') # After creating an empty tensor, the length of the dataset is 0 # Therefore, we iterate over ds.max_view, which is the padded version of the dataset for i, sample in enumerate(ds.max_view): filename = os.path.basename(sample.images.sample_info['filename']) color = df_color[df_color['filename'] == filename]['color'].values ds.color.append(None if len(color)==0 else color)Learn more about dataset lengths and padding here.Now we see that ds.summary() shows 3 tensors, each with 11 samples (though the color tensor has several empty samples): tensor htype shape dtype compression ------- ------- ------- ------- ------- images image (11, 1024, 1024, 3) uint8 jpeg labels class_label (11, 2:7) uint32 None color class_label (11, 0:1) uint32 None",activeloop Delete Samples,https://docs.activeloop.ai/tutorials/deep-learning/updating-datasets#delete-samples,"Rows from a dataset can be deleted using ds.pop().
To delete the row at index 8 we run:ds.pop(8)Now we see that ds.summary() shows 10 rows in the dataset (instead of 11): tensor htype shape dtype compression ------- ------- ------- ------- ------- images image (10, 1024, 1024, 3) uint8 jpeg labels class_label (10, 2:7) uint32 None color class_label (10, 0:1) uint32 None color2 class_label (10, 0:1) uint32 None To replace data with empty data without deleting a row, you can run: ds.color[index] = NoneCongrats! You just learned how to make a variety of updates to Deep Lake datasets! 🎉",activeloop Update Existing Rows without TQL,https://docs.activeloop.ai/tutorials/deep-learning/updating-datasets#update-existing-rows-without-tql,"Originally, we did not specify a color for image 3.jpg. Let's find the index for this image, look at it, and add the color manually. We've already loaded the Deep Lake dataset's filenames into memory above, so we can find the index using:index = filenames.index('3.jpg')Let's visualize the image using PIL. We could also visualize it using ds.visualize() (must pip install 'deeplake[visualizer]') or using the Deep Lake App.Image.fromarray(ds.images[index].numpy())Since the image is white, let's update the color using:ds.color[index] = 'white'",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-video-datasets,"Video datasets are becoming increasingly common in Computer Vision applications. This tutorial demonstrates how to convert a simple video classification dataset into Deep Lake format. Uploading videos in Deep Lake is nearly identical to uploading images, aside from minor differences in sample compression that are described below.When using Deep Lake with videos, make sure to install it using one of the following options:",activeloop Create the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-video-datasets#create-the-deep-lake-dataset,"The first step is to download the small dataset below called running walking.running walking datasetThe dataset has the following folder structure:data_dir|_running |_video_1.mp4 |_video_2.mp4|_walking |_video_3.mp4 |_video_4.mp4Now that you have the data, let's create a Deep Lake Dataset in the ./running_walking_deeplake folder by running:import deeplakefrom PIL import Image, ImageDrawimport numpy as npimport osds = deeplake.empty('./running_walking_deeplake') # Create the dataset locallyNext, let's inspect the folder structure for the source dataset ./running_walking to find the class names and the files that need to be uploaded to the Deep Lake dataset.# Find the class_names and list of files that need to be uploadeddataset_folder = './running_walking'class_names = os.listdir(dataset_folder)fn_vids = []for dirpath, dirnames, filenames in os.walk(dataset_folder): for filename in filenames: fn_vids.append(os.path.join(dirpath, filename))Finally, let's create the tensors and iterate through all the videos in the dataset in order to upload the data in Deep Lake.The key difference between video and image htypes is that Deep Lake does not explicitly perform compression for videos. The sample_compression input in the create_tensor function is used to verify that the compression of the input video file to deeplake.read() matches the sample_compression parameter. If there is a match, the video is uploaded in compressed format. Otherwise, an error is thrown.
Images have a slightly different behavior, because the input image files are stored and re-compressed (if necessary) to the sample_compression format.with ds: ds.create_tensor('videos', htype='video', sample_compression = 'mp4') ds.create_tensor('labels', htype='class_label', class_names = class_names) for fn_vid in fn_vids: label_text = os.path.basename(os.path.dirname(fn_vid)) label_num = class_names.index(label_text) # Append data to tensors ds.videos.append(deeplake.read(fn_vid)) ds.labels.append(np.uint32(label_num))In order for Activeloop Platform to correctly visualize the labels, class_names must be a list of strings, where the numerical labels correspond to the index of the label in the list.",activeloop Inspect the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-video-datasets#inspect-the-deep-lake-dataset,Let's check out the first frame in the second sample from this dataset. video_ind = 1frame_ind = 0# Individual frames are loaded lazilyimg = Image.fromarray(ds.videos[video_ind][frame_ind].numpy())# Load the numeric label and read the class name from ds.labels.info.class_namesds.labels.info.class_names[ds.labels[video_ind].numpy()[frame_ind]]imgYou've successfully created a video dataset in Activeloop Deep Lake.Congrats! You just created a video classification dataset! 🎉,activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-complex-datasets,"Datasets often have multiple labels such as classifications, bounding boxes, segmentations, and others. In order to create an intuitive layout of tensors, it's advisable to create a dataset hierarchy that captures the relationship between the different label types. This can be done with Deep Lake tensor groups.This example shows how to use groups to create a dataset containing image classifications of 'indoor' and 'outdoor', as well as bounding boxes of objects such as 'dog' and 'cat'.",activeloop Create the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-complex-datasets#create-the-deep-lake-dataset," The first step is to download the small dataset below called animals complex.animals complex datasetThe images and their classes are stored in a classification folder where the subfolders correspond to the class names. Bounding boxes for object detection are stored in a separate boxes subfolder, which also contains a list of class names for object detection in the file classes.txt. In YOLO format, images and annotations are typically matched using a common filename such as image -> filename.jpeg and annotation -> filename.txt . The data structure for the dataset is shown below:data_dir|_classification |_indoor |_image1.png |_image2.png |_outdoor |_image3.png |_image4.png|_boxes |_image1.txt |_image2.txt |_image3.txt |_image4.txt |_classes.txtNow that you have the data, let's create a Deep Lake Dataset in the ./animals_complex_deeplake folder by running: import deeplakefrom PIL import Image, ImageDrawimport numpy as npimport osds = deeplake.empty('./animals_complex_deeplake') # Create the dataset locallyNext, let's specify the folder paths containing the classification and object detection data.
It's also helpful to create a list of all of the image files and class names for classification and object detection tasks.classification_folder = './animals_complex/classification'boxes_folder = './animals_complex/boxes'# List of all class names for classificationclass_names = os.listdir(classification_folder)fn_imgs = []for dirpath, dirnames, filenames in os.walk(classification_folder): for filename in filenames: fn_imgs.append(os.path.join(dirpath, filename))# List of all class names for object detection with open(os.path.join(boxes_folder, 'classes.txt'), 'r') as f: class_names_boxes = f.read().splitlines()Since annotations in YOLO are typically stored in text files, it's useful to write a helper function that parses the annotation file and returns numpy arrays with the bounding box coordinates and bounding box classes.def read_yolo_boxes(fn:str): ''' Function reads a label.txt YOLO file and returns a numpy array of yolo_boxes for the box geometry and yolo_labels for the corresponding box labels. ''' box_f = open(fn) lines = box_f.read() box_f.close() # Split each box into a separate lines lines_split = lines.splitlines() yolo_boxes = np.zeros((len(lines_split),4)) yolo_labels = np.zeros(len(lines_split)) # Go through each line and parse data for l, line in enumerate(lines_split): line_split = line.split() yolo_boxes[l,:]=np.array((float(line_split[1]), float(line_split[2]), float(line_split[3]), float(line_split[4]))) yolo_labels[l]=int(line_split[0]) return yolo_boxes, yolo_labelsNext, let's create the groups and tensors for this data. In order to separate the two annotations, a boxes group is created to wrap around the label and bbox tensors which contains the coordinates and labels for the bounding boxes.with ds: # Image ds.create_tensor('images', htype='image', sample_compression='jpeg') # Classification ds.create_tensor('labels', htype='class_label', class_names = class_names) # Object Detection ds.create_group('boxes') ds.boxes.create_tensor('bbox', htype='bbox') ds.boxes.create_tensor('label', htype='class_label', class_names = class_names_boxes) # An alternate approach is to use '/' notation, which automatically creates the boxes group # ds.create_tensor('boxes/bbox', ...) # ds.create_tensor('boxes/label', ...) # Define the format of the bounding boxes ds.boxes.bbox.info.update(coords = {'type': 'fractional', 'mode': 'LTWH'})In order for Activeloop Platform to correctly visualize the labels, class_names must be a list of strings, where the numerical labels correspond to the index of the label in the list.Finally, let's iterate through all the images in the dataset in order to upload the data in Deep Lake. 
The first axis of the boxes.bbox sample array corresponds to the first-and-only axis of the boxes.label sample array (i.e",activeloop Create the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-complex-datasets#create-the-deep-lake-dataset,"if there are 3 boxes in an image, the labels array is 3x1 and the boxes array is 3x4).with ds: #Iterate through the images for fn_img in fn_imgs: img_name = os.path.splitext(os.path.basename(fn_img))[0] fn_box = img_name+'.txt' # Get the class number for the classification label_text = os.path.basename(os.path.dirname(fn_img)) label_num = class_names.index(label_text) # Get the arrays for the bounding boxes and their classes yolo_boxes, yolo_labels = read_yolo_boxes(os.path.join(boxes_folder,fn_box)) # Append data to tensors ds.append({'images': deeplake.read(os.path.join(fn_img)), 'labels': np.uint32(label_num), 'boxes/label': yolo_labels.astype(np.uint32), 'boxes/bbox': yolo_boxes.astype(np.float32) }) ",activeloop Inspect the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-complex-datasets#inspect-the-deep-lake-dataset,"Let's check out the second sample from this dataset and visualize the labels.# Draw bounding boxes and the classfication label for the second imageind = 1img = Image.fromarray(ds.images[ind].numpy())draw = ImageDraw.Draw(img)(w,h) = img.sizeboxes = ds.boxes.bbox[ind].numpy()for b in range(boxes.shape[0]): (xc,yc) = (int(boxes[b][0]*w), int(boxes[b][1]*h)) (x1,y1) = (int(xc-boxes[b][2]*w/2), int(yc-boxes[b][3]*h/2)) (x2,y2) = (int(xc+boxes[b][2]*w/2), int(yc+boxes[b][3]*h/2)) draw.rectangle([x1,y1,x2,y2], width=2) draw.text((x1,y1), ds.boxes.label.info.class_names[ds.boxes.label[ind].numpy()[b]]) draw.text((0,0), ds.labels.info.class_names[ds.labels[ind].numpy()[0]])# Display the image and its bounding boxesimgCongrats! You just created a dataset with multiple types of annotations! 🎉",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-object-detection-datasets,"Object detection using bounding boxes is one of the most common annotation types for Computer Vision datasets. This tutorial demonstrates how to convert an object detection dataset from YOLO format to Deep Lake, and a similar method can be used to convert object detection datasets from other formats such as COCO and PASCAL VOC.",activeloop Create the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-object-detection-datasets#create-the-deep-lake-dataset,"The first step is to download the small dataset below called animals object detection.animals object detection datasetThe dataset has the following folder structure:data_dir|_images |_image_1.jpg |_image_2.jpg |_image_3.jpg |_image_4.jpg|_boxes |_image_1.txt |_image_2.txt |_image_3.txt |_image_4.txt |_classes.txtNow that you have the data, let's create a Deep Lake Dataset in the ./animals_od_deeplakefolder by running:import deeplakefrom PIL import Image, ImageDrawimport numpy as npimport osds = deeplake.empty('./animals_od_deeplake') # Create the dataset locallyNext, let's specify the folder paths containing the images and annotations in the dataset. In YOLO format, images and annotations are typically matched using a common filename such as image -> filename.jpeg and annotation -> filename.txt . 
It's also helpful to create a list of all of the image files and the class names contained in the dataset.img_folder = './animals_od/images'lbl_folder = './animals_od/boxes'# List of all imagesfn_imgs = os.listdir(img_folder)# List of all class nameswith open(os.path.join(lbl_folder, 'classes.txt'), 'r') as f: class_names = f.read().splitlines()Since annotations in YOLO are typically stored in text files, it's useful to write a helper function that parses the annotation file and returns numpy arrays with the bounding box coordinates and bounding box classes.def read_yolo_boxes(fn:str): ''' Function reads a label.txt YOLO file and returns a numpy array of yolo_boxes for the box geometry and yolo_labels for the corresponding box labels. ''' box_f = open(fn) lines = box_f.read() box_f.close() # Split each box into a separate lines lines_split = lines.splitlines() yolo_boxes = np.zeros((len(lines_split),4)) yolo_labels = np.zeros(len(lines_split)) # Go through each line and parse data for l, line in enumerate(lines_split): line_split = line.split() yolo_boxes[l,:]=np.array((float(line_split[1]), float(line_split[2]), float(line_split[3]), float(line_split[4]))) yolo_labels[l]=int(line_split[0]) return yolo_boxes, yolo_labelsFinally, let's create the tensors and iterate through all the images in the dataset in order to upload the data in Deep Lake. Boxes and their labels will be stored in separate tensors, and for a given sample, the first axis of the boxes array corresponds to the first-and-only axis of the labels array (i.e. if there are 3 boxes in an image, the labels array is 3x1 and the boxes array is 3x4).with ds: ds.create_tensor('images', htype='image', sample_compression = 'jpeg') ds.create_tensor('labels', htype='class_label', class_names = class_names) ds.create_tensor('boxes', htype='bbox') # Define the format of the bounding boxes ds.boxes.info.update(coords = {'type': 'fractional', 'mode': 'LTWH'}) for fn_img in fn_imgs: img_name = os.path.splitext(fn_img)[0] fn_box = img_name+'.txt' # Get the arrays for the bounding boxes and their classes yolo_boxes, yolo_labels = read_yolo_boxes(os.path.join(lbl_folder,fn_box)) # Append data to tensors ds.append({'images': deeplake.read(os.path.join(img_folder, fn_img)), 'labels': yolo_labels.astype(np.uint32), 'boxes': yolo_boxes.astype(np.float32) })In order for Activeloop Platform to correctly visualize the labels, class_names must be a list of strings, where the numerical labels correspond to the index of the label in the list.",activeloop Inspect the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-object-detection-datasets#inspect-the-deep-lake-dataset,"Let's check out the third sample from this dataset, which contains two bounding boxes.# Draw bounding boxes for the fourth imageind = 3img = Image.fromarray(ds.images[ind ].numpy())draw = ImageDraw.Draw(img)(w,h) = img.sizeboxes = ds.boxes[ind].numpy()for b in range(boxes.shape[0]): (xc,yc) = (int(boxes[b][0]*w), int(boxes[b][1]*h)) (x1,y1) = (int(xc-boxes[b][2]*w/2), int(yc-boxes[b][3]*h/2)) (x2,y2) = (int(xc+boxes[b][2]*w/2), int(yc+boxes[b][3]*h/2)) draw.rectangle([x1,y1,x2,y2], width=2) draw.text((x1,y1), ds.labels.info.class_names[ds.labels[ind].numpy()[b]])# Display the image and its bounding boxesimgCongrats! You just created a beautiful object detection dataset! 🎉Note: For optimal object detection model performance, it is often important for datasets to contain images with no annotations (See the 4th sample in the dataset above). 
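The exact snippet from the original tutorial is elided just below; as a hedged, purely illustrative sketch (using a hypothetical unannotated image file, and assuming None is accepted in ds.append the same way it is when appending to a single tensor in the dataset-updating tutorial above), an annotation-free sample could be added like this:

# Hedged sketch: appending a sample with an image but no boxes or labels (illustrative only)
with ds:
    ds.append({'images': deeplake.read(os.path.join(img_folder, 'image_5.jpg')),  # hypothetical file
               'labels': None,
               'boxes': None})

# Equivalent alternative using empty arrays whose number of dimensions matches the other samples:
# ds.append({'images': deeplake.read(os.path.join(img_folder, 'image_5.jpg')),
#            'labels': np.zeros((0), dtype=np.uint32),      # len(sample.shape) == 1
#            'boxes': np.zeros((0, 4), dtype=np.float32)})  # len(sample.shape) == 2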
Empty samples can be appended directly (for example, by appending None, as sketched above), or by specifying an empty array whose len(shape) is equal to that of the other samples in the tensor (i.e. len(sample.shape) == 2 for the boxes tensor).",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-time-series-datasets,Deep Lake is an intuitive format for storing large time-series datasets and it offers compression for reducing storage costs. This tutorial demonstrates how to convert time-series data to Deep Lake format and load the data for plotting.,activeloop Create the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-time-series-datasets#create-the-deep-lake-dataset,"The first step is to download the small dataset below called sensor data.This is a subset of a dataset available on kaggle, and it contains the iPhone x,y,z acceleration for 24 users (subjects) under conditions of walking and jogging. The dataset has the folder structure below. subjects_info.csv contains metadata such as height, weight, etc. for each subject, and the sub_n.csv files contain the time-series acceleration data for the nth subject.data_dir|_subjects_info.csv|_motion_data |_walk |_sub_1.csv |_sub_2.csv ... ... |_jog |_sub_1.csv |_sub_2.csv ... ...Now that you have the data, let's create a Deep Lake Dataset in the ./sensor_data_deeplake folder by running: import deeplakeimport pandas as pdimport osfrom tqdm import tqdmimport numpy as npimport matplotlib.pyplot as pltds = deeplake.empty('./sensor_data_deeplake') # Create the dataset locallyNext, let's specify the folder path containing the existing dataset, load the subjects metadata to a Pandas DataFrame, and create a list of all of the time-series files that should be converted to Deep Lake format.dataset_path = './sensor_data'subjects_info = pd.read_csv(os.path.join(dataset_path, 'subjects_info.csv'))fns_series = []for dirpath, dirnames, filenames in os.walk(os.path.join(dataset_path, 'motion_data')): for filename in filenames: fns_series.append(os.path.join(dirpath, filename))Next, let's create the tensors and add relevant metadata, such as the dataset source, the tensor units, and other information. We leverage groups to separate out the primary acceleration data from other user data such as the weight and height of the subjects. with ds: #Update dataset metadata ds.info.update(source = 'https://www.kaggle.com/malekzadeh/motionsense-dataset', notes = 'This is a small subset of the data in the source link') #Create tensors. Setting chunk_compression is optional and it defaults to None ds.create_tensor('acceleration_x', chunk_compression = 'lz4') ds.create_tensor('acceleration_y', chunk_compression = 'lz4') # Save the sampling rate as tensor metadata. Alternatively, # you could also create a 'time' tensor.
ds.acceleration_x.info.update(sampling_rate_s = 0.1) ds.acceleration_y.info.update(sampling_rate_s = 0.1) # Encode activity as text ds.create_tensor('activity', htype = 'text') # Encode 'activity' as numeric labels and convert to text via class_names # ds.create_tensor('activity', htype = 'class_label', class_names = ['xyz']) ds.create_group('subjects_info') ds.subjects_info.create_tensor('age') ds.subjects_info.create_tensor('weight') ds.subjects_info.create_tensor('height') # Save the units of weight as tensor metadata ds.subjects_info.weight.info.update(units = 'kg') ds.subjects_info.height.info.update(units = 'cm')Finally, let's iterate through all the time-series data and upload it to the Deep Lake dataset. with ds: # Iterate through the time series and append data for fn in tqdm(fns_series): # Read the data in the time series df_data = pd.read_csv(fn) # Parse the 'activity' from the file name activity = os.path.basename(os.path.dirname(fn)) # Parse the subject code from the filename and pull the subject info from 'subjects_info' subject_code = int(os.path.splitext(os.path.basename(fn))[0].split('_')[1]) subject_info = subjects_info[subjects_info['code']==subject_code] # Append data to tensors ds.activity.append(activity) ds.subjects_info.age.append(subject_info['age'].values) ds.subjects_info.weight.append(subject_info['weight'].values) ds.subjects_info.height.append(subject_info['height'].values) ds.acceleration_x.append(df_data['userAcceleration.x'].values) ds.acceleration_y.append(df_data['userAcceleration.y'].values)",activeloop Inspect the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-time-series-datasets#inspect-the-deep-lake-dataset,"Let's check out the first sample from this dataset and plot the acceleration time-series.It is noteworthy that the Deep Lake dataset takes 36% less memory than the original dataset due to lz4 chunk compression for the acceleration tensors.s_ind = 0 # Plot the first time seriest_ind = 100 # Plot the first 100 indices in the time series#Plot the x accelerationx_data = ds.acceleration_x[s_ind].numpy()[:t_ind]sampling_rate_x = ds.acceleration_x.info.sampling_rate_splt.plot(np.arange(0, x_data.size)*sampling_rate_x, x_data, label='acceleration_x')#Plot the y accelerationy_data = ds.acceleration_y[s_ind].numpy()[:t_ind]sampling_rate_y = ds.acceleration_y.info.sampling_rate_splt.plot(np.arange(0, y_data.size)*sampling_rate_y, y_data, label='acceleration_y')plt.legend()plt.xlabel('time [s]', fontweight = 'bold')plt.ylabel('acceleration [g]', fontweight = 'bold')plt.title('Weight: {} {}, Height: {} {}'.format(ds.subjects_info.weight[s_ind].numpy()[0], ds.subjects_info.weight.info.units, ds.subjects_info.height[s_ind].numpy()[0], ds.subjects_info.height.info.units), fontweight = 'bold')plt.xlim([0, 10])plt.grid()plt.gcf().set_size_inches(8, 5)plt.show()Congrats! You just converted a time-series dataset to Deep Lake format! 🎉",activeloop This tutorial is also available as a Colab Notebook,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-datasets-with-sequences,"Deep learning with computer vision is increasingly moving in a direction of temporal data, where video frames and their labels are stored as sequences, rather than independent images. 
Models trained on this data directly account for the temporal information content, rather than making predictions frame-by-frame and then fusing them with non-deep-learning techniques.",activeloop Create the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-datasets-with-sequences#create-the-deep-lake-dataset,"The first step is to download the dataset Multiple Object Tracking Benchmark. Additional information about this data and its format is in this GitHub Repo.The dataset has the following folder structure:data_dir|_train |_MOT16_N (Folder with sequence N) |_det |_gt (Folder with ground truth annotations) |_img1 (Folder with images the sequence) |_00000n.jpg (image of n-th frame in sequence) |_MOT16_M ....|_test (same structure as _train)The annotations in gt.txt have the format below, and the last 4 items (conf->z) are not used in the Deep Lake dataset:frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, zNow we're ready to create a Deep Lake Dataset in the ./mot_2016_train folder by running:import deeplakeimport osimport pandas as pdimport numpy as npfrom PIL import Image, ImageDrawds = deeplake.empty('./mot_2015_train') # Create the dataset locallyNext, let's write code to inspect the folder structure for the downloaded dataset and create a list of folders containing the sequences:dataset_folder = '/MOT16/train'sequences = [ item for item in os.listdir(dataset_folder) if os.path.isdir(os.path.join(dataset_folder, item)) ]Finally, let's create the tensors by using the sequence[...] htype, iterate through each sequence, and iterate through each frame within the sequence, one-by-one. Data is appended to sequence[...] htypes using lists. The list contains the whole sample, and the individual elements of the list are the individual data points, such as the image frame, the bounding boxes in a particular frame, etc. See end of code block below.with ds: # Define tensors ds.create_tensor('frames', htype = 'sequence[image]', sample_compression = 'jpg') ds.create_tensor('boxes', htype = 'sequence[bbox]') ds.create_tensor('ids', htype = 'sequence[]', dtype = 'uint32') # Ids are not uploaded as htype = 'class_labels' because they don't contain information about the class of an object. 
ds.boxes.info.update(coords = {'type': 'pixel', 'mode': 'LTWH'}) # Bounding box format is left, top, width, height # Iterate through each sequence for sequence in sequences: # Define root directory for that sequence root_local = os.path.join(dataset_folder,sequence, 'img1') # Get a list of all the image paths img_paths = [os.path.join(root_local, item) for item in sorted(os.listdir(root_local))] # Read the annotations and convert to dataframe with open(os.path.join(dataset_folder,sequence, 'gt', 'gt.txt')) as f: anns = [line.rstrip('\n') for line in f] anns_df = pd.read_csv(os.path.join(dataset_folder, sequence, 'gt', 'gt.txt'), header = None) # Get the frames from the annotations and make sure there are as many annotated frames as images frames = pd.unique(anns_df[0]) assert len(frames) == len(img_paths) # Iterate through each frame and add data to sequence boxes_seq = [] ids_seq = [] for frame in frames: ann_df = anns_df[anns_df[0] == frame] # Find annotations in the specific frame boxes_seq.append(ann_df.loc[:, [2, 3, 4, 5]].to_numpy().astype('float32')) # Box coordinates are in the 3rd-6th column ids_seq.append(ann_df.loc[:, 1].to_numpy().astype('uint32')) # ids are in the second column # Append the sequences to the deeplake dataset ds.append({ 'frames': [deeplake.read(path) for path in img_paths], 'boxes': boxes_seq, 'ids': ids_seq})This dataset identifies objects by id, where each id represents an instance of an object. However, the id does not identify the class of the object, such as person, car, truck, etc. Therefore, the ids were not uploaded as htype = 'class_label'.",activeloop Inspect the Deep Lake Dataset,https://docs.activeloop.ai/tutorials/deep-learning/creating-datasets/creating-datasets-with-sequences#inspect-the-deep-lake-dataset,"Let's check out the 10th frame in the 6th sequence in this dataset. A complete visualization of this dataset is available in Activeloop Platform.# Draw bounding boxes for the 10th frame in the 6th sequenceseq_ind = 5frame_ind = 9img = Image.fromarray(ds.frames[seq_ind][frame_ind].numpy())draw = ImageDraw.Draw(img)(w,h) = img.sizeboxes = ds.boxes[seq_ind][frame_ind].numpy()for b in range(boxes.shape[0]): (x1,y1) = (int(boxes[b][0]), int(boxes[b][1])) (x2,y2) = (int(boxes[b][0]+boxes[b][2]), int(boxes[b][1]+boxes[b][3])) draw.rectangle([x1,y1,x2,y2], width=2, outline = 'red')# Display the frame and its bounding boxesimgCongrats! You just created a dataset using sequences!
🎉",activeloop Azure-Specific Information for Connecting Data to Deep Lake,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/microsoft-azure#azure-specific-information-for-connecting-data-to-deep-lake,Provisioning Federated CredentialsEnabling CORS,activeloop AWS-Specific Information for Connecting Data to Deep Lake,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/amazon-web-services#aws-specific-information-for-connecting-data-to-deep-lake,Provisioning Role-Based AccessEnabling CORS,activeloop Setting up Role-Based Access for AWS S3,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/amazon-web-services/provisioning-role-based-access#setting-up-role-based-access-for-aws-s3,"The most secure method for connecting data from your AWS account to Deep Lake is using Federated Credentials and Role-Based Access, which are set up using the steps below:",activeloop Step 1: Create the AWS IAM Policy,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/amazon-web-services/provisioning-role-based-access#step-1-create-the-aws-iam-policy,"1. Login to the AWS account where the IAM Role will be created and where the data is stored.2. Go to the IAM page in the AWS UI, which can be done by searching 'IAM' in the console and locating the IAM page under Services.3. In the left nav, open Policies under Access management and click Create policy on the right.5. Select the JSON tab instead of Visual editor.6. Replace the code in the editor with the code below. Replace BUCKET_NAME with the bucket names for which you want to grant role-based access:{ 'Version': '2012-10-17', 'Statement': [ { 'Effect': 'Allow', 'Action': [ 's3:ListBucket', 's3:GetBucketLocation', 's3:GetObject', 's3:PutObject', 's3:DeleteObject' ], 'Resource': [ 'arn:aws:s3:::BUCKET_NAME', 'arn:aws:s3:::BUCKET_NAME/*' ] } ]}7. On the bottom right, click Next: Tags (create tags if needed) and Next: Preview, enter the policy name and description, and click Create policy",activeloop Step 2: Create the AWS IAM Role,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/amazon-web-services/provisioning-role-based-access#step-2-create-the-aws-iam-role,"1. On the IAM page, in the left nav, open Roles under Access management, and click Create role on the right.3. Select Custom trust policy from the list of options.4. Replace the policy definition with the code below and click Next{ 'Version': '2012-10-17', 'Statement': [ { 'Sid': 'AllowAssumeRoleFromActiveloopSaaS', 'Effect': 'Allow', 'Principal': { 'AWS': [ 'arn:aws:iam::574987031486:role/activeloop_backend_dev', 'arn:aws:iam::574987031486:role/activeloop_backend', 'arn:aws:iam::597713067985:role/activeloop_backend' ] }, 'Action': 'sts:AssumeRole' } ]}5. From the provided policy list, select the previously created policy from Step 1 and click Next6. Set the name and description for the role and click Create role at the bottom.",activeloop Step 3: Grant Access to AWS KMS Key (only for buckets that are encrypted with customer managed KMS keys),https://docs.activeloop.ai/storage-and-credentials/managed-credentials/amazon-web-services/provisioning-role-based-access#step-3-grant-access-to-aws-kms-key-only-for-buckets-that-are-encrypted-with-customer-managed-kms-key,"1. Navigate to the bucket in the AWS S3 UI2. Open the bucket Properties3. Scroll down to Default encryption and copy the AWS KMS key ARN 4.
In the Policy creation step (Step 1, Sub-step 6), use the JSON below in the policy statement, and replace YOUR_KMS_KEY_ARN with the copied Key ARN for the encrypted bucket.{ 'Version': '2012-10-17', 'Statement': [ { 'Effect': 'Allow', 'Action': [ 's3:ListBucket', 's3:GetBucketLocation', 's3:*Object*' ], 'Resource': [ 'arn:aws:s3:::BUCKET_NAME', 'arn:aws:s3:::BUCKET_NAME/*' ] }, { 'Effect': 'Allow', 'Action': [ 'kms:Encrypt', 'kms:Decrypt', 'kms:ReEncrypt*', 'kms:GenerateDataKey*', 'kms:DescribeKey' ], 'Resource': [ 'YOUR_KMS_KEY_ARN' ] } ]}",activeloop Step 4: Enter the created AWS Role ARN (Step 2) into the Activeloop UI,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/amazon-web-services/provisioning-role-based-access#step-4-enter-the-created-aws-role-arn-step-2-into-the-activeloop-ui,See the first video in the link below:Storing Deep Lake Data in Your Own Cloud,activeloop Deep Lake API Reference,https://docs.deeplake.ai/#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Vector Store Creating a Deep Lake Vector Store Vector Store Operations Vector Store Properties VectorStore.DeepMemory Creating a Deep Memory Deep Memory Operations Deep Memory Properties Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype 3D Bounding Box Htype Intrinsics Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Nifti Htype Point Cloud Htype Mesh Htype Embedding Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read MMDetection High-Performance Features Dataloader Sampler Tensor Query Language Random Split Deep Memory API Reference deeplake deeplake.VectorStore deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log deeplake.core.transform deeplake.core.vectorstore.deep_memory deeplake.random.seed,activeloop Indices and tables,https://docs.deeplake.ai/#indices-and-tables,Index Module Index Search Page,activeloop Deep Lake API Reference¶,https://docs.deeplake.ai/en/v3.1.5/index.html#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Point Cloud Htype Mesh Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset 
Creation Logging Dataset Read Enterprise Features Dataloader Sampler Tensor Query Language Random Split API Reference deeplake deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log deeplake.core.transform,activeloop Indices and tables¶,https://docs.deeplake.ai/en/v3.1.5/index.html#indices-and-tables,Index Module Index Search Page,activeloop Deep Lake API Reference¶,https://docs.deeplake.ai/en/v3.1.0/index.html#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Point Cloud Htype Mesh Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read Enterprise Features Dataloader Sampler Tensor Query Language API Reference deeplake deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log deeplake.core.transform,activeloop Indices and tables¶,https://docs.deeplake.ai/en/v3.1.0/index.html#indices-and-tables,Index Module Index Search Page,activeloop deeplake.core.index¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.index.html#deeplake-core-index," class deeplake.core.index.IndexEntry(value: Union[int, slice, Tuple[int]] = slice(None, None, None))¶ __getitem__(item: Union[int, slice, Tuple[int]])¶ Combines the given item and this IndexEntry. Returns a new IndexEntry representing the composition of the two. Examples >>> IndexEntry()[0:100] IndexEntry(slice(0, 100, None)) Copy to clipboard >>> IndexEntry()[100:200][5] IndexEntry(105) Copy to clipboard >>> IndexEntry()[(0, 1, 2, 3)] IndexEntry((0, 1, 2, 3)) Copy to clipboard >>> IndexEntry()[1, 2, 3] IndexEntry((0, 1, 2, 3)) Copy to clipboard Parameters: item – The desired sub-index to be composed with this IndexEntry. Can be an int, a slice, or a tuple of ints. Returns: The new IndexEntry object. Return type: IndexEntry Raises: TypeError – An integer IndexEntry should not be indexed further. __init__(value: Union[int, slice, Tuple[int]] = slice(None, None, None))¶ __str__()¶ Return str(self). __weakref__¶ list of weak references to the object (if defined) indices(length: int)¶ Generates the sequence of integer indices for a target of a given length. is_trivial()¶ Checks if an IndexEntry represents the entire slice length(parent_length: int) → int¶ Returns the length of an IndexEntry given the length of the parent it is indexing. Examples >>> IndexEntry(slice(5, 10)).length(100) 5 >>> len(list(range(100))[5:10]) 5 >>> IndexEntry(slice(5, 100)).length(50) 45 >>> len(list(range(50))[5:100]) 45 >>> IndexEntry(0).length(10) 1 Copy to clipboard Parameters: parent_length (int) – The length of the target that this IndexEntry is indexing. Returns: The length of the index if it were applied to a parent of the given length. 
Return type: int subscriptable()¶ Returns whether an IndexEntry can be further subscripted. validate(parent_length: int)¶ Checks that the index is not accessing values outside the range of the parent. class deeplake.core.index.Index(item: Union[int, slice, Tuple[int], Index, List[IndexEntry]] = slice(None, None, None))¶ __getitem__(item: Union[int, slice, List[int], Tuple[Union[int, slice, Tuple[int]]], Index])¶ Returns a new Index representing a subscripting with the given item. Modeled after NumPy's advanced integer indexing. See: https://numpy.org/doc/stable/reference/arrays.indexing.html Examples >>> Index([5, slice(None)])[5] Index([5, 5]) Copy to clipboard >>> Index([5])[5:6] Index([5, slice(5, 6)]) Copy to clipboard >>> Index()[0, 1, 2:5, 3] Index([0, 1, slice(2, 5), 3]) Copy to clipboard >>> Index([slice(5, 6)])[(0, 1, 2:5, 3),] Index([(5, 1, slice(2, 5), 3)]) Copy to clipboard Parameters: item – The contents of the subscript expression to add to this Index. Returns: The Index representing the result of the subscript operation. Return type: Index Raises: TypeError – Given item should be another Index, or compatible with NumPy's advanced integer indexing. __init__(item: Union[int, slice, Tuple[int], Index, List[IndexEntry]] = slice(None, None, None))¶ Initializes an Index from an IndexValue, another Index, or the values from another Index. Represents a list of IndexEntry objects corresponding to indexes into each axis of an ndarray. __repr__()¶ Return repr(self). __str__()¶ Return str(self). __weakref__¶ list of weak references to the object (if defined) apply(samples: List[ndarray])¶ Applies an Index to a list of ndarray samples with the same number of entries as the first entry in the Index. apply_squeeze(samples: List[ndarray])¶ Applies the primary axis of an Index to a list of ndarray samples",activeloop deeplake.core.index¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.index.html#deeplake-core-index,"Will either return the list as given, or return the first sample . compose_at(item: Union[int, slice, Tuple[int]], i: Optional[int] = None)¶ Returns a new Index representing the addition of an IndexValue, or the composition with a given axis. Examples >>> Index([slice(None), slice(None)]).compose_at(5) Index([slice(None), slice(None), 5]) Copy to clipboard >>> Index([slice(None), slice(5, 10), slice(None)]).compose_at(3, 1) Index([slice(None), 8, slice(None)]) Copy to clipboard Parameters: item (IndexValue) – The value to append or compose with the Index. i (int, optional) – The axis to compose with the given item. Defaults to None, meaning that the item will be appended instead. Returns: The result of the addition or composition. Return type: Index find_axis(offset: int = 0)¶ Returns the index for the nth subscriptable axis in the values of an Index. Parameters: offset (int) – The number of subscriptable axes to skip before returning. Defaults to 0, meaning that the first valid axis is returned. Returns: The index of the found axis, or None if no match is found. Return type: int is_trivial()¶ Checks if an Index is equivalent to the trivial slice [:], aka slice(None). length(parent_length: int)¶ Returns the primary length of an Index given the length of the parent it is indexing. See: IndexEntry.length() validate(parent_length)¶ Checks that the index is not accessing values outside the range of the parent. 
deeplake.core.index.merge_slices(existing_slice: slice, new_slice: slice) → slice¶ Compose two slice objects Given an iterable x, the following should be equivalent: x[existing_slice][new_slice] == x[merge_slices(existing_slice, new_slice)] Parameters: existing_slice (slice) – The existing slice to be restricted. new_slice (slice) – The new slice to be applied to the existing slice. Returns: the composition of the given slices Return type: slice Raises: NotImplementedError – Composing slices with negative values is not supported. Negative indexing for slices is only supported for the first slice. deeplake.core.index.slice_at_int(s: slice, i: int)¶ Returns the i th element of a slice s. Examples >>> slice_at_int(slice(None), 10) 10 Copy to clipboard >>> slice_at_int(slice(10, 20, 2), 3) 16 Copy to clipboard Parameters: s (slice) – The slice to index into. i (int) – The integer offset into the slice. Returns: The index corresponding to the offset into the slice. Return type: int Raises: NotImplementedError – Nontrivial slices should not be indexed with negative integers. IndexError – If step is negative and start is not greater than stop. deeplake.core.index.slice_length(s: slice, parent_length: int) → int¶ Returns the length of a slice given the length of its parent.",activeloop deeplake.api.dataset¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.api.dataset.html#deeplake-api-dataset,"class deeplake.api.dataset.dataset¶ static exists(path: Union[str, pathlib.Path], creds: Optional[dict] = None, token: Optional[str] = None) → bool¶ See deeplake.exists(). static empty(path: Union[str, pathlib.Path], overwrite: bool = False, public: bool = False, memory_cache_size: int = DEFAULT_MEMORY_CACHE_SIZE, local_cache_size: int = DEFAULT_LOCAL_CACHE_SIZE, creds: Optional[dict] = None, token: Optional[str] = None) → Dataset¶ See deeplake.empty(). static load(path: Union[str, pathlib.Path], read_only: Optional[bool] = None, memory_cache_size: int = DEFAULT_MEMORY_CACHE_SIZE, local_cache_size: int = DEFAULT_LOCAL_CACHE_SIZE, creds: Optional[dict] = None, token: Optional[str] = None, verbose: bool = True, access_method: str = 'stream') → Dataset¶ See deeplake.load(). static rename(old_path: Union[str, pathlib.Path], new_path: Union[str, pathlib.Path], creds: Optional[dict] = None, token: Optional[str] = None) → Dataset¶ See deeplake.rename(). static delete(path: Union[str, pathlib.Path], force: bool = False, large_ok: bool = False, creds: Optional[dict] = None, token: Optional[str] = None, verbose: bool = False) → None¶ See deeplake.delete(). static like(dest: Union[str, pathlib.Path], src: Union[str, Dataset, pathlib.Path], tensors: Optional[List[str]] = None, overwrite: bool = False, creds: Optional[dict] = None, token: Optional[str] = None, public: bool = False) → Dataset¶ See deeplake.like(). static copy(src: Union[str, pathlib.Path, Dataset], dest: Union[str, pathlib.Path], tensors: Optional[List[str]] = None, overwrite: bool = False, src_creds=None, src_token=None, dest_creds=None, dest_token=None, num_workers: int = 0, scheduler='threaded', progressbar=True)¶ See deeplake.copy(). static deepcopy(src: Union[str, pathlib.Path], dest: Union[str, pathlib.Path], tensors: Optional[List[str]] = None, overwrite: bool = False, src_creds=None, src_token=None, dest_creds=None, dest_token=None, num_workers: int = 0, scheduler='threaded', progressbar=True, public: bool = False, verbose: bool = True)¶ See deeplake.deepcopy(). 
static connect(src_path: str, creds_key: str, dest_path: Optional[str], org_id: Optional[str], ds_name: Optional[str], token: Optional[str])¶ See deeplake.connect(). static ingest(src: Union[str, pathlib.Path], dest: Union[str, pathlib.Path], images_compression: str = 'auto', dest_creds: dict = None, progressbar: bool = True, summary: bool = True, **dataset_kwargs) → Dataset¶ See deeplake.ingest(). static ingest_kaggle(tag: str, src: Union[str, pathlib.Path], dest: Union[str, pathlib.Path], exist_ok: bool = False, images_compression: str = 'auto', dest_creds: dict = None, kaggle_credentials: dict = None, progressbar: bool = True, summary: bool = True, **dataset_kwargs) → Dataset¶ See deeplake.ingest_kaggle(). static ingest_dataframe(src, dest: Union[str, pathlib.Path], dest_creds: Optional[Dict] = None, progressbar: bool = True, **dataset_kwargs)¶ See deeplake.ingest_dataframe(). static list(workspace: str = '', token: Optional[str] = None) → None¶ See deeplake.list().",activeloop deeplake.api.info¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.api.info.html#module-deeplake.api.info,"class deeplake.api.info.Info¶ Contains optional key-value pairs that can be stored for datasets/tensors. clear()¶ Clear info. get(key, default=None)¶ Get value for key from info. items()¶ Return all items in info. keys()¶ Return all keys in info. property nbytes¶ Returns size of info stored in bytes. pop(key, default=None)¶ Pop item from info by key. popitem()¶ Pop item from info. replace_with(d)¶ Replace info with another dictionary. setdefault(key, default=None)¶ Set default value for a key in info. update(*args, **kwargs)¶ Update info. values()¶ Return all values in info.",activeloop deeplake.api.link¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.api.link.html#deeplake-api-link,"deeplake.api.link.link(path: str, creds_key: Optional[str] = None) → LinkedSample¶ See deeplake.link().",activeloop deeplake.api.read¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.api.read.html#deeplake-api-read,"deeplake.api.read.read(path: str, verify: bool = False, creds: Optional[Dict] = None, compression: Optional[str] = None, storage: Optional[StorageProvider] = None) → Sample¶ See deeplake.read().",activeloop deeplake.api.tiled¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.api.tiled.html#deeplake-api-tiled,"deeplake.api.tiled.tiled(sample_shape: Tuple[int, ...], tile_shape: Optional[Tuple[int, ...]] = None, dtype: Union[str, np.dtype] = np.dtype('uint8'))¶ See deeplake.tiled().",activeloop deeplake.util.shape_interval¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.util.shape-interval.html#deeplake-util-shape-interval,"class deeplake.util.shape_interval.ShapeInterval¶ __init__(lower: Sequence[int], upper: Optional[Sequence[int]] = None)¶ Shapes in Deep Lake are best represented as intervals, this is to support dynamic tensors. Instead of having a single tuple of integers representing shape, we use 2 tuples of integers to represent the lower and upper bounds of the representing shape. If lower == upper for all cases, the shape is considered 'fixed'. If lower != upper for any cases, the shape is considered 'dynamic'. Parameters: lower (sequence) – Sequence of integers that represent the lower-bound shape. upper (sequence) – Sequence of integers that represent the upper-bound shape. If None is provided, lower is used as upper (implicitly fixed-shape). 
Raises: InvalidShapeIntervalError – If the provided lower/upper bounds are incompatible to represent a shape.",activeloop deeplake.util.remove_cache¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.util.remove-cache.html#deeplake-util-remove-cache,"deeplake.util.remove_cache.remove_memory_cache(storage: StorageProvider)¶ Removes the memory cache. deeplake.util.remove_cache.get_base_storage(storage: StorageProvider)¶ Removes all layers of caching and returns the underlying storage. deeplake.util.remove_cache.get_dataset_with_zero_size_cache(ds)¶ Returns a dataset with same storage but cache size set to zero. deeplake.util.remove_cache.create_read_copy_dataset(dataset, commit_id: Optional[str] = None)¶ Creates a read-only copy of the given dataset object, without copying underlying data. Parameters: dataset – The Dataset object to copy. commit_id – The commit id to checkout the new read-only copy to. Returns: A new Dataset object in read-only mode.",activeloop deeplake.util.notebook¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.util.notebook.html#deeplake-util-notebook,deeplake.util.notebook.is_notebook()¶ Whether running in a notebook. deeplake.util.notebook.is_jupyter()¶ Whether running in a Jupyter notebook. deeplake.util.notebook.is_colab()¶ Whether running in a colab notebook.,activeloop deeplake.util.exceptions¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.util.exceptions.html#deeplake-util-exceptions,"Check if the name or id is correct.')¶ Bases: Exception class deeplake.util.exceptions.BadRequestException(message)¶ Bases: Exception class deeplake.util.exceptions.OverLimitException(message='You are over the allowed limits for this operation.')¶ Bases: Exception class deeplake.util.exceptions.ServerException(message='Internal Activeloop server error.')¶ Bases: Exception class deeplake.util.exceptions.BadGatewayException(message='Invalid response from Activeloop server.')¶ Bases: Exception class deeplake.util.exceptions.GatewayTimeoutException(message='Activeloop server took too long to respond.')¶ Bases: Exception class deeplake.util.exceptions.WaitTimeoutException(message='Timeout waiting for server state update.')¶ Bases: Exception class deeplake.util.exceptions.LockedException(message='The resource is currently locked.')¶ Bases: Exception class deeplake.util.exceptions.UnexpectedStatusCodeException(message)¶ Bases: Exception class deeplake.util.exceptions.EmptyTokenException(message='The authentication token is empty.')¶ Bases: Exception class deeplake.util.exceptions.S3Error¶ Bases: Exception Catchall for all errors encountered while working with S3 class deeplake.util.exceptions.S3GetError¶ Bases: S3Error Catchall for all errors encountered while working getting an object from S3 class deeplake.util.exceptions.S3SetError¶ Bases: S3Error Catchall for all errors encountered while working setting an object in S3 class deeplake.util.exceptions.S3DeletionError¶ Bases: S3Error Catchall for all errors encountered while working deleting an object in S3 class deeplake.util.exceptions.S3ListError¶ Bases: S3Error Catchall for all errors encountered while retrieving a list of objects present in S3 class deeplake.util.exceptions.UnsupportedCompressionError(compression: Optional[str], htype: Optional[str] = None)¶ Bases: CompressionError class deeplake.util.exceptions.SampleCompressionError(sample_shape: Tuple[int, ...], compression_format: Optional[str], message: str)¶ Bases: CompressionError class deeplake.util.exceptions.SampleDecompressionError¶ Bases: CompressionError class 
deeplake.util.exceptions.InvalidImageDimensions(actual_dims, expected_dims)¶ Bases: Exception class deeplake.util.exceptions.TensorUnsupportedSampleType¶ Bases: Exception class deeplake.util.exceptions.MetaError¶ Bases: Exception class deeplake.util.exceptions.MetaDoesNotExistError(key: str)¶ Bases: MetaError class deeplake.util.exceptions.MetaAlreadyExistsError(key: str, required_meta: dict)¶ Bases: MetaError class deeplake.util.exceptions.MetaInvalidKey(name: str, available_keys: List[str])¶ Bases: MetaError class deeplake.util.exceptions.MetaInvalidRequiredMetaKey(key: str, subclass_name: str)¶ Bases: MetaError class deeplake.util.exceptions.TensorMetaInvalidHtype(htype: str, available_htypes: Sequence[str])¶ Bases: MetaError class deeplake.util.exceptions.TensorMetaInvalidHtypeOverwriteValue(key: str, value: Any, explanation: str = '')¶ Bases: MetaError class deeplake.util.exceptions.TensorMetaMissingRequiredValue(htype: str, key: Union[str, List[str]])¶ Bases: MetaError class deeplake.util.exceptions.TensorMetaInvalidHtypeOverwriteKey(htype: str, key: str, available_keys: Sequence[str])¶ Bases: MetaError class deeplake.util.exceptions.TensorDtypeMismatchError(expected: Union[dtype, str], actual: str, htype: str)¶ Bases: MetaError class deeplake.util.exceptions.InvalidTensorLinkError(msg='Invalid tensor link.')¶ Bases: MetaError class deeplake.util.exceptions.TensorMetaMutuallyExclusiveKeysError(keys: Optional[List[str]] = None, custom_message: Optional[str] = None)¶ Bases: MetaError class deeplake.util.exceptions.ReadOnlyModeError(custom_message: Optional[str] = None)¶ Bases: Exception class deeplake.util.exceptions.TransformError¶ Bases: Exception class deeplake.util.exceptions.FilterError¶ Bases: Exception class deeplake.util.exceptions.InvalidInputDataError(operation)¶ Bases: TransformError class deeplake.util.exceptions.UnsupportedSchedulerError(scheduler)¶ Bases: TransformError class deeplake.util.exceptions.TensorMismatchError(tensors, output_keys, skip_ok=False)¶ Bases: TransformError class deeplake.util.exceptions.InvalidOutputDatasetError(message='The output Dataset to transform should not be `read_only`.')¶ Bases: TransformError class deeplake.util.exceptions.InvalidTransformDataset(message='The TransformDataset (2nd argument to transform function) of one of the functions is invalid ",activeloop deeplake.util.exceptions¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.util.exceptions.html#deeplake-util-exceptions," class deeplake.util.exceptions.ExternalCommandError(command: str, status: int)¶ Bases: Exception class deeplake.util.exceptions.KaggleError¶ Bases: Exception class deeplake.util.exceptions.KaggleMissingCredentialsError(env_var_name: str)¶ Bases: KaggleError class deeplake.util.exceptions.KaggleDatasetAlreadyDownloadedError(tag: str, path: str)¶ Bases: KaggleError class deeplake.util.exceptions.InvalidPathException(directory)¶ Bases: Exception class deeplake.util.exceptions.AutoCompressionError(directory)¶ Bases: Exception class deeplake.util.exceptions.InvalidFileExtension(directory)¶ Bases: Exception class deeplake.util.exceptions.SamePathException(directory)¶ Bases: Exception class deeplake.util.exceptions.TensorInvalidSampleShapeError(shape: Sequence[int], expected_dims: int)¶ Bases: Exception class deeplake.util.exceptions.TensorMetaMissingKey(key: str, meta: dict)¶ Bases: Exception class deeplake.util.exceptions.TensorDoesNotExistError(tensor_name: str)¶ Bases: KeyError, AttributeError class deeplake.util.exceptions.TensorAlreadyExistsError(key: str)¶ Bases: 
Exception class deeplake.util.exceptions.TensorGroupDoesNotExistError(group_name: str)¶ Bases: KeyError class deeplake.util.exceptions.TensorGroupAlreadyExistsError(key: str)¶ Bases: Exception class deeplake.util.exceptions.InvalidTensorNameError(name: str)¶ Bases: Exception class deeplake.util.exceptions.InvalidTensorGroupNameError(name: str)¶ Bases: Exception class deeplake.util.exceptions.DynamicTensorNumpyError(key: str, index, property_key: str)¶ Bases: Exception class deeplake.util.exceptions.InvalidShapeIntervalError(message: str, lower: Optional[Sequence[int]] = None, upper: Optional[Sequence[int]] = None)¶ Bases: Exception class deeplake.util.exceptions.InvalidKeyTypeError(item: Any)¶ Bases: TypeError class deeplake.util.exceptions.UnsupportedTensorTypeError(item: Any)¶ Bases: TypeError class deeplake.util.exceptions.InvalidBytesRequestedError¶ Bases: Exception class deeplake.util.exceptions.ProviderListEmptyError¶ Bases: Exception class deeplake.util.exceptions.DirectoryAtPathException¶ Bases: Exception class deeplake.util.exceptions.FileAtPathException(path)¶ Bases: Exception class deeplake.util.exceptions.ProviderSizeListMismatch¶ Bases: Exception class deeplake.util.exceptions.ModuleNotInstalledException(message)¶ Bases: Exception class deeplake.util.exceptions.LoginException(message='Error while logging in, invalid auth token. Please try logging in again.')¶ Bases: Exception class deeplake.util.exceptions.UserNotLoggedInException(message='')¶ Bases: Exception class deeplake.util.exceptions.InvalidHubPathException(path)¶ Bases: Exception class deeplake.util.exceptions.PathNotEmptyException(use_hub=True)¶ Bases: Exception class deeplake.util.exceptions.AuthenticationException(message='Authentication failed. Please try logging in again.')¶ Bases: Exception class deeplake.util.exceptions.AuthorizationException(message='You are not authorized to access this resource on Activeloop Server.', response=None)¶ Bases: Exception class deeplake.util.exceptions.InvalidPasswordException(message='The password you provided was invalid.')¶ Bases: AuthorizationException class deeplake.util.exceptions.CouldNotCreateNewDatasetException(path: str)¶ Bases: AuthorizationException class deeplake.util.exceptions.ResourceNotFoundException(message='The resource you are looking for was not found",activeloop deeplake.util.exceptions¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.util.exceptions.html#deeplake-util-exceptions,"All the tensors should have equal length for it to be valid.')¶ Bases: TransformError class deeplake.util.exceptions.HubComposeEmptyListError(message='Cannot deeplake.compose an empty list.')¶ Bases: TransformError class deeplake.util.exceptions.HubComposeIncompatibleFunction(index: int)¶ Bases: TransformError class deeplake.util.exceptions.DatasetUnsupportedPytorch(reason)¶ Bases: Exception class deeplake.util.exceptions.CorruptedMetaError¶ Bases: Exception class deeplake.util.exceptions.ChunkEngineError¶ Bases: Exception class deeplake.util.exceptions.FullChunkError¶ Bases: ChunkEngineError class deeplake.util.exceptions.ChunkIdEncoderError¶ Bases: ChunkEngineError class deeplake.util.exceptions.ChunkSizeTooSmallError(message='If the size of the last chunk is given, it must be smaller than the requested chunk size.')¶ Bases: ChunkEngineError class deeplake.util.exceptions.DatasetHandlerError(message)¶ Bases: Exception class deeplake.util.exceptions.MemoryDatasetCanNotBePickledError¶ Bases: Exception class deeplake.util.exceptions.CorruptedSampleError(compression)¶ Bases: Exception 
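Most of these exceptions surface during ordinary dataset handling rather than being raised directly. A small hedged sketch of catching one of them follows; the path below is a placeholder that intentionally does not point to a dataset.

import deeplake
from deeplake.util.exceptions import DatasetHandlerError

try:
    ds = deeplake.load('./missing_ds')       # no dataset exists at this placeholder path
except DatasetHandlerError as err:
    print(f'Could not load dataset: {err}')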
class deeplake.util.exceptions.VersionControlError¶ Bases: Exception class deeplake.util.exceptions.MergeError¶ Bases: Exception class deeplake.util.exceptions.MergeNotSupportedError¶ Bases: MergeError class deeplake.util.exceptions.MergeMismatchError(tensor_name, mismatch_type, original_value, target_value)¶ Bases: MergeError class deeplake.util.exceptions.MergeConflictError(conflict_tensors=None, message='')¶ Bases: MergeError class deeplake.util.exceptions.CheckoutError¶ Bases: VersionControlError class deeplake.util.exceptions.CommitError¶ Bases: VersionControlError class deeplake.util.exceptions.EmptyCommitError¶ Bases: CommitError class deeplake.util.exceptions.TensorModifiedError¶ Bases: Exception class deeplake.util.exceptions.GCSDefaultCredsNotFoundError¶ Bases: Exception class deeplake.util.exceptions.InvalidOperationError(method: str, type: str)¶ Bases: Exception class deeplake.util.exceptions.AgreementError¶ Bases: Exception class deeplake.util.exceptions.AgreementNotAcceptedError(agreements=None)¶ Bases: AgreementError class deeplake.util.exceptions.NotLoggedInError(msg=None)¶ Bases: AgreementError class deeplake.util.exceptions.RenameError(msg='Only name of the dataset can be different in new path.')¶ Bases: Exception class deeplake.util.exceptions.BufferError¶ Bases: Exception class deeplake.util.exceptions.InfoError¶ Bases: Exception class deeplake.util.exceptions.OutOfChunkCountError¶ Bases: Exception class deeplake.util.exceptions.OutOfSampleCountError¶ Bases: Exception class deeplake.util.exceptions.SampleHtypeMismatchError(htype, sample_type)¶ Bases: Exception class deeplake.util.exceptions.EmptyTensorError(message)¶ Bases: Exception class deeplake.util.exceptions.DatasetViewSavingError¶ Bases: Exception class deeplake.util.exceptions.ManagedCredentialsNotFoundError(org_id, creds_key)¶ Bases: Exception class deeplake.util.exceptions.UnableToReadFromUrlError(url, status_code)¶ Bases: Exception class deeplake.util.exceptions.InvalidTokenException¶ Bases: Exception class deeplake.util.exceptions.TokenPermissionError¶ Bases: Exception ",activeloop deeplake.auto.unstructured.kaggle¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.auto.unstructured.kaggle.html#deeplake-auto-unstructured-kaggle,"deeplake.auto.unstructured.kaggle.download_kaggle_dataset(tag: str, local_path: str, kaggle_credentials: Optional[dict] = None, exist_ok: bool = False)¶ Calls the kaggle API (https://www.kaggle.com/docs/api) to download a kaggle dataset and unzip it's contents. Parameters: tag (str) – Kaggle dataset tag. Example: 'coloradokb/dandelionimages' points to https://www.kaggle.com/coloradokb/dandelionimages local_path (str) – Path where the kaggle dataset will be downloaded and unzipped. Only local path downloading is supported. kaggle_credentials (dict) – Credentials are gathered from the environment variables or ~/kaggle.json. If those don't exist, the kaggle_credentials argument will be used. exist_ok (bool) – If the kaggle dataset was already downloaded, and exist_ok is True, no error is thrown. Raises: KaggleMissingCredentialsError – If no kaggle credentials are found. 
KaggleDatasetAlreadyDownloadedError – If the dataset tag already exists in local_path.",activeloop Deep Lake API Reference¶,https://docs.deeplake.ai/en/v3.0.16/index.html#deep-lake-api-reference,Deep Lake is an open-source database for AI.Getting Started Installation Key Concepts Datasets Creating Datasets Loading Datasets Deleting and Renaming Datasets Copying Datasets Dataset Operations Dataset Visualization Dataset Credentials Dataset Properties Dataset Version Control Dataset Views Tensors Creating Tensors Deleting and Renaming Tensors Adding and deleting samples Retrieving samples Tensor Properties Info Video features Htypes Image Htype Video Htype Audio Htype Class Label Htype Bounding Box Htype Segmentation Mask Htype Binary Mask Htype COCO Keypoints Htype Point Htype Polygon Htype Point Cloud Htype Mesh Htype Sequence htype Link htype Compressions Sample Compression Chunk Compression PyTorch and Tensorflow Support Utility Functions General Functions Making Deep Lake Samples Parallelism Integrations Weights and Biases Logging Dataset Creation Logging Dataset Read Experimental API Dataloader Tensor Query Language API Reference deeplake deeplake.core deeplake.core.dataset deeplake.core.tensor deeplake.api deeplake.auto deeplake.util deeplake.client.log,activeloop Indices and tables¶,https://docs.deeplake.ai/en/v3.0.16/index.html#indices-and-tables,Index Module Index Search Page,activeloop deeplake.core.sample¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.sample.html#deeplake-core-sample,"class deeplake.core.sample.Sample¶ __init__(path: str | None = None, array: ndarray | None = None, buffer: bytes | memoryview | None = None, compression: str | None = None, verify: bool = False, shape: Tuple[int] | None = None, dtype: str | None = None, creds: Dict | None = None, storage: StorageProvider | None = None)¶ Represents a single sample for a tensor. Provides all important meta information in one place. Note If self.is_lazy is True, this Sample doesn't actually have any data loaded. To read this data, simply try to read it into a numpy array (sample.array) Parameters: path (str) – Path to a sample stored on the local file system that represents a single sample. If path is provided, array should not be. Implicitly makes self.is_lazy == True. array (np.ndarray) – Array that represents a single sample. If array is provided, path should not be. Implicitly makes self.is_lazy == False. buffer – (bytes): Byte buffer that represents a single sample. If compressed, compression argument should be provided. compression (str) – Specify in case of byte buffer. verify (bool) – If a path is provided, verifies the sample if True. shape (Tuple[int]) – Shape of the sample. dtype (optional, str) – Data type of the sample. creds (optional, Dict) – Credentials for s3, gcp and http urls. storage (optional, StorageProvider) – Storage provider. Raises: ValueError – Cannot create a sample from both a path and array. property array: ndarray¶ Return numpy array corresponding to the sample. Decompresses the sample if necessary. Example >>> sample = deeplake.read('./images/dog.jpg') >>> arr = sample.array >>> arr.shape (323, 480, 3) Copy to clipboard compressed_bytes(compression: str | None) → bytes¶ Returns this sample as compressed bytes. Note If this sample is pointing to a path and the requested compression is the same as it's stored in, the data is returned without re-compressing. Parameters: compression (Optional[str]) – self.array will be compressed into this format. 
If compression is None, return uncompressed_bytes(). Returns: Bytes for the compressed sample. Contains all metadata required to decompress within these bytes. Return type: bytes Raises: ValueError – On recompression of unsupported formats. uncompressed_bytes() → bytes | None¶ Returns uncompressed bytes.",activeloop deeplake.core.linked_sample¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.linked-sample.html#deeplake-core-linked-sample,"class deeplake.core.linked_sample.LinkedSample(path: str, creds_key: str | None = None)¶ Represents a sample that is initialized using external links. See deeplake.link().",activeloop deeplake.core.partial_sample¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.partial-sample.html#deeplake-core-partial-sample,"class deeplake.core.partial_sample.PartialSample(sample_shape: Tuple[int, ...], tile_shape: Tuple[int, ...] | None = None, dtype: str | dtype = dtype('uint8'))¶ Represents a sample that is initialized by just shape and the data is updated later.",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage,"Called at the end of methods which write data, to ensure consistency as a default . set_bytes(path: str, value: bytes, start_byte: int | None = None, overwrite: bool | None = False)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. start_byte (int, optional) – If only specific bytes starting from start_byte are to be assigned. overwrite (boolean, optional) – If the value is True, if there is an object present at the path it is completely overwritten, without fetching it's data. Raises: InvalidBytesRequestedError – If start_byte < 0. ReadOnlyModeError – If the provider is in read-only mode. LRU Cache¶ class deeplake.core.storage.LRUCache¶ Bases: StorageProvider LRU Cache that uses StorageProvider for caching __delitem__(path: str)¶ Deletes the object present at the path from the cache and the underlying storage. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ If item is in cache_storage, retrieves from there and returns. If item isn't in cache_storage, retrieves from next storage, stores in cache_storage (if possible) and returns. Parameters: path (str) – The path relative to the root of the underlying storage. Raises: KeyError – if an object is not found at the path. Returns: The bytes of the object present at the path. Return type: bytes __getstate__() → Dict[str, Any]¶ Returns the state of the cache, for pickling __init__(cache_storage: StorageProvider, next_storage: StorageProvider | None, cache_size: int)¶ Initializes the LRUCache. It can be chained with other LRUCache objects to create multilayer caches. Parameters: cache_storage (StorageProvider) – The storage being used as the caching layer of the cache. This should be a base provider such as MemoryProvider, LocalProvider or S3Provider but not another LRUCache. next_storage (StorageProvider) – The next storage layer of the cache. This can either be a base provider (i.e. it is the final storage) or another LRUCache (i.e. in case of chained cache). While reading data, all misses from cache would be retrieved from here. While writing data, the data will be written to the next_storage when cache_storage is full or flush is called. 
cache_size (int) – The total space that can be used from the cache_storage in bytes. This number may be less than the actual space available on the cache_storage. Setting it to a higher value than actually available space may lead to unexpected behaviors. __iter__()¶ Generator function that iterates over the keys of the cache and the underlying storage. Yields: str – the path of the object that it is iterating over, relative to the root of the provider",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage," Base Storage Provider¶ class deeplake.core.storage.StorageProvider¶ abstract __delitem__(path: str)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. abstract __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. abstract __iter__()¶ Generator function that iterates over the keys of the provider. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. abstract __len__()¶ Returns the number of files present inside the root of the provider. Returns: the number of files present inside the root. Return type: int abstract __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. __weakref__¶ list of weak references to the object (if defined) abstract _all_keys() → Set[str]¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set _is_hub_path = False¶ An abstract base class for implementing a storage provider. To add a new provider using Provider, create a subclass and implement all 5 abstract methods below. check_readonly()¶ Raises an exception if the provider is in read-only mode. abstract clear(prefix='')¶ Delete the contents of the provider. copy()¶ Returns a copy of the provider. Returns: A copy of the provider. Return type: StorageProvider disable_readonly()¶ Disables read-only mode for the provider. enable_readonly()¶ Enables read-only mode for the provider. flush()¶ Only needs to be implemented for caches. Flushes the data to the next storage provider. Should be a no op for Base Storage Providers like local, s3, azure, gcs, etc. get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. 
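The listing above spells out the contract a storage backend must satisfy. Below is a minimal, hedged sketch of a dictionary-backed subclass that implements the abstract methods named here; it is illustrative only, and the exact set of abstract methods may vary between Deep Lake versions.

from typing import Set
from deeplake.core.storage import StorageProvider

class DictProvider(StorageProvider):
    """Toy in-memory provider; illustrative only, not an official backend."""

    def __init__(self):
        super().__init__()
        self._store = {}

    def __getitem__(self, path: str) -> bytes:
        return self._store[path]         # KeyError if the object is missing, as documented

    def __setitem__(self, path: str, value: bytes):
        self.check_readonly()            # raises if the provider is in read-only mode
        self._store[path] = value

    def __delitem__(self, path: str):
        self.check_readonly()
        del self._store[path]

    def __iter__(self):
        yield from self._store           # yields keys relative to the provider root

    def __len__(self) -> int:
        return len(self._store)

    def _all_keys(self) -> Set[str]:
        return set(self._store)

    def clear(self, prefix=''):
        self._store = {k: v for k, v in self._store.items() if not k.startswith(prefix)}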
maybe_flush()¶ Flush cache if autoflush has been enabled",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage,"__len__()¶ Returns the number of files present in the cache and the underlying storage. Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: bytes | DeepLakeMemoryObject)¶ Puts the item in the cache_storage (if possible), else writes to next_storage. Parameters: path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. __setstate__(state: Dict[str, Any])¶ Recreates a cache with the same configuration as the state. Parameters: state (dict) – The state to be used to recreate the cache. Note While restoring the cache, we reset its contents. In case the cache storage was local/s3 and is still accessible when unpickled (if same machine/s3 creds present respectively), the earlier cache contents are no longer accessible. _all_keys()¶ Helper function that lists all the objects present in the cache and the underlying storage. Returns: set of all the objects found in the cache and the underlying storage. Return type: set _flush_if_not_read_only()¶ Flushes the cache if not in read-only mode. _forward(path)¶ Forward the value at a given path to the next storage, and un-marks its key. _forward_value(path, value)¶ Forwards a path-value pair to the next storage, and un-marks its key. Parameters: path (str) – the path to the object relative to the root of the provider. value (bytes, DeepLakeMemoryObject) – the value to send to the next storage. _free_up_space(extra_size: int)¶ Helper function that frees up the required space in the cache. No action is taken if there is sufficient space in the cache. Parameters: extra_size (int) – the space that is required, in bytes. _insert_in_cache(path: str, value: bytes | DeepLakeMemoryObject)¶ Helper function that adds a key-value pair to the cache. Parameters: path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. _pop_from_cache()¶ Helper function that pops the least recently used key-value pair from the cache. clear(prefix='')¶ Deletes ALL the data from all the layers of the cache and the actual storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. clear_cache()¶ Flushes the content of all the cache layers if not in read-only mode and then deletes the contents of all the layers. This doesn't delete data from the actual storage. clear_deeplake_objects()¶ Removes all DeepLakeMemoryObjects from the cache. flush()¶ Writes data from cache_storage to next_storage. Only the dirty keys are written. This is a cascading function and leads to data being written to the final storage in case of a chained cache",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage,"All read/write request keys will be appended to root. aws_access_key_id (str, optional) – Specifies the AWS access key used as part of the credentials to authenticate the user. aws_secret_access_key (str, optional) – Specifies the AWS secret key used as part of the credentials to authenticate the user. 
endpoint_url (str, optional) – The complete URL to use for the constructed client. This needs to be provided for cases in which you're interacting with MinIO, Wasabi, etc. aws_region (str, optional) – Specifies the AWS Region to send requests to. profile_name (str, optional) – Specifies the AWS profile name to use. token (str, optional) – Activeloop token, used for fetching credentials for Deep Lake datasets (if this is underlying storage for Deep Lake dataset). This is optional, tokens are normally autogenerated. __iter__()¶ Generator function that iterates over the keys of the S3Provider. Yields: str – the name of the object that it is iterating over. __len__()¶ Returns the number of files present at the root of the S3Provider. Note This is an expensive operation. Returns: the number of files present inside the root. Return type: int Raises: S3ListError – Any S3 error encountered while listing the objects. __setitem__(path, content)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the S3Provider. content (bytes) – the value to be assigned at the path. Raises: S3SetError – Any S3 error encountered while setting the value at the path. ReadOnlyError – If the provider is in read-only mode. _all_keys()¶ Helper function that lists all the objects present at the root of the S3Provider. Returns: set of all the objects found at the root of the S3Provider. Return type: set Raises: S3ListError – Any S3 error encountered while listing the objects. _check_update_creds(force=False)¶ If the client has an expiration time, check if creds are expired and fetch new ones. This would only happen for datasets stored on Deep Lake storage for which temporary 12 hour credentials are generated. _set_hub_creds_info(hub_path: str, expiration: str)¶ Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters: hub_path (str) – The Deep Lake cloud path to the dataset. expiration (str) – The time at which the credentials expire. _state_keys()¶ Keys used to store the state of the provider. clear(prefix='')¶ Deletes ALL data with keys having given prefix on the s3 bucket (under self.root). Warning Exercise caution! get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage,"get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range . Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. get_deeplake_object(path: str, expected_class, meta: Dict | None = None, url=False, partial_bytes: int = 0)¶ If the data at path was stored using the output of a DeepLakeMemoryObject's tobytes function, this function will read it back into object form & keep the object in cache. 
Parameters: path (str) – Path to the stored object. expected_class (callable) – The expected subclass of DeepLakeMemoryObject. meta (dict, optional) – Metadata associated with the stored object url (bool) – Get presigned url instead of downloading chunk (only for videos) partial_bytes (int) – Number of bytes to read from the beginning of the file. If 0, reads the whole file. Defaults to 0. Raises: ValueError – If the incorrect expected_class was provided. ValueError – If the type of the data at path is invalid. ValueError – If url is True but expected_class is not a subclass of BaseChunk. Returns: An instance of expected_class populated with the data. register_deeplake_object(path: str, obj: DeepLakeMemoryObject)¶ Registers a new object in the cache. remove_deeplake_object(path: str)¶ Removes a DeepLakeMemoryObject from the cache. S3 Storage Provider¶ class deeplake.core.storage.S3Provider¶ Bases: StorageProvider Provider class for using S3 storage. __delitem__(path)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the S3Provider. Note If the object is not found, s3 won't raise KeyError. Raises: S3DeletionError – Any S3 error encountered while deleting the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path)¶ Gets the object present at the path. Parameters: path (str) – the path relative to the root of the S3Provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. S3GetError – Any other error other than KeyError while retrieving the object. __init__(root: str, aws_access_key_id: str | None = None, aws_secret_access_key: str | None = None, aws_session_token: str | None = None, endpoint_url: str | None = None, aws_region: str | None = None, profile_name: str | None = None, token: str | None = None)¶ Initializes the S3Provider Example >>> s3_provider = S3Provider('snark-test/benchmarks') Copy to clipboard Parameters: root (str) – The root of the provider",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage,"DirectoryAtPathException – If a directory is found at the path . Exception – Any other exception encountered while trying to fetch the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> my_data = local_provider['abc.txt'] Copy to clipboard Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. __init__(root: str)¶ Initializes the LocalProvider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root.' Raises: FileAtPathException – If the root is a file instead of a directory. __iter__()¶ Generator function that iterates over the keys of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> for my_data in local_provider: ... 
pass Copy to clipboard Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> len(local_provider) Copy to clipboard Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> local_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises: Exception – If unable to set item due to directory at path or permission or space issues. FileAtPathException – If the directory to the path is a file instead of a directory. ReadOnlyError – If the provider is in read-only mode. _all_keys(refresh: bool = False) → Set[str]¶ Lists all the objects present at the root of the Provider. Parameters: refresh (bool) – refresh keys Returns: set of all the objects found at the root of the Provider. Return type: set _check_is_file(path: str)¶ Checks if the path is a file. Returns the full_path to file if True. Parameters: path (str) – the path to the object relative to the root of the provider. Returns: the full path to the requested file. Return type: str Raises: DirectoryAtPathException – If a directory is found at the path. clear(prefix='')¶ Deletes ALL data with keys having given prefix on the local machine (under self.root). Exercise caution! get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage,"Returns: The bytes of the object present at the path within the given byte range . Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(root)¶ Rename root folder. Google Drive Storage Provider¶ class deeplake.core.storage.GDriveProvider¶ Bases: StorageProvider Provider class for using Google Drive storage. __delitem__(path)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. __getitem__(path)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. __init__(root: str, token: str | Dict | None = None, makemap: bool = True)¶ Initializes the GDriveProvider Example >>> gdrive_provider = GDriveProvider('gdrive://folder_name/folder_name') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. token (dict, str, optional) – Google Drive token. Can be path to the token file or the actual credentials dictionary. makemap (bool) – Creates path to id map if True. Note Requires client_secrets.json in working directory if token is not provided. Due to limits on requests per 100 seconds on google drive api, continuous requests such as uploading many small files can be slow. 
Users can request to increase their quotas on their Google Cloud Platform. __iter__()¶ Generator function that iterates over the keys of the provider. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Returns: the number of files present inside the root. Return type: int __setitem__(path, content)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. _all_keys()¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set clear(prefix='')¶ Delete the contents of the provider. sync()¶ Sync provider keys with actual storage. Local Storage Provider¶ class deeplake.core.storage.LocalProvider¶ Bases: StorageProvider Provider class for using the local filesystem. __delitem__(path: str)¶ Delete the object present at the path. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> del local_provider['abc.txt'] Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage,"Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. S3GetError – Any other error other than KeyError while retrieving the object. need_to_reload_creds(err: ClientError) → bool¶ Checks if the credentials need to be reloaded. This happens if the credentials were loaded from the environment and have now expired. rename(root)¶ Rename root folder. Google Cloud Storage Provider¶ class deeplake.core.storage.GCSProvider¶ Bases: StorageProvider Provider class for using GC storage. __contains__(key)¶ Checks if key exists in mapping. __delitem__(key)¶ Remove key. __getitem__(key)¶ Retrieve data. __init__(root: str, token: str | Dict | None = None, project: str | None = None)¶ Initializes the GCSProvider. Example >>> gcs_provider = GCSProvider('gcs://my-bucket/gcs_ds') Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. token (str/Dict) – GCP token, used for fetching credentials for storage. Can be a path to the credentials file, the actual credentials dictionary, or one of the following: - google_default: Tries to load default credentials for the specified project. - cache: Retrieves the previously used credentials from cache if they exist. - anon: Sets credentials=None. - browser: Generates and stores a new token file using the CLI. project (str) – Name of the project from GCloud. Raises: ModuleNotFoundError – If google cloud packages aren't installed. __iter__()¶ Iterating over the structure. __len__()¶ Returns length of the structure. __setitem__(key, value)¶ Store value in key. _all_keys()¶ Generator function that iterates over the keys of the provider. 
Returns: set of all keys present at the root of the provider. Return type: set _set_hub_creds_info(hub_path: str, expiration: str)¶ Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters: hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. clear(prefix='')¶ Remove all keys with given prefix below root - empties out mapping. Warning Exercise caution! get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#deeplake-core-storage,"Parameters: path (str) – The path relative to the root of the provider . start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(path)¶ Renames root folder Memory Provider¶ class deeplake.core.storage.MemoryProvider¶ Bases: StorageProvider Provider class for using the memory. __delitem__(path: str)¶ Delete the object present at the path. Example >>> memory_provider = MemoryProvider('xyz') >>> del memory_provider['abc.txt'] Copy to clipboard Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Example >>> memory_provider = MemoryProvider('xyz') >>> my_data = memory_provider['abc.txt'] Copy to clipboard Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. __getstate__() → str¶ Does NOT save the in memory data in state. __init__(root: str = '')¶ __iter__()¶ Generator function that iterates over the keys of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> for my_data in memory_provider: ... pass Copy to clipboard Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> len(memory_provider) Copy to clipboard Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Example >>> memory_provider = MemoryProvider('xyz') >>> memory_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. 
_all_keys()¶ Lists all the objects present at the root of the Provider. Returns: set of all the objects found at the root of the Provider. Return type: set clear(prefix='')¶ Clears the provider.",activeloop Base Storage Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#base-storage-provider," class deeplake.core.storage.StorageProvider¶ abstract __delitem__(path: str)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. abstract __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. abstract __iter__()¶ Generator function that iterates over the keys of the provider. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. abstract __len__()¶ Returns the number of files present inside the root of the provider. Returns: the number of files present inside the root. Return type: int abstract __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. __weakref__¶ list of weak references to the object (if defined) abstract _all_keys() → Set[str]¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set _is_hub_path = False¶ An abstract base class for implementing a storage provider. To add a new provider using Provider, create a subclass and implement all 5 abstract methods below. check_readonly()¶ Raises an exception if the provider is in read-only mode. abstract clear(prefix='')¶ Delete the contents of the provider. copy()¶ Returns a copy of the provider. Returns: A copy of the provider. Return type: StorageProvider disable_readonly()¶ Disables read-only mode for the provider. enable_readonly()¶ Enables read-only mode for the provider. flush()¶ Only needs to be implemented for caches. Flushes the data to the next storage provider. Should be a no op for Base Storage Providers like local, s3, azure, gcs, etc. get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. maybe_flush()¶ Flush cache if autoflush has been enabled",activeloop LRU Cache¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#lru-cache," class deeplake.core.storage.LRUCache¶ Bases: StorageProvider LRU Cache that uses StorageProvider for caching __delitem__(path: str)¶ Deletes the object present at the path from the cache and the underlying storage. Parameters: path (str) – the path to the object relative to the root of the provider. 
Raises: KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ If item is in cache_storage, retrieves from there and returns. If item isn't in cache_storage, retrieves from next storage, stores in cache_storage (if possible) and returns. Parameters: path (str) – The path relative to the root of the underlying storage. Raises: KeyError – if an object is not found at the path. Returns: The bytes of the object present at the path. Return type: bytes __getstate__() → Dict[str, Any]¶ Returns the state of the cache, for pickling __init__(cache_storage: StorageProvider, next_storage: StorageProvider | None, cache_size: int)¶ Initializes the LRUCache. It can be chained with other LRUCache objects to create multilayer caches. Parameters: cache_storage (StorageProvider) – The storage being used as the caching layer of the cache. This should be a base provider such as MemoryProvider, LocalProvider or S3Provider but not another LRUCache. next_storage (StorageProvider) – The next storage layer of the cache. This can either be a base provider (i.e. it is the final storage) or another LRUCache (i.e. in case of chained cache). While reading data, all misses from cache would be retrieved from here. While writing data, the data will be written to the next_storage when cache_storage is full or flush is called. cache_size (int) – The total space that can be used from the cache_storage in bytes. This number may be less than the actual space available on the cache_storage. Setting it to a higher value than actually available space may lead to unexpected behaviors. __iter__()¶ Generator function that iterates over the keys of the cache and the underlying storage. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present in the cache and the underlying storage. Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: bytes | DeepLakeMemoryObject)¶ Puts the item in the cache_storage (if possible), else writes to next_storage. Parameters: path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. __setstate__(state: Dict[str, Any])¶ Recreates a cache with the same configuration as the state. Parameters: state (dict) – The state to be used to recreate the cache. Note While restoring the cache, we reset its contents",activeloop LRU Cache¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#lru-cache,"In case the cache storage was local/s3 and is still accessible when unpickled (if same machine/s3 creds present respectively), the earlier cache contents are no longer accessible . _all_keys()¶ Helper function that lists all the objects present in the cache and the underlying storage. Returns: set of all the objects found in the cache and the underlying storage. Return type: set _flush_if_not_read_only()¶ Flushes the cache if not in read-only mode. _forward(path)¶ Forward the value at a given path to the next storage, and un-marks its key. _forward_value(path, value)¶ Forwards a path-value pair to the next storage, and un-marks its key. Parameters: path (str) – the path to the object relative to the root of the provider. value (bytes, DeepLakeMemoryObject) – the value to send to the next storage. 
_free_up_space(extra_size: int)¶ Helper function that frees up space the requred space in cache.No action is taken if there is sufficient space in the cache. Parameters: extra_size (int) – the space that needs is required in bytes. _insert_in_cache(path: str, value: bytes | DeepLakeMemoryObject)¶ Helper function that adds a key value pair to the cache. Parameters: path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. _pop_from_cache()¶ Helper function that pops the least recently used key, value pair from the cache clear(prefix='')¶ Deletes ALL the data from all the layers of the cache and the actual storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. clear_cache()¶ Flushes the content of all the cache layers if not in read mode and and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. clear_deeplake_objects()¶ Removes all DeepLakeMemoryObjects from the cache. flush()¶ Writes data from cache_storage to next_storage. Only the dirty keys are written. This is a cascading function and leads to data being written to the final storage in case of a chained cache. get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path",activeloop S3 Storage Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#s3-storage-provider," class deeplake.core.storage.S3Provider¶ Bases: StorageProvider Provider class for using S3 storage. __delitem__(path)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the S3Provider. Note If the object is not found, s3 won't raise KeyError. Raises: S3DeletionError – Any S3 error encountered while deleting the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path)¶ Gets the object present at the path. Parameters: path (str) – the path relative to the root of the S3Provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. S3GetError – Any other error other than KeyError while retrieving the object. __init__(root: str, aws_access_key_id: str | None = None, aws_secret_access_key: str | None = None, aws_session_token: str | None = None, endpoint_url: str | None = None, aws_region: str | None = None, profile_name: str | None = None, token: str | None = None)¶ Initializes the S3Provider Example >>> s3_provider = S3Provider('snark-test/benchmarks') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. aws_access_key_id (str, optional) – Specifies the AWS access key used as part of the credentials to authenticate the user. aws_secret_access_key (str, optional) – Specifies the AWS secret key used as part of the credentials to authenticate the user. 
aws_session_token (str, optional) – Specifies an AWS session token used as part of the credentials to authenticate the user. endpoint_url (str, optional) – The complete URL to use for the constructed client. This needs to be provided for cases in which you're interacting with MinIO, Wasabi, etc. aws_region (str, optional) – Specifies the AWS Region to send requests to. profile_name (str, optional) – Specifies the AWS profile name to use. token (str, optional) – Activeloop token, used for fetching credentials for Deep Lake datasets (if this is underlying storage for Deep Lake dataset). This is optional, tokens are normally autogenerated. __iter__()¶ Generator function that iterates over the keys of the S3Provider. Yields: str – the name of the object that it is iterating over. __len__()¶ Returns the number of files present at the root of the S3Provider. Note This is an expensive operation. Returns: the number of files present inside the root. Return type: int Raises: S3ListError – Any S3 error encountered while listing the objects. __setitem__(path, content)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the S3Provider. content (bytes) – the value to be assigned at the path. Raises: S3SetError – Any S3 error encountered while setting the value at the path. ReadOnlyError – If the provider is in read-only mode",activeloop LRU Cache¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#lru-cache,"get_deeplake_object(path: str, expected_class, meta: Dict | None = None, url=False, partial_bytes: int = 0)¶ If the data at path was stored using the output of a DeepLakeMemoryObject's tobytes function, this function will read it back into object form & keep the object in cache . Parameters: path (str) – Path to the stored object. expected_class (callable) – The expected subclass of DeepLakeMemoryObject. meta (dict, optional) – Metadata associated with the stored object url (bool) – Get presigned url instead of downloading chunk (only for videos) partial_bytes (int) – Number of bytes to read from the beginning of the file. If 0, reads the whole file. Defaults to 0. Raises: ValueError – If the incorrect expected_class was provided. ValueError – If the type of the data at path is invalid. ValueError – If url is True but expected_class is not a subclass of BaseChunk. Returns: An instance of expected_class populated with the data. register_deeplake_object(path: str, obj: DeepLakeMemoryObject)¶ Registers a new object in the cache. remove_deeplake_object(path: str)¶ Removes a DeepLakeMemoryObject from the cache.",activeloop S3 Storage Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#s3-storage-provider,"_all_keys()¶ Helper function that lists all the objects present at the root of the S3Provider . Returns: set of all the objects found at the root of the S3Provider. Return type: set Raises: S3ListError – Any S3 error encountered while listing the objects. _check_update_creds(force=False)¶ If the client has an expiration time, check if creds are expired and fetch new ones. This would only happen for datasets stored on Deep Lake storage for which temporary 12 hour credentials are generated. _set_hub_creds_info(hub_path: str, expiration: str)¶ Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters: hub_path (str) – The Deep Lake cloud path to the dataset. 
expiration (str) – The time at which the credentials expire. _state_keys()¶ Keys used to store the state of the provider. clear(prefix='')¶ Deletes ALL data with keys having given prefix on the s3 bucket (under self.root). Warning Exercise caution! get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. S3GetError – Any error other than KeyError encountered while retrieving the object. need_to_reload_creds(err: ClientError) → bool¶ Checks if the credentials need to be reloaded. This happens if the credentials were loaded from the environment and have now expired. rename(root)¶ Rename root folder.",activeloop Google Drive Storage Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#google-drive-storage-provider,"class deeplake.core.storage.GDriveProvider¶ Bases: StorageProvider Provider class for using Google Drive storage. __delitem__(path)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. __getitem__(path)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. __init__(root: str, token: str | Dict | None = None, makemap: bool = True)¶ Initializes the GDriveProvider. Example >>> gdrive_provider = GDriveProvider('gdrive://folder_name/folder_name') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. token (dict, str, optional) – Google Drive token. Can be a path to the token file or the actual credentials dictionary. makemap (bool) – Creates a path-to-id map if True. Note Requires client_secrets.json in the working directory if token is not provided. Due to limits on requests per 100 seconds in the Google Drive API, continuous requests such as uploading many small files can be slow. Users can request to increase their quotas on their Google Cloud Platform account. __iter__()¶ Generator function that iterates over the keys of the provider. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Returns: the number of files present inside the root. Return type: int __setitem__(path, content)¶ Sets the object present at the path with the given value. Parameters: path (str) – the path relative to the root of the provider. content (bytes) – the value to be assigned at the path. _all_keys()¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set clear(prefix='')¶ Delete the contents of the provider.
sync()¶ Sync provider keys with actual storage",activeloop Local Storage Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#local-storage-provider," class deeplake.core.storage.LocalProvider¶ Bases: StorageProvider Provider class for using the local filesystem. __delitem__(path: str)¶ Delete the object present at the path. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> del local_provider['abc.txt'] Copy to clipboard Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> my_data = local_provider['abc.txt'] Copy to clipboard Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. __init__(root: str)¶ Initializes the LocalProvider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root.' Raises: FileAtPathException – If the root is a file instead of a directory. __iter__()¶ Generator function that iterates over the keys of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> for my_data in local_provider: ... pass Copy to clipboard Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> len(local_provider) Copy to clipboard Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> local_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises: Exception – If unable to set item due to directory at path or permission or space issues. FileAtPathException – If the directory to the path is a file instead of a directory. ReadOnlyError – If the provider is in read-only mode. _all_keys(refresh: bool = False) → Set[str]¶ Lists all the objects present at the root of the Provider. Parameters: refresh (bool) – refresh keys Returns: set of all the objects found at the root of the Provider. Return type: set _check_is_file(path: str)¶ Checks if the path is a file. Returns the full_path to file if True. Parameters: path (str) – the path to the object relative to the root of the provider",activeloop Base Storage Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#base-storage-provider,"Called at the end of methods which write data, to ensure consistency as a default . 
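As a rough sketch of the LocalProvider above together with the ranged get_bytes/set_bytes helpers (set_bytes is documented just below; the directory is a placeholder):

from deeplake.core.storage import LocalProvider

provider = LocalProvider('/tmp/deeplake_local_demo')   # placeholder root directory
provider['notes.bin'] = b'0123456789'                  # plain __setitem__ write
head = provider.get_bytes('notes.bin', start_byte=0, end_byte=4)  # read only a byte range
provider.set_bytes('notes.bin', b'AB', start_byte=2)   # write bytes starting at offset 2
provider.clear(prefix='notes')                         # delete keys under the root with this prefix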
set_bytes(path: str, value: bytes, start_byte: int | None = None, overwrite: bool | None = False)¶ Sets the object present at the path with the given value. Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. start_byte (int, optional) – If only specific bytes starting from start_byte are to be assigned. overwrite (boolean, optional) – If True and there is an object present at the path, it is completely overwritten without fetching its data. Raises: InvalidBytesRequestedError – If start_byte < 0. ReadOnlyModeError – If the provider is in read-only mode.",activeloop Local Storage Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#local-storage-provider,"Returns: the full path to the requested file. Return type: str Raises: DirectoryAtPathException – If a directory is found at the path. clear(prefix='')¶ Deletes ALL data with keys having given prefix on the local machine (under self.root). Exercise caution! get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(path)¶ Renames root folder.",activeloop Google Cloud Storage Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#google-cloud-storage-provider,"class deeplake.core.storage.GCSProvider¶ Bases: StorageProvider Provider class for using Google Cloud storage. __contains__(key)¶ Checks if key exists in mapping. __delitem__(key)¶ Remove key. __getitem__(key)¶ Retrieve data. __init__(root: str, token: str | Dict | None = None, project: str | None = None)¶ Initializes the GCSProvider. Example >>> gcs_provider = GCSProvider('gcs://my-bucket/gcs_ds') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. token (str/Dict) – GCP token, used for fetching credentials for storage. Can be a path to the credentials file, the actual credential dictionary, or one of the following: - google_default: Tries to load default credentials for the specified project. - cache: Retrieves the previously used credentials from cache if they exist. - anon: Sets credentials=None. - browser: Generates and stores a new token file using the CLI. project (str) – Name of the project from GCloud. Raises: ModuleNotFoundError – If google cloud packages aren't installed. __iter__()¶ Iterating over the structure. __len__()¶ Returns length of the structure. __setitem__(key, value)¶ Store value in key. _all_keys()¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set _set_hub_creds_info(hub_path: str, expiration: str)¶ Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters: hub_path (str) – The Deep Lake cloud path to the dataset. expiration (str) – The time at which the credentials expire.
clear(prefix='')¶ Remove all keys with given prefix below root - empties out mapping. Warning Exercise caution! get_bytes(path: str, start_byte: int | None = None, end_byte: int | None = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(root)¶ Rename root folder.",activeloop Memory Provider¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.storage.html#memory-provider,"class deeplake.core.storage.MemoryProvider¶ Bases: StorageProvider Provider class for using the memory. __delitem__(path: str)¶ Delete the object present at the path. Example >>> memory_provider = MemoryProvider('xyz') >>> del memory_provider['abc.txt'] Copy to clipboard Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Example >>> memory_provider = MemoryProvider('xyz') >>> my_data = memory_provider['abc.txt'] Copy to clipboard Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. __getstate__() → str¶ Does NOT save the in memory data in state. __init__(root: str = '')¶ __iter__()¶ Generator function that iterates over the keys of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> for my_data in memory_provider: ... pass Copy to clipboard Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> len(memory_provider) Copy to clipboard Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Example >>> memory_provider = MemoryProvider('xyz') >>> memory_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. _all_keys()¶ Lists all the objects present at the root of the Provider. Returns: set of all the objects found at the root of the Provider. Return type: set clear(prefix='')¶ Clears the provider.",activeloop deeplake.core.index¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.index.html#deeplake-core-index,"apply_squeeze(samples: List[ndarray])¶ Applies the primary axis of an Index to a list of ndarray samples . Will either return the list as given, or return the first sample. compose_at(item: int | slice | Tuple[int], i: int | None = None)¶ Returns a new Index representing the addition of an IndexValue, or the composition with a given axis. 
Examples >>> Index([slice(None), slice(None)]).compose_at(5) Index([slice(None), slice(None), 5]) Copy to clipboard >>> Index([slice(None), slice(5, 10), slice(None)]).compose_at(3, 1) Index([slice(None), 8, slice(None)]) Copy to clipboard Parameters: item (IndexValue) – The value to append or compose with the Index. i (int, optional) – The axis to compose with the given item. Defaults to None, meaning that the item will be appended instead. Returns: The result of the addition or composition. Return type: Index find_axis(offset: int = 0)¶ Returns the index for the nth subscriptable axis in the values of an Index. Parameters: offset (int) – The number of subscriptable axes to skip before returning. Defaults to 0, meaning that the first valid axis is returned. Returns: The index of the found axis, or None if no match is found. Return type: int is_trivial()¶ Checks if an Index is equivalent to the trivial slice [:], aka slice(None). length(parent_length: int)¶ Returns the primary length of an Index given the length of the parent it is indexing. See: IndexEntry.length() validate(parent_length)¶ Checks that the index is not accessing values outside the range of the parent. deeplake.core.index.merge_slices(existing_slice: slice, new_slice: slice) → slice¶ Compose two slice objects Given an iterable x, the following should be equivalent: x[existing_slice][new_slice] == x[merge_slices(existing_slice, new_slice)] Parameters: existing_slice (slice) – The existing slice to be restricted. new_slice (slice) – The new slice to be applied to the existing slice. Returns: the composition of the given slices Return type: slice Raises: NotImplementedError – Composing slices with negative values is not supported. Negative indexing for slices is only supported for the first slice. deeplake.core.index.slice_at_int(s: slice, i: int)¶ Returns the i th element of a slice s. Examples >>> slice_at_int(slice(None), 10) 10 Copy to clipboard >>> slice_at_int(slice(10, 20, 2), 3) 16 Copy to clipboard Parameters: s (slice) – The slice to index into. i (int) – The integer offset into the slice. Returns: The index corresponding to the offset into the slice. Return type: int Raises: NotImplementedError – Nontrivial slices should not be indexed with negative integers. IndexError – If step is negative and start is not greater than stop. deeplake.core.index.slice_length(s: slice, parent_length: int) → int¶ Returns the length of a slice given the length of its parent.",activeloop deeplake.core.index¶,https://docs.deeplake.ai/en/v3.0.16/deeplake.core.index.html#deeplake-core-index," class deeplake.core.index.IndexEntry(value: int | slice | Tuple[int] = slice(None, None, None))¶ __getitem__(item: int | slice | Tuple[int])¶ Combines the given item and this IndexEntry. Returns a new IndexEntry representing the composition of the two. Examples >>> IndexEntry()[0:100] IndexEntry(slice(0, 100, None)) Copy to clipboard >>> IndexEntry()[100:200][5] IndexEntry(105) Copy to clipboard >>> IndexEntry()[(0, 1, 2, 3)] IndexEntry((0, 1, 2, 3)) Copy to clipboard >>> IndexEntry()[1, 2, 3] IndexEntry((0, 1, 2, 3)) Copy to clipboard Parameters: item – The desired sub-index to be composed with this IndexEntry. Can be an int, a slice, or a tuple of ints. Returns: The new IndexEntry object. Return type: IndexEntry Raises: TypeError – An integer IndexEntry should not be indexed further. __init__(value: int | slice | Tuple[int] = slice(None, None, None))¶ __str__()¶ Return str(self). 
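Tying together the pieces documented above, a brief illustrative check (the values are arbitrary):

from deeplake.core.index import IndexEntry, merge_slices

entry = IndexEntry()[100:200][5]   # composes to IndexEntry(105), as in the examples above
data = list(range(20))
merged = merge_slices(slice(2, 18), slice(1, 10, 2))
assert data[slice(2, 18)][slice(1, 10, 2)] == data[merged]  # the documented composition law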
__weakref__¶ list of weak references to the object (if defined) indices(length: int)¶ Generates the sequence of integer indices for a target of a given length. is_trivial()¶ Checks if an IndexEntry represents the entire slice length(parent_length: int) → int¶ Returns the length of an IndexEntry given the length of the parent it is indexing. Examples >>> IndexEntry(slice(5, 10)).length(100) 5 >>> len(list(range(100))[5:10]) 5 >>> IndexEntry(slice(5, 100)).length(50) 45 >>> len(list(range(50))[5:100]) 45 >>> IndexEntry(0).length(10) 1 Copy to clipboard Parameters: parent_length (int) – The length of the target that this IndexEntry is indexing. Returns: The length of the index if it were applied to a parent of the given length. Return type: int subscriptable()¶ Returns whether an IndexEntry can be further subscripted. validate(parent_length: int)¶ Checks that the index is not accessing values outside the range of the parent. class deeplake.core.index.Index(item: int | slice | Tuple[int] | Index | List[IndexEntry] = slice(None, None, None))¶ __getitem__(item: int | slice | List[int] | Tuple[int | slice | Tuple[int]] | Index)¶ Returns a new Index representing a subscripting with the given item. Modeled after NumPy's advanced integer indexing. See: https://numpy.org/doc/stable/reference/arrays.indexing.html Examples >>> Index([5, slice(None)])[5] Index([5, 5]) Copy to clipboard >>> Index([5])[5:6] Index([5, slice(5, 6)]) Copy to clipboard >>> Index()[0, 1, 2:5, 3] Index([0, 1, slice(2, 5), 3]) Copy to clipboard >>> Index([slice(5, 6)])[(0, 1, 2:5, 3),] Index([(5, 1, slice(2, 5), 3)]) Copy to clipboard Parameters: item – The contents of the subscript expression to add to this Index. Returns: The Index representing the result of the subscript operation. Return type: Index Raises: TypeError – Given item should be another Index, or compatible with NumPy's advanced integer indexing. __init__(item: int | slice | Tuple[int] | Index | List[IndexEntry] = slice(None, None, None))¶ Initializes an Index from an IndexValue, another Index, or the values from another Index. Represents a list of IndexEntry objects corresponding to indexes into each axis of an ndarray. __repr__()¶ Return repr(self). __str__()¶ Return str(self). __weakref__¶ list of weak references to the object (if defined) apply(samples: List[ndarray])¶ Applies an Index to a list of ndarray samples with the same number of entries as the first entry in the Index",activeloop How Shuffling Works in Deep Lake's PyTorch DataLoader,https://docs.activeloop.ai/how-hub-works/shuffling-in-ds.pytorch#how-shuffling-works-in-deep-lakes-pytorch-dataloader,"The Deep Lake shuffling algorithm is based upon a shuffle buffer that preloads a specified amount of data (in MB) determined by the buffer_size parameter in ds.pytorch(buffer_size = 2048). First, the dataloader randomly selects chunks from the applicable tensors until the shuffle buffer is full. Next, the indices in shuffle buffer are randomly sampled to construct the batches that are returned by the dataloader. As the data in the shuffle buffer is consumed, new chunks are randomly selected and added to the buffer.In the OSS dataloader, the shuffle buffer contains the decompressed, decoded, and transformed samples. When using the PyTorch dataloaders, this corresponds to torch tensors. In the Performant dataloader, the shuffle buffer contains the non-decompressed data in the format they are stored in. For images, this typically corresponds to compressed bytes in jpeg, png, or other compressions. 
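For example, the buffer size can be raised when constructing the dataloader; a minimal sketch, assuming a locally accessible Deep Lake dataset (the path is a placeholder):

import deeplake

ds = deeplake.load('./my_deeplake_dataset')  # placeholder dataset path
dataloader = ds.pytorch(
    batch_size=32,
    shuffle=True,
    buffer_size=4096,  # shuffle buffer size in MB, as described above
)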
Since compressed data is stored more efficiently than uncompressed data, there are typically more distinct samples of data in the Performant dataloader shuffle buffer compared to the OSS shuffle buffer. If many chunks in the buffer contain data from the same class, which may occur if data was uploaded in non-random order, the shuffle buffer may contain fewer unique classes than if the samples were chosen fully randomly based on index. The most extreme case of reduced randomness occurs when datasets are much larger than the shuffle buffer, when they have many classes, and when those classes occur in sequence within the dataset indices. One example dataset is Unshuffled ImageNet, which has 1000 classes, 1.2M images, 140GB of data, and approximately 140 images per 16MB chunk. When the images are uploaded in sequence, the plot below shows how many unique classes are returned by the loader vs the number of images that have been returned in total. It is evident that fully random sampling returns more unique values than the Deep Lake dataloader. If reduced randomness has an impact on model performance in your workflows, the recommended countermeasures are: 1. Store the dataset in a shuffled fashion so that the data does not appear in order by class. This completely mitigates the randomness concerns at the output of the data loader. 2. Store the dataset with a smaller chunk size. This increases randomness because the shuffle buffer selects more distinct chunks before filling up. The current default chunk size is 8MB, and reducing it to 4MB significantly increases randomness (see plot above) with only a modest slowdown in data transfer speed. 3. Increase the size of the shuffle buffer. This mitigates the randomness concerns but may not completely alleviate them.",activeloop Setting up Federated Credentials in Microsoft Azure,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/microsoft-azure/provisioning-federated-credentials#setting-up-federated-credentials-in-microsoft-azure,"The most secure method for connecting data from your Azure storage to Deep Lake is using Federated Credentials, which are set up using the steps below:",activeloop Step 1: Register Application Credentials with the Microsoft Identity Platform,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/microsoft-azure/provisioning-federated-credentials#step-1-register-application-credentials-with-the-microsoft-identity-platform,"1. Log in to the Azure account where the App will be registered and where the data is stored. 2. Go to the App Registrations page in the Azure UI, which can be done by searching 'App registrations' in the console. 3. Click on Register an application or New registration. 4. Enter the Name and Supported account type (all are supported in Deep Lake) and click Register. 5. In the application console, click Certificates & secrets. 6. Click on Federated credentials and Add credential. 7. Click on Select scenario and select Other issuer. 8. Enter the following information in the form, and click Add. Issuer: https://cognito-identity.amazonaws.com – this is for trusting Activeloop's Cognito issuer.
There is no need for the user to create AWS Cognito resources. Subject identifier: us-east-1:7bc30eb1-bac6-494b-bf53-5747849d45aa. Name: enter a name of your choice. Description (optional): enter a description of your choice. Audience: us-east-1:57e5de2f-e2ec-4514-b9b0-f3bb8c4283c3",activeloop Step 2a: Apply the Application Credentials to your Azure storage account,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/microsoft-azure/provisioning-federated-credentials#step-2a-apply-the-application-credentials-to-your-azure-storage-account,"Skip to 2b if you want to assign Application Credentials to a specific Azure container. 1. Go to the Storage accounts page in the Azure UI, which can be done by searching 'Storage accounts' in the console. 2. Select the Storage account to which you want to add Application Credentials. 4. Select Access Control (IAM) and click Add, and select Add role assignment. 5. Search and select Storage Blob Data Contributor under the role names and click Next. 6. Click on the Select members link, and in the tab that opens up on the right, search by name and select the application you created in Step 1. Click Select at the bottom of the page. 7. The application should appear in the list of Members, at which point you can click Review + assign.",activeloop IMPORTANT TO PERFORM STEPS BELOW TO COMPLETE 2b - PLEASE DO NOT SKIP,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/microsoft-azure/provisioning-federated-credentials#important-to-perform-steps-below-to-complete-2b-please-do-not-skip,"5. Perform substeps 5-7 from Step 2a above, in order to add the Application Credentials to the Container. 6. Execute the steps in Step 2a above on your Storage Account, except set the Storage Account Role Assignment to Storage Blob Delegator in substep 5.",activeloop Step 2b: Apply the Application Credentials to a specific Azure container in your Azure storage account,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/microsoft-azure/provisioning-federated-credentials#step-2b-apply-the-application-credentials-to-a-specific-azure-contained-in-your-azure-storage-accoun,"1. Go to the Storage accounts page in the Azure UI, which can be done by searching 'Storage accounts' in the console. 2. Select the Storage account to which you want to add Application Credentials. 3. Select the Container to which you want to add the Application Credentials. 4. Select Access Control (IAM) and click Add, and select Add role assignment.",activeloop Enabling CORS in Azure for Data Visualization,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/microsoft-azure/enabling-cors#enabling-cors-in-azure-for-data-visualization,"Cross-Origin Resource Sharing (CORS) is typically enabled by default in Azure. If that's not the case in your Azure account, in order to visualize Deep Lake datasets stored in your own Azure storage in the Deep Lake app, please enable CORS in the storage account containing the Deep Lake dataset and any source data in linked tensors.",activeloop Steps for enabling CORS in Azure,https://docs.activeloop.ai/storage-and-credentials/managed-credentials/microsoft-azure/enabling-cors#steps-for-enabling-cors-in-azure,"1. Log in to Azure. 2. Navigate to the Storage account with the relevant data. 3. Open the Resource sharing (CORS) section in the left nav. 4.
Add the following items to the permissions (Allowed origins / Allowed methods / Allowed headers): https://app.activeloop.ai – GET, HEAD – *; https://app-dev.activeloop.dev – GET, HEAD – *; https://app-staging.activeloop.dev – GET – *",activeloop deeplake.core.sample¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.sample.html#deeplake-core-sample,"class deeplake.core.sample.Sample¶ __init__(path: Optional[str] = None, array: Optional[ndarray] = None, buffer: Optional[Union[bytes, memoryview]] = None, compression: Optional[str] = None, verify: bool = False, shape: Optional[Tuple[int]] = None, dtype: Optional[str] = None, creds: Optional[Dict] = None, storage: Optional[StorageProvider] = None)¶ Represents a single sample for a tensor. Provides all important meta information in one place. Note If self.is_lazy is True, this Sample doesn't actually have any data loaded. To read this data, simply try to read it into a numpy array (sample.array) Parameters: path (str) – Path to a sample stored on the local file system that represents a single sample. If path is provided, array should not be. Implicitly makes self.is_lazy == True. array (np.ndarray) – Array that represents a single sample. If array is provided, path should not be. Implicitly makes self.is_lazy == False. buffer – (bytes): Byte buffer that represents a single sample. If compressed, compression argument should be provided. compression (str) – Specify in case of byte buffer. verify (bool) – If a path is provided, verifies the sample if True. shape (Tuple[int]) – Shape of the sample. dtype (optional, str) – Data type of the sample. creds (optional, Dict) – Credentials for s3, gcp and http urls. storage (optional, StorageProvider) – Storage provider. Raises: ValueError – Cannot create a sample from both a path and array. property array: ndarray¶ Return numpy array corresponding to the sample. Decompresses the sample if necessary. Example >>> sample = deeplake.read('./images/dog.jpg') >>> arr = sample.array >>> arr.shape (323, 480, 3) Copy to clipboard compressed_bytes(compression: Optional[str]) → bytes¶ Returns this sample as compressed bytes. Note If this sample is pointing to a path and the requested compression is the same as it's stored in, the data is returned without re-compressing. Parameters: compression (Optional[str]) – self.array will be compressed into this format. If compression is None, return uncompressed_bytes(). Returns: Bytes for the compressed sample. Contains all metadata required to decompress within these bytes. Return type: bytes Raises: ValueError – On recompression of unsupported formats. property pil: Image¶ Return PIL image corresponding to the sample. Decompresses the sample if necessary. Example >>> sample = deeplake.read('./images/dog.jpg') >>> pil = sample.pil >>> pil.size (480, 323) Copy to clipboard uncompressed_bytes() → Optional[bytes]¶ Returns uncompressed bytes.",activeloop deeplake.core.linked_sample¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.linked-sample.html#deeplake-core-linked-sample,"class deeplake.core.linked_sample.LinkedSample(path: str, creds_key: Optional[str] = None)¶ Represents a sample that is initialized using external links.
See deeplake.link().",activeloop deeplake.core.partial_sample¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.partial-sample.html#deeplake-core-partial-sample,"class deeplake.core.partial_sample.PartialSample(sample_shape: Tuple[int, ...], tile_shape: Optional[Tuple[int, ...]] = None, dtype: Union[str, dtype] = dtype('uint8'))¶ Represents a sample that is initialized by just shape and the data is updated later.",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage," Base Storage Provider¶ class deeplake.core.storage.StorageProvider¶ abstract __delitem__(path: str)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. abstract __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. abstract __iter__()¶ Generator function that iterates over the keys of the provider. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. abstract __len__()¶ Returns the number of files present inside the root of the provider. Returns: the number of files present inside the root. Return type: int abstract __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. __weakref__¶ list of weak references to the object (if defined) abstract _all_keys() → Set[str]¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set _is_hub_path = False¶ An abstract base class for implementing a storage provider. To add a new provider using Provider, create a subclass and implement all 5 abstract methods below. check_readonly()¶ Raises an exception if the provider is in read-only mode. abstract clear(prefix='')¶ Delete the contents of the provider. copy()¶ Returns a copy of the provider. Returns: A copy of the provider. Return type: StorageProvider disable_readonly()¶ Disables read-only mode for the provider. enable_readonly()¶ Enables read-only mode for the provider. flush()¶ Only needs to be implemented for caches. Flushes the data to the next storage provider. Should be a no op for Base Storage Providers like local, s3, azure, gcs, etc. get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. 
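As a rough, unofficial sketch of the abstract interface above, a toy dictionary-backed provider might look like the following (it skips read-only handling and is not meant for real datasets):

from typing import Set
from deeplake.core.storage import StorageProvider

class DictProvider(StorageProvider):
    # Toy provider that keeps objects in a plain dict and implements the abstract methods above.
    def __init__(self):
        super().__init__()
        self._store = {}

    def __getitem__(self, path: str) -> bytes:
        return self._store[path]  # raises KeyError when the object is missing, as documented

    def __setitem__(self, path: str, value: bytes):
        self._store[path] = value

    def __delitem__(self, path: str):
        del self._store[path]

    def __iter__(self):
        yield from self._store

    def __len__(self) -> int:
        return len(self._store)

    def _all_keys(self) -> Set[str]:
        return set(self._store)

    def clear(self, prefix=''):
        self._store = {k: v for k, v in self._store.items() if not k.startswith(prefix)}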
maybe_flush()¶ Flush cache if autoflush has been enabled",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage,"__len__()¶ Returns the number of files present in the cache and the underlying storage . Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: Union[bytes, DeepLakeMemoryObject])¶ Puts the item in the cache_storage (if possible), else writes to next_storage. Parameters: path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. __setstate__(state: Dict[str, Any])¶ Recreates a cache with the same configuration as the state. Parameters: state (dict) – The state to be used to recreate the cache. Note While restoring the cache, we reset its contents. In case the cache storage was local/s3 and is still accessible when unpickled (if same machine/s3 creds present respectively), the earlier cache contents are no longer accessible. _all_keys()¶ Helper function that lists all the objects present in the cache and the underlying storage. Returns: set of all the objects found in the cache and the underlying storage. Return type: set _flush_if_not_read_only()¶ Flushes the cache if not in read-only mode. _forward(path)¶ Forward the value at a given path to the next storage, and un-marks its key. _forward_value(path, value)¶ Forwards a path-value pair to the next storage, and un-marks its key. Parameters: path (str) – the path to the object relative to the root of the provider. value (bytes, DeepLakeMemoryObject) – the value to send to the next storage. _free_up_space(extra_size: int)¶ Helper function that frees up space the requred space in cache.No action is taken if there is sufficient space in the cache. Parameters: extra_size (int) – the space that needs is required in bytes. _insert_in_cache(path: str, value: Union[bytes, DeepLakeMemoryObject])¶ Helper function that adds a key value pair to the cache. Parameters: path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. _pop_from_cache()¶ Helper function that pops the least recently used key, value pair from the cache clear(prefix='')¶ Deletes ALL the data from all the layers of the cache and the actual storage. This is an IRREVERSIBLE operation. Data once deleted can not be recovered. clear_cache()¶ Flushes the content of all the cache layers if not in read mode and and then deletes contents of all the layers of it. This doesn't delete data from the actual storage. clear_deeplake_objects()¶ Removes all DeepLakeMemoryObjects from the cache. flush()¶ Writes data from cache_storage to next_storage. Only the dirty keys are written. This is a cascading function and leads to data being written to the final storage in case of a chained cache. get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage,"Parameters: path (str) – The path relative to the root of the provider . start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. 
Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. get_deeplake_object(path: str, expected_class, meta: Optional[Dict] = None, url=False, partial_bytes: int = 0)¶ If the data at path was stored using the output of a DeepLakeMemoryObject's tobytes function, this function will read it back into object form & keep the object in cache. Parameters: path (str) – Path to the stored object. expected_class (callable) – The expected subclass of DeepLakeMemoryObject. meta (dict, optional) – Metadata associated with the stored object url (bool) – Get presigned url instead of downloading chunk (only for videos) partial_bytes (int) – Number of bytes to read from the beginning of the file. If 0, reads the whole file. Defaults to 0. Raises: ValueError – If the incorrect expected_class was provided. ValueError – If the type of the data at path is invalid. ValueError – If url is True but expected_class is not a subclass of BaseChunk. Returns: An instance of expected_class populated with the data. register_deeplake_object(path: str, obj: DeepLakeMemoryObject)¶ Registers a new object in the cache. remove_deeplake_object(path: str)¶ Removes a DeepLakeMemoryObject from the cache. S3 Storage Provider¶ class deeplake.core.storage.S3Provider¶ Bases: StorageProvider Provider class for using S3 storage. __delitem__(path)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the S3Provider. Note If the object is not found, s3 won't raise KeyError. Raises: S3DeletionError – Any S3 error encountered while deleting the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path)¶ Gets the object present at the path. Parameters: path (str) – the path relative to the root of the S3Provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. S3GetError – Any other error other than KeyError while retrieving the object. __init__(root: str, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, endpoint_url: Optional[str] = None, aws_region: Optional[str] = None, profile_name: Optional[str] = None, token: Optional[str] = None)¶ Initializes the S3Provider Example >>> s3_provider = S3Provider('snark-test/benchmarks') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. aws_access_key_id (str, optional) – Specifies the AWS access key used as part of the credentials to authenticate the user",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage,"end_byte (int, optional) – If only specific bytes up to end_byte are required . Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. S3GetError – Any other error other than KeyError while retrieving the object. need_to_reload_creds(err: ClientError) → bool¶ Checks if the credentials need to be reloaded. This happens if the credentials were loaded from the environment and have now expired. rename(root)¶ Rename root folder. 
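As an illustrative use of the S3Provider interface above (the bucket, prefix, and region are placeholders, and standard AWS credentials are assumed to be available in the environment):

from deeplake.core.storage import S3Provider

s3 = S3Provider(
    'my-bucket/deeplake_demo',               # placeholder root: bucket plus key prefix
    aws_region='us-east-1',
    # endpoint_url='http://localhost:9000',  # only needed for MinIO/Wasabi-style endpoints
)
s3['hello.txt'] = b'hello deep lake'         # __setitem__ uploads the object
header = s3.get_bytes('hello.txt', start_byte=0, end_byte=5)  # ranged read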
Google Cloud Storage Provider¶ class deeplake.core.storage.GCSProvider¶ Bases: StorageProvider Provider class for using GC storage. __contains__(key)¶ Checks if key exists in mapping. __delitem__(key)¶ Remove key. __getitem__(key)¶ Retrieve data. __init__(root: str, token: Optional[Union[str, Dict]] = None, project: Optional[str] = None)¶ Initializes the GCSProvider. Example >>> gcs_provider = GCSProvider('gcs://my-bucket/gcs_ds') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. token (str/Dict) – GCP token, used for fetching credentials for storage). Can be a path to the credentials file, actual credential dictionary or one of the folowing: - google_default: Tries to load default credentials for the specified project. - cache: Retrieves the previously used credentials from cache if exist. - anon: Sets credentials=None. - browser: Generates and stores new token file using cli. project (str) – Name of the project from GCloud. Raises: ModuleNotFoundError – If google cloud packages aren't installed. __iter__()¶ Iterating over the structure. __len__()¶ Returns length of the structure. __setitem__(key, value)¶ Store value in key. _all_keys()¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set _set_hub_creds_info(hub_path: str, expiration: str)¶ Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters: hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. clear(prefix='')¶ Remove all keys with given prefix below root - empties out mapping. Warning Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage,"Called at the end of methods which write data, to ensure consistency as a default . set_bytes(path: str, value: bytes, start_byte: Optional[int] = None, overwrite: Optional[bool] = False)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. start_byte (int, optional) – If only specific bytes starting from start_byte are to be assigned. overwrite (boolean, optional) – If the value is True, if there is an object present at the path it is completely overwritten, without fetching it's data. Raises: InvalidBytesRequestedError – If start_byte < 0. ReadOnlyModeError – If the provider is in read-only mode. LRU Cache¶ class deeplake.core.storage.LRUCache¶ Bases: StorageProvider LRU Cache that uses StorageProvider for caching __delitem__(path: str)¶ Deletes the object present at the path from the cache and the underlying storage. 
Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ If item is in cache_storage, retrieves from there and returns. If item isn't in cache_storage, retrieves from next storage, stores in cache_storage (if possible) and returns. Parameters: path (str) – The path relative to the root of the underlying storage. Raises: KeyError – if an object is not found at the path. Returns: The bytes of the object present at the path. Return type: bytes __getstate__() → Dict[str, Any]¶ Returns the state of the cache, for pickling __init__(cache_storage: StorageProvider, next_storage: Optional[StorageProvider], cache_size: int)¶ Initializes the LRUCache. It can be chained with other LRUCache objects to create multilayer caches. Parameters: cache_storage (StorageProvider) – The storage being used as the caching layer of the cache. This should be a base provider such as MemoryProvider, LocalProvider or S3Provider but not another LRUCache. next_storage (StorageProvider) – The next storage layer of the cache. This can either be a base provider (i.e. it is the final storage) or another LRUCache (i.e. in case of chained cache). While reading data, all misses from cache would be retrieved from here. While writing data, the data will be written to the next_storage when cache_storage is full or flush is called. cache_size (int) – The total space that can be used from the cache_storage in bytes. This number may be less than the actual space available on the cache_storage. Setting it to a higher value than actually available space may lead to unexpected behaviors. __iter__()¶ Generator function that iterates over the keys of the cache and the underlying storage. Yields: str – the path of the object that it is iterating over, relative to the root of the provider",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage,"KeyError – If an object is not found at the path . rename(root)¶ Rename root folder. Google Drive Storage Provider¶ class deeplake.core.storage.GDriveProvider¶ Bases: StorageProvider Provider class for using Google Drive storage. __delitem__(path)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. __getitem__(path)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. __init__(root: str, token: Optional[Union[str, Dict]] = None, makemap: bool = True)¶ Initializes the GDriveProvider Example >>> gdrive_provider = GDriveProvider('gdrive://folder_name/folder_name') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. token (dict, str, optional) – Google Drive token. Can be path to the token file or the actual credentials dictionary. makemap (bool) – Creates path to id map if True. Note Requires client_secrets.json in working directory if token is not provided. Due to limits on requests per 100 seconds on google drive api, continuous requests such as uploading many small files can be slow. Users can request to increse their quotas on their google cloud platform. 
__iter__()¶ Generator function that iterates over the keys of the provider. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Returns: the number of files present inside the root. Return type: int __setitem__(path, content)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. _all_keys()¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set clear(prefix='')¶ Delete the contents of the provider. sync()¶ Sync provider keys with actual storage Local Storage Provider¶ class deeplake.core.storage.LocalProvider¶ Bases: StorageProvider Provider class for using the local filesystem. __delitem__(path: str)¶ Delete the object present at the path. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> del local_provider['abc.txt'] Copy to clipboard Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. ReadOnlyError – If the provider is in read-only mode",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage,"__getitem__(path: str)¶ Gets the object present at the path within the given byte range . Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> my_data = local_provider['abc.txt'] Copy to clipboard Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. __init__(root: str)¶ Initializes the LocalProvider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root.' Raises: FileAtPathException – If the root is a file instead of a directory. __iter__()¶ Generator function that iterates over the keys of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> for my_data in local_provider: ... pass Copy to clipboard Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> len(local_provider) Copy to clipboard Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> local_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises: Exception – If unable to set item due to directory at path or permission or space issues. 
FileAtPathException – If the directory to the path is a file instead of a directory. ReadOnlyError – If the provider is in read-only mode. _all_keys(refresh: bool = False) → Set[str]¶ Lists all the objects present at the root of the Provider. Parameters: refresh (bool) – refresh keys Returns: set of all the objects found at the root of the Provider. Return type: set _check_is_file(path: str)¶ Checks if the path is a file. Returns the full_path to file if True. Parameters: path (str) – the path to the object relative to the root of the provider. Returns: the full path to the requested file. Return type: str Raises: DirectoryAtPathException – If a directory is found at the path. clear(prefix='')¶ Deletes ALL data with keys having given prefix on the local machine (under self.root). Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage,"aws_secret_access_key (str, optional) – Specifies the AWS secret key used as part of the credentials to authenticate the user . aws_session_token (str, optional) – Specifies an AWS session token used as part of the credentials to authenticate the user. endpoint_url (str, optional) – The complete URL to use for the constructed client. This needs to be provided for cases in which you're interacting with MinIO, Wasabi, etc. aws_region (str, optional) – Specifies the AWS Region to send requests to. profile_name (str, optional) – Specifies the AWS profile name to use. token (str, optional) – Activeloop token, used for fetching credentials for Deep Lake datasets (if this is underlying storage for Deep Lake dataset). This is optional, tokens are normally autogenerated. __iter__()¶ Generator function that iterates over the keys of the S3Provider. Yields: str – the name of the object that it is iterating over. __len__()¶ Returns the number of files present at the root of the S3Provider. Note This is an expensive operation. Returns: the number of files present inside the root. Return type: int Raises: S3ListError – Any S3 error encountered while listing the objects. __setitem__(path, content)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the S3Provider. content (bytes) – the value to be assigned at the path. Raises: S3SetError – Any S3 error encountered while setting the value at the path. ReadOnlyError – If the provider is in read-only mode. _all_keys()¶ Helper function that lists all the objects present at the root of the S3Provider. Returns: set of all the objects found at the root of the S3Provider. Return type: set Raises: S3ListError – Any S3 error encountered while listing the objects. _check_update_creds(force=False)¶ If the client has an expiration time, check if creds are expired and fetch new ones. This would only happen for datasets stored on Deep Lake storage for which temporary 12 hour credentials are generated. _set_hub_creds_info(hub_path: str, expiration: str)¶ Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. 
Parameters: hub_path (str) – The Deep Lake cloud path to the dataset. expiration (str) – The time at which the credentials expire. _state_keys()¶ Keys used to store the state of the provider. clear(prefix='')¶ Deletes ALL data with keys having given prefix on the s3 bucket (under self.root). Warning Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required",activeloop Base Storage Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#base-storage-provider,"set_bytes(path: str, value: bytes, start_byte: Optional[int] = None, overwrite: Optional[bool] = False)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider . value (bytes) – the value to be assigned at the path. start_byte (int, optional) – If only specific bytes starting from start_byte are to be assigned. overwrite (boolean, optional) – If the value is True, if there is an object present at the path it is completely overwritten, without fetching it's data. Raises: InvalidBytesRequestedError – If start_byte < 0. ReadOnlyModeError – If the provider is in read-only mode.",activeloop deeplake.core.storage¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#deeplake-core-storage,"end_byte (int, optional) – If only specific bytes up to end_byte are required . Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(path)¶ Renames root folder Memory Provider¶ class deeplake.core.storage.MemoryProvider¶ Bases: StorageProvider Provider class for using the memory. __delitem__(path: str)¶ Delete the object present at the path. Example >>> memory_provider = MemoryProvider('xyz') >>> del memory_provider['abc.txt'] Copy to clipboard Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Example >>> memory_provider = MemoryProvider('xyz') >>> my_data = memory_provider['abc.txt'] Copy to clipboard Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. __getstate__() → str¶ Does NOT save the in memory data in state. __init__(root: str = '')¶ __iter__()¶ Generator function that iterates over the keys of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> for my_data in memory_provider: ... pass Copy to clipboard Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> len(memory_provider) Copy to clipboard Returns: the number of files present inside the root. 
Return type: int __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Example >>> memory_provider = MemoryProvider('xyz') >>> memory_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. _all_keys()¶ Lists all the objects present at the root of the Provider. Returns: set of all the objects found at the root of the Provider. Return type: set clear(prefix='')¶ Clears the provider.",activeloop LRU Cache¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#lru-cache," class deeplake.core.storage.LRUCache¶ Bases: StorageProvider LRU Cache that uses StorageProvider for caching __delitem__(path: str)¶ Deletes the object present at the path from the cache and the underlying storage. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ If item is in cache_storage, retrieves from there and returns. If item isn't in cache_storage, retrieves from next storage, stores in cache_storage (if possible) and returns. Parameters: path (str) – The path relative to the root of the underlying storage. Raises: KeyError – if an object is not found at the path. Returns: The bytes of the object present at the path. Return type: bytes __getstate__() → Dict[str, Any]¶ Returns the state of the cache, for pickling __init__(cache_storage: StorageProvider, next_storage: Optional[StorageProvider], cache_size: int)¶ Initializes the LRUCache. It can be chained with other LRUCache objects to create multilayer caches. Parameters: cache_storage (StorageProvider) – The storage being used as the caching layer of the cache. This should be a base provider such as MemoryProvider, LocalProvider or S3Provider but not another LRUCache. next_storage (StorageProvider) – The next storage layer of the cache. This can either be a base provider (i.e. it is the final storage) or another LRUCache (i.e. in case of chained cache). While reading data, all misses from cache would be retrieved from here. While writing data, the data will be written to the next_storage when cache_storage is full or flush is called. cache_size (int) – The total space that can be used from the cache_storage in bytes. This number may be less than the actual space available on the cache_storage. Setting it to a higher value than actually available space may lead to unexpected behaviors. __iter__()¶ Generator function that iterates over the keys of the cache and the underlying storage. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present in the cache and the underlying storage. Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: Union[bytes, DeepLakeMemoryObject])¶ Puts the item in the cache_storage (if possible), else writes to next_storage. Parameters: path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. __setstate__(state: Dict[str, Any])¶ Recreates a cache with the same configuration as the state. Parameters: state (dict) – The state to be used to recreate the cache. 
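As a rough illustration of the chaining described above (a minimal sketch only: the import mirrors the documented deeplake.core.storage classes, while the cache size, path, and keys are made up):

from deeplake.core.storage import LocalProvider, LRUCache, MemoryProvider

MB = 1024 * 1024

# Chain an in-memory caching layer in front of a local final storage layer.
cache_storage = MemoryProvider('cache')
next_storage = LocalProvider('/tmp/deeplake_demo')
storage = LRUCache(cache_storage, next_storage, cache_size=256 * MB)

storage['meta.json'] = b'{"version": 1}'  # lands in cache_storage when there is room
data = storage['meta.json']               # cache hit; misses fall through to next_storage
storage.flush()                           # writes dirty keys through to next_storage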
Note While restoring the cache, we reset its contents",activeloop Base Storage Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#base-storage-provider," class deeplake.core.storage.StorageProvider¶ abstract __delitem__(path: str)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. abstract __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. abstract __iter__()¶ Generator function that iterates over the keys of the provider. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. abstract __len__()¶ Returns the number of files present inside the root of the provider. Returns: the number of files present inside the root. Return type: int abstract __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. __weakref__¶ list of weak references to the object (if defined) abstract _all_keys() → Set[str]¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set _is_hub_path = False¶ An abstract base class for implementing a storage provider. To add a new provider using Provider, create a subclass and implement all 5 abstract methods below. check_readonly()¶ Raises an exception if the provider is in read-only mode. abstract clear(prefix='')¶ Delete the contents of the provider. copy()¶ Returns a copy of the provider. Returns: A copy of the provider. Return type: StorageProvider disable_readonly()¶ Disables read-only mode for the provider. enable_readonly()¶ Enables read-only mode for the provider. flush()¶ Only needs to be implemented for caches. Flushes the data to the next storage provider. Should be a no op for Base Storage Providers like local, s3, azure, gcs, etc. get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. maybe_flush()¶ Flush cache if autoflush has been enabled. Called at the end of methods which write data, to ensure consistency as a default",activeloop S3 Storage Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#s3-storage-provider," class deeplake.core.storage.S3Provider¶ Bases: StorageProvider Provider class for using S3 storage. __delitem__(path)¶ Delete the object present at the path. Parameters: path (str) – the path to the object relative to the root of the S3Provider. Note If the object is not found, s3 won't raise KeyError. Raises: S3DeletionError – Any S3 error encountered while deleting the object. 
ReadOnlyError – If the provider is in read-only mode. __getitem__(path)¶ Gets the object present at the path. Parameters: path (str) – the path relative to the root of the S3Provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. S3GetError – Any other error other than KeyError while retrieving the object. __init__(root: str, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, endpoint_url: Optional[str] = None, aws_region: Optional[str] = None, profile_name: Optional[str] = None, token: Optional[str] = None)¶ Initializes the S3Provider Example >>> s3_provider = S3Provider('snark-test/benchmarks') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. aws_access_key_id (str, optional) – Specifies the AWS access key used as part of the credentials to authenticate the user. aws_secret_access_key (str, optional) – Specifies the AWS secret key used as part of the credentials to authenticate the user. aws_session_token (str, optional) – Specifies an AWS session token used as part of the credentials to authenticate the user. endpoint_url (str, optional) – The complete URL to use for the constructed client. This needs to be provided for cases in which you're interacting with MinIO, Wasabi, etc. aws_region (str, optional) – Specifies the AWS Region to send requests to. profile_name (str, optional) – Specifies the AWS profile name to use. token (str, optional) – Activeloop token, used for fetching credentials for Deep Lake datasets (if this is underlying storage for Deep Lake dataset). This is optional, tokens are normally autogenerated. __iter__()¶ Generator function that iterates over the keys of the S3Provider. Yields: str – the name of the object that it is iterating over. __len__()¶ Returns the number of files present at the root of the S3Provider. Note This is an expensive operation. Returns: the number of files present inside the root. Return type: int Raises: S3ListError – Any S3 error encountered while listing the objects. __setitem__(path, content)¶ Sets the object present at the path with the value Parameters: path (str) – the path relative to the root of the S3Provider. content (bytes) – the value to be assigned at the path. Raises: S3SetError – Any S3 error encountered while setting the value at the path. ReadOnlyError – If the provider is in read-only mode. _all_keys()¶ Helper function that lists all the objects present at the root of the S3Provider",activeloop LRU Cache¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#lru-cache,"get_deeplake_object(path: str, expected_class, meta: Optional[Dict] = None, url=False, partial_bytes: int = 0)¶ If the data at path was stored using the output of a DeepLakeMemoryObject's tobytes function, this function will read it back into object form & keep the object in cache . Parameters: path (str) – Path to the stored object. expected_class (callable) – The expected subclass of DeepLakeMemoryObject. meta (dict, optional) – Metadata associated with the stored object url (bool) – Get presigned url instead of downloading chunk (only for videos) partial_bytes (int) – Number of bytes to read from the beginning of the file. If 0, reads the whole file. Defaults to 0. Raises: ValueError – If the incorrect expected_class was provided. ValueError – If the type of the data at path is invalid. 
ValueError – If url is True but expected_class is not a subclass of BaseChunk. Returns: An instance of expected_class populated with the data. register_deeplake_object(path: str, obj: DeepLakeMemoryObject)¶ Registers a new object in the cache. remove_deeplake_object(path: str)¶ Removes a DeepLakeMemoryObject from the cache.",activeloop LRU Cache¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#lru-cache,"In case the cache storage was local/s3 and is still accessible when unpickled (if the same machine/s3 creds are present, respectively), the earlier cache contents are no longer accessible. _all_keys()¶ Helper function that lists all the objects present in the cache and the underlying storage. Returns: set of all the objects found in the cache and the underlying storage. Return type: set _flush_if_not_read_only()¶ Flushes the cache if not in read-only mode. _forward(path)¶ Forwards the value at a given path to the next storage, and un-marks its key. _forward_value(path, value)¶ Forwards a path-value pair to the next storage, and un-marks its key. Parameters: path (str) – the path to the object relative to the root of the provider. value (bytes, DeepLakeMemoryObject) – the value to send to the next storage. _free_up_space(extra_size: int)¶ Helper function that frees up the required space in the cache. No action is taken if there is sufficient space in the cache. Parameters: extra_size (int) – the space that is required, in bytes. _insert_in_cache(path: str, value: Union[bytes, DeepLakeMemoryObject])¶ Helper function that adds a key-value pair to the cache. Parameters: path (str) – the path relative to the root of the underlying storage. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. _pop_from_cache()¶ Helper function that pops the least recently used key-value pair from the cache. clear(prefix='')¶ Deletes ALL the data from all the layers of the cache and the actual storage. This is an IRREVERSIBLE operation. Data once deleted cannot be recovered. clear_cache()¶ Flushes the content of all the cache layers if not in read-only mode and then deletes the contents of all of its layers. This doesn't delete data from the actual storage. clear_deeplake_objects()¶ Removes all DeepLakeMemoryObjects from the cache. flush()¶ Writes data from cache_storage to next_storage. Only the dirty keys are written. This is a cascading function and leads to data being written to the final storage in case of a chained cache. get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path",activeloop Google Drive Storage Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#google-drive-storage-provider,"class deeplake.core.storage.GDriveProvider¶ Bases: StorageProvider Provider class for using Google Drive storage. __delitem__(path)¶ Delete the object present at the path. 
Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. __getitem__(path)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. __init__(root: str, token: Optional[Union[str, Dict]] = None, makemap: bool = True)¶ Initializes the GDriveProvider. Example >>> gdrive_provider = GDriveProvider('gdrive://folder_name/folder_name') Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. token (dict, str, optional) – Google Drive token. Can be a path to the token file or the actual credentials dictionary. makemap (bool) – Creates a path-to-id map if True. Note Requires client_secrets.json in the working directory if token is not provided. Due to limits on requests per 100 seconds on the Google Drive API, continuous requests such as uploading many small files can be slow. Users can request to increase their quotas on their Google Cloud Platform. __iter__()¶ Generator function that iterates over the keys of the provider. Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Returns: the number of files present inside the root. Return type: int __setitem__(path, content)¶ Sets the object present at the path with the value. Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. _all_keys()¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. Return type: set clear(prefix='')¶ Delete the contents of the provider. sync()¶ Sync provider keys with actual storage",activeloop Google Cloud Storage Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#google-cloud-storage-provider,"class deeplake.core.storage.GCSProvider¶ Bases: StorageProvider Provider class for using GC storage. __contains__(key)¶ Checks if key exists in mapping. __delitem__(key)¶ Remove key. __getitem__(key)¶ Retrieve data. __init__(root: str, token: Optional[Union[str, Dict]] = None, project: Optional[str] = None)¶ Initializes the GCSProvider. Example >>> gcs_provider = GCSProvider('gcs://my-bucket/gcs_ds') Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root. token (str/Dict) – GCP token, used for fetching credentials for storage. Can be a path to the credentials file, the actual credential dictionary, or one of the following: - google_default: Tries to load default credentials for the specified project. - cache: Retrieves the previously used credentials from cache if they exist. - anon: Sets credentials=None. - browser: Generates and stores a new token file using the CLI. project (str) – Name of the project from GCloud. Raises: ModuleNotFoundError – If google cloud packages aren't installed. __iter__()¶ Iterates over the structure. __len__()¶ Returns the length of the structure. __setitem__(key, value)¶ Store value in key. _all_keys()¶ Generator function that iterates over the keys of the provider. Returns: set of all keys present at the root of the provider. 
Return type: set _set_hub_creds_info(hub_path: str, expiration: str)¶ Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters: hub_path (str) – The deeplake cloud path to the dataset. expiration (str) – The time at which the credentials expire. clear(prefix='')¶ Remove all keys with given prefix below root - empties out mapping. Warning Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. rename(root)¶ Rename root folder.",activeloop Local Storage Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#local-storage-provider," class deeplake.core.storage.LocalProvider¶ Bases: StorageProvider Provider class for using the local filesystem. __delitem__(path: str)¶ Delete the object present at the path. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> del local_provider['abc.txt'] Copy to clipboard Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> my_data = local_provider['abc.txt'] Copy to clipboard Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. DirectoryAtPathException – If a directory is found at the path. Exception – Any other exception encountered while trying to fetch the object. __init__(root: str)¶ Initializes the LocalProvider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') Copy to clipboard Parameters: root (str) – The root of the provider. All read/write request keys will be appended to root.' Raises: FileAtPathException – If the root is a file instead of a directory. __iter__()¶ Generator function that iterates over the keys of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> for my_data in local_provider: ... pass Copy to clipboard Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> len(local_provider) Copy to clipboard Returns: the number of files present inside the root. 
Return type: int __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Example >>> local_provider = LocalProvider('/home/ubuntu/Documents/') >>> local_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises: Exception – If unable to set item due to directory at path or permission or space issues. FileAtPathException – If the directory to the path is a file instead of a directory. ReadOnlyError – If the provider is in read-only mode. _all_keys(refresh: bool = False) → Set[str]¶ Lists all the objects present at the root of the Provider. Parameters: refresh (bool) – refresh keys Returns: set of all the objects found at the root of the Provider. Return type: set _check_is_file(path: str)¶ Checks if the path is a file. Returns the full_path to file if True. Parameters: path (str) – the path to the object relative to the root of the provider",activeloop S3 Storage Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#s3-storage-provider,"Returns: set of all the objects found at the root of the S3Provider . Return type: set Raises: S3ListError – Any S3 error encountered while listing the objects. _check_update_creds(force=False)¶ If the client has an expiration time, check if creds are expired and fetch new ones. This would only happen for datasets stored on Deep Lake storage for which temporary 12 hour credentials are generated. _set_hub_creds_info(hub_path: str, expiration: str)¶ Sets the tag and expiration of the credentials. These are only relevant to datasets using Deep Lake storage. This info is used to fetch new credentials when the temporary 12 hour credentials expire. Parameters: hub_path (str) – The Deep Lake cloud path to the dataset. expiration (str) – The time at which the credentials expire. _state_keys()¶ Keys used to store the state of the provider. clear(prefix='')¶ Deletes ALL data with keys having given prefix on the s3 bucket (under self.root). Warning Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. S3GetError – Any other error other than KeyError while retrieving the object. need_to_reload_creds(err: ClientError) → bool¶ Checks if the credentials need to be reloaded. This happens if the credentials were loaded from the environment and have now expired. rename(root)¶ Rename root folder.",activeloop Memory Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#memory-provider,"class deeplake.core.storage.MemoryProvider¶ Bases: StorageProvider Provider class for using the memory. __delitem__(path: str)¶ Delete the object present at the path. Example >>> memory_provider = MemoryProvider('xyz') >>> del memory_provider['abc.txt'] Copy to clipboard Parameters: path (str) – the path to the object relative to the root of the provider. Raises: KeyError – If an object is not found at the path. 
ReadOnlyError – If the provider is in read-only mode. __getitem__(path: str)¶ Gets the object present at the path within the given byte range. Example >>> memory_provider = MemoryProvider('xyz') >>> my_data = memory_provider['abc.txt'] Copy to clipboard Parameters: path (str) – The path relative to the root of the provider. Returns: The bytes of the object present at the path. Return type: bytes Raises: KeyError – If an object is not found at the path. __getstate__() → str¶ Does NOT save the in memory data in state. __init__(root: str = '')¶ __iter__()¶ Generator function that iterates over the keys of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> for my_data in memory_provider: ... pass Copy to clipboard Yields: str – the path of the object that it is iterating over, relative to the root of the provider. __len__()¶ Returns the number of files present inside the root of the provider. Example >>> memory_provider = MemoryProvider('xyz') >>> len(memory_provider) Copy to clipboard Returns: the number of files present inside the root. Return type: int __setitem__(path: str, value: bytes)¶ Sets the object present at the path with the value Example >>> memory_provider = MemoryProvider('xyz') >>> memory_provider['abc.txt'] = b'abcd' Copy to clipboard Parameters: path (str) – the path relative to the root of the provider. value (bytes) – the value to be assigned at the path. Raises: ReadOnlyError – If the provider is in read-only mode. _all_keys()¶ Lists all the objects present at the root of the Provider. Returns: set of all the objects found at the root of the Provider. Return type: set clear(prefix='')¶ Clears the provider.",activeloop Local Storage Provider¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.core.storage.html#local-storage-provider,"Returns: the full path to the requested file . Return type: str Raises: DirectoryAtPathException – If a directory is found at the path. clear(prefix='')¶ Deletes ALL data with keys having given prefix on the local machine (under self.root). Exercise caution! get_bytes(path: str, start_byte: Optional[int] = None, end_byte: Optional[int] = None)¶ Gets the object present at the path within the given byte range. Parameters: path (str) – The path relative to the root of the provider. start_byte (int, optional) – If only specific bytes starting from start_byte are required. end_byte (int, optional) – If only specific bytes up to end_byte are required. Returns: The bytes of the object present at the path within the given byte range. Return type: bytes Raises: InvalidBytesRequestedError – If start_byte > end_byte or start_byte < 0 or end_byte < 0. KeyError – If an object is not found at the path. 
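Putting the LocalProvider interface together, here is a minimal usage sketch (illustrative only: the directory, key, and byte range are made up, and the import mirrors the documented module path):

from deeplake.core.storage import LocalProvider

provider = LocalProvider('/tmp/provider_demo')

provider['hello.txt'] = b'hello deep lake'                        # __setitem__ writes raw bytes under root
blob = provider['hello.txt']                                      # __getitem__ reads the whole object
head = provider.get_bytes('hello.txt', start_byte=0, end_byte=5)  # ranged read of the first bytes
print(len(provider), list(provider))                              # __len__ and __iter__ walk the keys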
rename(path)¶ Renames root folder",activeloop deeplake.auto.structured¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.auto.structured.html#deeplake-auto-structured,deeplake.auto.structured.base StructuredDataset deeplake.auto.structured.dataframe DataFrame,activeloop deeplake.auto.unstructured¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.auto.unstructured.html#deeplake-auto-unstructured,deeplake.auto.unstructured.base UnstructuredDataset deeplake.auto.unstructured.image_classification ImageClassification deeplake.auto.unstructured.kaggle download_kaggle_dataset(),activeloop deeplake.auto.unstructured.base¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.auto.unstructured.base.html#deeplake-auto-unstructured-base,class deeplake.auto.unstructured.base.UnstructuredDataset(source: str)¶,activeloop deeplake.auto.unstructured.image_classification¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.auto.unstructured.image-classification.html#deeplake-auto-unstructured-image-classification,"class deeplake.auto.unstructured.image_classification.ImageClassification(source: str)¶ structure(ds: Dataset, use_progress_bar: bool = True, generate_summary: bool = True, image_tensor_args: dict = {}) → Dataset¶ Create a structured dataset. Parameters: ds (Dataset) – A Deep Lake dataset object. use_progress_bar (bool) – Defines if the method uses a progress bar. Defaults to True. generate_summary (bool) – Defines if the method generates ingestion summary. Defaults to True. image_tensor_args (dict) – Defines the sample compression of the dataset (jpeg or png). Returns: A Deep Lake dataset.",activeloop deeplake.auto.structured.base¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.auto.structured.base.html#deeplake-auto-structured-base,class deeplake.auto.structured.base.StructuredDataset(source)¶ Initializes a structured dataset. Parameters: source (str) – The local path to folder or file containing a structured dataset and of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset.,activeloop deeplake.auto.structured.dataframe¶,https://docs.deeplake.ai/en/v3.1.0/deeplake.auto.structured.dataframe.html#deeplake-auto-structured-dataframe,"class deeplake.auto.structured.dataframe.DataFrame(source)¶ fill_dataset(ds: Dataset, progressbar: bool = True) → Dataset¶ Fill dataset with data from the dataframe - one tensor per column Parameters: ds (Dataset) – A Deep Lake dataset object. progressbar (bool) – Defines if the method uses a progress bar. Defaults to True. Returns: A Deep Lake dataset.",activeloop Let's build the AI Image Search App!,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"We need four things: data, a way to generate embeddings, a vector database to store them, and an interactive app. Let's start with the data. You can also take a look at the article companion video below and fork the GitHub repo as well.",activeloop How to Conduct Multimodal Search with ImageBind & Deep Lake?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"If you've ever thought that Multimodality in AI is limited to generating images with Midjourney or Dall-E, think again. 
Multimodal use cases will increasingly become more prevalent, with each additional modality unlocking incremental business value. In this guide, we'll explore the creation of a search engine that retrieves AI-generated images using text, audio, or visual inputs, opening new doors for accessibility, user experience, and business intelligence. To achieve this, we will leverage ImageBind by Meta AI, a game-changer for multimodal AI applications. It captures diverse data modalities and maps them into a common vector space, making our search more powerful. This unlocks novel use cases beyond a vanilla image similarity search. Unlike anything else, Deep Lake by Activeloop enables the storage and querying of multimodal data (not only the embeddings but also the raw data!). With Deep Lake and ImageBind, the potential applications of this technology are vast. Whether improving product discovery in eCommerce, streamlining digital media libraries, enhancing accessibility in tech products, or powering intuitive search in digital archives, this innovation can drive user satisfaction and business growth.",activeloop Gathering AI-Generated Images from Lexica for AI Image Search,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"We were thinking about interesting data to search on, and we came across the Lexica dataset, containing images from Lexica - a website where you can search for AI-generated (mostly Stable Diffusion) images. The image search there works by exact match on the prompt, while we will create a semantic search. Since we want to display the images on the web app, we need to get them and store them somewhere (in our case, an S3 bucket - but since Deep Lake is serverless, you can do it wherever). So, first, we load the Hugging Face dataset: 1# pip install datasets 2from datasets import load_dataset 3 4dataset = load_dataset('xfh/lexica_6k', split='train') Then we store each image on disk: 1for row in dataset: 2 image = row['image'] 3 name = row['text'] 4 image.save(f'{name}.jpg') This is a simplified version. For speed, we used a thread pool on dataset batches. Then we stored the images in an S3 bucket to show them later in the app.",activeloop What is ImageBind?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/," In a nutshell, ImageBind is a transformer-based model trained on multiple pairs of modalities, e.g., text-image, that learns how to map all of them into the same vector space. This means that a text query 'dog' will be mapped close to a dog image, allowing us to search in that space seamlessly. The main advantage is that we don't need one model per modality, like in CLIP where you have one for text and one for image; we can use the same weights for all of them. The following image taken from the paper shows the idea. The model supports images, text, audio, depth, thermal, and IMU data. We will limit ourselves to the first three. The task of learning similar embeddings for similar concepts in different modalities, e.g., 'dog' and an image of a dog, is called alignment. The ImageBind authors used a Vision Transformer (ViT), a typical architecture these days. Due to the number of different modalities, the preprocessing step is different. 
For example, for videos, we need to consider the time dimension, and the audio needs to be converted to a spectrogram, but the main weights are the same. To learn to align pairs of modalities (text, image), (audio, text), the authors used contrastive learning, specifically the InfoNCE loss. Using InfoNCE, the model is trained to identify a positive example from a batch of negative ones by maximizing the similarity between positive pairs and minimizing the similarity between negative ones. The most exciting thing is that even if the model was trained on the pairs (text, image) and (audio, text), the model also learns (image, audio). This is what the authors called the 'Emergent alignment of unseen pairs of modalities'. Moreover, we can do embedding space arithmetic, adding (or subtracting) embeddings from multiple modalities to capture different semantic information. We'll play with it later on. For the most curious reader, you can learn more by reading the paper. Okay, let's get the image embeddings. We need to load the model and store the embeddings for all the images, so we can, later on, read them and dump them in the vector database. Getting the embeddings is quite easy with the ImageBind code: 1import data 2import torch 3from models import imagebind_model 4from models.imagebind_model import ModalityType 5 6text_list=['A dog.', 'A car', 'A bird'] 7image_paths=['.assets/dog_image.jpg', '.assets/car_image.jpg', '.assets/bird_image.jpg'] 8audio_paths=['.assets/dog_audio.wav', '.assets/car_audio.wav', '.assets/bird_audio.wav'] 9 10device = 'cuda:0' if torch.cuda.is_available() else 'cpu' 11 12# Instantiate model 13model = imagebind_model.imagebind_huge(pretrained=True) 14model.eval() 15model.to(device) 16 17# Load data 18inputs = { 19 ModalityType.TEXT: data.load_and_transform_text(text_list, device), 20 ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device), 21 ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, device), 22} 23 24with torch.no_grad(): 25 embeddings = model(inputs) 26 27print(embeddings[ModalityType.VISION]) 28print(embeddings[ModalityType.AUDIO]) 29print(embeddings[ModalityType.TEXT]) We first store all the image embeddings as .pth files on disk using a simple function to batch the images",activeloop How to Store Embeddings in a Vector Database?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/," After we have obtained our embeddings, we load them into Deep Lake. You can learn more about Deep Lake in the Deep Lake docs. To start, we need to define the vector database: 1import deeplake 2 3ds = deeplake.empty( 4 path='hub://<YOUR_ACTIVELOOP_ORG_ID>/<DATASET_NAME>', 5 runtime={'db_engine': True}, 6 token='<YOUR_TOKEN>', 7 overwrite=overwrite, 8 ) We are setting db_engine=True, meaning we won't store the data on our disk; instead, we will use the managed Deep Lake database to store the data and run our queries. This comes in handy when developing applications where you need to have compute and data storage separation while keeping data where it matters to you. 
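For instance (a hedged sketch; the local path is hypothetical), dropping the runtime argument creates the same dataset against plain local storage instead of the Managed Tensor Database:

import deeplake

# Without runtime={'db_engine': True}, the data stays on local disk at this path.
ds = deeplake.empty(path='./lexica_6k_local', overwrite=True)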
You can deploy the same setup entirely locally and not send your sensitive data anywhere it's not supposed to be.Next, we need to define the shape of the data.1with ds: 2 ds.create_tensor( 3 'metadata', 4 htype='json', 5 create_id_tensor=False, 6 create_sample_info_tensor=False, 7 create_shape_tensor=False, 8 chunk_compression='lz4', 9 ) 10 ds.create_tensor('images', htype='image', sample_compression='jpg') 11 ds.create_tensor( 12 'embeddings', 13 htype='embedding', 14 dtype=np.float32, 15 create_id_tensor=False, 16 create_sample_info_tensor=False, 17 max_chunk_size=64 * MB, 18 create_shape_tensor=True, 19 ) 20CopyHere we create three tensors, one to hold the metadata of each embedding, one to store the images (in our case, this is optional, but it's cool to showcase), and one to store the actual tensors of our embeddings. Deep Lake stands out from the crowd with this feature.Then it's time to add our data. We stored batched embeddings to disk as .pth files if you recall.1 2def add_torch_embeddings(ds: deeplake.Dataset, embeddings_data_path: Path): 3 embeddings_data = torch.load(embeddings_data_path) 4 for embedding_data in embeddings_data: 5 metadata = embedding_data['metadata'] 6 embedding = embedding_data['embedding'].cpu().float().numpy() 7 image = read_image(metadata['path']).permute(1, 2, 0).numpy() 8 metadata['path'] = Path(metadata['path']).name 9 ds.append({'embeddings': embedding, 'metadata': metadata, 'images': image}) 10 11embeddings_data_paths = embeddings_root.glob('*.pth') 12list( 13 tqdm( 14 map( 15 partial(add_torch_embeddings, ds), 16 embeddings_data_paths, 17 ) 18 ) 19) 20CopyHere we are just iterating all the embeddings file and adding everything within each one. We can have a look at the data from activeloop dashboard - spoiler alert. It is quite cool. You can also visualize the 3D embedding space (and pick your preferred clustering algorithm).Cool!To run a query on Deep Lake we can1embedding = # getting the embeddings from ImageBind 2dataset_path = # our path to active loop dataset 3limit = # number of results we want 4query = f'select * from (select metadata, cosine_similarity(embeddings, ARRAY{embedding.tolist()}) as score from '{dataset_path}') order by score desc limit {limit}' 5query_res = ds.query(query, runtime={'tensor_db': True}) 6# query_res = Dataset(path='hub://zuppif/lexica-6k', read_only=True, index=Index([(1331, 1551)]), tensors=['embeddings', 'images', 'metadata']) 7CopyWe can access the metadata by1query_res.metadata.data(aslist=True)['value'] 2# [{'path': '5e3a7c9b-e890-4975-9342-4b6898fed2c6.jpeg'}, {'path': '7a961855-25af-4359-b869-5ae1cc8a4b95.jpeg'}] 3CopyIf you remember, these are the metadata we stored previously, so the image filename. We wrapped all the vector store-related code into a VectorStore class inside vector_store.py.1class VectorStore(): 2 ..",activeloop Create Image Embeddings for Multimodal Retrieval,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"To search the images given a query, we need to encode the images in embeddings, then encode the query and perform cosine similarity to find the 'closets,' aka 'most similar' images. We want to search using multiple modalities, text, images, or audio. For this reason, we decided to use the new Meta model called ImageBind. 
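The retrieval step itself boils down to cosine similarity between the query embedding and the stored image embeddings. A minimal sketch in plain PyTorch with made-up shapes (in the app, this ranking is delegated to the Deep Lake TQL query shown above):

import torch
import torch.nn.functional as F

image_embeddings = torch.randn(1000, 1024)  # hypothetical: 1,000 stored image embeddings
query_embedding = torch.randn(1024)         # hypothetical: one encoded query

# Cosine similarity is the dot product of L2-normalized vectors.
scores = F.normalize(image_embeddings, dim=-1) @ F.normalize(query_embedding, dim=-1)
top_scores, top_idx = scores.topk(k=15)     # the 15 'closest' images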
If you want, you can learn more about generating image embeddings.",activeloop Developing an AI Image Search App with Gradio,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/," We'll use Gradio to create a sleek UI for the app. We first need to define the inputs and outputs of the app: 1with gr.Blocks() as demo: 2 # a little description 3 with Path('docs/APP_README.md').open() as f: 4 gr.Markdown(f.read()) 5 # text input 6 text_query = gr.Text(label='Text') 7 with gr.Row(): 8 # image input 9 image_query = gr.Image(label='Image', type='pil') 10 with gr.Column(): 11 # audio input 12 audio_query = gr.Audio(label='Audio', source='microphone', type='filepath') 13 search_button = gr.Button('Search', label='Search', variant='primary') 14 # and a little section to change the settings 15 with gr.Accordion('Settings', open=False): 16 limit = gr.Slider( 17 minimum=1, 18 maximum=30, 19 value=15, 20 step=1, 21 label='search limit', 22 interactive=True, 23 ) 24 # This will show the images 25 gallery = gr.Gallery().style(columns=[3], object_fit='contain', height='auto') This results in the following UI. Then we need to link the search button to the actual search code: 1... 2search_button.click( 3 search_button_handler, [text_query, image_query, audio_query, limit], [gallery] 4 ) This means text_query, image_query, audio_query, and limit are the inputs to search_button_handler and gallery is the output, where search_button_handler is: 1... 2vs = VectorStore.from_env() 3model = get_model() 4... 5def search_button_handler( 6 text_query: Optional[str], 7 image_query: Optional[Image.Image], 8 audio_query: Optional[str], 9 limit: int = 15, 10): 11 if not text_query and not image_query and not audio_query: 12 logger.info('No inputs!') 13 return 14 # we have to pass a list for each query 15 if text_query == '' and len(text_query) <= 0: 16 text_query = None 17 if text_query is not None: 18 text_query = [text_query] 19 if image_query is not None: 20 image_query = [image_query] 21 if audio_query is not None: 22 audio_query = [audio_query] 23 start = perf_counter() 24 logger.info(f'Searching ...') 25 embeddings = get_embeddings(model, text_query, image_query, audio_query).values() 26 # if multiple inputs, we sum them 27 embedding = torch.stack(list(embeddings), dim=0).sum(0).squeeze() 28 logger.info(f'Model took {(perf_counter() - start) * 1000:.2f}') 29 images_paths, query_res = vs.retrieve(embedding.cpu().float(), limit) 30 return [f'{BUCKET_LINK}{image_path}' for image_path in images_paths] So for each input, we check that it exists and, if it does, wrap it in a list. This is needed for our internal implementation. vs.retrieve is a function of VectorStore, just a utility class that wraps all the code in the same place. Inside that function, we first compute the embeddings using the get_embeddings function shown before, and then we run a query against the vector db. We have stored all the images in S3, so we return a list of links to the images there; this is the input to gr.Gallery. And that's it! Let's see it in action. Normal single-modality search works as expected. If we receive more than one input, we sum the embeddings up. Basically: 1embedding = torch.vstack(list(embeddings)).sum(0) For example, we can pass an image of a car and an audio clip of an F1 race, or text + image. 
We can also do text + image + audio",activeloop How to Store Embeddings in a Vector Database?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"3 def retrieve(self, embedding: torch.Tensor, limit: int = 15) -> List[str]: 4 query = f'select * from (select metadata, cosine_similarity(embeddings, ARRAY{embedding.tolist()}) as score from '{self.dataset_path}') order by score desc limit {limit}' 5 query_res = self._ds.query(query, runtime={'tensor_db': True}) 6 images = [ 7 el['path'].split('.')[0] 8 for el in query_res.metadata.data(aslist=True)['value'] 9 ] 10 return images, query_res 11CopySo, since the model supports text, images, and audio we can also create a utility function to make our life easier.1@torch.no_grad() 2def get_embeddings( 3 model: torch.nn.Module, 4 texts: Optional[List[str]], 5 images: Optional[List[ImageLike]], 6 audio: Optional[List[str]], 7 dtype: torch.dtype = torch.float16 8) -> Dict[str, torch.Tensor]: 9 inputs = {} 10 if texts is not None: 11 # they need to be ints 12 inputs[ModalityType.TEXT] = load_and_transform_text(texts, device) 13 if images is not None: 14 inputs[ModalityType.VISION] = load_and_transform_vision_data(images, device, dtype) 15 if audio is not None: 16 inputs[ModalityType.AUDIO] = load_and_transform_audio_data(audio, device, dtype) 17 embeddings = model(inputs) 18 return embeddings 19CopyAlways remember the torch.no_grad decorator :) Next, we can easily do1vs = VectorStore(...) 2vs.retrieve(get_embeddings(texts=['A Dog'])) 3Copyqueryresults 'A Dog' 1vs = VectorStore(...) 2vs.retrieve(get_embeddings(images=['car.jpeg'])) 3Copyqueryresults ",activeloop The Future of Multimodal Search,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"In conclusion, the fusion of ImageBind and Deep Lake provides a robust framework for developing an AI-based multimodal search engine. By leveraging AI-generated images and text, audio, and visual inputs, we've shown how it's possible to make strides toward more intuitive, efficient, and inclusive search experiences.For machine learning engineers, this exploration opens up new avenues for creating more user-centric applications. At the same time, business executives can see the transformative impact of AI on a wide array of industries.The future of search is here, and it's multimodal. Now it's your turn. Try the demo and share the results!",activeloop What Modalities are Supported by ImageBind?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"ImageBind supports six different modalities - images, text, audio, depth, thermal, and IMU data. The modality encoders are based on a Transformer architecture, including the Vision Transformer (ViT) for images and videos. Audio is encoded by converting a 2-second audio sample into spectrograms, which are treated as 2D signals and encoded using a ViT. Thermal images and depth images are treated as one-channel images and also encoded using a ViT.",activeloop Is ImageBind Open Source?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"ImageBind is an open-source project with a PyTorch implementation and pretrained models available. 
The model and accompanying weights are available for download and can be used to feed text, image, and audio data into ImageBind.",activeloop Is ImageBind Applications?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"ImageBind enables novel applications ‘out-of-the-box' including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection and generation. It has several potential use cases, including information retrieval, zero-shot classification, and connecting the output of ImageBind to other models. ImageBind could play a crucial role in developing autonomous vehicles, helping them to perceive and interpret their surroundings more effectively.",activeloop Who Developed ImageBind?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,ImageBind has been developed and open-sourced by researchers at Meta.,activeloop What is multimodal AI?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"Multimodal AI is an AI category that integrates various types, or modalities, of data to reach more precise conclusions, make insightful deductions, or provide more accurate real-world problem predictions. Multimodal AI platforms use and learn from a variety of data including video, audio, speech, images, text, and numerous traditional structured datasets.",activeloop What are the benefits of multimodal AI?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"Multimodal AI generally surpasses single modal AI in many real-world situations. Through the combination of different data types, multimodal AI can produce more accurate, human-like responses, thereby enhancing its versatility and adaptability in varying scenarios. Industries like healthcare, finance, and retail could significantly benefit from multimodal AI due to its ability to provide precise and customized responses.",activeloop What is multimodal machine learning?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,Multimodal machine learning is an evolving multidisciplinary research domain aimed at developing computer agents with intelligent capabilities to process and connect information from multiple modalities. There has been considerable progress in this emerging field over recent years.,activeloop What are the challenges of multimodal AI?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"Despite its potential and advantages, multimodal AI does have associated challenges, specifically related to data quality and interpretation for developers. Certain modalities may be excessively noisy, complicating the AI system's learning process. The complexity of multimodal AI systems necessitates substantial computational resources. Lastly, for these systems to be trusted, they need to be explainable.",activeloop What are the applications of multimodal AI?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"Multimodal AI has extensive applications across various sectors including healthcare, finance, and retail. Multimodal conversational AI systems can answer queries, complete tasks, and mimic human conversations by comprehending and conveying information from multiple modalities. 
Complex recipe generation from images is another potential application for multimodal AI.",activeloop What is the difference between multimodal AI and single modal AI?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"The key distinction between multimodal AI and conventional single modal AI lies in the data. Single modal AI is typically designed to handle a singular data source or type. For instance, a financial AI leverages business financial data, along with wider economic and industrial sector data, to conduct analyses, make financial predictions, or identify potential financial issues for the company. In other words, the single modal AI is specialized for a specific task.",activeloop What is ImageBind?,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"Note that we store a dictionary to add metadata; we are interested in the image_path and will use it later.1@torch.no_grad() 2def encode_images( 3 images_root: Path, 4 model: torch.nn.Module, 5 embeddings_out_dir: Path, 6 batch_size: int = 64, 7): 8 # not the best way, but the faster, best way would be to use a torch Dataset + Dataloader 9 images = images_root.glob('*.jpeg') 10 embeddings_out_dir.mkdir(exist_ok=True) 11 for batch_idx, chunk in tqdm(enumerate(chunks(images, batch_size))): 12 images_paths_str = [str(el) for el in chunk] 13 images_embeddings = get_images_embeddings(model, images_paths_str) 14 torch.save( 15 [ 16 {'metadata': {'path': image_path}, 'embedding': embedding} 17 for image_path, embedding in zip(images_paths_str, images_embeddings) 18 ], 19 f'{str(embeddings_out_dir)}/{batch_idx}.pth', 20 ) 21CopyNote that a better solution would have been using torch Dataset + Dataloader, and we dive into this in this image embedding tutorial. ",activeloop Developing an AI Image Search App with Gradio,https://activeloop.ai/resources/use-image-bind-multimodal-retrieval-for-ai-image-search/,"Feel free to test it out!Some of the results were not too great, 'cartoon' + cat image.In our experiments, we've noticed that the text is more potent compared to image and audio when combined with the other modalities ",activeloop The Problem in RAGs is Inaccurate Retrieval,https://activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/,"Enterprises are currently building ‘chat with your data' applications for a variety of use cases ranging from documenting internal processes to adding automation for customer support. These solutions are typically implemented using Retrieval Augmented Generation (RAG) systems that provide context to Large Language Models (LLMs) like GPT-4. Retrieval Augmented Generation also infuses the model with strategically relevant bits of the latest information, reducing the need to fine-tune the model separately and removing the training dataset restriction. While truthfulness and faithfulness are important, the utility of the implementation ultimately depends on the retrieval accuracy. At best, RAG applications achieve a retrieval accuracy of 70%, so in 30% of cases, an LLM like GPT-4 cannot provide an accurate response to the user's requests. 
This level of inconsistency is obvious and perceptible to the end user, and it is unacceptable when applications play a crucial role in business-critical operations.",activeloop Existing Solutions Provide Incremental Improvements,https://activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/,"There are several options for boosting the accuracy of RAG systems, such as adjusting the input with feature engineering, fine-tuning embeddings, employing hybrid or lexical search, reranking the final results with cross encoders, and context-aware fine-tuning of your LLM. However, these techniques offer marginal improvements that do not fundamentally change the user experience of LLM apps. Another approach for increasing accuracy is to use a broader LLM context, but this often results in higher costs with only minor gains. Why? Answer quality decreases and the risk of hallucination increases: as shown in a recent study from Stanford, LLMs struggle to pick out key details from large contexts, especially if these details are in the middle. These models find it hard to focus on relevant info when given many documents, and the problem gets worse (by up to 20%) with more documents. The bigger the context, the higher the cost for LLM execution: LLM providers bill based on data size, and adding more data to a query increases the cost. For self-deployments, the cost manifests through higher compute requirements and larger infrastructure.",activeloop Customer Spotlight: Munai Boosts Vector Search Accuracy by 41% with Deep Memory,https://activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/,"Health tech startup Munai, backed by the Bill & Melinda Gates Foundation, provides intelligent patient management. 'Leveraging Deep Memory, we've achieved an astounding 18.6% boost in our vector search accuracy across medical documents. Such a transformation directly impacts the efficiency and efficacy of our solutions.' - Mateus Cichelero da Silva, Data Chapter Lead at Munai The company collaborates with one of the largest hospitals in Brazil to facilitate patient management for healthcare professionals and health insurance companies. 'Munai is one of the leading innovative generative AI solution providers in Healthcare. We are more than excited to partner with them to deploy mission critical AI workloads into hospitals' - Davit Buniatyan, CEO of Activeloop Hugo Morales and Cristian Rocha, founders of Munai, a healthtech generative AI startup (photo credit: handout) 'Deep Memory marks a groundbreaking advancement in constructing precision-focused RAG systems for medical applications. In contexts as delicate as healthcare environments, this is not just an improvement; it's a revolution' - Cristian Rocha, CEO of Munai",activeloop Introducing Deep Memory: An Actual Solution That Works,https://activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/,"Deep Memory significantly increases Deep Lake's vector search accuracy, by up to 22%, by learning an index from labeled queries tailored to your application, without impacting search time. These results can be achieved with only a few hundred example pairs of prompt embeddings and the most relevant answers from the vector store. Post-training, vector search is used without modifications, as normal. Embeddings can still be computed using a model of your choice, such as OpenAI's ada-002 or open-source models like BGE by BAAI. 
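For instance, a hedged sketch of the kind of embeddings object assumed in the snippets below (here LangChain's OpenAI wrapper; the model name and sample text are illustrative, and OPENAI_API_KEY must be set in the environment):

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model='text-embedding-ada-002')
vectors = embeddings.embed_documents(['Deep Memory learns an index from labeled queries.'])
print(len(vectors[0]))  # ada-002 returns 1536-dimensional vectors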
Furthermore, Deep Memory search results can be improved further by combining them with lexical search or a reranker.Recall comparison of lexical search, vector search and deep memory at top 10 retrieved documents",activeloop Higher Accuracy at Lower Cost for Your RAG Application,https://activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/,"Deep Memory increases retrieval accuracy without altering your existing workflow. Additionally, by reducing the top_k input into the LLM, you can significantly cut inference costs via lower token usage.",activeloop How can you do it yourself?,https://activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/,"Let's get hands-on. First, we load the indexed dataset; let's call it corpus. It has to be stored in the managed database (runtime={'tensor_db': True}) to leverage Deep Memory training on the Deep Lake Managed Tensor Database.1from deeplake import VectorStore 2 3corpus = VectorStore( 4 'hub://activeloop-test/scifact-demo', 5 embedding_function = embeddings.embed_documents, 6 runtime={'tensor_db': True} 7) 8CopyThen, we construct a dataset of questions and relevance. Relevance is a set of pairs (corpus.id: str, significance: str) that indicates where the answer is located inside the corpus. Sometimes an answer can be found in multiple locations or have different significance. Relevance enables Deep Memory training to correctly optimize the embedding space for higher accuracy. The goal here is to obtain a labelled query dataset similar to what your users would ask in a production setting. To observe significant improvements, expect to need a few hundred query pairs. If you don't already have a labelled query dataset, you can also use GPT-4 to generate synthetic pairs based on chunks from the corpus (a sketch of this follows the evaluation example below).Then, we kickstart the training job on the Activeloop platform.1questions = ['question 1', ...] 2relevance = [[(corpus.dataset.id[0], 1), ...], ...] 3 4job_id = corpus.deep_memory.train( 5 queries = questions, 6 relevance = relevance, 7 embedding_function = embeddings.embed_documents, 8) 9CopyWe can monitor training progress and observe that we gained up to +21.5% on the validation set within a few minutes 🤯.1corpus.deep_memory.status(job_id) 2CopyWe can enable Deep Memory directly inside vector search without waiting for the training to finish.1corpus.search( 2 embedding_data = 'Female carriers of the Apolipoprotein E4 (APOE4) allele have increased risk for dementia.', 3 embedding_function = embeddings.embed_query, 4 deep_memory = True 5) 6CopyA GPT-4 answer based on naive vector search produces the following result: 'The provided context does not explicitly state that female carriers of the Apolipoprotein E4 (APOE4) allele have an increased risk for dementia.' With Deep Memory enabled, however, it answers: 'The Apolipoprotein E4 (APOE4) allele is a confirmed susceptibility locus for late-onset Alzheimer's disease.' While qualitative results are useful for manual inspection, to evaluate in a production setting we want to quantitatively measure retrieval accuracy on previously unseen queries. This provides objective metrics to compare against naive vector search, or even lexical search.1corpus.deep_memory.evaluate( 2 queries = test_questions, 3 relevance = test_relevance, 4 embedding_function = embeddings.embed_documents, 5 top_k=[1, 3, 5, 10, 50, 100] 6) 7CopyThe chart plots recall at each top-k value; higher recall at a lower k is better.
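As promised above, here is a minimal sketch of generating synthetic training pairs with GPT-4 when no labelled queries exist yet. chunks_with_ids is a hypothetical list of (id, text) tuples pulled from the corpus, and the prompt wording is ours; the output mirrors the questions/relevance format passed to deep_memory.train.

import openai

def generate_synthetic_pairs(chunks_with_ids, n_pairs=300):
    # For each stored chunk, ask GPT-4 to write a question that the chunk answers,
    # then mark that chunk as fully relevant (significance 1) for the question.
    questions, relevance = [], []
    for chunk_id, chunk_text in chunks_with_ids[:n_pairs]:
        response = openai.ChatCompletion.create(
            model='gpt-4',
            messages=[{
                'role': 'user',
                'content': f'Write one question that is answered by the following passage:\n\n{chunk_text}',
            }],
        )
        questions.append(response['choices'][0]['message']['content'])
        relevance.append([(chunk_id, 1)])
    return questions, relevance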
As you increase k, the likelihood of the correct answer appearing in the retrieved results increases. However, since LLM API costs are proportional to the number of tokens in the context, your goal is to decrease top_k while preserving accuracy. Deep Memory lets you decrease top_k from 10 to 3 while preserving accuracy, resulting in up to 70% lower token usage, faster computations, and lower costs.",activeloop Get Started Today,https://activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/,A number of enterprise customers are already reaping the benefits of Deep Memory-powered Retrieval Augmented Generation apps. Dive into the future with our hosted training service in our Managed Tensor Database.Deep Memory is now available; sign up for the waitlist at this link.,activeloop Benefits of Deep Memory,https://activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/,"Higher Quality: Accuracy improves by up to +22% on average by learning from queries, with customers like Munai achieving up to a 41% increase. Cost Reduction: Save up to 50% of GPT-4 cost and execution time by reducing top_k. Simple to Use: No change in Deep Lake Vector Search usage, same speed with higher accuracy. Natively integrated inside LangChain & LlamaIndex. Smaller, Faster, Cheaper: Use smaller text embeddings combined with Deep Memory, such as BGE_small (384 dims), to beat OpenAI's ada-002 (1536 dims) and/or Elastic (BM25).",activeloop What is LangChain?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain's rise has been nothing short of impressive.LangChain is a library that provides tools for working with LLMs. It streamlines the large language model development process by offering a chain of steps to generate text from a specific input prompt. The toolkit handles prompt input, text generation, and generated output manipulation. Effectively, it allows for developing conversational agents that use LLMs to generate natural language responses. These agents can be used for various applications, from supporting customers at scale to answering company-specific questions based on your team's internal Notion database.",activeloop How does LangChain work?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain provides a pipeline of steps that generate text from input prompts. The pipeline comprises eight main components: input (prompt templates), the large language model, agents, utilities, document loaders, chains, indexes, and memory.",activeloop Input - LangChain Prompt Templates,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"The input component specifies the input prompt or the initial input provided to the pipeline (check out the most complete prompt engineering resource on GitHub). This comes in the form of a template that defines the structure of the prompt, including the format and content (sadly, no more Sydney for the foreseeable future 🙁).
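To make the template idea concrete, here is a minimal sketch using LangChain's PromptTemplate; the wording of the template and the example question are ours, purely for illustration.

from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=['question'],
    template=(
        'You are a machine learning expert. '
        'Answer the following question as a bulleted list of possible answers:\n{question}'
    ),
)

# Fill the template before sending the resulting prompt to an LLM.
prompt = template.format(question='How should I chunk documents before embedding them?')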
A typical example, for instance, could be asking the LLM to act as a machine learning expert or writing a bulleted list of possible answers.Choosing the correct template is vital since a minor change in wording may significantly impact LLM performance - with up to 45% of improvement possible against the baseline with some methods.",activeloop Document Loaders,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"With Document Loaders module, you can ingest documents (e.g., pdfs of quarterly reports, powerpoints, etc., as we will do shortly!) into the LLM for further analysis (typically question answering).",activeloop Large Language Model,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain enables access to a range of pre-trained LLMs (e.g., GPT-3) trained on large datasets. The large language model component generates output (in this case, text) based on the prompt and input. These LLMs can further be fine-tuned to match the needs of specific conversational agents (e.g., if you are building a legal-specific chatbot). If you want to train your own LLM, check out how to train your Large Language Models (LLMs) efficiently.",activeloop Chains,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain Chains, arguably, are THE component that makes LangChain stand out. LangChain provides a simple interface to go beyond a single LLM call, but rather enables users to link up LLMs or calls to LLMs and other Utils. The LangChain chains can be generic - a chain made up of links, which can be either primitives (prompts, LLMs, Utils, or other chains) or additional chains. In addition, chains can be sequential, meaning that multiple chains are run one after another, with prior outputs serving as inputs to the next-in-line chain link.For instance, we used the Program-Aided Language Models in the previous example. Here, the chain takes user input (a reasoning question - 'I'm 6 feet tall. How many feet long is the circumference of the smallest moon of Jupiter compared to me?'), and decides that no additional Utils must be used to answer the question.Let's try asking another question. 'Alex has three times the number of balls as Ben. Ben has two more balls than Claire. If Claire has four balls, how many total balls do the three have together?'pal_chain = PALChain.from_math_prompt(llm, verbose=True) question = 'Alex has three times the number of balls as Ben. Ben has two more balls than Claire. If Claire has four balls, how many total balls do the three have together?' pal_chain.run(question) > Entering new PALChain chain... def solution(): '''Alex has three times the number of balls as Ben. Ben has two more balls than Claire. If Claire has four balls, how many total balls do the three have together?''' claire_balls = 4 ben_balls = claire_balls + 2 alex_balls = ben_balls * 3 total_balls = alex_balls + ben_balls + claire_balls result = total_balls return result > Finished chain. 
'28'",activeloop LangChain Agents,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain Agents make a call regarding which action should be taken next (e.g., a calculation should be handled by Wolfram Alpha, while a question about how old is the famous person might trigger a web search) to satisfy the directive given by the input. All the possible actions are chained together in a logical loop. In effect, you can use Tools (generic Utils, other chains, or even other agents) or a Toolkit (a group of tools) to minimize ChatGPT's peculiarity of being very bad at answering math questions, and asking sophisticated questions, like 'I'm 6 feet tall. How many feet longer is the circumference of the smallest moon of Jupiter compared to me?' (it's less than you think).Notably, Agents engage an LLM to decide which Action to take, execute that Action, see an Observation, and iterate until the goal is reached.For the previous query, the Agent would act as follows: Thought: Do I need to use a tool? Yes Action: PAL-MATH Action Input: The circumference of the smallest moon of Jupiter is 6 times longer than 6 feet. Observation: 36 Thought: Do I need to use a tool? No AI: The circumference of the smallest moon of Jupiter is 36 feet longer than 6 feet. The circumference of the smallest moon of Jupiter is 36 feet longer than 6 feet.",activeloop Indexes,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"Language models are more powerful and valuable when combined with your own (proprietary) data. LangChain provides common indices for working with data - embeddings or, most commonly, a data store, such as Deep Lake.",activeloop LangChain Utils,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain provides an extensive collection of common utilities (Utils) to use in your application, such as Python REPLs (LLM would generate code to calculate the answer, run that code to get the answer and print it out), bash commands (e.g., to interface with the local system) or search engines, as well as a requests wrapper (to link a URL published post-2021 that ChatGPT doesn't know about, for instance). Utils are employed to strengthen the potency of LLMs when interacting with other sources of knowledge or computation.",activeloop LangChain Memory,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"Finally, LangChain includes a memory component. By default, Chains and Agents are stateless, which means they react to each new input query independently of prior inputs. Memory thus allows agents to recall previous interactions with users (a great use case for chatbots) or remember entities (e.g., recall all the Horcruxes Lord Voldemort created and who destroyed which one after you feed the entirety of Pottervers into it). This enables agents to provide users with more personalized and contextualized responses as time passes.",activeloop What are the top LangChain use cases?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"Agents: LangChain provides a dedicated module for building agents, which are systems that use a language model to interact with other tools. 
They can be used to create chatbots or power the next generation of personal assistants. Chatbots: Thanks to the Memory module, LangChain can be used to create chatbots, with the ability to remember what happened in previous interactions and the ability to recall distinct entities and answer questions based on them. Data Augmented Generation: LangChain provides tools for generating text based on specific data, such as summarization of a specific text or question answering over one particular or multiple pieces of text. Question Answering: As previously mentioned, LangChain can be utilized to perform question answering over your document data (powerpoints, pdfs, etc.), with the option to create an index over the data to save time and money (and, potentially, additional headcount on Research Assistants or Dedicated Knowledge teams - looking at you, consulting companies!). Code Understanding: You can use LangChain to parse large code repositories, understand how certain variables are defined or how the functions interact. More interestingly, you can also create demo scenarios and ask LangChain how the code would perform in that scenario. We've written a guide on using LangChain to Chat with GitHub Code Repos (we've used the Twitter recommendation algorithm). Check it out! Summarization: LangChain can synthesize an actionable summary of multiple longer documents, which helps distill long reports into the core pieces of information (or maybe even sifting through long and uninteresting email chains). Evaluation: LLMs are notoriously hard to evaluate with traditional 'hard' metrics. LangChain provides tools for assessing language models, specifically (data-augmented) question-answering systems. Model Comparison: LangChain provides a ModelLaboratory for testing and comparing various alternatives of prompts, models, and even chains to use.",activeloop Practical Example: Using LangChain and Deep Lake to Explore Amazon's Revenue Growth Pre- and Post-Pandemic,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"In this tutorial, we will first load Amazon's quarterly financial reports, embed using OpenAI's API, store the data in Deep Lake, and then explore it by asking questions.1!pip3 install langchain deeplake pypdf openai tiktoken 2 3Copy1from langchain.embeddings.openai import OpenAIEmbeddings 2from langchain.vectorstores import DeepLake 3from langchain.text_splitter import CharacterTextSplitter 4from langchain import OpenAI 5from langchain.chains import RetrievalQA 6from langchain.llms import OpenAIChat 7from langchain.document_loaders import PagedPDFSplitter 8CopyNext, we need to get API tokens from OpenAI and Activeloop (learn how to receive an API token for Activeloop after registering).1import os 2os.environ['OPENAI_API_KEY']='OPEN_AI_KEY_HERE' 3os.environ['ACTIVELOOP_TOKEN']='ACTIVELOOP_TOKEN' 4CopyNext, we need to get API tokens from OpenAI and Activeloop (learn how to receive an API token for Activeloop after registering).First, let's download financial reports from Amazon and load them into pages:1import requests 2import tqdm 3from typing import List 4 5# financial reports of amamzon, but can be replaced by any URLs of pdfs 6urls = ['https://s2.q4cdn.com/299287126/files/doc_financials/Q1_2018_-_8-K_Press_Release_FILED.pdf', 7 'https://s2.q4cdn.com/299287126/files/doc_financials/Q2_2018_Earnings_Release.pdf', 8 'https://s2.q4cdn.com/299287126/files/doc_news/archive/Q318-Amazon-Earnings-Press-Release.pdf', 9 
'https://s2.q4cdn.com/299287126/files/doc_news/archive/AMAZON.COM-ANNOUNCES-FOURTH-QUARTER-SALES-UP-20-TO-$72.4-BILLION.pdf', 10 'https://s2.q4cdn.com/299287126/files/doc_financials/Q119_Amazon_Earnings_Press_Release_FINAL.pdf', 11 'https://s2.q4cdn.com/299287126/files/doc_news/archive/Amazon-Q2-2019-Earnings-Release.pdf', 12 'https://s2.q4cdn.com/299287126/files/doc_news/archive/Q3-2019-Amazon-Financial-Results.pdf', 13 'https://s2.q4cdn.com/299287126/files/doc_news/archive/Amazon-Q4-2019-Earnings-Release.pdf', 14 'https://s2.q4cdn.com/299287126/files/doc_financials/2020/Q1/AMZN-Q1-2020-Earnings-Release.pdf', 15 'https://s2.q4cdn.com/299287126/files/doc_financials/2020/q2/Q2-2020-Amazon-Earnings-Release.pdf', 16 'https://s2.q4cdn.com/299287126/files/doc_financials/2020/q4/Amazon-Q4-2020-Earnings-Release.pdf', 17 'https://s2.q4cdn.com/299287126/files/doc_financials/2021/q1/Amazon-Q1-2021-Earnings-Release.pdf', 18 'https://s2.q4cdn.com/299287126/files/doc_financials/2021/q2/AMZN-Q2-2021-Earnings-Release.pdf', 19 'https://s2.q4cdn.com/299287126/files/doc_financials/2021/q3/Q3-2021-Earnings-Release.pdf', 20 'https://s2.q4cdn.com/299287126/files/doc_financials/2021/q4/business_and_financial_update.pdf', 21 'https://s2.q4cdn.com/299287126/files/doc_financials/2022/q1/Q1-2022-Amazon-Earnings-Release.pdf', 22 'https://s2.q4cdn.com/299287126/files/doc_financials/2022/q2/Q2-2022-Amazon-Earnings-Release.pdf', 23 'https://s2.q4cdn.com/299287126/files/doc_financials/2022/q3/Q3-2022-Amazon-Earnings-Release.pdf', 24 'https://s2.q4cdn.com/299287126/files/doc_financials/2022/q4/Q4-2022-Amazon-Earnings-Release.pdf' 25 ] 26 27def load_reports(urls: List[str]) -> List[str]: 28 ''' Load pages from a list of urls''' 29 pages = [] 30 31 for url in tqdm.tqdm(urls): 32 r = requests.get(url) 33 path = url.split('/')[-1] 34 with open(path, 'wb') as f: 35 f.write(r.content) 36 loader = PagedPDFSplitter(path) 37 local_pages = loader.load_and_split() 38 pages.extend(local_pages) 39 return pages 40 41pages = load_reports(urls) 42CopyWe now use the Text Splitter Util to split documents into pages.1text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) 2texts = text_splitter.split_documents(pages) 3 4embeddings = OpenAIEmbeddings() 5 6db = DeepLake(dataset_path='hub://davitbun/amazon_earnings_6', embedding_function=embeddings, token=os.environ['ACTIVELOOP_TOKEN']) 7db.add_documents(texts) 8 9CopySince the dataset is stored on Activeloop, you can load it later without recomputing embeddings. This is a significant benefit cause it would save you time and computational resources. LangChain has a wrapper around Deep Lake, allowing you to use it as a Vector Store.1qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-3.5-turbo'), chain_type='stuff', retriever=db.as_retriever()) 2 3Copy",activeloop What are the main benefits of LangChain for large language models?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain provides several benefits for developers working with LLMs. These comprise, but are not limited to: Simplified development: LangChain streamlines the development process by providing a simple pipeline for generating text from input prompts. This saves time and effort for developers who would otherwise have to handle complex NLP tasks manually. 
Improved accuracy: LangChain's 'chained' approach can enhance the accuracy of end result by ensuring consistency and coherence between the input prompts and the generated text. The model evaluation toolkit helps compare and improve the large language models you're working on. Customizability: LangChain's flexible pipeline allows developers to customize it to their specific needs. This includes choosing different LLMs and combining them with utilities such as search engines, modifying the prompt template, or handling the generated output in a custom way. But, it also can work out of the box, offering a multitude of pre-built prompts and chains for standard use cases, such as question answering, summarization, and chatbots, so that users can rapidly build and deploy language model applications. Interoperability: LangChain is built to integrate easily with various tools, data sources, and Python libraries, including Deep Lake, making it a stellar choice for building complex applications to solve multiple business problems.",activeloop How to answer questions using ML?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"The state-of-the-art method of question answering is by leveraging LangChain and Activeloop Deep Lake. You can store your documents in the unified, streamable format, and connect it to LangChain to run a question-answering model such as GPT or BERT. These language models are able to understand the context of the question and generate a more accurate answer. You may also implement techniques like data augmentation, document retrieval, and summarization to enhance the system performance.Let's now explore Amazon's revenue change pre- and post-pandemic.1qa.run('Combine total revenue in 2020?') 2#Amazon's total revenue in 2020 was $386,064 million. 3CopyThis answer is obviously incorrect, as we didn't use any sophisticated methods for addition. We will explore further optimization for this use case to consistently get good answers by employing a chain of agents. For now, however, more straightforward queries can be answered very quickly:1qa.run('What is the revenue in 2021 Q3?') 2# The net sales for Q3 2021 was $110.8 billion, with a 15% increase to $127.1 billion in Q3 2022. 3Copy1qa.run('What is the revenue in 2023 Q1?') 2# The financial results for 2023 Q1 are not provided in the given context 3CopyObviously, the question answering works only on the given data, that is limited to 2022. To answer more current questions, we should connect LangChain to agents like GoogleSearchAPIWrapper. We will explore this in a further article.No more switching tabs between reports! Just feed the entirety of the information to LangChain and store the converted text in the Deep Lake format for immediate answers.You can also use the Colab Notebook to streamline the development.",activeloop Can I use LangChain to query financial data?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"Yes. 
With Deep Lake as your vector store, you can store your financial PDFs or CSV files and their embeddings and ask questions on top of your financial data.",activeloop Concluding Remarks: Use LangChain & Activeloop for efficient LLM training,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"In conclusion, LangChain is a robust tool for productizing large language models, providing a simple workflow to generate output by employing prompts, utilities, and language models in a linked fashion. It offers numerous benefits, including streamlined LLM development, improved accuracy of LLMs, better use-case customization, and easy integration with data sources and other Python libraries, including Activeloop Deep Lake. Deep Lake, in turn, enables rapid LangChain prototyping, as you can immediately access data without needing to recompute the embeddings for model fine-tuning. If you're working with LLMs and looking to streamline your development process, LangChain, powered by Deep Lake as the data store for LLM training, is worth exploring.",activeloop What are the main LangChain modules?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"The main modules of LangChain include prompts, LLMs, document loaders, vector stores like Deep Lake, utils, chains, indexes, agents, memory, and chat.",activeloop How can I get started with LangChain?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"Research: After reading this ultimate guide, familiarize yourself with the LangChain project, its goals, and its underlying technology. Read the documentation, and explore related LangChain articles to build a solid foundation. Join the Community: Connect with the LangChain community Discord and follow them on Twitter to stay updated on the latest developments. Experiment with LangChain Sample Projects: Start by experimenting with sample projects for code understanding, building chatbots, or answering questions over specific documents or webpages.",activeloop How does LangChain help with prompt management and optimization?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain provides prompt templates and various agents that can use LLMs to decide what actions should be taken. In theory, you can use LangChain to iteratively optimize your prompts.",activeloop What is the use of LangChain?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain allows users to build advanced AI applications using large language models (LLMs) for various tasks, such as chatbots, Generative Question Answering (GQA), code understanding, or summarization. The library is designed to 'chain' together different components, including prompt templates, LLMs, agents, and memory, to create sophisticated use cases around LLMs. LangChain supports both OpenAI's GPT family of models and open-source alternatives, offering flexibility and adaptability for developers.",activeloop How old is LangChain?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"LangChain was launched in October 2022.
At the time of writing this article, it was approximately 7 months old.",activeloop Is LangChain open-source?,https://activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/,"Yes, LangChain is is fully open-source and comprises Python and Javascript libraries.",activeloop "Introduction: the History of Queries: from Papyrus Scrolls to Text to SQL, enabled by GPT-4",https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"Querying datasets has been a crucial aspect of database management systems since their inception. It is the process of extracting data from a database based on specific criteria. Over the years, querying has undergone significant changes, from simple command-line interfaces to complex graphical user interfaces and, most recently, natural language processing. In this blog post, we will review each notable milestone in the evolution of sending commands to manipulate database data. While we've previously focused on the evolution of data storage, covering the development from Oracle to Databases for AI, this blog post will mainly focus on the querying side of things. We will also introduce the next step in the querying lifecycle - talking to the databases in natural language, as you would talk to your business intelligence analyst or data engineer.But first, let's get to the basics.",activeloop Retrieving Data pre-1960s,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"In 1945, magnetic tapes were developed as the first medium for searching data, replacing punch cards and paper tape. In the 1950s, with the increasing use of computers and the installation of the first commercial computer in 1957, the need for a more efficient and standardized way of accessing data became apparent. 1959 saw McGee propose generalized access to electronically stored data, and IBM introduced the Ramac system. As a result, analysts could read data in a non-sequential manner, and access to files became feasible.",activeloop "The 1970s: Relational databases are born, and SEQUEL evolves",https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"The 1970s saw the emergence of the relational model, proposed by Edgar F. Codd. The relational model allowed data to be stored in tables, making it easier to retrieve and manipulate data. Inspired by this, Donald D. Chamberlin and Raymond F. Boyce, at IBM's San Jose location (quite close to Activeloop's HQ), developed the Structured English Query Language (SEQUEL). After several attempts at creating a relational database language involving multiple experiments, puns on the query language of Ingres (QUEL), and a trademark issue with a British aircraft manufacturing company forcing them to drop the vowels, they settled on SQL. Next, IBM began developing commercial products based on their System R prototype. In 1979, Relational Software introduced Oracle V2, one of the first commercially available implementations of SQL.",activeloop The 1980s: Road to SQL becoming a global standard,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"With the rise of the development of personal computers, database management systems (DBMS) such as DBASE and PARADOX were created to allow PC users to manipulate and define data. 
However, these systems lacked insulation between programs and data and didn't support multiview or multiaccess capabilities. As computer sales rapidly increased, relational database systems emerged as a commercial success, leading to a decline in the popularity of network and hierarchical database models. IBM's DB2 became a flagship product, and the introduction of IBM PC led to the establishment of new database companies and the creation of various products, including PARADOX, RBASE 5000, RIM, Dbase III and IV, OS/2 Database Manager, and Watcom SQL. Finally, ANSI (1986) and ISO (1987) officially adopted the standard 'Database Language SQL' language definition, cementing SQL as the dominating query language for the years to come.",activeloop "The 2010s: the rise of NoSQL, MongoDB, Snowflake, & Databricks",https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"The 2010s saw the rise of big data and the need for more advanced querying techniques. NoSQL databases, such as MongoDB and Cassandra, emerged as viable alternatives to traditional relational databases. These databases allowed users to store and query large volumes of unstructured data. In addition, machine learning and artificial intelligence (AI) techniques were integrated into querying systems, allowing users to ask more complex questions and retrieve more relevant information. I've covered the differences between the options, including Lakehouse (and Photon) and Data Lake, in this article, in case you're interested.",activeloop The 1960s: The onset of computerization requires querying,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"In the 1960s, the advent of cost-effective computers enabled private organizations to begin storing data electronically, marking the start of computerized databases. Two dominant data models characterized this era: the network model known as CODASYL and the IMS hierarchical model. Among these models, the SABRE system emerged as a commercial success, allowing IBM to help American Airlines effectively manage its reservation data. The set was being staged for the first (truly) relational database systems to be designed in the next decade",activeloop The 2000s: Querying goes mainstream,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"During the 2000s, natural language processing (NLP) was increasingly used in querying, allowing non-technical users to retrieve data by asking questions in plain language. NLP-based systems such as Wolfram Alpha and Google's Knowledge Graph enabled users to retrieve relevant information from databases. Despite a dot-com bubble burst-driven decline in the early 2000s, database applications continued to grow, with new interactive applications developed for PDAs, point-of-sale transactions, and consolidation of vendors. The leading database companies in the western world were Microsoft, IBM, and Oracle. It wasn't until 2009 that NoSQL became more widespread by organizing an event to discuss non-relational databases (thanks, Johan Oskarsson!). Today, NoSQL databases enjoy wide adoption, which we will cover in the next section.",activeloop "Hello, Text To SQL (or, in our case, Text to TQL)",https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"What is Text to SQL? 
It is a technology that converts natural language queries into SQL code. It is rapidly gaining popularity due to its ability to simplify querying databases, particularly for non-technical users. Text to SQL was enabled with hard work from various researchers, including companies like Salesforce, that released the crowdsourced the WikiSQL dataset, containing queries in SQL and natural language prompts. If interested, check out their work, including the Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning.With Text to SQL, users can ask questions in plain English and get the desired results without having to write complex SQL queries. Naturally, this saves time and increases efficiency. Simplifying the process of querying databases, Text to SQL eliminates the need for extensive training or knowledge of complex SQL commands. In addition, Text to SQL can help bridge the gap between technical and non-technical teams, making it a valuable tool for businesses.Another benefit of Text to SQL is its ability to increase data accessibility. By allowing users to query databases using natural language, it reduces the need for manual data processing and interpretation, which can result in faster access to information. This can be particularly beneficial in industries where quick access to data is critical, such as finance, healthcare, and e-commerce.Having this in mind, we decided to ship Text to TQL to simplify querying complex datasets like Imagenet or COCO dataset and ease both technical and non-technical users into machine learning datasets exploration.",activeloop The 1990s: Object-oriented databases emerge,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"In the 1990s, the database industry saw the emergence of object-oriented databases (OODBs) that allowed for the storage of complex data structures and retrieval using object-oriented query languages like OQL and QBE. This decade saw the demand for extending DBMS capabilities to meet new applications, the emergence of commercial object-oriented DBMSs, and the demand for developing applications utilizing data from various sources and of different modalities (temporal, multimodal, etc). Furthermore, the decade saw the emergence of standards for data query and exchange (SQL2, PDES, STEP) and the extension of DBMS capabilities to heterogeneous and multi-database systems. The advent of the internet in the middle of the decade led to exponential growth in the database industry, with average desktop users accessing client-server database systems to access computer systems with legacy data. The introduction of GUI-based database management systems, such as Microsoft Access and Oracle Forms, made querying more accessible to non-technical users. These systems allowed users to create queries by selecting options from drop-down menus and forms. What is more, the decade saw increased investment in online businesses resulting in a rise in demand for internet database connectors such as Front Page, Active Server Pages, Java Servlets, and more.Finally, in 1998, Carlo Strozzi coined the term 'NoSQL' when naming his database Strozzi NoSQL. Although it was still de facto relational, it would significantly impact querying in the new millennium.",activeloop "SQL Query Where Contains, but slightly different",https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"So how does this work? 
Imagine you're working on the COCO training dataset, and need to build a simple classifier to discern between cats and dogs. Show me 100 images of dogs and 100 images of cats You type this text into the upper box and hit run. Almost instantly, we return the actual query:And we also run it for you, so you automatically have your data.Apart from labels for image data, you can also query other modalities, e.g., text data. Let's consider Stanford Question Answering Dataset (SQuaD dataset). The dataset consists of a set of questions and answers to them. show me data where the question contains France and the context contains Paris The UI will generate the query:1select * where contains(question, 'France') and contains(context, 'Paris') 2CopyText to TQL can also handle basic computational queries. Let's try a simple query on MNIST. find all images where the sample is smaller or equal to 3 Or,So, what is Behind Text to SQL?",activeloop "Introducing Automated Tensor Query Language, powered by Large Language Models",https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"What is TQL? Tensor Query Language (TQL) powerful dataset query engine built by Activeloop in C++, optimized for Deep Lake datasets, enables users to filter relevant data in data science workflows. Queries can be performed both in the UI and in the Python API. The resulting Dataset Views can be saved, loaded, and passed to machine learning frameworks for analysis. Dataset query syntax includes various operators like CONTAINS, ==, SHAPE, LIMIT, AND, OR, NOT, UNION, INTERSECT, ORDER BY, ANY, ALL, and SAMPLE BY, enabling flexible querying of datasets.In our most recent update, TQL leverages the power of machine learning models and algorithms to understand and interpret natural language queries and then provides relevant results based on the user's input. Thus with TQL, users can query datasets based on the labels, or tags, associated with the data. This makes it easier to find and retrieve specific data points that match certain criteria without needing to know the underlying structure or format of the dataset. Text to TQL is a powerful new tool for data scientists, machine learning engineers, and anyone working with large datasets who wants to quickly and efficiently find and analyze specific data points.",activeloop What is GPT-4?,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs - for now only for limited cases, though). It is a transformer-based model pre-trained to predict the next token in a document. Its post-training alignment process improves performance on measures of factuality and adherence to the desired behavior (API users can specify to a certain extent as 'system requirements').",activeloop The Future of Querying Datasets,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"Querying data has come a long way since its inception in the 1960s (or scrolling through papyrus rolls by Egyptians starting around 3000 BCE). The evolution of querying has been driven by advancements in computer technology, the rise of non-technical users, and the need for more advanced querying techniques. 
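(As an aside to the TQL description above: here is roughly what a query combining a few of the listed operators looks like when run from the Python API. The dataset path and label names are illustrative, and ds.query assumes the query engine is available in your Deep Lake installation.)

import deeplake

ds = deeplake.load('hub://activeloop/coco-train')  # illustrative dataset path
# Combine CONTAINS, ANY and LIMIT, in the same spirit as the fish example discussed later.
view = ds.query(
    "select * where any(contains(labels, 'cat'), contains(labels, 'dog')) limit 200"
)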
Today, querying is an integral part of database management systems, and with the advancements in AI and machine learning, it is set to become even more accessible and efficient.We are not fans of crystal balls, so instead of hypothesizing what the future of querying would look like, we attempted to build it. Enter querying for machine learning, with natural language as an input.",activeloop What are GPT-4 Advantages?,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"One advantage of GPT-4 is its capability to process both image and text inputs, expanding its potential use cases in dialogue systems, text summarization, as well as machine translation. Additionally, it exhibits improved performance on measures of factuality and adherence to the desired behavior.",activeloop How does GPT-4 compare to ChatGPT/GPT-3.5?,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"Compared to ChatGPT and GPT-3.5, GPT-4 demonstrates human-level performance across various professional and academic benchmarks, including passing a simulated LSAT test with a score around the top 10% of test takers (ChatGPT, in contrast, resides in the bottom 10% for the same test). It also outperforms previous large language models and most state-of-the-art systems on traditional NLP benchmarks and exhibits strong performance in non-English languages.'",activeloop GPT-4 Disadvantages,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"Nonetheless, like earlier GPT models, GPT-4 is not fully reliable and can suffer from 'hallucinations,' has a limited context window, and does not learn from experience.To tune GPT-4 for our use case, we have fed it countless examples of correct TQL queries for various datasets (of different modalities and schemas). For our use case, we noticed that GPT-4 currently walks a fine line between being to 'verbose' and generous. For instance, for a query 'find all things related to trees' it may produce a result that contains lemons (as they grow on trees) or even cliffs (…? Our guess is as good as yours here). On the other hand, it sometimes can err in over-relying on user-specified query. For example, Imagenet does have entire schools of fish (enough to build an entire education system, might I add). Asking TQL + GPT-4 to find all the fish in dataset might result in the following:1 select * where contains(labels, 'fish') 2CopyWhen the actual expected query could have been:1 select * where any(contains(labels, 'shark'), contains(labels, 'ray'), contains(labels, 'pike'), contains(labels, 'trout'), contains(labels, 'bass'), contains(labels, 'salmon'), contains(labels, 'eel'), contains(labels, 'sturgeon'), contains(labels, 'coho'), contains(labels, 'barracouta')) 2Copyand so on…One thus should be cautious when using the outputs of GPT-4, particularly in contexts where reliability is important. However, we are working at making this as reliable as possible. Whatever the case, we are confident this is the right step forward: out of hundreds of tests we've conducted, the new feature outperformed first-time query users whopping 99% of the time.",activeloop Concluding Remarks,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"In sum, querying databases has come a long way since the dawn of magnetic tapes in the 1940s. 
From punch cards to natural language processing, the evolution of querying has been driven by advancements in computer technology and the need for more accessible and efficient methods for extracting data from databases. The recent emergence of Text to SQL technology, including our own Text to TQL, has revolutionized the querying process by allowing users to retrieve data from databases using plain English queries. This technology saves time, increases efficiency, and makes querying more accessible even to non-technical users. Try out Text to TQL yourself and experience the benefits of simplified, efficient querying for complex datasets such as Imagenet dataset to explore the data, as well as to train and edit it with data lineage or evaluate ML model performance.",activeloop How long does it take to learn SQL?,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"According to Berkley, the basics of SQL can be learned in 2 to 3 weeks due to its relative simplicity, but higher fluency requires additional time. Learning independently can take up to six months or longer. With Text to SQL, non-technical users need not learn how to code to easily access their data, which can result in faster and more accurate decision-making.",activeloop GPT-4,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"Since it's release a mere week ago, GPT-4 (Chat-GPT's younger but stronger cousin) has been making waves across the Twitterverse and the industry. Companies (including us!) are already integrating it in their day-to-day operations to improve the user experience. We, too, wanted to make our users productive. :)",activeloop What is a Database Query?,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"A database query is a request for information from a database. It usually involves some form of filtering or selecting data based on specific criteria or conditions. Queries are used to retrieve data, perform calculations or transformations on that data, and to combine data from multiple tables or sources. They are an essential part of database management and support a wide range of applications and business processes. Queries can be written in different programming languages or using graphical user interfaces provided by database management systems. The most common query language used today is Structured Query Language (SQL), used for relational databases.",activeloop When to use NoSQL vs. SQL?,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,"When choosing between SQL and NoSQL databases, it's crucial to evaluate your application's unique needs. SQL databases are an ideal choice for applications that demand complicated queries and data consistency. They're also useful for applications that must abide by strict data integrity rules since they can manage complicated transactions and contain well-defined schemas. SQL databases perform best for applications that handle heavy write loads and data relationships, such as e-commerce websites and financial systems. Naturally, the expected datatype is structured (tabular).When to use NoSQL? In contrast to SQL databases, NoSQL databases are a better fit for applications that require high scalability and flexibility, especially when managing unstructured or semi-structured data. 
They can effectively manage large volumes of data, including non-tabular and non-relational datasets, and are tailored to high-performance, distributed environments. Big data and real-time analytics applications, content management systems, and mobile applications prioritizing speed & scalability are great use cases for when to use a NoSQL database.Ultimately, selecting between SQL and NoSQL hinges on your specific demands, including the size and intricacy of your data, the type of queries you need to execute, and the scalability and flexibility you demand. Here's a brief table summarizing the differences between SQL and NoSQL databases. We've also added a TQL comparison, with TQL being powered by Deep Lake.
Criteria | SQL | NoSQL | TQL
Data Structure | Structured data | Unstructured data | Both unstructured and structured data, higher performance on unstructured
Data Volume | Relatively smaller data sets | Very large and rapidly growing data sets | Dataset size irrelevant, handles rapid scalability for more than 1B rows
Data Relationship | Complex relationships and joins between tables | Simple or no relationships between data | Relationship-agnostic
Scalability | Vertical scaling (adding more resources to a single server) | Horizontal scaling (distributing data across multiple servers) | Both horizontal and vertical scaling
ACID Compliance | ACID compliant (Atomicity, Consistency, Isolation, Durability) | Mostly eventual consistency | ACID compliant
Transactions | Supports complex transactions with rollbacks and commits | Does not support complex transactions | Supports complex transactions with rollbacks, commits and time travel
Performance | Optimized for complex queries on a few large data sets | Optimized for many simple queries on large and rapidly changing data sets | Optimized for complex queries on any amount of large and rapidly evolving datasets
Flexibility | Rigid schema structure | Dynamic schema structure | Dynamic schema structure
Cost | High cost for large data sets | Lower cost for large data sets | Ultra low cost for large data sets and related machine learning training processes on them
Use Cases | Suitable for financial, e-commerce, and other transactional applications | Suitable for real-time data analytics, social media, and IoT applications | Suitable for generative AI, machine learning and computer vision use cases on multi-modal data",activeloop References,https://activeloop.ai/resources/text-to-sql-queries-for-ml-the-next-chapter-in-database-querying-history-powered-by-gpt-4/,Database History Timeline until 1990s; Timeline of Database History; The Evolution Of Data Models And Approaches To Persistence In Database Systems,activeloop What is Retrieval Augmented Generation (RAG) in AI?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Retrieval Augmented Generation, or RAG, is an advanced technique in AI that bridges information retrieval and text generation. It is designed to handle intricate and knowledge-intensive tasks by pulling relevant information from external sources and feeding it into a Large Language Model for text generation. When RAG receives an input, it searches for pertinent documents from specified sources (e.g., Wikipedia, company knowledge base, etc.), combines this retrieved data with the input, and then provides a comprehensive output with references. This innovative structure allows RAG to seamlessly integrate new and evolving information without retraining the entire model from scratch.
It also enables you to fine-tune the model, enhancing its knowledge domain beyond what it was trained on.",activeloop Introduction to Retrieval Augmented Generation (RAG) in AI,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Retrieval Augmented Generation (RAG), a new frontier in AI technology, is transforming the digital landscape. With platforms like Cohere & Activeloop, this advanced technology is now easily accessible and customizable, catalyzing a wave of AI-first businesses.RAG's impact is considerable. MIT research shows businesses incorporating RAG report up to 50% productivity gains on knowledge-based tasks. By automating mundane tasks, businesses improve resource allocation and employee satisfaction. Notably, Goldman Sachs estimates that such advancements could boost global GDP by 7%.RAG's versatility is seen across industries. In customer support, it leads to a 14% productivity increase per hour. In sales, AI-assisted representatives send five times more emails per hour. With the maturation of this technology, these figures will rise even further.The future of RAG points towards the development of Knowledge Assistants. Acting as intelligent tools for workers, they will retrieve and process corporate data, interact with enterprise systems, and take action on a worker's behalf. This heralds a new age of AI-driven productivity.As the third significant revolution in human-computer interfaces, RAG, and LLMs could unlock an estimated $1 trillion in economic value in the U.S. alone. Therefore, businesses and developers must adopt these technologies to remain competitive in the rapidly evolving AI-centric future.At the end of this article, we cover the Retrieval Augmented Generation History and other fun facts.",activeloop Build LLM-powered Chatbot with RAG,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"To demonstrate the power of Retrieval Augmented Generation for building AI Chatbots with LangChain & Vector Databases, we will build a course companion chatbot for our LangChain & Vector Databases in Production course.Educational Chatbot harnesses the power of AI to answer queries and provide relevant information to users by retrieving data from an extensive and detailed knowledge base. It returns a natural response to the user's question and the truth source.Application Demo: Databutton | LangChain Chat with Deep Lake Vector DatabaseApplication Building Steps: Data Loading Retrieving Data Building Conversation Chain with Memory and Retrieval Building the Chat UI",activeloop Learning Material and Resources,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"In order to build this sophisticated Retrieval Augmented Chatbot, I undertook a comprehensive educational journey that includes courses from the ‘LLM University by Cohere' and ‘LangChain & Vector Databases in Production'.",activeloop Setting up LangChain & Databutton,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules you can use to build language model applications. It also provides chains and agents with memory capabilities.The flowchart below demonstrates the pipeline initiated through LangChain to complete the Conversation Process. 
The tutorial goes into each of the steps in the pipeline, this visual helps to give you an overview of how the components are working together and in what order.The design pattern started by thinking about the following: What problem am I trying to solve? Who is going to benefit from this solution? How am I going to get and pre-process my data sources? How am I going to store and retrieve my data sources? How is the user going to interact with my data sources? Taking a step back before building a solution can really help to save time and importantly considers your end user.",activeloop Build your Databutton Application,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Create a free account with Databutton Create a new app Once you have signed up for your free Databutton account, you can create a new app in seconds by clicking on ‘New app' Add secrets and packages To use the API Key in your app, copy the code snippet from the secret, this will look something like this: ‘COHERE_API_KEY = db.secrets.get(name='COHERE_API_KEY')' Add the packages below and click install. 1langchain 2deeplake 3openai 4cohere 5apify-client 6tiktoken 7Copy Add entire code from the tutorial to either the Jobs section or the Home Page as specified in the steps below.",activeloop Application Platform and Required API Keys,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Databutton: All-in-one app workspace where we will build and deploy our application. $25 free monthly quota (covers one app a month), community and student plans available. Cohere API key: Generative AI endpoint for embeddings, rerank and chatbot. Get free, rate-limited usage for learning and prototyping. Usage is free until you go into production Apify API Key: Web scraping data for the chatbot to retrieve. $5 free usage (more than enough for website contents) Activeloop token: We will use Deep Lake to store the text scraped from a website. Deep Lake Community version is free to use.",activeloop Step 1: Loading the Data with RecursiveCharacterTextSplitter,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"In this stage, we are gathering the data needed to provide context to the chatbot. We use ApifyLoader to scrape the content from a specific website. The RecursiveCharacterTextSplitter is then used to split the data into smaller, manageable chunks. Next, we embed the data using CohereEmbeddings which translates the text data into numerical data (vectors) that the chatbot can learn from. Lastly, we load the transformed data into Deep Lake.The code for this step is located in the ‘Jobs' section within Databutton because this is a task that only needs to be run once. Once the data is collected and loaded into DeepLake, it can be retrieved by the chatbot.",activeloop Helper Functions,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"ApifyWrapper(): Scrapes the content from websites. 
1from langchain.document_loaders import ApifyDatasetLoader 2from langchain.utilities import ApifyWrapper 3from langchain.document_loaders.base import Document 4import os 5 6os.environ['APIFY_API_TOKEN'] = db.secrets.get('APIFY_API_TOKEN') 7 8apify = ApifyWrapper() 9loader = apify.call_actor( 10 actor_id='apify/website-content-crawler', 11 run_input={'startUrls': [{'url': 'ENTER_YOUR_URL_HERE'}]}, 12 dataset_mapping_function=lambda dataset_item: Document( 13 page_content=dataset_item['text'] if dataset_item['text'] else 'No content available', 14 metadata={ 15 'source': dataset_item['url'], 16 'title': dataset_item['metadata']['title'] 17 } 18 ), 19) 20 21docs = loader.load() 22Copy RecursiveCharacterTextSplitter(): Splits the scraped content into manageable chunks. 1from langchain.text_splitter import RecursiveCharacterTextSplitter 2 3# we split the documents into smaller chunks 4text_splitter = RecursiveCharacterTextSplitter( 5 chunk_size=1000, chunk_overlap=20, length_function=len 6) 7docs_split = text_splitter.split_documents(docs) 8Copy CohereEmbeddings(): Translates text data into numerical data. DeepLake(): Stores and retrieves the transformed data. 1from langchain.embeddings.cohere import CohereEmbeddings 2from langchain.vectorstores import DeepLake 3import os 4 5os.environ['COHERE_API_KEY'] = db.secrets.get('COHERE_API_KEY') 6os.environ['ACTIVELOOP_TOKEN'] = db.secrets.get('ACTIVELOOP_TOKEN') 7 8embeddings = CohereEmbeddings(model = 'embed-english-v2.0') 9 10username = 'elleneal' # replace with your username from app.activeloop.ai 11db_id = 'kb-material'# replace with your database name 12DeepLake.force_delete_by_path(f'hub://{username}/{db_id}') 13 14dbs = DeepLake(dataset_path=f'hub://{username}/{db_id}', embedding_function=embeddings) 15dbs.add_documents(docs_split) 16Copy",activeloop A Brief Intro to Cohere's Rerank Endpoint for Enhanced Search Results,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Within a search process, Cohere's Rerank endpoint serves as a final step to refine and rank documents in alignment with a user's search criteria. Businesses can seamlessly integrate it with their existing keyword-based (also called 'lexical') or semantic search mechanisms for initial retrieval. The Rerank endpoint will take over the second phase of refining results.Cohere's Rerank & Deep Lake: The Solution to Imprecise Search Outcomes: This tool is powered by Cohere's large language model, which determines a relevance score between the user's query and each of the preliminary search findings.
This approach surpasses traditional embedding-based semantic searches, delivering superior outcomes, especially when dealing with intricate or domain-specific search queries.The DeepLake instance is then turned into a retriever with specific parameters for distance metric, number of items to fetch (fetch_k), use of maximal marginal relevance and the number of results to return (k).1from langchain.vectorstores import DeepLake 2from langchain.embeddings.cohere import CohereEmbeddings 3from langchain.retrievers import ContextualCompressionRetriever 4from langchain.retrievers.document_compressors import CohereRerank 5import os 6 7os.environ['COHERE_API_KEY'] = db.secrets.get('COHERE_API_KEY') 8os.environ['ACTIVELOOP_TOKEN'] = db.secrets.get('ACTIVELOOP_TOKEN') 9 10@st.cache_resource() 11def data_lake(): 12 embeddings = CohereEmbeddings(model = 'embed-english-v2.0') 13 14 dbs = DeepLake( 15 dataset_path='hub://elleneal/activeloop-material', 16 read_only=True, 17 embedding_function=embeddings 18 ) 19 retriever = dbs.as_retriever() 20 retriever.search_kwargs['distance_metric'] = 'cos' 21 retriever.search_kwargs['fetch_k'] = 20 22 retriever.search_kwargs['maximal_marginal_relevance'] = True 23 retriever.search_kwargs['k'] = 20 24 25 compressor = CohereRerank( 26 model = 'rerank-english-v2.0', 27 top_n=5 28 ) 29 compression_retriever = ContextualCompressionRetriever( 30 base_compressor=compressor, base_retriever=retriever 31 ) 32 return dbs, compression_retriever, retriever 33 34dbs, compression_retriever, retriever = data_lake() 35Copy",activeloop Step 4: Building the Chat UI,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"In this final step, we set up the chat user interface (UI).We start by creating a button that, when clicked, triggers the clearing of cache and session states, effectively starting a new chat session.Then, we initialize the chat history if it does not exist and display previous chat messages from the session state.1# Create a button to trigger the clearing of cache and session states 2if st.sidebar.button('Start a New Chat Interaction'): 3 clear_cache_and_session() 4 5# Initialize chat history 6if 'messages' not in st.session_state: 7 st.session_state.messages = [] 8 9# Display chat messages from history on app rerun 10for message in st.session_state.messages: 11 with st.chat_message(message['role']): 12 st.markdown(message['content']) 13CopyThe chat_ui function is used to handle the chat interactions. Inside this function, we accept user input, add the user's message to the chat history and display it, load the memory variables which include the chat history, and predict and display the chatbot's response.The function also displays the top 2 retrieved sources relevant to the response and appends the chatbot's response to the session state. The chat_ui function is then called, passing in the ConversationalRetrievalChain object.1def chat_ui(qa): 2 # Accept user input 3 if prompt := st.chat_input( 4 'Ask me questions: How can I retrieve data from Deep Lake in Langchain?' 
5 ): 6 7 # Add user message to chat history 8 st.session_state.messages.append({'role': 'user', 'content': prompt}) 9 10 # Display user message in chat message container 11 with st.chat_message('user'): 12 st.markdown(prompt) 13 14 # Display assistant response in chat message container 15 with st.chat_message('assistant'): 16 message_placeholder = st.empty() 17 full_response = '' 18 19 # Load the memory variables, which include the chat history 20 memory_variables = memory.load_memory_variables({}) 21 22 # Predict the AI's response in the conversation 23 with st.spinner('Searching course material'): 24 response = capture_and_display_output( 25 qa, ({'question': prompt, 'chat_history': memory_variables}) 26 ) 27 28 # Display chat response 29 full_response += response['answer'] 30 message_placeholder.markdown(full_response + '▌') 31 message_placeholder.markdown(full_response) 32 33 # Display top 2 retrieved sources 34 source = response['source_documents'][0].metadata 35 source2 = response['source_documents'][1].metadata 36 with st.expander('See Resources'): 37 st.write(f"Title: {source['title'].split('·')[0].strip()}") 38 st.write(f"Source: {source['source']}") 39 st.write(f"Relevance to Query: {source['relevance_score'] * 100}%") 40 st.write(f"Title: {source2['title'].split('·')[0].strip()}") 41 st.write(f"Source: {source2['source']}") 42 st.write(f"Relevance to Query: {source2['relevance_score'] * 100}%") 43 44 # Append message to session state 45 st.session_state.messages.append( 46 {'role': 'assistant', 'content': full_response} 47 ) 48 49# Run function passing the ConversationalRetrievalChain 50chat_ui(qa) 51Copy",activeloop Step 2: Retrieve Data,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"In this step, we're setting up the environment to retrieve data from DeepLake, using CohereEmbeddings to convert the query text into vectors that can be matched against the stored embeddings. We'll then use ContextualCompressionRetriever & CohereRerank to search, rank and retrieve the relevant data.Add this code to your home page in Databutton.First we set the COHERE_API_KEY and ACTIVELOOP_TOKEN environment variables, using db.secrets.get, allowing us to access the Cohere and ActiveLoop services. DeepLake() retrieve data CohereEmbeddings() Following this, we create a DeepLake object, passing in the dataset path to the DeepLake instance, setting it to read-only mode and passing in the embedding function.Next, we define a data_lake function. Inside this function, we instantiate a CohereEmbeddings object with a specific model, embed-english-v2.0. ContextualCompressionRetriever() & CohereRerank() Reranking (cohere.com) We then instantiate a CohereRerank object with a specific model and number of top items to consider (top_n), and finally create a ContextualCompressionRetriever object, passing in the compressor and retriever objects.
The data_lake function returns the DeepLake object, the compression retriever, and the retriever.The data retrieval process is set up by calling the data_lake function and unpacking its return values into dbs, compression_retriever, and retriever.The Rerank endpoint acts as the last stage reranker of a search flow.",activeloop Step 3: Use ConversationBufferWindowMemory to Build Conversation Chain with Memory,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"In this step, we will build a memory system for our chatbot using the ConversationBufferWindowMemory.The memory function instantiates a ConversationBufferWindowMemory object with a specific buffer size (k), a key for storing chat history, and parameters for returning messages and output key. The function returns the instantiated memory object.We then instantiate the memory by calling the memory function.1@st.cache_resource() 2def memory(): 3 memory=ConversationBufferWindowMemory( 4 k=3, 5 memory_key='chat_history', 6 return_messages=True, 7 output_key='answer' 8 ) 9 return memory 10 11memory=memory() 12CopyThe chatbot uses the AzureChatOpenAI() function to initiate our LLM Chat model. You can very easily swap this out with other chat models listed here.1from langchain.chat_models import AzureChatOpenAI 2 3BASE_URL = '<URL>' 4API_KEY = db.secrets.get('AZURE_OPENAI_KEY') 5DEPLOYMENT_NAME = '<deployment_name>' 6llm = AzureChatOpenAI( 7 openai_api_base=BASE_URL, 8 openai_api_version='2023-03-15-preview', 9 deployment_name=DEPLOYMENT_NAME, 10 openai_api_key=API_KEY, 11 openai_api_type='azure', 12 streaming=True, 13 verbose=True, 14 temperature=0, 15 max_tokens=1500, 16 top_p=0.95 17) 18CopyNext, we build the conversation chain using the ConversationalRetrievalChain. We use the from_llm class method, passing in the llm, retriever, memory, and several additional parameters. The resulting chain object is stored in the qa variable.qa = ConversationalRetrievalChain.from_llm( llm=llm, retriever=compression_retriever, memory=memory, verbose=True, chain_type='stuff', return_source_documents=True )",activeloop Verbose Display Code for Streamlit,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"1import databutton as db 2import streamlit as st 3import io 4import re 5import sys 6from typing import Any, Callable 7 8def capture_and_display_output(func: Callable[..., Any], args, **kwargs) -> Any: 9 # Capture the standard output 10 original_stdout = sys.stdout 11 sys.stdout = output_catcher = io.StringIO() 12 13 # Run the given function and capture its output 14 response = func(args, **kwargs) 15 16 # Reset the standard output to its original value 17 sys.stdout = original_stdout 18 19 # Clean the captured output 20 output_text = output_catcher.getvalue() 21 clean_text = re.sub(r'\x1b[.?[@-~]', '', output_text) 22 23 # Custom CSS for the response box 24 st.markdown(''' 25 <style> 26 .response-value { 27 border: 2px solid #6c757d; 28 border-radius: 5px; 29 padding: 20px; 30 background-color: #f8f9fa; 31 color: #3d3d3d; 32 font-size: 20px; # Change this value to adjust the text size 33 font-family: monospace; 34 } 35 </style> 36 ''', unsafe_allow_html=True) 37 38 # Create an expander titled 'See Verbose' 39 with st.expander('See Langchain Thought Process'): 40 # Display the cleaned text in Streamlit as code 41 st.code(clean_text) 42 43 return response 44CopyThat is all you need to start building your own RAG Chatbot on your own data! 
I can't wait to see what you build and how you develop this idea forward.",activeloop What is Retrieval Augmented Generation (RAG)?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Retrieval Augmented Generation, or RAG, is a machine learning technique combining the best aspects of retrieval-based and generative language models. This method cleverly integrates the strength of retrieving relevant documents from a large set of data and the creative ability of generative models to construct coherent and diverse responses. Moreover, RAG allows the internal knowledge of the model to be updated efficiently without retraining the entire model.",activeloop Conclusion: Retrieval Augmented Generation to Power Chatbots & Economy,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"In conclusion, Retrieval Augmented Generation (RAG) is not just an emerging AI technology but a transformative force reshaping how businesses operate. With its proven potential to boost productivity, catalyze AI-first businesses, and increase GDP, it's clear that adopting RAG and Large Language Models is crucial for maintaining a competitive edge in today's rapidly-evolving digital landscape. The potential of applications like the Educational Chatbot demonstrates how these AI tools can streamline tasks, making operations more efficient and user-friendly. Businesses, developers, and technology enthusiasts need to understand and leverage these advancements. The ongoing development of AI tools like Knowledge Assistants emphasizes the importance of keeping pace with these technological evolutions. As we stand at the brink of the third revolution in human-computer interfaces, we are reminded of the immense value and opportunities RAG and LLMs hold, estimated to unlock $1 trillion in the U.S. economy alone. The future is here, and it's AI-driven.",activeloop How does Retrieval Augmented Generation work?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"RAG operates in two distinct stages. The first stage involves retrieving relevant documents from a vast vector database like Deep Lake using 'dense retrieval.' This process leverages vector representations of the query and documents to identify the most relevant document matches. The second stage is the generation phase, where a sequence-to-sequence model is utilized to create a response, considering not just the input query but also the retrieved documents. The model learns to generate responses based on the context of these retrieved documents.",activeloop What are the benefits of using Retrieval Augmented Generation,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"RAG offers several advantages over traditional LLMs: RAG can easily acquire knowledge from external sources, improving the performance of LLMs in domain-specific tasks. RAG reduces hallucination and improves the accuracy of generated content. It requires minimal training, only needing to index your knowledge base. RAG can utilize multiple sources of knowledge, allowing it to outperform other models. It has strong scalability and can handle complex queries. It can overcome the context-window limit of LLMs by incorporating data from larger document collections. 
RAG provides explainability by surfacing the sources used to generate text.",activeloop How to implement Retrieval Augmented Generation?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Implementation of RAG involves three key components: a knowledge-base index like Deep Lake, a retriever that fetches indexed documents, and an LLM to generate the answers. Libraries like Deep Lake and LangChain have made it easier to implement these complex architectures.",activeloop What is the Historical Context of Retrieval Augmented Generation?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Retrieval Augmented Generation, as a concept, has its roots in foundational principles of Information Retrieval (IR) and Natural Language Processing (NLP). Retrieving relevant information before generating a response is common in IR. With the rise of neural network-based models in NLP, these approaches started merging, leading to the development of RAG.",activeloop What are the Complexities Involved in RAG?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"The main challenge with RAG lies in its dual nature: retrieval and generation. The retrieval phase requires an efficient system to sift through vast data. On the other hand, the generation phase needs a model capable of constructing high-quality responses. Both phases require significant computational resources and advanced machine-learning expertise. Using libraries like Deep Lake for efficient data storage and retrieval helps streamline the use of RAG.",activeloop What are the Current Challenges with Retrieval Augmented Generation?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"Current challenges with RAG include: Handling complex queries that require deep understanding. Managing computational resources efficiently. Ensuring response relevance and quality. Improving these aspects would make RAG even more effective in tasks like chatbots, question-answering systems, or dialogue generation.",activeloop Where is Retrieval Augmented Generation used?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"RAG is useful for complex, knowledge-intensive tasks, such as question-answering and fact verification. It has been used to improve the performance of large language models (LLMs) like GPT-4 or Llama-v2, fine-tuning their performance to be more factual, domain-specific, and diverse.",activeloop What are Retrieval Augmented Generation advantages?,https://activeloop.ai/resources/retrieval-augmented-generation-for-llm-bots-with-lang-chain/,"RAG combines the benefits of both retrieval-based and generative models. This means it gains from the specificity and factual correctness typical of retrieval-based methods while leveraging the flexibility and creativity inherent in generative models. This combination often results in more accurate, detailed, and contextually appropriate responses.",activeloop Introduction to Radiology in Machine Learning & Multi-Image Segmentation with TransUNet,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Deep learning-based image segmentation is becoming essential in computer vision. Domains such as biomedical ML, autonomous vehicles & robotics, machine recognition, face detection, etc.
rely on image segmentation to identify regions of interest in a given image to make decisions. In radiology, image segmentation seems promising as it helps radiologists to quickly identify affected areas in CT scans or MRIs. Radiologists can use such tools to diagnose patients precisely, providing them with relevant treatments.Deep learning-based image segmentation tools are potent as they can identify tiny anomalies that human eyes can miss. As deep learning algorithms develop, novel SOTA models are being released, laying the foundation for next-gen radiology.U-Net, the most used architecture for image segmentation in biomedicine, has paved its way as the primary architecture. This architecture uses convolutional neural networks as its main component. But with the introduction of transformers, convolutional neural networks are either being hybridized or removed for better/more accurate results.In this article, we will discuss how UNet, hybridized with CNN and transformers, can yield better results. Apart from that, we will also explore the following: What is image segmentation? How is image segmentation used in image processing? How is image segmentation used in radiology machine learning? What is U-Net? What are Transformers? Finally, we will learn how to develop our multi-segment U-Net model by hybridizing transformers and convolution neural networks.",activeloop What is Image Segmentation?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Image segmentation is a computer vision task that separates different objects in an image at a pixel level. Essentially, the pixels of all the individual objects in the image are clustered and grouped as a set of pixels, separating one object from another. These clustered pixels are then assigned a particular color known as a mask and a label for identification. This makes image analysis more straightforward, and meaningful information can be derived efficiently.Source: Deep Lake",activeloop How is Segmentation Used in Radiology?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"In radiology, image segmentation is used to detect different parts of an object in a medical image (CT, X-ray, & MRI scans) by overlaying segmentation masks (painting the set of pixels relating to the object) and assigning labels to them, in order to detect anomalies such as malignant cells or defects such as bone fractures for diagnosis and screening purposes.",activeloop How many types of image segmentation exist?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"As such, there are three types of image segmentation:Semantic Segmentation: It is a segmentation task where different objects of the same class are segmented together. For instance, all the objects belonging to humans will be segmented with the same segmentation mask. Likewise, all the objects representing trees will be segmented with the same segmentation mask.Instance Segmentation: Here, all objects, regardless of whether they belong to the same class, are uniquely segmented with different segmentation masks.
Instance segmentation, as the name suggests, creates segmentation masks based on the number of instances.Panoptic Segmentation: A combo of semantic and instance segmentation, utilizing the strengths of both approaches to create accurate masks for both well-defined and less distinct objects, resulting in improved overall performance.Source: Deep LakeThe image above shows the difference between the different types of segmentations.",activeloop U-Net: Convolutional Networks for Biomedical Image Segmentation,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"In effect, this allows for good extraction of the features through the downsampling blocks, followed by sound localization of the extracted features by the upsampling blocks at each layer.Here is the code for a basic U-net:1 2class Decoder(nn.Module): 3 def __init__(self, in_channels, middle_channels, out_channels): 4 super(Decoder, self).__init__() 5 self.up = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2) 6 self.conv_relu = nn.Sequential( 7 nn.Conv2d(middle_channels, out_channels, kernel_size=3, padding=1), 8 nn.ReLU(inplace=True) 9 ) 10 def forward(self, x1, x2): 11 x1 = self.up(x1) 12 x1 = torch.cat((x1, x2), dim=1) 13 x1 = self.conv_relu(x1) 14 return x1 15Copy1class UNet(nn.Module): 2 def __init__(self, n_class): 3 super().__init__() 4 5 self.base_model = torchvision.models.resnet18(True) 6 self.base_layers = list(self.base_model.children()) 7 self.layer1 = nn.Sequential( 8 nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False), 9 self.base_layers[1], 10 self.base_layers[2]) 11 self.layer2 = nn.Sequential(*self.base_layers[3:5]) 12 self.layer3 = self.base_layers[5] 13 self.layer4 = self.base_layers[6] 14 self.layer5 = self.base_layers[7] 15 self.decode4 = Decoder(512, 256+256, 256) 16 self.decode3 = Decoder(256, 256+128, 256) 17 self.decode2 = Decoder(256, 128+64, 128) 18 self.decode1 = Decoder(128, 64+64, 64) 19 self.decode0 = nn.Sequential( 20 nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True), 21 nn.Conv2d(64, 32, kernel_size=3, padding=1, bias=False), 22 nn.Conv2d(32, 64, kernel_size=3, padding=1, bias=False) 23 ) 24 self.conv_last = nn.Conv2d(64, n_class, 1) 25 26 def forward(self, input): 27 e1 = self.layer1(input) # 64,128,128 28 e2 = self.layer2(e1) # 64,64,64 29 e3 = self.layer3(e2) # 128,32,32 30 e4 = self.layer4(e3) # 256,16,16 31 f = self.layer5(e4) # 512,8,8 32 d4 = self.decode4(f, e4) # 256,16,16 33 d3 = self.decode3(d4, e3) # 256,32,32 34 d2 = self.decode2(d3, e2) # 128,64,64 35 d1 = self.decode1(d2, e1) # 64,128,128 36 d0 = self.decode0(d1) # 64,256,256 37 out = self.conv_last(d0) # 1,256,256 38 return out 39CopyWe can consider UNet a symmetric encoder-decoder network that leverages skip connections to extract vital information from the image and enhance detail retention . Since UNet leverages convolutional operation to extract features, it fails to model long-range dependency explicitly. This issue is because of the intrinsic locality of convolution operations. 
To tackle this issue, we use transformers.",activeloop Self-Attention,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Hence the self-attention mechanism does a great job of finding meaningful feature representations in the given distribution compared to any other architecture.Here is the code for the multi-head self-attention mechanism written in Pytorch:1class Attention(nn.Module): 2 def __init__(self, num_attention_heads, hidden_size, attention_dropout_rate): 3 super(Attention, self).__init__() 4 self.num_attention_heads = num_attention_heads 5 self.attention_head_size = int(hidden_size / self.num_attention_heads) 6 self.all_head_size = self.num_attention_heads * self.attention_head_size 7 8 self.query = Linear(hidden_size, self.all_head_size) 9 self.key = Linear(hidden_size, self.all_head_size) 10 self.value = Linear(hidden_size, self.all_head_size) 11 12 self.out = Linear(hidden_size, self.all_head_size) 13 self.attn_dropout = Dropout(attention_dropout_rate) 14 self.proj_dropout = Dropout(attention_dropout_rate) 15 16 self.softmax = torch.nn.Softmax(dim=-1) 17 18 def transpose_for_scores(self, x): 19 new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) 20 #multihead attention 21 x = x.view(*new_x_shape) 22 return x.permute(0, 2, 1, 3) 23 24 def forward(self, hidden_states): 25 #finding representations 26 mixed_query_layer = self.query(hidden_states) 27 mixed_key_layer = self.key(hidden_states) 28 mixed_value_layer = self.value(hidden_states) 29 30 #multi-head view 31 query_layer = self.transpose_for_scores(mixed_query_layer) 32 key_layer = self.transpose_for_scores(mixed_key_layer) 33 value_layer = self.transpose_for_scores(mixed_value_layer) 34 35 #calculating attention 36 attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) 37 attention_scores = attention_scores / math.sqrt(self.attention_head_size) 38 attention_probs = self.softmax(attention_scores) 39 weights = attention_probs 40 attention_probs = self.attn_dropout(attention_probs) 41 42 context_layer = torch.matmul(attention_probs, value_layer) 43 context_layer = context_layer.permute(0, 2, 1, 3).contiguous() 44 new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) 45 context_layer = context_layer.view(*new_context_layer_shape) 46 attention_output = self.out(context_layer) 47 attention_output = self.proj_dropout(attention_output) 48 49 return attention_output, weights 50CopyThe multi-head self-attention module is accompanied by various other modules, such as Norm and a linear layer . This whole set of mechanisms can be known as an attention block. 
See the image below.Source: Attention Is All You NeedWe can enclose the above diagram in the following code:1class Block(nn.Module): 2 def __init__( 3 self, 4 num_attention_heads, 5 hidden_size, 6 linear_dim, 7 dropout_rate, 8 attention_dropout_rate, 9 eps, 10 std_norm, 11 ): 12 13 super(Block, self).__init__() 14 self.hidden_size = hidden_size 15 self.attention_norm = LayerNorm(hidden_size, eps=eps) 16 self.ffn_norm = LayerNorm(hidden_size, eps=eps) 17 self.ffn = Mlp( 18 hidden_size=hidden_size, 19 linear_dim=linear_dim, 20 dropout_rate=dropout_rate, 21 std_norm=std_norm, 22 ) 23 self.attn = Attention( 24 num_attention_heads=num_attention_heads, 25 hidden_size=hidden_size, 26 attention_dropout_rate=attention_dropout_rate, 27 ) 28 29 def forward(self, x): 30 h = x 31 x = self.attention_norm(x) 32 x, weights = self.attn(x) 33 x = x + h 34 h = x 35 x = self.ffn_norm(x) 36 x = self.ffn(x) 37 x = x + h 38 return x, weights 39Copy",activeloop U-Net: Convolutional Networks for Biomedical Image Segmentation,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/," The U-Net architecture stands and remains the state-of-the-art architecture for image segmentation. It was introduced in 2015 by Ronneberger et al. The paper describes a CNN with a U-Net design.Source: U-Net: Convolutional Networks for Biomedical Image SegmentationThe U-Net design is specifically used to extract features and construct precise segmentation masks. It consists of two main components: a downsampling block and an upsampling block connected at each layer.The downsampling block is made up of a CNN that extracts features of the given image and yields two sets of outputs. One set is fed into the max-pooling layer, where the image is reduced, while the other is fed to the adjacent upsampling block, where localization is performed. See the image below.Source: U-Net: Convolutional Networks for Biomedical Image SegmentationThe red arrow represents max-pooling, i.e., the reduction of the image, while the gray arrow represents the passing of the image to the upsampling block.The upsampling blocks are made of transpose-CNN that expands the size of the image. Essentially, it takes two inputs. The first input comes from the adjacent downsampling block. The second input comes from the lower upsampling block. The network adds padding via transpose convolution operation for the latter, which has a reduced dimension. The upsampling block's job is to concatenate the two inputs and generate a localized output of the original image. See the image below.Source:U-Net: Convolutional Networks for Biomedical Image SegmentationThe green arrow represents upsampling of the feature where it can be concatenated to the larger features. Once the image is concatenated, the CNN performs extraction and localization.We must bear in mind that each downsampling block is connected to a consecutive upsampling block",activeloop Encoder,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/," In a transformer, the encoder consists of three major components: Embedding layer: It creates n x n number of patches of the image, which is then sequentially fed into the Multi-head self-attention block. Multi-head self-attention: block yields global representations or features extracted from the given input. Multilayer perceptron: the MLP consists of two linear layers with Gaussian Error Linear Unit (GELU). 
Essentially, it is a classification head that is local and translationally equivariant. Source: An Image Is Worth 16x16 Words: Transformers For Image Recognition At ScaleThe figure above represents a schematic diagram of the Vision Transformer. It is worth noting that the Vision Transformer uses only the encoder part of the entire Transformer architecture and not the decoder. A classification head extends the encoder block of the ViT.To better understand the architecture, we will explore each module separately. We will explore the encoder module, the transformer module, and the entire ViT module:Encoder module:The core idea of the encoder module is to create a list of attention blocks and then pass the input one at a time through the attention block, followed by normalizing them.1class Encoder(nn.Module): 2 def __init__( 3 self, 4 num_layers, 5 hidden_size, 6 num_attention_heads, 7 linear_dim, 8 dropout_rate, 9 attention_dropout_rate, 10 eps, 11 std_norm, 12 ): 13 super(Encoder, self).__init__() 14 self.layer = nn.ModuleList() 15 self.encoder_norm = LayerNorm(hidden_size, eps=eps) 16 for _ in range(num_layers): 17 layer = Block( 18 num_attention_heads, 19 hidden_size, 20 linear_dim, 21 dropout_rate, 22 attention_dropout_rate, 23 eps, 24 std_norm, 25 ) 26 self.layer.append(copy.deepcopy(layer)) 27 28 def forward(self, hidden_states): 29 attn_weights = [] 30 for layer_block in self.layer: 31 hidden_states, weights = layer_block(hidden_states) 32 attn_weights.append(weights) 33 encoded = self.encoder_norm(hidden_states) 34 return encoded, attn_weights 35CopyTransformer module:This module sequentially arranges the embedding module and the encoder module.1class Transformer(nn.Module): 2 def __init__( 3 self, 4 img_size, 5 hidden_size, 6 in_channels, 7 num_layers, 8 num_attention_heads, 9 linear_dim, 10 dropout_rate, 11 attention_dropout_rate, 12 eps, 13 std_norm, 14 ): 15 super(Transformer, self).__init__() 16 . 17 . 18 . 19 20 def forward(self, input_ids): 21 embedding_output = self.embeddings(input_ids) 22 encoded, attn_weights = self.encoder(embedding_output) 23 return encoded, attn_weights 24CopyVision Transformer:The ViT module, like the transformers module, combines just two modules: the transformer module and a linear layer that acts as a classification head",activeloop Self-Attention,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/," Self Attention is an attention mechanism that operates the different positions of a sequence, namely a key-value pair and query, to yield good representations. The attention mechanism maps a given sequence to an output with a Scaled Dot-Product operation (as shown in the figure above).Source: Attention Is All You NeedQ, K, and V are Query, Key, and Value in the formula above. dk is the scaling factor. d is usually the size of attention heads. Essentially, d can be calculated by the following formula:d = hidden feature/number of attention head Source: Attention Is All You NeedNotably, the capabilities of producing rich representation with self-attention are significantly increased by using the multi-head attention mechanism. According to the authors, it 'allows the model to jointly attend to information from different representation subspaces at different positions'.This operation allows the model to yield a global representation of the given input. 
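To make the scaled dot-product formula above concrete, here is a minimal, self-contained PyTorch sketch (the tensor shapes and names are illustrative only, not taken from the TransUNet code):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, num_heads, seq_len, head_dim)
    d_k = q.size(-1)
    # similarity of every query with every key, scaled by sqrt(d_k)
    scores = torch.matmul(q, k.transpose(-1, -2)) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)   # attention distribution over positions
    return torch.matmul(weights, v), weights  # weighted sum of values, plus the weights

# toy shapes: 12 heads over 196 patches, head_dim = hidden_size / num_heads = 768 / 12
q = k = v = torch.randn(1, 12, 196, 64)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # (1, 12, 196, 64) and (1, 12, 196, 196)
```

The Attention module shown earlier wraps exactly this computation with learned query, key, and value projections, a dropout on the attention weights, and an output projection.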
In NLP, self-attention enables the model to learn the long-term dependencies, i.e., the correlation between the current output and the previous outputs, so that the contextual information is preserved over time.In computer vision, the self-attention mechanism can enable the model to focus on the critical part of the image. It can neglect or discern the objects, such as the background from the foreground, and yield results based on the same. It can also enable models to preserve essential features extracted in the early stages of the convolution operation.Source: Show, Attend and Tell: Neural Image Caption Generation with Visual AttentionIn the image classification task, the model can concentrate or pay attention to the objects that yield good attention scores. The image above shows that the model focuses on the two humans more than the background and the frisbee",activeloop Hybridizing Encoder and UNet,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"So far, we have discussed CNN-based UNet and Attention-based Transformers. Further on, we will see how we can combine both to create a hybrid architecture.We want to create a hybridized UNet because both CNN-based UNet and Transformers have advantages and disadvantages. Combining both can help us leverage both models' advantages in a single model.We have learned that CNN-based UNet has a significant limitation because of the convolutional operations that make them unable to model long-term dependencies. Transformers, on the other hand, can model long-term dependencies. So why not use transformers instead of CNN?Although transformers can model long-term dependencies, they cannot model local features, which CNN is best at. Hence, combining both models can help us to leverage the power of both the CNN and self-attention mechanism to yield better results.How can we combine them both?In the released paper, 'TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation, ' the authors proposed using CNN to create image patches or image embedding that can be fed into the transformer's encoder. The transformer can encode image patches from a CNN feature map as the input sequence to extract global features. The decoder, made up of transpose CNN can then upsamples the encoded features. Like the original UNet at each layer, the upsample features will be combined with the high-resolution CNN feature maps to yield high-quality and precise localization. For a better understanding, look at the image below. Source: TransUNet | Transformers Make Strong Encoders for Medical Image SegmentationAs you can see, the CNN layer extracts local features, and the transformer extracts global features. The local features are combined or concatenated with the upsampling block at each layer for precise segmentation masks. To comprehend the whole architecture, we will break it into different modules and explore the working of what each module does.",activeloop Encoder:,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/," We will explore and understand the code released by the authors in their paper.To start with, let us explore the CNN module. The CNN module leverages ResNet. The ResNet defined in the paper is ResNet-50. This network is very efficient in extracting features from the given input. 
Although it is a big network, it tackles vanishing gradients using skip connections.Here is the pseudo-code for the ResNet:1class ResNetV2(nn.Module): 2 def __init__(self, block_units, width_factor): 3 . 4 . 5 self.root = '''it consists of conv layers, group norm, and an activation''' 6 self.body = '''it consists of pre-activation bottleneck block ''' 7 8 def forward(self, x): 9 features = [] 10 b, c, in_size, _ = x.size() 11 x = self.root(x) 12 features.append(x) 13 x = nn.MaxPool2d(kernel_size=3, stride=2, padding=0)(x) 14 for i in range(len(self.body)-1): 15 x = self.body[i](x) 16 right_size = int(in_size / 4 / (i+1)) 17 if x.size()[2] != right_size: 18 pad = right_size - x.size()[2] 19 assert pad < 3 and pad > 0, 'x {} should {}'.format(x.size(), right_size) 20 feat = torch.zeros((b, x.size()[1], right_size, right_size), device=x.device) 21 feat[:, :, 0:x.size()[2], 0:x.size()[3]] = x[:] 22 else: 23 feat = x 24 features.append(feat) 25 x = self.body[-1](x) 26 return x, features[::-1] 27CopyThe ResNet defined here consists of two essential modules:Root block:The idea of the root block is to standardize the given input image.1nn.Sequential(OrderedDict([ 2 ('conv', StdConv2d(3, width, kernel_size=7, stride=2, bias=False, padding=3)), 3 ('gn', nn.GroupNorm(32, width, eps=1e-6)), 4 ('relu', nn.ReLU(inplace=True)), 5 # ('pool', nn.MaxPool2d(kernel_size=3, stride=2, padding=0)) 6 ])) 7CopyBody block:The body block contains a sequential layer of pre-activation bottleneck blocks that extract vital information from the image. The pre-activation block itself is a ResNet block.1nn.Sequential(OrderedDict([ 2 ('block1', nn.Sequential(OrderedDict( 3 [('unit1', PreActBottleneck(cin=width, cout=width*4, cmid=width))] + 4 [(f'unit{i:d}', PreActBottleneck(cin=width*4, cout=width*4, cmid=width)) for i in range(2, block_units[0] + 1)], 5 ))) 6CopyThe ResNetV2 is then integrated into the Embedding module, which crops the image and feeds it into the transformers. See the figure below.Source: TransUNet | Transformers Make Strong Encoders for Medical Image SegmentationResNetV2 provides an edge to the feature extraction process. This hybrid embedding module returns two outputs: the embedded features are passed into the transformers, while the features extracted from the ResNet are passed into the upsampling block.In the forward function of the ResNetV2 module, you will find that the output from each CNN layer, which is located in the body block, is stored in a 'features' list. This list is reversed once the output from all the CNN layers is stored, enabling the concatenation of global features for upsampling.
Source: TransUNet | Transformers Make Strong Encoders for Medical Image Segmentation1class Embeddings(nn.Module): 2 '''Construct the embeddings from the patch, position embeddings",activeloop Encoder,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Because it is a classification head, the final output has to pass through the cross-entropy loss.1class VisionTransformer(nn.Module): 2 def __init__( 3 self, 4 img_size, 5 num_classes, 6 hidden_size, 7 in_channels, 8 num_layers, 9 num_attention_heads, 10 linear_dim, 11 dropout_rate, 12 attention_dropout_rate, 13 eps, 14 std_norm, 15 ): 16 super(VisionTransformer, self).__init__() 17 self.classifier = 'token' 18 19 self.transformer = Transformer( 20 img_size, 21 hidden_size, 22 in_channels, 23 num_layers, 24 num_attention_heads, 25 linear_dim, 26 dropout_rate, 27 attention_dropout_rate, 28 eps, 29 std_norm, 30 ) 31 self.head = Linear(hidden_size, num_classes) 32 33 def forward(self, x, labels=None): 34 x, attn_weights = self.transformer(x) 35 logits = self.head(x[:, 0]) 36 37 if labels is not None: 38 loss_fct = CrossEntropyLoss() 39 loss = loss_fct(logits.view(-1, 400), labels.view(-1)) 40 return loss 41 else: 42 return logits, attn_weights 43Copy ",activeloop Encoder:,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"3 ''' 4 def __init__(self, config, img_size, in_channels=3): 5 super(Embeddings, self).__init__() 6 self.hybrid = None 7 self.config = config 8 img_size = _pair(img_size) 9 10 if config.patches.get('grid') is not None: # ResNet 11 grid_size = config.patches['grid'] 12 patch_size = (img_size[0] // 16 // grid_size[0], img_size[1] // 16 // grid_size[1]) 13 patch_size_real = (patch_size[0] * 16, patch_size[1] * 16) 14 n_patches = (img_size[0] // patch_size_real[0]) * (img_size[1] // patch_size_real[1]) 15 self.hybrid = True 16 else: 17 patch_size = _pair(config.patches['size']) 18 n_patches = (img_size[0] // patch_size[0]) * (img_size[1] // patch_size[1]) 19 self.hybrid = False 20 21 if self.hybrid: 22 self.hybrid_model = ResNetV2(block_units=config.resnet.num_layers, width_factor=config.resnet.width_factor) 23 in_channels = self.hybrid_model.width * 16 24 self.patch_embeddings = Conv2d(in_channels=in_channels, 25 out_channels=config.hidden_size, 26 kernel_size=patch_size, 27 stride=patch_size) 28 self.position_embeddings = nn.Parameter(torch.zeros(1, n_patches, config.hidden_size)) 29 30 self.dropout = Dropout(config.transformer['dropout_rate']) 31 32 def forward(self, x): 33 if self.hybrid: 34 x, features = self.hybrid_model(x) 35 else: 36 features = None 37 x = self.patch_embeddings(x) # (B, hidden . n_patches^(1/2), n_patches^(1/2)) 38 x = x.flatten(2) 39 x = x.transpose(-1, -2) # (B, n_patches, hidden) 40 41 embeddings = x + self.position_embeddings 42 embeddings = self.dropout(embeddings) 43 return embeddings, features 44CopyNow we will understand how the two outputs from the embedding module, i.e., embeddings and features, can be processed to acquire the segmentation masks. The process of acquiring a segmentation mask will be carried out in the Vision Transformer. But it has to be built in the form of a UNet, which means that we have to add two additional components to the existing transformer architecture: the encoder module. 
The additional two components are a decoder and a segmentation head.Let's build each component step by step for better understanding:Transformer or encoder module: This module will receive only the embedded output and then feed it to the attention block followed by the multilayer perceptron.1class Transformer(nn.Module): 2 def __init__(self, config, img_size, vis): 3 super(Transformer, self).__init__() 4 self.embeddings = Embeddings(config, img_size=img_size) 5 self.encoder = Encoder(config, vis) 6 7 def forward(self, input_ids): 8 embedding_output, features = self.embeddings(input_ids) 9 encoded, attn_weights = self.encoder(embedding_output) # (B, n_patch, hidden) 10 return encoded, attn_weights, features 11CopyAs illustrated, since we have the ResNetV2 in the embedding, the transformers yield three outputs instead of two, as seen in earlier encoded features, attention weights, and convolutional features.",activeloop TransUnet,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Now comes the main component, ViT or TransUnet. This module consists of three critical parts: Transformer, DecoderCup, and Segmentation head. We already saw the transformer architecture. Here we will explore the decoder cup and the segmentation head.a) DecoderCupThe idea of the decoder cup is to upsample the given input.1class DecoderCup(nn.Module): 2 def __init__(self, config): 3 super().__init__() 4 self.config = config 5 head_channels = 512 6 self.conv_more = Conv2dReLU( 7 config.hidden_size, 8 head_channels, 9 kernel_size=3, 10 padding=1, 11 use_batchnorm=True, 12 ) 13 decoder_channels = config.decoder_channels 14 in_channels = [head_channels] + list(decoder_channels[:-1]) 15 out_channels = decoder_channels 16 17 if self.config.n_skip != 0: 18 skip_channels = self.config.skip_channels 19 for i in range(4-self.config.n_skip): # re-select the skip channels according to n_skip 20 skip_channels[3-i]=0 21 22 else: 23 skip_channels=[0,0,0,0] 24 25 blocks = [ 26 DecoderBlock(in_ch, out_ch, sk_ch) for in_ch, out_ch, sk_ch in zip(in_channels, out_channels, skip_channels) 27 ] 28 self.blocks = nn.ModuleList(blocks) 29 30 def forward(self, hidden_states, features=None): 31 B, n_patch, hidden = hidden_states.size() # reshape from (B, n_patch, hidden) to (B, h, w, hidden) 32 h, w = int(np.sqrt(n_patch)), int(np.sqrt(n_patch)) 33 x = hidden_states.permute(0, 2, 1) 34 x = x.contiguous().view(B, hidden, h, w) 35 x = self.conv_more(x) 36 for i, decoder_block in enumerate(self.blocks): 37 if features is not None: 38 skip = features[i] if (i < self.config.n_skip) else None 39 else: 40 skip = None 41 x = decoder_block(x, skip=skip) 42 return x 43Copyb) Segmentation headThis module inherits the nn.Sequential class, which by default has a forward() function. 
The function of this module is to upsample the final output yielded by the last layer of decodercup to create a segmentation mask.1class SegmentationHead(nn.Sequential): 2 3 def __init__(self, in_channels, out_channels, kernel_size=3, upsampling=1): 4 conv2d = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, padding=kernel_size // 2) 5 upsampling = nn.UpsamplingBilinear2d(scale_factor=upsampling) if upsampling > 1 else nn.Identity() 6 super().__init__(conv2d, upsampling) 7 8Copyc) Vision TransformerAs usual, this module sequentially arranges all the modules together to create a seamless pipeline.1class VisionTransformer(nn.Module): 2 def __init__(self, config, img_size=224, num_classes=21843, zero_head=False, vis=False): 3 super(VisionTransformer, self).__init__() 4 self.num_classes = num_classes 5 self.zero_head = zero_head 6 self.classifier = config.classifier 7 self.transformer = Transformer(config, img_size, vis) 8 self.decoder = DecoderCup(config) 9 self.segmentation_head = SegmentationHead( 10 in_channels=config['decoder_channels'][-1], 11 out_channels=config['n_classes'], 12 kernel_size=3, 13 ) 14 self.config = config 15 16 def forward(self, x): 17 if x.size()[1] == 1: 18 x = x.repeat(1,3,1,1) 19 x, attn_weights, features = self.transformer(x) # (B, n_patch, hidden) 20 x = self.decoder(x, features) 21 logits = self.segmentation_head(x) 22 return logits 23Copy",activeloop How to visualize the UWMGI Dataset?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Once you have downloaded the data, you can explore it by running this script: print(ds.tensors.keys()) dict_keys([‘images', 'labels', ‘masks']) You can use the above keys to get the required output according to the task. Since we are working with segmentation, we want to explore ‘images' and ‘masks'. We can then use these keys to visualize the image.1plt.imshow(ds.tensors['images'][1].numpy()) 2Copy Similarly, we can visualize the masks using the same command:1plt.imshow(ds.tensors['masks'][0].numpy()[:, :, 0]) 2Copy Final image after combining image and its respective segmentation.",activeloop What is Transformer in ML?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"The Transformer architecture was introduced by Vaswani et al. in 2016. It was designed for sequential modeling, language modeling, or NLP. One of the reasons why transformers are popular is because of the self-attention mechanism, which yields global feature extraction compared to local feature extraction, as we see in CNN. Due to its global feature extraction capabilities, it serves as a decent tool for image segmentation.Source: Attention Is All You Need",activeloop Querying the Data,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Data querying is an essential tool for ML practitioners as it can enable them to filter out necessary information according to the requirement to train the model. For instance, datasets such as the CoCo dataset have 91 classes, of which only 80 classes contain segmented masks. If we are building a DL model that has to be trained to classify only humans, then we can filter out images that only contain classes pertaining to humans.Deep Lakes enables its users to filter out images in two ways, i.e., using UI and CLI. 
Let's explore both of them in detail.",activeloop How to download UWMGI Dataset?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"To download the data, you can follow the steps given below: Install the Python package: pip3 install deeplake Visit https://app.activeloop.ai/ Select the data: You can search for the dataset you want to work within the search bar. In our case, it will be 'UWMGI' Once you find the dataset copy, the link provided. Source: Deep Lake Access the data: Now, let's download the data from the Activeloop server by following the code below:1import deeplake 2import torch 3from torchvision import transforms, models 4 5ds = deeplake.load('hub://perceptronai/UWMGI| training') 6Copy hub://perceptronai/UWMGI-1 loaded successfully. This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/perceptronai/UWMGI-1 1tform_i = transforms.Compose([ 2 transforms.ToTensor(), 3 transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), 4]) 5 6tform_m = transforms.Compose([ 7 transforms.ToTensor()]) 8 9deeplake_loader = ds.pytorch(batch_size=1, transform={ 'images': tform_i, 'masks': tform_m}) 10CopyThe ds.pytorch class works exactly like the data loader class from Pytorch. You can use the same methodology to get the images and masks. For instance,1imgs, msks = next(iter(deeplake_loader)) 2CopyDeep Lake also provides a way to visualize the images. You can use the simple command ds.visualize() to visualize the images.Here are some additional practices that can help you to understand and discover once you download the data.",activeloop How to query datasets using Python?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Querying data using CLI is quite simple. Activeloop provides filtering with user-defined functions which allow you to filter data based on your requirements. Here are the steps which can enable filtering data.a) Load the data.1ds = deeplake.load('hub://perceptronai/UWMGI| training') 2Copyb) Create a list of objects that you want to filter.1labels_list = ['stomach', 'large bowel'] 2Copyc) Create a function for filtering data using the deeplake.compute decorator.1labels_list = ['stomach'] 2 3@deeplake.compute 4def filter_labels(sample_in, labels_list): 5 # print(sample_in.labels.data()['text'][0]) 6 return sample_in.labels.data()['text'][0] in labels_list 7Copyd) We can start filtering the data using the ds.filter function, where we will pass the filter_labels function.1ds_view = ds.filter(filter_labels(labels_list)) ``` 2 3 Another way to filter data is by using the ds.query function. 
4 5```python 6print(ds.query('SELECT * WHERE CONTAINS(labels, 'stomach')')) 7Copy Dataset(path='hub://perceptronai/UWMGI| training', index=Index([(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99)]), tensors=['images', 'labels', ‘masks']) Once the data is filtered, you can check the length of the data and even plot an image and its corresponding segmentation mask.1print(len(ds_view)) 2Copy 100 1Image.fromarray(ds_view.images[1].numpy()) 2Copy Once the data is filtered, you can use it to train your DL model.",activeloop Training and Testing Loop,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/," Once the dataset is downloaded, we can then start defining the model. We will be finetuning the model since the authors have already uploaded the weights and biases in their repository. To do that, we will clone the repository. You can find the complete training code here.1!git clone https://github.com/Beckschen/TransUNet 2CopyOnce we have cloned the repository, we can then import the model.1from TransUNet.networks.vit_seg_modeling import VisionTransformer as ViT_seg 2from TransUNet.networks.vit_seg_modeling import CONFIGS as CONFIGS_ViT_seg 3CopyWe will then configure the model according to our requirements. We can define a class that will download the pre-trained weights and biases. We can also use the same class to define the model. The following code is an example of how we can download and configure the model:1class TransUnet(nn.Module): 2 def __init__(self, 3 model_name=CFG.MODEL_NAME, 4 pretrain_path='./R50+ViT-B_16.npz', 5 n_classes=3, 6 n_skip=3, 7 dropout_rate=0.2, 8 mlp_dim=3072, 9 num_heads=12, 10 num_layers=8, 11 img_size=CFG.img_size[0]): 12 13 super(TransUnet, self).__init__() 14 15 #config 16 config_vit = CONFIGS_ViT_seg[model_name] 17 config_vit.pretrained_path = pretrain_path 18 config_vit.n_classes = n_classes 19 config_vit.n_skip = n_skip 20 config_vit.transformer.dropout_rate = dropout_rate 21 config_vit.transformer.mlp_dim = mlp_dim 22 config_vit.transformer.num_heads = num_heads 23 config_vit.transformer.num_layers = num_layers 24 25 #model 26 self.model = ViT_seg(config_vit, img_size=img_size, num_classes=n_classes) 27 self.model.load_from(weights=np.load(config_vit.pretrained_path)) 28 29 def forward(self, x): 30 img_segs = self.model(x) 31 32 return img_segs 33 34model = TransUnet() 35CopyLet us define the other essential modules, including the loss function, optimizer, and scheduler. 
If your task requires you to predict binary segmentation masks, then you can use LovaszSoftmax, Hausdorff, FocalLoss, DiceLoss, or DiceBCELoss",activeloop Training and Testing Loop,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"These loss functions can be downloaded from the following git repo:1!git clone https://github.com/JunMa11/SegLoss.git 2CopyBut since our task requires us to predict multilabel segmentation, we will use CrossEntropyLoss.1def Loss(): 2 if CFG.criterion == 'Multiclass': 3 criterion = nn.CrossEntropyLoss() 4 elif CFG.criterion == 'DiceBCELoss': 5 criterion = DiceBCELoss() 6 elif CFG.criterion == 'DiceLoss': 7 criterion = DiceLoss() 8 elif CFG.criterion == 'FocalLoss': 9 criterion = FocalDLoss() 10 elif CFG.criterion == 'Hausdorff': 11 criterion = Hausdorff_loss() 12 elif CFG.criterion == 'Lovasz': 13 criterion = Lovasz_loss() 14 return criterion 15CopyWe will use an Adam optimizer with a learning rate of 2e-3 and a scaler for the backward propagation and uniform scaling of gradients, respectively.1optimizer = Adam(model.parameters(), lr=CFG.lr) 2scaler = torch.cuda.amp.GradScaler() 3CopyNow we can write our training loop and start the finetuning process.1def train_engine(model, train_loader, device=CFG.device): 2 epoch_loss = [] 3 loss_list = [] 4 acc = [] 5 num_correct = 0 6 num_pixels = 0 7 8 model = model.to(device) 9 model.train() 10 for epoch in range(20): 11 loop = tqdm(enumerate(train_loader),total=len(train_loader), desc='training') 12 for batch_idx, (X, y) in loop: 13 X = X.to(device) 14 y = y.to(device) 15 # forward 16 with torch.cuda.amp.autocast(): 17 logits = model(X) 18 loss = Loss()(logits, y) 19 20 # backward 21 optimizer.zero_grad() 22 scaler.scale(loss).backward() 23 scaler.step(optimizer) 24 scaler.update() 25 26 #loss and accuracy 27 softmax = nn.Softmax(dim=1) 28 preds = softmax(logits) 29 preds = torch.argmax(preds,axis=1) 30 num_correct += (preds == y).sum() 31 num_pixels += torch.numel(preds) 32 loss_list.append(loss.item()) 33 accuracy=num_correct.item()/num_pixels*100 34 acc.append(accuracy) 35 # update tqdm loop 36 loop.set_postfix(loss=loss.item(), accuracy=accuracy) 37 epoch_loss.append(loss.item()) 38 return loss_list, epoch_loss, acc 39 40loss, epoch_loss, accuracy = train_engine(model, train_loader) 41Copy training: 79%|███████▊ | 23606/30000 [58:13<15:46, 6.76it/s, accuracy=67.8, loss=0.23] The result after the first epoch: One of the advantages of using a pre-trained model is that the model learns faster than training the model from scratch . As you can see from the image above, the model is starting to learn anomalies.When we train the model for a more extended period, the model will start predicting accurate segmentation masks. Please check the following section for the final results.",activeloop Concluding remarks: UNet vs TransUNet,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"This article showed how to finetune & build a deep-learning model for machine learning in radiology.UNet remains one of the primary architectures for the task of image segmentation, no matter what the field is. Especially in radiology, it can also impact lives. Accuracy score is an essential metric for developing UNet for medical purposes. Lee et al. introduced a variant of UNet predicting affinities between the nearest neighbors. 
The network can leverage this information from the derived affinities to create a segmentation mask by greedily clustering the mean affinity. This implies that UNet can achieve superhuman accuracy in predictive tasks.Likewise, many methods, such as error detection and correction tasks (both important in image segmentation), may enhance the model's accuracy.TransUNet can leverage both of these techniques. We've already seen how TransUNet uses the multi-head self-attention mechanism to preserve long-range dependencies. Similarly, we can enhance the preserving power of TransUNet by predicting the affinity of the neighbor, clustering likable data points, and creating segmentation masks. This, combined with error detection, increases the accuracy of the model.",activeloop References,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Deep Lake: a Lakehouse for Deep Learning U-Net: Convolutional Networks for Biomedical Image Segmentation Attention Is All You Need An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale TransUNet | Transformers Make Strong Encoders for Medical Image Segmentation https://github.com/Beckschen/TransUNet https://github.com/JunMa11/SegLoss.git : Loss functions for image segmentation https://www.kaggle.com/stainsby/fast-tested-rle https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/data https://www.kaggle.com/code/awsaf49/uwmgi-unet-train-pytorch Activeloop Superhuman Accuracy on the SNEMI3D Connectomics Challenge An Error Detection and Correction Framework for Connectomics Show, Attend and Tell: Neural Image Caption Generation with Visual Attention",activeloop How to query datasets using Deep Lake UI?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"To filter data using Activeloop's UI, you only need to click on the 'Run query' button, as shown in the image below. Source: Deep LakeOnce you click on the button, you will get the SQL command line just below the button. You can explore the queries by clicking on the 'Example Queries' button on the command line.We will use the second command to filter out images that contain only humans. We can copy-paste the command into the SQL command line and press shift+enter to execute it. Source: Deep LakeYou can then use the 'Save query result' button to save the query. Source: Deep LakeAfter saving the query result, you can click on 'Query history' to get the saved query. Source: Deep LakeAfter which, you can use the query id, which will serve as a Python API enabling you to train the DL model on the filtered dataset.",activeloop Key Takeaways,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"Here are some of the key takeaways from this article: Pre-trained TransUNet can be used in radiology machine learning projects. The TransUnet architecture is SOTA regarding image segmentation, as it can model hard-to-find anomalies. Because the network uses both CNN and a self-attention mechanism, it can find local features and preserve them longer, thus enabling it to find features that would typically be out-of-reach. As seen below, the network can predict correct segmentation masks even for images that aren't clear, showing the reliability of TransUnet. Deep Lake provides an efficient way to load, query, visualize, & stream the data for training and testing purposes. 
The queried dataset can be saved and later materialized, which enables reproducibility.",activeloop What is UWMGI Dataset?,https://activeloop.ai/resources/radiology-machine-learning-multi-image-segmentation-with-trans-u-net/,"The UWMGI dataset is the competition dataset released in Kaggle. The dataset represents gastrointestinal tract images, consisting of roughly 50 cases. The images in each case are a 16-bit grayscale in a PNG format. The segmentation masks representing three classes, stomach, small and large bowels, are provided as Run Length encoded or RLE-encoded masks.You can check out this notebook that explains how you can download the data from Kaggle and upload it to Deep Lake after preprocessing it to the required Numpy format. Source: Deep Lake",activeloop The New Way to Understand Code Repositories,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"The new way is just four steps that take less than an hour to build: Index the codebase Store embeddings and the code in Deep Lake Use Conversational Retriever Chain from LangChain Ask any questions you'd like! Now, this doesn't mean you don't need to take the steps outlined above in the previous section, but we do hope this new approach aids the learning speed along the way. We will delve deeper into this process below, but let's review the basics first.",activeloop LangChain basics,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Before moving on to the process and architecture behind code comprehension, let's first understand the basics.",activeloop What is Deep Lake as a Vector Store in LangChain?,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"In the LangChain ecosystem, Deep Lake is a serverless, open-source, and multi-modal vector store. Deep Lake not only stores embeddings but also the original data with automatic version control. For these reasons, Deep Lake can be considered one of the best Vector Stores for LangChain (if you ask us, haha!). Deep Lake goes beyond a simple vector store, but we'll dive into it in another blog post.",activeloop What is LangChain?,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"In essence, LangChain is a wrapper for utilizing Large Language Models like GPT-4 and connecting them to many tools (such as vector stores, calculators, or even Zapier). LangChain is especially appealing to developers because it offers a novel way to construct user interfaces. Instead of relying on dragging and dropping or coding, users can state their desired outcome. Broadly speaking, LangChain is enticing to devs as it augments already robust LLMs with memory and context (which comes in handy in tasks such as code understanding). By artificially incorporating 'reasoning,' we can tackle more sophisticated tasks with greater precision.If you want to learn more about LangChain, read the ultimate guide on LangChain. In this example, we build ChatGPT to answer questions about your financial data. If you were to use an LLM about the top-performing quarter of all time for Amazon (maybe after feeding it a copy-paste text from a pdf), It would likely produce a plausible SQL query to retrieve the result using fabricated yet real-sounding column names. However, using LangChain, you can compose a workflow that would iteratively go through the process and arrive at a definitive answer, such as 'Q4 2022 was the strongest quarter for Amazon all-time'. 
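To make the "wrapper" idea concrete, here is a minimal, hypothetical sketch of composing an LLM with a prompt template in LangChain. The prompt text and model choice are illustrative only; this is not the financial-data workflow described above:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Assumes OPENAI_API_KEY is set in the environment.
llm = ChatOpenAI(model='gpt-3.5-turbo')

# A trivial prompt -> LLM chain; real apps compose retrievers, tools, and memory around this.
prompt = PromptTemplate.from_template('Name the strongest quarter mentioned in: {report_text}')
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(report_text='<paste an earnings summary here>'))
```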
You can read more about analyzing your financial data with LangChain.",activeloop "How to Build Code Understanding App with LangChain, GPT-4, & Conversational Retriever Chain?",https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Index the Codebase: Duplicate the target repository, load all contained files, divide the files, and initiate the indexing procedure. Alternatively, you can bypass this step and use a pre-indexed dataset. Store Embeddings and the Code: Code segments are embedded using a code-aware embedding model and saved in the Deep Lake VectorStore. Assemble the Retriever: Conversational Retriever Chain searches the VectorStore to find a specific query's most relevant code segments. Build the Conversational Chain: Customize retriever settings and define any user-defined filters as necessary. Pose Questions: Create a list of questions about the codebase, then use the Conversational Retrieval Chain to produce context-sensitive responses. The LLM (GPT-4, in this case) should now generate detailed, context-aware answers based on the retrieved code segments and conversation history.",activeloop What is LangChain Conversational Retriever Chain?,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"A conversational Retriever Chain is a retrieval-centric system interacting with data stored in a VectorStore like Deep Lake. It extracts the most applicable code snippets and details for a specific user request using advanced methods like context-sensitive filtering and ranking. The conversational Retriever Chain is designed to provide high-quality, relevant outcomes while considering conversation history and context.",activeloop The Legacy Way to Understand Code,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Acquire a broad comprehension of the codebase's role within the project. Read the codebase documentation. Develop a dependency map for the codebase to comprehend its organization and interconnections. Examine the primary function to grasp the code's structure. Ask a colleague, 'wtf is the main function doing?'. For test-driven development, execute test cases and use breakpoints to decipher the code. If test cases exist but are outside test-driven development, review them to comprehend the specifications. Shed a few tears. Employ a debugger to step through the code if test cases are absent. Examining Git history can reveal the codebase's evolution and areas more susceptible to modifications. Alter the code and introduce personal test cases to assess the consequences. Investigate previous alterations to identify potential impact areas and confirm your assumptions. Continually monitor changes made by teammates to remain informed on current advancements.",activeloop Step 2: Indexing the Twitter Algorithm Code Base (Optional),https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"You can skip this part and jump right into using an already indexed dataset (just like the one in this example). 
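If you take that shortcut, loading the already indexed public dataset is just a read-only load once the embeddings object from the setup step exists; the same call appears again in Step 3:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

# Assumes OPENAI_API_KEY (and ACTIVELOOP_TOKEN, if required) are set as described in the setup step.
embeddings = OpenAIEmbeddings()

# Read-only copy of the pre-indexed Twitter algorithm dataset used throughout this post.
db = DeepLake(
    dataset_path='hub://davitbun/twitter-algorithm',
    read_only=True,
    embedding_function=embeddings,
)
```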
To index the code base, first clone the repository, parse the code, break it into chunks, and apply OpenAI indexing:1!git clone https://github.com/twitter/the-algorithm # replace any repository of your choice 2CopyNext, load all files inside the repository.1import os 2from langchain.document_loaders import TextLoader 3 4root_dir = './the-algorithm' 5docs = [] 6for dirpath, dirnames, filenames in os.walk(root_dir): 7 for file in filenames: 8 try: 9 loader = TextLoader(os.path.join(dirpath, file), encoding='utf-8') 10 docs.extend(loader.load_and_split()) 11 except Exception as e: 12 pass 13CopySubsequently, divide the loaded files into chunks:1from langchain.text_splitter import CharacterTextSplitter 2 3text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) 4texts = text_splitter.split_documents(docs) 5 6CopyPerform the indexing process. This takes roughly 4 minutes to calculate embeddings and upload them to Activeloop. Afterward, you can publish the dataset publicly:1username = 'davitbun' # replace with your username from app.activeloop.ai 2db = DeepLake(dataset_path=f'hub://{username}/twitter-algorithm', embedding_function=embeddings) 3db.add_documents(texts) 4CopyIf the dataset has been already created, you can load it later without recomputing embeddings as seen below.",activeloop Step 3: Conversational Retriever Chain,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"First, load the dataset, establish the retriever, and create the Conversational Chain:1db = DeepLake(dataset_path='hub://davitbun/twitter-algorithm', read_only=True, embedding_function=embeddings) 2CopyA preview of the dataset would look something like this:1Dataset(path='hub://davitbun/twitter-algorithm', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text']) 2 3 tensor htype shape dtype compression 4 ------- ------- ------- ------- ------- 5 embedding generic (23152, 1536) float32 None 6 ids text (23152, 1) str None 7 metadata json (23152, 1) str None 8 text text (23152, 1) str None 9Copy1retriever = db.as_retriever() 2retriever.search_kwargs['distance_metric'] = 'cos' 3retriever.search_kwargs['fetch_k'] = 100 4retriever.search_kwargs['maximal_marginal_relevance'] = True 5retriever.search_kwargs['k'] = 10 6CopyYou can also define custom filtering functions using Deep Lake filters:1def filter(x): 2 if 'com.google' in x['text'].data()['value']: 3 return False 4 metadata = x['metadata'].data()['value'] 5 return 'scala' in metadata['source'] or 'py' in metadata['source'] 6 7# Uncomment the following line to apply custom filtering 8# retriever.search_kwargs['filter'] = filter 9CopyConnect to GPT-4 for question answering.1from langchain.chat_models import ChatOpenAI 2from langchain.chains import ConversationalRetrievalChain 3 4model = ChatOpenAI(model='gpt-3.5-turbo') # switch to 'gpt-4' 5qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever) 6Copy",activeloop Step 1: Installing required libraries and authenticating with Deep Lake and Open AI,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"First, we will install everything we'll need.1!python3 -m pip install --upgrade langchain deeplake openai tiktoken 2 3CopyNext, let's import the necessary packages and make sure the Activeloop and OpenAI keys are in the environmental variables ACTIVELOOP_TOKEN, OPENAI_API_KEY and define the OpenAI embeddings. 
For full documentation of Deep Lake please the Deep Lake LangChain docs page and the Deep Lake API reference.You'd need to authenticate into Deep Lake if you want to create your own dataset and publish it. You can get an API key from the Deep Lake platform here1import os 2import getpass 3 4from langchain.embeddings.openai import OpenAIEmbeddings 5from langchain.vectorstores import DeepLake 6 7os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') 8os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:') 9embeddings = OpenAIEmbeddings() 10Copy",activeloop The Twitter Recommendation Algorithm,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Ironically, we will use some words to describe the Twitter Algorithm for the general audience. Still, you can skip right to the code part (that will answer even more questions on how the Twitter algorithm works in 2023).With approximately 500 million Tweets daily, the Twitter recommendation algorithm is instrumental in selecting top Tweets for your the 'For You' feed. The Twitter trending algorithm employs intertwined services and jobs to recommend content across different app sections, such as Search, Explore, and Ads. However, we will focus on the home timeline's 'For You' feed.",activeloop Twitter Recommendation Pipeline,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Twitter's open-sourced recommendation algorithm works in three main steps: Candidate Sourcing (fancy speak for data aggregation): the algorithm collects data about your followers, your tweets, and you. The 'For You' timeline typically comprises 50% In-Network (people you follow) and 50% Out-of-Network (people you don't follow) Tweets. Feature Formation & Ranking: Turns the data into key feature buckets: Embedding Space (SimClusters and TwHIN), In Network (RealGraph and Trust & Safety), and Social Graph (Follower Graph, Engagements); look for our practical example to discover what each of those is. Later, a neural network trained on Tweet interactions to optimize for positive engagement is used to obtain the final ranking. Mixing: Finally, in the mixing step, the algorithm groups all features into candidate sources and uses a model called Heavy Ranker to predict user actions, applying heuristics and filtering.",activeloop Code Understanding with LangChain: Asking about Functions or Algorithms,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Question: How does Heavy ranker work. what are it's main inputs? Answer: The Heavy Ranker, a vital machine learning model, ranks and scores candidates within the recommendation algorithm, mainly predicting user engagement with tweets or connections on the platform.Key Heavy Ranker inputs include: Static Features: Features calculated directly from a tweet as it's created, such as URL presence, cards, quotes, etc. The Index Ingester generates these features and stores them in the index. Real-time Features: Per-tweet features that change post-indexing, including social engagements (retweet, favorite, reply counts) and spam signals. The Signal Ingester, part of a Heron topology, processes event streams to gather and compute these features. User Table Features: Per-user features obtained from the User Table Updater, processing a stream written by the user service. This input stores sparse real-time user data, later linked to the tweet being scored via author lookup. 
Search Context Features: Features representing the searcher's context, like UI language, content consumption, and current time (implied). These features are combined with Tweet Data for scoring purposes. The Heavy Ranker processes these inputs to score and rank candidates based on relevance and user engagement likelihood. Question: What are some unexpected fingerprints for spam factors? Answer: Within the given context, a peculiar spam sign is a tweet containing a non-media, non-news link. If a tweet has a link without an image, video, or news URL, it's seen as a potential spam source, setting a user reputation threshold (tweepCredThreshold) to MIN_TWEEPCRED_WITH_LINK.Note that this rule may not contain all atypical spam indicators but is based on the particular codebase and logic provided in the context.",activeloop Code Understanding with LangChain: Asking about Variables,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Question: What is favCountParams? Is it just Likes and Bookmarks? Answer: From the provided code, it is not clear if the favorite count metric is determined by the sum of likes and bookmarks. The favorite count is mentioned in the code, but there is no explicit reference to how it is calculated in terms of likes and bookmarks. Question: What are the major negative modifiers that lower your linear ranking parameters? Answer: In this code repository, the major negative modifiers lowering the linear ranking parameters are: scoringData.querySpecificScore: This score adjustment is based on the query-specific information. If its value is negative, it will lower the linear ranking parameters. scoringData.authorSpecificScore: This score adjustment is based on the author-specific information. If its value is negative, it will also lower the linear ranking parameters. If the author has been blocked, muted, reported for abuse or spam, or has been unfollowed (over the last 50 day rolling period) - the algorithm will adjust the score negatively.",activeloop Step 4: Ask Questions to the Codebase in Natural Language,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Define all the juicy questions you want to be answered:1questions = [ 2 'What does favCountParams do?', 3 'is it Likes + Bookmarks, or not clear from the code?', 4 'What are the major negative modifiers that lower your linear ranking parameters?', 5 'How do you get assigned to SimClusters?', 6 'What is needed to migrate from one SimClusters to another SimClusters?', 7 'How much do I get boosted within my cluster?', 8 'How does Heavy ranker work. what are it's main inputs?', 9 'How can one influence Heavy ranker?', 10 'why threads and long tweets do so well on the platform?', 11 'Are thread and long tweet creators building a following that reacts to only threads?', 12 'Do you need to follow different strategies to get most followers vs to get most likes and bookmarks per tweet?', 13 'Content meta data and how it impacts virality (e.g. 
ALT in images).', 14 'What are some unexpected fingerprints for spam factors?', 15 'Is there any difference between company verified checkmarks and blue verified individual checkmarks?', 16] 17chat_history = [] 18 19for question in questions: 20 result = qa({'question': question, 'chat_history': chat_history}) 21 chat_history.append((question, result['answer'])) 22 print(f'-> **Question**: {question} \n') 23 print(f'**Answer**: {result["answer"]} \n') The output is a series of questions and answers generated with the conversational model and chain configured in the previous step. We've redacted the answers of the model for brevity. You're welcome to read through the re-run of the notebook here.",activeloop Concluding Remarks: Analyzing Codebase with LangChain and Deep Lake,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"In conclusion, the powerful combination of LangChain, Deep Lake, and GPT-4 revolutionizes code comprehension, making it faster and more efficient. Developers can quickly grasp complex codebases like Twitter's recommendation algorithm using four key steps: indexing the codebase, storing embeddings and code in Deep Lake, using LangChain's Conversational Retriever Chain, and asking questions in natural language. Hopefully, this powerful combination of tools enables developers to quickly gain insights into the inner workings of any code repository, eliminating the need for tedious, time-consuming methods. Since the release of this blog post, we've seen some great usage of our code to build exciting projects, just like this one:",activeloop Code Understanding with LangChain: Asking about Suggestions Based on the Code,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"You can also brainstorm with the source code in mind on how to accomplish a certain task. Some of the output will be more general (and generic), but it can be finetuned to heavily base its answers on what the model detects in the source code. Question: Do you need to follow different strategies to get the most followers vs. to get the most likes and bookmarks per tweet? Answer: Growing followers: The primary objective is to expand your audience. Tactics include: Sharing top-notch, niche-specific content regularly to gain authority within your SimCluster. Interacting with users through replies, retweets, and mentions. Engaging with your commenters: recap.engagement.is_replied_reply_engaged_by_author in Heavy Ranker gives you a 75x boost. Utilizing pertinent hashtags and joining popular discussions. Partnering with influencers and users with sizable followings. Publishing content when your target audience is most active. Boosting likes and bookmarks per tweet: The goal is to produce content that connects with your existing followers and promotes engagement. Tactics include: Creating tweets on topics that are trending; tweetHasTrendBoost gives you a 1.1x boost for your tweet to be shown to people. Incorporating eye-catching visuals like images or videos; tweetHasImageUrlBoost or tweetHasVideoUrlBoost will get you a 2x boost for having a video or image in your tweet. Posing questions, expressing opinions, or starting conversations for enhanced user engagement.",activeloop Top 7 Twitter Recommendation Algorithm Tips: How to Trend on Twitter,https://activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/,"Here are some other interesting facts we've found from our exploration of the Twitter code base.
Perhaps they'll help you to gain a larger Twitter following and even trend on Twitter!To be more visible on Twitter, you should: Aim for more likes and bookmarks as they give your tweet a significant boost. Encourage retweets as they give your tweet a 20x boost. Include videos or images in your tweets for a 2x boost. Avoid posting links or using unrecognized languages to prevent deboosts. Create content that appeals to users in your SimClusters to increase relevance. Engage in conversations by replying to others and encouraging replies to your tweets. Maintain a good reputation by avoiding being classified as a misinformation spreader, blocked, muted, reported for abuse, or unfollowed.",activeloop Behind the Scenes at Activeloop,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"In 2016, before starting the company, I started my Ph.D. research at the Connectomics lab at Princeton Neuroscience Institute. I have witnessed the transition from a gigabyte to a terabyte, then to petabyte-scale datasets to achieve super-human accuracy in reconstructing neural connections inside a brain in just several years. Our problem was to figure out how to optimize and cut the cost 4-5x by rethinking how the data is stored, streamed from storage to the compute, which models to use, and how to compile them and run them at scale. While the industry moved slowly, we have observed how similar patterns repeat themselves on a much larger scale.We started Activeloop (formerly known as Snark AI) as part of the Y Combinator Summer 2018 batch to enable organizations to be more efficient at deploying deep learning solutions. We helped build a large language model for patents for a legal tech startup and streamable data pipelines for the petabyte-scale machine learning use case in AgriTech. Through trial and error and talking to hundreds of companies, we learned that all the awesome databases, data warehouses, and data lakes (joined by lakehouses) are great at analytical workloads but not as much for deep learning applications. The demand for storing unstructured data such as audio, video, and images has exploded over the years (more than 90% of the data is now generated in unstructured form). We knew that building the database for AI, the solution to store it, was the proper challenge for us.In 2020, we open-sourced the dataset format called 'Hub', which enabled storing images, videos, and audio as chunked arrays on objects storages and connecting to deep learning frameworks such as PyTorch or Tensorflow. We have collaborated with teams from Google AI, Waymo, Oxford University, Yale University, and other deep learning groups to figure out the nuts and bolts of a solid data infrastructure for deep learning applications.In 2021, the open-source project trended #1 in Python & #2 across all GitHub repositories and was even named as one of the top 10 python ML packages. As of writing this post, the project has 4.8K stars, 75+ contributors, and +1K community members. It is in production both at research institutions, startups, and public companies.We also released the managed version of Activeloop that lets you visualize datasets, version-control or query them, and stream to deep learning frameworks. Apart from providing access to 125+ machine learning datasets, it enables sharing private datasets and collaboration on building and maintaining datasets across organizations. 
Of course, I couldn't be more proud of our small and under-resourced team for achieving results in such a short time, but the industry has been innovating at a staggering speed.",activeloop Executive Summary,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"One of three ML projects fails due to the lack of a solid data foundation. Projects suffer from low-quality data, under-utilized compute resources, and significant labor overhead required to build and maintain large amounts of data. Traditional data lakes break down data silos for analytical workloads, enable data-driven decision-making, improve operational efficiency, and reduce organizational costs. However, most of these benefits are unavailable for deep learning workloads such as natural language processing (NLP), audio processing, computer vision, agriculture, healthcare, multimedia, and robotics/automotive, and safety & security verticals. Hence repeatedly, organizations opt-in to develop in-house systems. Deep Lake maintains the benefits of a vanilla data lake and enables you to iterate on your deep learning models 2x faster without teams spending time building complex data infrastructure. Deep Lake stores complex data, such as images, videos, annotations, embeddings, and tabular data, in the form of tensors and rapidly streams the data over the network to Tensor Query Language, in-browser visualization engine, and deep learning frameworks without sacrificing GPU utilization. As deep learning rapidly takes over traditional computational pipelines, storing datasets in a Deep Lake will become the new norm",activeloop Machine Learning Loop with Deep Lake,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,These are five fundamental pillars of Deep Lake. Version Control: Git for data Visualize: In-browser visualization engine Query: Rapid queries with Tensor Query language Materialize: Format native to deep learning Stream: Streaming Data Loaders We discuss those features in-depth in the Deep Lake White Paper and shed light on how it works in the Academic Paper.,activeloop What is the Deep Lake?,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"Deep Lake is a vanilla data lake for deep learning, but with one key difference. 
Deep Lake stores complex data, such as images, audio, videos, annotations, embeddings, and tabular data, in the form of tensors and rapidly streams the data over the network to Tensor Query Language, an in-browser visualization engine, or deep learning frameworks without sacrificing GPU utilization.Deep Lake provides key features that make it the optimal data storage platform for deep learning applications, including: A scalable and efficient data storage system that can handle large amounts of complex data in a columnar fashion Querying and visualization engine to fully support multimodal data types Native integration with deep learning frameworks and efficient streaming of data to models and back Seamless connection with MLOps tools.",activeloop Deep Lake and the Data Loader Landscape,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"Data loaders are one of the more significant bottlenecks of machine learning pipelines (Mohan et al., 2020), and we've built Deep Lake to specifically resolve the data to compute handoff bottleneck.We are thankful to Ofeidis, Kiedanski, & Tassiulas from Yale Institute For Network Science, who have spent a lot of time producing an independent, extensive survey, & benchmarking of open-source data loaders. The research concluded that the third major iteration of our product, Deep Lake, is not only 2x faster than the previous version but is superior to other data loaders in various scenarios.*Comparing the performance of Activeloop Hub, Deep Lake, and Webdataset when loading data from different locations: Local, AWS, and MinIO. (Ofedis et al. 2022) *Speed as a function of the number of workers for RAN- DOM on a single GPU. (Ofedis et al. 2022)",activeloop Large Foundational Models Taking Over by Storm,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"Deep learning achieved super-human accuracy in applications across industries in a few years. Cancer detection from X-Ray images, anatomical reconstruction of human neural cells, playing highly complex games such as Dota or Go, driving cars, unfolding proteins, having human-like conversations, generating code, and even realistic images that took the internet by storm (it took about 40 words to create the perfect prompt, but AI generated the stunning title image of this post). Three factors enable this speed: (1) novel architectures such as Transformers, (2) massive compute capabilities using GPUs or TPUs, and the large volume of datasets such as ImageNet, CommonCrawl, and LAION-400M.At Activeloop, we firmly believe that connecting deep learning models to the value chain in the next five years will produce a foundational shift in the global economy. While innovators primarily focused on models and computing hardware, maintaining or streamlining the complex data infrastructure has been an afterthought. In the build versus buy dilemma, organizations (for the lack of a 'buy' option) repeatably build hard-to-manage in-house solutions. All this led us to decide on the next chapter for the company - Deep Lake.",activeloop Where does Deep Lake fit in the MLOps?,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"As numerous MLOps tools get into the market, it becomes hard for buyers to understand the landscape. We collaborated with the AI Infrastructure Alliance to craft the new MLOps blueprint that provides a clear overview across tools. 
The blueprint goes bottom-up from infrastructure to human interface and left-to-right from ingestion to development. In the blueprint, Deep Lake takes on the role of a solid data foundation.",activeloop Why we renamed Hub to Deep Lake?,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"Originally, Hub was a chunked array format that naturally evolved with version control, streaming engine, and query capabilities. Our broad open-source community - users from companies, startups, and academia were instrumental in iterating on the product. Increasingly, we found the name too generic of a descriptor (or, as one of our team members put it, 'everyone has a Hub nowadays'). Often, it would cause confusion with dataset hubs. Internally, we were calling it a 'deep lake' (or named it after the deepest lakes in the world). We were delighted to see people like A. Pinhassi also think in the same direction. Overnight, calling the tool, we're building 'deeplake' instead of 'hub', which felt just right (although our marketing department wasn't too thrilled on account of freshly-ordered swag with the Hub branding).pip3 install deeplake",activeloop The reasoning behind some of Deep Lake's architectural decisions,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"Naturally, it took a lot of thinking and iteration cycles to arrive at the way Deep Lake is architected - and here are a few of considerations we've had.",activeloop "Is there a Deep Lakehouse, and where does it come into place?",https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"The format includes versioning, and lineage is fully open-source. Query, streaming, and visualization engines are built in C++ and are closed source for the time being. Nonetheless, they are accessible via Python interface for all users. Being committed to open-source principles, we plan to open-source high-performance engines as they commoditize.",activeloop What's next for Deep Lake?,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"As Deep Lake evolves, we will continuously optimize the performance, add a custom data sampler, sub-tile queries for constructing complex datasets for the 3.1.0 release, performant TensorFlow support, and ACID transactions scheduled for the 3.2.0 release (watch our GitHub repo to stay tuned).We believe that the next step for AI research is to capture text, audio, images, and videos by large multi-modal foundational models. Just think about how many days it took to get to Dall-E and how many took from that milestone to Stable Diffusion or Make-A-Video by Meta AI. Having a solid data infrastructure is going to be a necessary condition for delivering those models into consumers' hands. As deep learning rapidly takes over traditional computational pipelines, storing datasets in a Deep Lake is becoming the new norm.You can dive right into Deep Lake (yes, we will be making endless water puns) by trying out this Getting Started with Deep Lake Colab, and checkout our new C++ dataloader and query engine (Alpha) in this Colab. Join our slack community or book an introductory call with us if you want to start the onboarding immediately.Citations The Future of Deep Learning with Deep Lake. Activeloop Hambardzumyan, Sasun, et al. 'Deep Lake: a Lakehouse for Deep Learning.' arXiv preprint arXiv:2209.10785 (2022). Deep Lake — an architectural blueprint for managing Deep Learning data at scale — part I. 
Pihnasi, A. Ofeidis, Kiedanski, & Tassiulas 'An overview of the data-loader landscape: comparative analysis.' arXic preprint arXiv:2209.13705 (2022) Mohan, Jayashree, et al. 'Analyzing and mitigating data stalls in dnn training.' arXiv preprint arXiv:2007.06775 (2020).",activeloop Does Deep Lake connect to the Modern Data Stack and MLOps tools?,https://activeloop.ai/resources/introducing-deep-lake-the-data-lake-for-deep-learning/,"The Deep Lake Airbyte destination allows ingesting a dataset from vast amounts of data sources. On the MLOps side, we have been collaborating with W&Bs, Heartex LabelStudio, Sama, CleanLab, AimStack, and Anyscale Ray to provide seamless integrations, which we are going to release in subsequent blog posts.",activeloop What is image augmentation?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Image augmentation is a technique in computer vision to supplement the dataset with artificial variations of existing images. The goal of image augmentation is to prevent overfitting, increase the diversity of the data, and make the model more robust to different types of data. Augmentation can be conducted with transformations such as rotation, scaling, flipping, cropping, or adding noise to the images. The newly synthesized images are used to train the machine learning models, leading to better generalization and improved performance on unseen data.For example, consider a deep learning model trained to recognize handwritten digits. Suppose, like in the MNIST dataset, the training data consists of only 28x28 pixel grayscale images of numbers centered in the picture. In that case, the model may only be able to recognize digits that appear in a similar fashion. The training data can be augmented by rotating, scaling, skewing the images, adding random noise, or blurring to simulate different lighting conditions to reduce the likelihood of this happening. This helps the ML model be more robust and recognize handwritten digits in various formats. That is why, specifically for the MNIST dataset, a handful of enhanced versions of MNIST alternatives like EMNIST or NOT MNIST were introduced.",activeloop Examples of image augmentation include:,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,Flipping the image Random cropping Reducing the contrast or brightness Shearing the image Image augmentation is also widely applied when the amount of training data is not enough.,activeloop What is image enhancement?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Image enhancement encompasses a wide range of techniques aimed at improving the quality and visual appeal of an image. It involves the manipulation of either the entire image or specific attributes within it. For example, adjusting the contrast and brightness can bring out details and make the image more vibrant, while fine-tuning the range of the RGB color pattern can enhance the color accuracy and overall tonality.The nature and extent of these image modifications can vary significantly depending on the specific goal of the enhancement. Image enhancement is, to some extent, a subjective process as the desired outcome heavily relies on the intended purpose of the image. For instance, in the realm of medical imaging, the objective may be to highlight and emphasize specific structures or anomalies within the body. 
In contrast, when it comes to headshot photography, the primary focus might be on accentuating facial features while fine-tuning brightness and contrast levels to achieve a pleasing aesthetic. Traditionally, image enhancement has been accomplished through the use of specialized image editing software like Photoshop and Affinity. However, advancements in generative AI technology have opened up new possibilities. Innovative products such as Midjourney, LAION's Stable Diffusion model, DreamStudio, and Adobe's Firefly have emerged, enabling image enhancement with the assistance of generative AI models. While these AI-based approaches are gaining traction, the predominant methods of image enhancement currently involve programmatically manipulating images using libraries and tools such as Pillow, OpenCV, and machine learning algorithms. Moreover, image enhancement technology has proven to be invaluable in the field of medical imaging. By applying enhancement techniques to radiographs, CT scans, and other medical images, the aim is to facilitate the interpretation of results by medical professionals. These enhancements can improve the visibility of critical structures, highlight anomalies or abnormalities, and enhance the overall diagnostic accuracy, ultimately aiding in better patient care.",activeloop What is the difference between image enhancement and image augmentation?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"While image enhancement is mainly focused on changing the visual appeal of a single image, image augmentation aims to generate additional image files to expose a deep learning model to more aspects of the same image.",activeloop What are some examples of image enhancement?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Histogram Equalization: Histogram equalization is a method of image enhancement that stretches the contrast of an image. In code, this is most often achieved using the OpenCV or Pillow libraries with the following functions: cv2.equalizeHist() or PIL.ImageOps.equalize(). Gamma Correction: Gamma correction is used to brighten or darken an image. In code, this can be accomplished easily using the OpenCV or Pillow libraries with the following function calls: cv2.LUT() or PIL.ImageEnhance.Brightness(). Contrast Stretching: Contrast stretching is a technique that stretches the contrast of an image in order to increase its visibility. In code, contrast stretching is most often done using the OpenCV or Pillow libraries with the following functions: cv2.normalize() or PIL.ImageOps.autocontrast(). Sharpening: Sharpening is a technique used to enhance certain features in an image. In code, this can be achieved using the OpenCV or Pillow libraries with the following functions: cv2.filter2D() or PIL.ImageEnhance.Sharpness(). Noise Reduction: Noise reduction is an image enhancement technique used to reduce the amount of noise in an image. In code, this can be accomplished using the OpenCV or Pillow libraries with the following functions: cv2.blur() or PIL.ImageFilter.GaussianBlur(). Image Dehazing: Image dehazing is a common image enhancement technique used to reduce the amount of haze in an image.
In code, this can be accomplished using the OpenCV or Pillow libraries with the following functions: cv2.bilateralFilter() or PIL.ImageTweak.Dehaze().",activeloop Joint image filtering,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,Image filtering with guidance signals is known as joint or guided filtering. Joint image filtering involves transferring critical structural details from the guidance image to the target image. It aims at enhancing the target image while not passing along structures that did not exist in the image. It is typically applied in various computer vision tasks such as: Structure-texture separation. Joint upsampling Cross-modality noise reduction Depth map enhancement The Joint Image Filtering with Deep Convolutional Networks method proposes a CNN network with three-sub networks and skip connections. The first two sub-networks are responsible for extracting informative features from the guidance and input image. These features are concatenated and used as features for the third sub-network. Skip connections are used so that the network learns to predict residuals between the target image and the ground truth output. The parameters of the three sub-networks are updated simultaneously during training.,activeloop Changing colors/recoloring,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Colorization is commonly applied in changing black and white images to colored images. This can be achieved using photo editing tools or neural networks. It is applicable in photography, where one wants to color old images and videos.Coloring images manually is a cumbersome and expensive process that takes long. Fortunately, image recoloring can be automated using deep learning. This is done using a Convolutional Neural Network that takes in an input image and predicts the colors to fill the image.Image recoloring can also be applied to colored images where you want to change the colors in an image. Recoloring images can be used to achieve various artistic goals. Recoloring is also applied in photography to ensure an image looks professional. The example below shows recoloring a flower to purple, green, red, and orange using Generative Adversarial Networks (GANs).",activeloop Image shifting,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Image shifting involves shifting the pixels in an image to new locations. For example, you can shift an image vertically or horizontally. When shifting an image, all the image's pixels are moved to a new position while the dimensions of the image are maintained.",activeloop Image denoising,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Denoising convolutional neural networks (DnCNNs) is a technique for reducing image noise using feedforward neural networks. It uses residual learning and batch normalization to speed up training and improve denoising performance. The method tackles denoising with unknown levels of Gaussian noise –– blind Gaussian denoising. The first image below shows a noisy image, while the second one shows the result generated by DnCNNs.",activeloop Advanced examples of image enhancement,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Image quality enhancement techniques such as Super Resolution and Low Light Image Enhancement are used to recover as much detail as possible from low-resolution images. 
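As a point of reference before the deep-learning methods discussed next, the classical techniques listed earlier (histogram equalization and gamma correction) already make a reasonable low-light baseline. A minimal OpenCV sketch, with a hypothetical input file name, might look like this:

```python
import cv2
import numpy as np

# Hypothetical input file; any low-light RGB image works.
img = cv2.imread('low_light.png')

# Baseline 1: histogram equalization on the luminance channel only, to avoid color shifts.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

# Baseline 2: gamma correction via a lookup table (gamma < 1 brightens the image).
gamma = 0.5
lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)]).astype('uint8')
brightened = cv2.LUT(img, lut)

cv2.imwrite('equalized.png', equalized)
cv2.imwrite('gamma_corrected.png', brightened)
```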
Accurate Image Super-Resolution Using Very Deep Convolutional Networks and the Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement (Zero-DCE) utilize both traditional methods and deep learning approaches such as AI upscaling and Zero-DCE respectively to reduce noise, increase contrast and brightness, allowing them to be used in applications such as security cameras and object detection. LIME: A Method for Low-Light Image Enhancement also solves this problem by creating an illumination map from the R, G, and B channels and does not require paired or unpaired data, making it less prone to overfitting and to be used in various applications such as security cameras, object detection, and scene understanding.",activeloop Image demosaicing,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Color image demosaicing involves interpolating missing color values using nearby pixels in raw images captured by cameras. The images are typically captured with 12 bits per pixel, and Color Filter Arrays (CFA) are placed in front of the camera's image sensor array. CFAs allow different wavelengths of light to pass through the camera, and one common type is the Bayer pattern, which captures information at red, green, and blue wavelengths.The output of the Bayer pattern is a mosaiced image, which needs to be converted to a standard RGB image through the process of color image demosaicing. During image capture, a camera runs a demosaicing algorithm to generate the full RGB image. Bilinear interpolation is a commonly used method in demosaicing, where missing pixels are assumed to be similar to adjacent ones, and the missing values are replaced by the average of neighboring pixels. However, demosaicing itself does not remove noise from the image, so additional denoising may be required, either through a separate denoising algorithm or by using an algorithm that combines denoising and demosaicing.Traditional demosaicking methods include bilinear interpolation and Malvar interpolation. In addition to these conventional techniques, machine learning methods can also be applied to demosaicing. Demosaicing is similar to super resolution in that both involve filling in missing pixel information, so super-resolution techniques can be used in demosaicing as well. Some machine learning methods suitable for image demosaicing are k-Nearest Neighbors, Support Vector Regression, and Super-Resolution Convolutional Neural Network.Nearest neighbor Bayer image demosaicing can be performed in the following steps: Split the bayer-pattern image into three color channels. Fill in missing pixel information for each color channel using bilinear interpolation. Merge the interpolation result with the detailed information predicted by the k-Nearest Neighbors model.",activeloop Watermark Removal,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Adding watermarks to images serves various purposes, such as preventing unauthorized copying or reproduction. However, watermarks can sometimes obscure details in the image. In such cases, it becomes necessary to remove the watermark to reveal those hidden details.",activeloop Defading,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Defading is a process used to recover information from faint or degraded images. For instance, the ink on old documents can wear off over time, making the text difficult to read. 
Fading can also occur due to overexposure during the digitization of documents. Defading methods aim to restore a more visible and legible version of the image by mitigating the effects of fading. UnblurUnblur is the technique employed to remove blur from an image, enhancing its clarity. Blur in images can be caused by subject movement during capture, camera focus issues, or camera shake. Various methods can be used to deblur images, including prior-based methods that estimate blur kernels and parameters, as well as learning-based methods that utilize deep learning algorithms to learn deblurring models.",activeloop Contrast-Limited Adaptive Histogram Equalization (CLAHE),https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"CLAHE is a method for contrast enhancement that works on small regions or tiles of an image. It improves local contrast, which is particularly useful in areas such as microscopic imaging, X-ray imaging, medical image analysis, and high-definition television (HDTV). CLAHE utilizes parameters like the number of tiles and clip limit to control noise amplification and enhance visibility in various applications.",activeloop Histogram Matching,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Histogram matching, also known as histogram specification, is an image processing technique used to generate an image based on a specified histogram. It involves obtaining histograms for both a reference image and a target image, computing cumulative distribution functions, and applying a histogram matching function to each pixel of the reference image. Histogram matching is employed to normalize images taken under different conditions and can be performed using libraries like skimage, OpenCV, or deep learning techniques.",activeloop Wiener Filter,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Wiener filter is an image processing technique that removes additive noise and inversely deblurs an image. It outperforms the inverse filter by accounting for both degradation functions and the statistical properties of noise, resulting in a better restoration of the original image. Wiener filtering minimizes mean squared error in noise smoothing and inverse filtering, providing a linear estimation of the original image.",activeloop Median Filter,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Median filter is a non-linear image processing technique used to remove noise while preserving edges. It replaces each pixel in an image with the median value of neighboring pixels within a window or neighborhood. The median filter is particularly effective in removing salt and pepper noise, producing better results than the mean filter due to its robustness and preservation of edge information.",activeloop Binarization,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Binarization is a process that separates an image into foreground and background, resulting in a binary image with black-and-white regions. It is commonly used to eliminate degradations like noise and is crucial for tasks such as optical character recognition and document layout analysis. 
Binarization can be performed using different approaches, including global thresholding and local thresholding based on neighboring pixels.",activeloop Linear Contrast Enhancement,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Linear contrast enhancement, also known as contrast stretching, is a method for stretching the pixel values in an image to a new distribution. It can be achieved through different approaches such as Min-Max Linear Contrast Stretch, Percentage Linear Contrast Stretch, or Piecewise Linear Contrast Stretch. Linear contrast enhancement improves the visibility of attributes in an image by expanding the range of contrast.",activeloop Using Deeplake + Pillow for image enhancement,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Finding suitable datasets for image enhancement tasks can be challenging. However, several datasets are available for specific purposes, such as the LOL (Low-Light) dataset for low-light image enhancement, ARID dataset for activity recognition in videos, Bickley diary dataset containing images of a diary, and NoisyOffice dataset with noisy images of an office. These datasets can be used for training machine learning models or evaluating image enhancement algorithms.The LOL (Low-Light) dataset can be easily downloaded using deeplake from Activeloop.1import deeplake 2ds = deeplake.load('hub://activeloop/lowlight-train') 3CopyNext, let's look at a summary of the data.ds.summary()Activeloop also makes it easy to visualize the data using the visualize function.ds.visualize()With some datasets at hand and having learned various image enhancement techniques, let's now look at how to implement them in Python using Pillow and OpenCV.First, copy the dataset from Activeloop to gain write access. This is done using the deepcopy function, which expects: The data source The dataset destination, that is, your personal Activeloop account Your API token The function signature is shown below:1ds = deeplake.deepcopy('hub://activeloop/lowlight-train', dest='hub://mwitiderrick/lowlight-data', token='YOUR_TOKEN')` 2 3CopyNext, load your version of the low-light dataset. Passing your API token and read_only=False gives you write access to the dataset.1 2ds = deeplake.load('hub://mwitiderrick/lowlight-data', token='YOUR_TOKEN',read_only=False)` 3Copy",activeloop Make images sharper with Pillow,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Let's start by looking at how to make all the images sharper—checkout to a different branch. You will then commit the changes you make to this branch and later merge it to main if you want to.1from PIL import Image, ImageEnhance, ImageFilter 2import numpy as np 3from PIL import Image 4import numpy as np 5 6ds.checkout('sharp_image', create = True) 7CopyNext, loop through the images, make them sharp using Pillow, and commit the changes.1with ds: 2 for i, sample in enumerate(ds): 3 image = ds['highlight_images'][i].numpy() 4 im = Image.fromarray(image).convert('RGB') 5 enhancer = ImageEnhance.Sharpness(im) 6 ds.highlight_images[i] = np.asarray(enhancer.enhance(10.0)) 7 sharpen_commit_id = ds.commit('Sharpen the images') 8Copy Head back to your Activeloop account to see the new images. 
Unsharp Mask Filtering,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Unsharp mask filtering is an image processing technique used to enhance image sharpness and reveal details that may not be clear in the original image. It involves removing low-frequency spatial information by creating an unsharp mask through the application of a Gaussian low-pass filter. Combining the unsharp mask with the original image results in a less blurry image, although it may increase noise.",activeloop Using Deep Lake + OpenCV for image enhancement,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"OpenCV is another Python package for processing images. Let's apply the median filter to all images using OpenCV. To do that, call the medianBlur method, which expects the image and the size of the kernel.

import cv2

ds.checkout('median_filter', create=True)

with ds:
    for i, sample in enumerate(ds):
        image = ds['highlight_images'][i].numpy()
        ds.highlight_images[i] = cv2.medianBlur(image, 11)
    median_commit_id = ds.commit('Median filter')
",activeloop Find edges in an image,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Finally, let's look at how you can use Pillow to find edges in the images. Create a branch for the new images and commit the changes there.

ds.checkout('find_edges_image', create=True)

with ds:
    for i, sample in enumerate(ds):
        image = ds['highlight_images'][i].numpy()
        im = Image.fromarray(image).convert('RGB')
        ds.highlight_images[i] = np.asarray(im.filter(ImageFilter.FIND_EDGES))
    edges_commit_id = ds.commit('find the edges')
",activeloop Increase the brightness of images,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"You can increase the brightness of all the images in a similar way. The steps are: create a new branch for the new images, brighten the images, and commit the changes. This can be done with the following code.

ds.checkout('bright_image', create=True)

with ds:
    for i, sample in enumerate(ds):
        image = ds['highlight_images'][i].numpy()
        im = Image.fromarray(image).convert('RGB')
        enhancer = ImageEnhance.Brightness(im)
        ds.highlight_images[i] = np.asarray(enhancer.enhance(3))
    bright_commit_id = ds.commit('increase brightness')

Switch to the 'bright_image' branch on Activeloop to see the brightened images.",activeloop Sharpening images with OpenCV,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Sharpening an image in OpenCV involves creating a sharpening kernel and passing the kernel and the image to the filter2D function.

ds.checkout('cv_sharpen', create=True)

with ds:
    sharp_kernel = np.array([[-1, -1, -1], [-1, 10, -1], [-1, -1, -1]])
    for i, sample in enumerate(ds):
        image = ds['highlight_images'][i].numpy()
        ds.highlight_images[i] = cv2.filter2D(image, -1, sharp_kernel)
    sharp_commit_id = ds.commit('Sharpen image')
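As an alternative to a hand-built kernel, the unsharp mask filtering described earlier is available out of the box in Pillow. A minimal sketch, assuming the same PIL-based image handling as in the loops above:

from PIL import Image, ImageFilter

im = Image.fromarray(ds['highlight_images'][0].numpy()).convert('RGB')

# radius sets the Gaussian blur used to build the mask, percent sets the strength,
# and threshold skips pixels whose local contrast is already low
sharpened = im.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

Compared with the fixed 3x3 kernel above, the unsharp mask gives finer control over how aggressively edges are boosted.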
Apart from viewing the changes from the Activeloop web UI, you can also check out previous branches in your coding environment and visualize the images. For example, let's take a look at a sample image that has been sharpened.

Image.fromarray(ds.highlight_images[1].numpy())

Check out the main branch and compare this image with the original one.

ds.checkout('main')
Image.fromarray(ds.highlight_images[1].numpy())

You can clearly see the difference between the original and the modified image.",activeloop Deep learning for image enhancement,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Various deep learning papers have been proposed for enhancing images. They include: Learning Enriched Features for Real Image Restoration and Enhancement; Uformer: A General U-Shaped Transformer for Image Restoration; Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation; EnlightenGAN: Deep Light Enhancement without Paired Supervision; Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement; and Accurate Image Super-Resolution Using Very Deep Convolutional Networks. The implementation and pre-trained models for some of these papers are available on GitHub. For example, using torch-enhance, you can perform super-resolution based on the last mentioned paper. To perform super-resolution with torch-enhance, you fetch the pre-trained model, initialize it with the desired configuration (in this case, doubling the resolution of the image), apply the model to the image, convert the output to an image, and save or display it. And here is the code to do this.

ds.checkout('super_resolution', create=True)

import torch
import torch_enhance
import torchvision
import torchvision.transforms as T
from PIL import Image

i = 234
image = ds['highlight_images'][i].numpy()

lr = torchvision.transforms.functional.pil_to_tensor(Image.fromarray(image).convert('RGB')) / 255.0
model = torch_enhance.models.SRResNet(scale_factor=2, channels=3)
sr = model(lr.unsqueeze(0))

transform = T.ToPILImage()
img = transform(sr.squeeze())
ds.highlight_images[i] = np.asarray(img)

super_commit_id = ds.commit('super resolution')

Inspecting the image shows that the size has increased and the dimensions have doubled.",activeloop Blurring images with OpenCV,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Blurring an image in OpenCV is done by creating a blurring kernel and passing it, together with the image, to the filter2D function.

ds.checkout('blurring', create=True)

with ds:
    kernel_4x4 = np.ones((4, 4), np.float32) / 16
    for i, sample in enumerate(ds):
        image = ds['highlight_images'][i].numpy()
        ds.highlight_images[i] = cv2.filter2D(image, -1, kernel_4x4)
    blurred_commit_id = ds.commit('blur images')
    ds.checkout('binary', create=True)
",activeloop Final thoughts,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"There are many image enhancement techniques. The choice of method depends on the subsequent task, such as object detection or optical character recognition. Generally, enhancing images increases the probability of success in the tasks that follow. You have now added these techniques and their implementation to your machine learning repertoire. More specifically, you have learned: What is image enhancement? Image enhancement techniques in Python. How to implement image enhancement in Pillow. Image enhancement in OpenCV.
How to perform image enhancement in PyTorch.",activeloop What are the benefits of using these image enhancement techniques?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"These techniques help reduce noise and increase contrast and brightness in images, making them suitable for various applications such as security cameras and object detection.",activeloop Image enhancement metrics and loss functions,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,Several losses and metrics are used when training machine learning models for image enhancement. Common losses include perceptual loss and mean squared error. Metrics used to track the performance of image enhancement methods include mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity Index (SSIM),activeloop What are some techniques used for image quality enhancement?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,Super Resolution and Low Light Image Enhancement are some techniques used for improving image quality. These methods aim to recover as much detail as possible from low-resolution images.,activeloop Does LIME require paired or unpaired data for image enhancement?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"No, LIME does not require paired or unpaired data for image enhancement. This feature makes it less prone to overfitting and versatile for various applications including security cameras, object detection, and scene understanding.",activeloop What methods do Accurate Image Super-Resolution and Zero-DCE use for image enhancement?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Accurate Image Super-Resolution uses Very Deep Convolutional Networks, while Zero-DCE (Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement) leverages both traditional methods and deep learning approaches like AI upscaling and Zero-DCE.",activeloop How does the LIME technique enhance low-light images?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"LIME (A Method for Low-Light Image Enhancement) enhances low-light images by creating an illumination map from the Red, Green, and Blue channels of the image.",activeloop What are some practical applications of these image enhancement techniques?,https://activeloop.ai/resources/image-enhancement-in-machine-learning-the-ultimate-guide/,"Some practical applications of these image enhancement techniques include security cameras, object detection, and scene understanding among others.",activeloop
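To make the evaluation metrics mentioned above concrete, PSNR and SSIM can be computed with scikit-image. A minimal sketch, assuming two aligned uint8 RGB arrays (the original image and its enhanced version):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def enhancement_report(original: np.ndarray, enhanced: np.ndarray) -> dict:
    # Higher PSNR and SSIM mean the enhanced image stays closer to the reference
    return {
        'psnr': peak_signal_noise_ratio(original, enhanced, data_range=255),
        'ssim': structural_similarity(original, enhanced, channel_axis=2, data_range=255),
    }

When there is no clean reference image, as in many real low-light datasets, no-reference quality measures or human review have to stand in for these metrics.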
How to monitor models with Activeloop & manot,https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"Well done. Your model is one of the few that finally made it into production. But, in the words of Lori Greiner from Shark Tank, why? Because apart from Words of Affirmation and Acts of Service, your models require… monitoring (some may say this stands for Quality Time).",activeloop Why should you monitor your ML model performance?,https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"If you're still scratching your head, these are some reasons why models require monitoring. As your data evolves, your model performance may decay (as some companies found out the hard way by over-relying on data collected during pandemic lockdowns). After deploying a model to production, it's crucial to continue monitoring its performance to ensure that it's functioning correctly and delivering accurate results. This model monitoring process helps to identify and resolve any issues with the model or the system that serves it before they cause a negative business impact. Like losing $440M because the model picked up the wrong cues from redditors' sentiment kind of negative. Finally, you want to continuously monitor your models because it helps maintain transparency in the prediction process for all stakeholders involved and serves as a path for continuous improvement.",activeloop "Our new integration with manot, an ML model monitoring tool",https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"Today, we're thrilled to announce that you can monitor the performance of your models trained on your Deep Lake datasets, thanks to the partnership between Activeloop and manot. manot is a computer vision model observability platform that enables AI teams to improve their feedback loop by gaining insights into where their model may be performing poorly. This integration will bring tons of value across various applications, such as surveillance ML, autonomous vehicles & robotics, or image search. Before we present it in practice, let's first review what challenges you might encounter while monitoring ML systems in production.",activeloop Why computer vision models fail in the production environment,https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"As mentioned previously, the fact that your computer vision model performs well on the test set doesn't mean it will do so on real-world data. Why may a computer vision model fail in production? Data Shift: Naturally, the real-world data can differ from the limited training data (i.e., its distribution may differ). If the distribution of data in production varies from the distribution of training data, the model's accuracy could worsen. This typically occurs when the model is trained on data from a different geographic location or from a significantly different time of the year (e.g., buyer behavior during Black Friday is different from off-peak buyer behavior). When the distribution of data changes between the training phase and the production phase, we call it a covariate shift or distribution shift. Data Drift: In contrast to data shift, data drift occurs when the distribution of data changes over time in the production phase (e.g., a product's popularity fading away over time). What causes data drift? A couple of factors, such as changes in the underlying process being modeled (e.g., a switch of product packaging when doing ML in manufacturing), changes in the environment (buyer behavior due to macroeconomic factors), or changes in the data collection process (upgrading the cameras that are used for facial recognition model training). Overfitting: Overfitting typically occurs when your model is fit too closely to the training data. In effect, the model will not behave well when it meets real-world data, since there can be huge gaps between the real world and the model's training dataset. As a result, the model in production does not generalize well and performs poorly. Poor hyperparameter tuning: Poor hyperparameter tuning may cause the model to perform unexpectedly in production because the model might not be well suited to the specific use case at hand.
Hardware limitations: The computational power & memory available in production may differ from those used during model development and testing, leading to performance bottlenecks. A typical example of this is training the model centrally in the cloud and then expecting it to perform well on edge devices (Raspberry Pis? More like raspberry pies…) Lack of monitoring & maintenance: Without proper monitoring practices, the model's accuracy will decay after deployment. This can manifest itself in outdated models, unpatched vulnerabilities, and more. This is why it is vital to incorporate model performance monitoring tools to detect the scenarios that the model may not be sufficiently exposed to during the training phase.",activeloop Model monitoring challenges at the data input level,https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"There are several challenges that teams may face when monitoring ML models at the data input level. These include: Data quality: Providing the quality of the input data is crucial for the model's accuracy. As it has been proven time an again, poor-quality data leads to incorrect predictions and inaccurate results. Lack of single source of truth for ML data: You may be using both private and public data for your training purposes, or storing your data locally or in different clouds. These disparate and unreliable data sources may lead to including outdated or otherwise faulty samples in your dataset. Thus the flexibility to handle a wide variety of data types and formats and the ability to integrate data from multiple sources is integral to a project's success. Deep Lake recently solved an issue like this for Earthshot Labs, who managed to build a tree segmentation app and 5x their speed with 4x fewer resources required. Unclear or evolving data schema: Your data schema evolves as your data structure requirements were not agreed upon before the project started, so now you're comparing apples to oranges while evaluating model performance. Alternatively, the data science team doesn't use a unified data format for their data, which complicates all things downstream (if only there were a unified data format better than legacy stuff like HDF5, built specifically for ML? oh wait, there is!). Opaque or non-existent metadata for your production data workflow: you have done a gazillion experiments but can't tell one from another. Nuff said. Start ensuring better data lineage. Data preprocessing issues: Preprocessing the input data to be in the correct format for the model is a crucial step that can introduce errors and bias if not performed correctly. Data heterogeneity: The input data can be highly heterogeneous, coming from different sources with different formats, making it tough to ensure the data is consistent and suitable for use by the model. Data volume: The sheer volume of input data can make it challenging to monitor and detect issues with the data in real-time, unless you're connecting your data to ML models with a performant data streaming setup. Data privacy: Protecting sensitive data is a critical concern when monitoring data inputs (especially when doing ML in medicine, or applying machine learning in safety & security), as this information could contain personally identifiable or confidential information. In sum, to effectively monitor data inputs, organizations must have robust data management processes in place, including data quality checks, preprocessing and validation, and privacy protection measures. 
They must also have the necessary technical infrastructure and expertise to handle the volume and complexity of the data.",activeloop Deep Lake & Model Performance Monitoring,https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"Model performance monitoring involves evaluating how well a machine learning model is able to make accurate predictions based on new data (for visual data, that could be images or video). Model performance monitoring is especially important when working with unstructured data to identify outlier scenarios that the model has not been sufficiently exposed to during training, which can cause its accuracy to decrease. Deep Lake can be utilized in model performance monitoring as well. As one of Activeloop's long-time users, Arseniy Gorin, Head of the ML team at Ubenwa.ai, mentioned in our chat recently: 'It is important to be able to launch and also kill a training process immediately because if something goes wrong, data scientists can stop it. So that's why you start tracking it immediately. You look at the losses, and if something does go wrong, you can just kill it, inspect the data if that's the bottleneck, fix it, and relaunch it again as opposed to, oh, I'm waiting, whilst we're paying for this training process that is already doomed.' As a result, you can detect problems even before deploying the model in the real world, allowing teams to catch data drift and deploy a model with high overall accuracy.",activeloop Computer vision model monitoring with manot,https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"What is manot? manot is a platform for observing computer vision models. It assesses the performance of models during both pre- and post-production development. The system identifies outliers the model encounters and suggests similar data samples from sources such as Deep Lake datasets. When added to the training dataset, these improve the model's accuracy. In effect, computer vision teams streamline their training data improvement process while saving time and resources.",activeloop Demo of manot and Activeloop's Integration,https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"We start off by installing manot and Deep Lake. Once installed, import all of the necessary dependencies as shown in the code below and define your Deep Lake and manot tokens.

!pip3 install -U manot
!pip3 install -U deeplake
!pip3 install --upgrade jupyter ipywidgets

from IPython.display import clear_output; clear_output()

import deeplake
from manot import manotAI
import glob, os, tqdm

# Token initialization
os.environ['DEEPLAKE_TOKEN'] = '...'
os.environ['MANOT_TOKEN'] = '...'

deeplake_token = os.environ['DEEPLAKE_TOKEN']
manot_token = os.environ['MANOT_TOKEN']

Next, we can move on to creating our Deep Lake dataset and defining our tensors.

ds = deeplake.empty('hub://manot/manot-activeloop-demo', overwrite=True, token=deeplake_token)
with ds:
    ds.create_tensor('images', htype='image', sample_compression='jpg')
    ds.create_tensor('bbox', htype='bbox')
    ds.create_tensor('labels')
    ds.create_tensor('predicted_bbox', htype='bbox')
    ds.create_tensor('predicted_labels')
    ds.create_tensor('predicted_score')

Once our tensors are ready and the dataset is set up, we need to populate the tensors with the necessary data.

# images_list and path (pointing to the local images and their YOLO-format labels and
# detections) are assumed to be defined earlier in the full notebook
with ds:
    for image_name in tqdm.tqdm(images_list):
        file_name = image_name.rsplit('/')[-1].rsplit('.')[0]

        # parse detections
        with open(path + '/detections/' + file_name + '.txt') as file:
            lines = file.readlines()
            predicted_labels = [int(line.split('\n')[0].split(' ')[0]) for line in lines]
            predicted_scores = [float(line.split('\n')[0].split(' ')[-1]) for line in lines]
            predicted_bboxes = [list(map(lambda x: float(x), line.split('\n')[0].split(' ')[1:-1])) for line in lines]

        # parse labels
        with open(path + '/labels/' + file_name + '.txt') as file:
            lines = file.readlines()
            labels = [int(line.split('\n')[0].split(' ')[0]) for line in lines]
            bboxes = [list(map(lambda x: float(x), line.split('\n')[0].split(' ')[1:])) for line in lines]

        # append to the dataset
        ds.append({
            'images': deeplake.read(image_name),
            'bbox': bboxes,
            'labels': labels,
            'predicted_bbox': predicted_bboxes,
            'predicted_score': predicted_scores,
            'predicted_labels': predicted_labels
        })

# if you want to visualize the dataset with Deep Lake, you can do so by running the following command
ds.visualize()

We are now ready to pass the data to manot in order to detect outliers. In this use case, we will be observing the performance of YOLOv5s (small) on the VisDrone dataset (click here to see how to upload data in YOLOv5 format to Deep Lake). For the initial setup, we will define the paths of the images, the ground truth data (labels), and the detections. The format of the metadata refers to the format of the labels (bounding boxes); in the example below we are using XYX2Y2. Lastly, we will provide the classes for our model. Note that the data can be stored either locally or on a service such as Amazon S3. Once you have completed the setup process, manot will return a specific id number for the setup",activeloop Demo of manot and Activeloop's Integration,https://activeloop.ai/resources/how-to-monitor-models-in-production-with-activeloop-manot/,"We will use this in the next step for retrieving insights from manot.

# manot here is the client object created from manotAI (its construction is not shown in this snippet)
setup_info = manot.setup(
    data_provider='deeplake',
    arguments={
        'name': 'manot-activeloop',
        'detections_metadata_format': 'xyx2y2',  # must be one of 'xyx2y2', 'xywh', or 'cxcywh'
        'classes_txt_path': '',
        'deeplake_token': deeplake_token,
        'data_set': 'manot/manot-activeloop-demo',
        'detections_boxes_key': 'predicted_bbox',
        'detections_labels_key': 'predicted_labels',
        'detections_score_key': 'predicted_score',
        'ground_truths_boxes_key': 'bbox',
        'ground_truths_labels_key': 'labels',
        'classes': ['person'],
    }
)

Next, let's get insights from manot on where the model is performing poorly. We will call the insight method and pass a number of parameters to define the model, data stream, and dataset that we will be observing the model's performance on.
The setup_id refers to the id that was returned after initializing the setup method in the earlier step. Our data provider for this example is Activeloop's Deep Lake, a data lake designed for deep learning applications. The computer vision model we will be using is YOLOv5s on the VisDrone dataset, both of which are defined using the weight_name and data_path parameters respectively.

insight_info = manot.insight(
    name='manot-activeloop',
    setup_id=setup_info['id'],
    data_path='activeloop/visdrone-det-val',
    data_provider='deeplake',
    deeplake_token=deeplake_token,
)
insight_info

Once the data has completed processing, manot will return an id value for the insights, and as seen below, using the visualize_data_set method, we can see a grid of outliers that manot has identified as areas where the model will perform poorly. These are the insights that we can now extract, label, and add to our training dataset in order to improve the performance of our model.

manot.visualize_data_set(insight_info['data_set']['id'], deeplake_token)

You can try out the demo here. Moreover, you can run the Google Colab for Activeloop + manot yourself.",activeloop Introduction: Evaluating Large Language Models for Building AI-enabled Apps,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Let's skip the cliché opener about how AI is changing faster than ever and how Large Language Models (LLMs) are revolutionizing everything from building chat-with-any-data apps to medical imaging. It is, and they are. Instead, let's dive headfirst into a common problem that anyone will have when working with LLMs for their project: how to pick the right LLM for the task at hand. We're going to compare four leading LLMs from three top AI companies. We aim to explore the LLMs' strengths and weaknesses and provide a framework for comparing any models you choose for any task you want to perform. So, which models are we examining, and why have we chosen them? OpenAI's GPT-3.5: GPT-3.5 (today known as ChatGPT) is one of the first widely popular LLMs and is used both in a web form and API form by people all over the globe. We also want to illustrate the evolution within this series of models. By placing GPT-3.5 and its successor side by side, we aim to highlight the improvements and modifications made from one version to the next. OpenAI's GPT-4: OpenAI's latest model, GPT-4, holds some mysteries, such as its actual size and the exact data used in its training. Nonetheless, it is widely recognized as one of the top-performing LLMs, if not the best. Anthropic's Claude: Claude AI is Google-funded Anthropic's answer to ChatGPT by OpenAI. Claude stands out because of its unique training approach. It employs Reinforcement Learning from AI Feedback (RLAIF) instead of the more commonly used Reinforcement Learning from Human Feedback (RLHF). This divergence in training methodologies allowed Anthropic to train a model comparable to modern GPT models with fewer constraints. Their 'Constitutional AI' approach is considered a first of its kind, allowing Claude to consider dynamic instruction and change its outputs to align with the task seamlessly. Although it didn't impact our coverage of the model performance, we have to thank Claude's team for granting Activeloop an early preview. Cohere's Command series of models: Cohere is making the cut because of their proven potential when well-guided through prompt engineering and their cost-effective nature.
Moreover, including Cohere allows us to discuss potential challenges users may encounter with specific LLMs. When comparing any set of models, we also have to consider our task as that will influence how we judge the output of any LLM and therefore affects how we compare quality across models. We are going to walk through four examples: Text Classification - Detecting offensive language: This use case involves evaluating the models' performance detecting offensive language. It is a nuanced classification task where the models must identify offensive content without context. Creative Content Generation with rules/personas: We will ask our models to write email responses while adhering to specific rules or personas",activeloop How to Interpret Results in Vellum's Playground,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"We'll be making all the prompt comparisons in Vellum's Playground. Vellum is a tool for deploying production-grade Large Language Model features, providing resources for prompt creation, semantic search, version management, testing, and monitoring. It's compatible with all key LLM providers.Before jumping into the evaluation details, this screenshot has a quick primer on interpreting the UI.With this clear, let's get into the first example!",activeloop Text Classification - Offensive Language Detection,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Still, it also tended to end up as a team discussion on what kinds of texts are offensive and whether we truly 'tricked' the LLM or not depending on our definitions of the task, which speaks once more to the subjectivity of the task at hand.When I told our LLMs that I am the survivor of a Confederate attack, I was able to inject an additional instruction into the LLM, and some of the models (Mainly Claude, GPT 3.5, and Cohere) were more likely to change their answer to be more inline with the inputted direction . After all that, GPT-4 performed the best on this task regarding accuracy in our test cases.",activeloop Introduction: Evaluating Large Language Models for Building AI-enabled Apps,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"The goal is to see how well the models can follow instructions and adopt different voices or personas in their responses . Question Answering and Logical Reasoning: The objective is to test the models' mathematical and logical abilities. We will use examples from the Gsm8k dataset, which consists of math word problems, to evaluate the models' performance in solving mathematical problems and understanding logical reasoning. Code Generation: We'll ask our Large Language Models to generate Python code snippets from English-language descriptions of programming tasks, testing their abilities to comprehend these instructions and accurately translate them into code. This task will challenge the models' understanding of natural language and their grasp of programming language syntax and logic. To provide a framework for comparing the performance of our LLMs on these different tasks, we will consider three main metrics for performance/quality: Accuracy: This performance measure refers to how correctly a model responds to a given prompt. In the case of classification, it would be the percentage of instances where the model's classification matches the ground truth. 
It is the ratio of problems correctly solved to the total number of problems for problem-solving. Accuracy is a measure of how often the model gets the correct answer. Semantic Text Similarity: This considers the similarity in meaning between two pieces of text, regardless of the exact wording used. This metric is standard in use cases like content generation, where the goal isn't necessarily an exact match to a predefined template but to generate text that captures the same concept, sentiment, or information as the ground truth. It's a robust measure of how well a model understands and generates content that aligns with the intended message. In our case, semantic text similarity is measured as the cosine similarity between the embeddings produced by an LLM behind the scenes. Robustness: This refers to a model's resilience in the face of varying, unexpected, or challenging inputs. It measures how well the model performs when faced with unusual prompts, nuanced tasks, or adversarial scenarios. A model with high robustness should be able to handle a wide range of tasks and scenarios without a significant drop in performance. It's important to remember that each model has unique features and advantages, and the ‘best' model will ultimately depend on your specific use case, test cases, budget, and performance requirements. The goal of this post is not to declare any of these models a 'winner' but rather to help you think about judging the quality and performance of models in a more structured way using Vellum – a developer platform for building production LLM apps.",activeloop Text Classification - Offensive Language Detection,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/," Classification is the most apparent application of LLMs with prompts that are relatively easy to engineer. When working with classification, one straightforward metric for evaluating our LLMs is exact-match accuracy, representing the percentage of instances where the LLM's classification matches the ground truth. However, classification isn't always a simple task. The final label assigned to a text may require complex reasoning or be subjective. Consider the task of offensive language detection. Offensive language detection aims to… well… detect offensive language. But this is subjective and frankly pretty nuanced, and humans will disagree about which pieces of text are offensive and which aren't. All modern LLMs have at some point been exposed to raw human-generated data from the open/closed internet and feedback on output quality either explicitly with RLHF or implicitly with RLAIF. When humans are tasked with grading LLM outputs for offensive behavior, their biases will slip in, influencing the model. This isn't necessarily detrimental, but its influence becomes evident rather quickly. Check out what happens when I ask Cohere and GPT-4 to classify a piece of text as offensive or non-offensive with no clear instruction on what I consider offensive or not:Cohere and GPT-4, without instruction on what is offensive or non-offensive, have different opinions with a zero-shot classification of a nuanced and subjective classification task. Cohere gets the answer wrong (at least according to my ground truth label), whereas GPT-4 gets it right. So does that mean that GPT-4 is better at offensive language detection? 
We cannot state yes or no without doing at least some work to guide the LLMs on what we consider offensive.If I tweak the prompt slightly to give it some guidelines on what I consider offensive or not, then Cohere suddenly is on the same page as me:This modified classification prompt gives Cohere guidance on what I consider offensive, and then I can get my content about the Confederate flag labeled correctly.There are many ways we can provide in-context learning to our LLMs, including adding clear rules and instructions as I have done, by including a few examples of offensive or non-offensive content, and by utilizing 'chain of thought prompting' for the LLM to give its reasoning first.Regarding robustness, when we attempted to create prompts designed to 'trick' the LLM into outputting a particular category, we could sway the LLMs, as seen in the following figure",activeloop Latency and Cost in large Language Model Context,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"So far, we have primarily focused on the performance of LLM outputs in our tasks. Of course, there are other practical factors to consider when evaluating and comparing these models, particularly latency and cost. Let's pause our task evaluation to consider these two critical considerations that can significantly impact the usability and feasibility of a model for certain applications. Latency is a model's time to respond to a prompt or execute a task. This is important because it affects user experience, especially in real-time or near real-time applications such as chatbots, digital assistants, and real-time content generation. We've already noticed that GPT-4 tends to have a slower response time than the other models. It's important to highlight that this latency is at the input/generated token level—meaning the time taken to generate each token — and therefore doesn't primarily depend on the specific task. This means that whether the model is classifying text, generating the content, solving math problems, or producing code, its inherent latency (for the most part) remains consistent. So if speed is a critical factor for your application, this might be a significant deciding factor. Cost is another crucial consideration. The cost of running an LLM is often measured in input and generated tokens, with the total cost being dependent on the number of tokens processed. Tokens are essentially chunks of text that the model reads or generates. A token could be as short as one character or as long as one word. For English text, a token is typically a single word or part of a word. Again, this cost is primarily a factor of the token count, not the nature of the task. So whether the model generates an email reply, detects an offensive language, or solves a math problem, the cost will depend on the number of tokens it processes and generates.",activeloop Content Generation - Email Replies,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/," Content generation is where models like GPT first gained significant attention with their seemingly effortless ability to quickly generate readable content, given a few rules and guidelines. Companies leverage LLMs to facilitate the content generation, from drafting marketing copy to summarizing meeting notes. Let's take an example we can all relate to, replying to an email. 
For example, you can provide prompts that specify a particular persona, voice, and prompt chain and observe how each model generates email replies or other types of content accordingly. The comparison can focus on the models' ability to maintain consistency, creativity, and relevancy within the given constraints. Another metric we can see here is latency - the model's speed at processing a single input. GPT-4 is noticeably slower regarding longer content generation than the other models. We will touch on this a bit more in the next section. Once GPT-4 had finished generating its answers, I was able to compare the outputs, but there lies another problem, I don't want to read all four responses to the email for each of my test examples, and an exact match accuracy would be unfair because I want the LLMs to be creative but stick to a general gist therefore, I've chosen to use semantic similarity for my comparison. Simply put, I'm relying on yet another LLM behind the scenes at Vellum that compares our LLM outputs to a ground truth output based on semantic similarity - how similar they are from a conceptual perspective rather than an exact word-to-word match. The semantic similarity measurement tool leverages natural language understanding to comprehend the underlying meaning and sentiment of the text instead of merely checking for identical wording. It can detect the key themes, significant details, and overall intent, even if the linguistic expression varies. If the output strays too far from the main idea, the score would be low, regardless of the originality or creativity of the phrasing. Conversely, the score would be high if the output accurately captures the essence of the ground truth, even if expressed uniquely.GPT-4 (left) rambled on more than I wanted and was more formal than my ground truth reply. GPT-4 also included a subject line where it wasn't necessary. At the same time, Claude (right) was closer to my sample email semantically but also included a bit of extraneous information at the top (the 'Here's a draft response'). We can attempt to prompt away the chatter/subject but choose not to, in this case, showcase how different LLMs decide to give different bits of extraneous information without explicit instruction. Semantic similarity in its traditional sense is also a technique data scientists use to retrieve relevant documents, text, and images for comparison. Deep Lake's cloud offering pairs with various providers, such as S3 and Snowflake, to make this search easy to understand and implement",activeloop "How to Balance Performance, Latency, and Cost for Large Language Models?",https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Balancing performance against practical considerations like latency and cost is essential when selecting an LLM. A model that excels in accuracy or semantic similarity but takes too long to generate responses might not be suitable for time-sensitive applications. Similarly, a fast and affordable model that doesn't meet your accuracy needs might not be the best choice for your use case. Another key factor contributing to LLM costs is the number of tokens fed into the model. One way to reduce this overhead is by only using documents or parts of the text that are relevant to the task at hand. To do this, Deep Lake can store and retrieve your text and embeddings in the same table, providing a quick and easy solution for developers to reduce spending on LLM applications. 
It should come as no surprise that the key is understanding your specific needs, constraints, and budget and choosing a model that best balances your situation. Maybe you're ok with a 5% hit on accuracy if the model is 10x cheaper and 5x faster if you have sufficient error handling/support services built into the product. By understanding these aspects, you'll be better equipped to choose the most suitable LLM for your tasks and ensure the success of your AI-driven applications. Back to the tasks!",activeloop Question Answering with Logical Reasoning,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/," While classification and content generation are more 'useful' day-to-day applications of large language models, many enthusiasts and researchers, such as myself, are more intrigued by an LLM's ability to break down and reason through a problem. An LLM's capability isn't just about finding the right answer to a simple math problem—it's about demonstrating an understanding of the problem-solving process itself. We'll use a 'chain of thought' prompting technique to assess this. This involves asking the model to solve the problem and explain its reasoning step-by-step, as a human might. This approach nudges the LLM into producing a ‘chain of thought' that showcases its grasp of the problem-solving steps. Let's take an example word problem from the dataset: 'Kendra has three more than twice as many berries as Sam. Sam has half as many berries as Martha. If Martha has 40 berries, how many berries does Kendra have?' The correct answer, 43, is relatively straightforward to compute. But how does the LLM get there? A desirable response would be something like this: Martha has 40 berries Sam has half as many berries as Martha. Half of 40 is 20. So Sam has 20 berries. Kendra has 3 more than twice as many berries as Sam. Twice 20 is 40. 40 + 3 is 43. So Kendra has 43 berries This unveils the correct answer and reveals a logical journey toward it. We present our models with a spectrum of math problems, from elementary arithmetic to intricate algebra and calculus. We assess their responses for accuracy and the depth and clarity of their problem-solving pathway. How well do they demonstrate an understanding of mathematical concepts and processes? Can they explain their reasoning clearly and logically? These are the questions we seek to answer in our comparison of LLMs. Below is a sample of results from our four models showing that 3/4 of our models seem to have the prerequisite knowledge to solve many of these questions, with Cohere being a bit lagging in quality. Because we are asking for a chain-of-thought response that doesn't technically matter as much as the final answer, we can use a regex match in the Vellum Sandbox to find the answer at the end in the defined answer format (put in the prompt). In this instance, I used 'Answer: {{answer}}' as the regex pattern to look for.GPT-4, Claude, and GPT-3.5 (left, second from the left, and third from the left, respectively) all go about explaining the solution steps differently but ultimately get the answer right. Cohere (right) struggled to get the answer right and put it in the defined format in the prompt. 
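For readers who want to replicate that kind of answer extraction outside Vellum, a regex over the model's raw completion is enough. A minimal sketch in plain Python (the response text is illustrative, reusing the berry problem from above):

import re

response = """Martha has 40 berries.
Sam has half as many, so Sam has 20 berries.
Kendra has three more than twice Sam's count: 2 * 20 + 3 = 43.
Answer: 43"""

# Look for the 'Answer: <number>' format requested in the prompt
match = re.search(r"Answer:\s*(-?\d+(?:\.\d+)?)", response)
final_answer = match.group(1) if match else None
print(final_answer)  # 43

If the model ignores the requested format, the regex simply fails to match, which itself is a useful signal about how well that model follows instructions.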
It was noticeable once again how much slower GPT-4 is compared to the other models, so in this case, even though it ended up benign, the most performant",activeloop Code Generation,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Understanding codebases with LLMs or generating novel programming code using natural language descriptions has become quite an intriguing subject. So, to test the capabilities of our LLMs, we tasked them with generating Python code snippets from English-language descriptions of programming tasks. Our LLMs will be provided with tasks in English, describing what a particular piece of Python code should do. For example, 'Create a Python function that calculates the factorial of a given number.' The objective is to evaluate how well each model can comprehend these instructions and translate them into accurate, executable Python code.Again, while the exact-match accuracy metric works well for classification tasks, evaluating code generation presents unique challenges. Here, we aren't looking for word-for-word matches but for functionally correct code. Even if the code produced by the model doesn't match our reference code exactly, it's considered correct if it satisfies the task requirements and works as expected. For this reason, we will use a custom webhook solution with Vellum with a server we built to compile and run code against some unit tests. Notably, we could also evaluate the output on style and code requirements (typing, adherence to PEP-8, etc.) which our webhook could perform.Interestingly, the code snippets generated by the models often took different approaches to solve the same task, showcasing their unique problem-solving abilities. While this code generation comparison is far from exhaustive and doesn't cover all aspects of software development, it provides an exciting look at how Large Language Models can contribute to this field, assisting software developers with tasks ranging from debugging to coding assistance. On more than one occasion, with our temperature turned up (leading to more diverse outputs over several runs), the LLMs would sometimes generate buggy code even with the same model/prompt. This is not unexpected; it shows that a great prompt and LLM are rarely enough. We need to consider other levers at our disposal in future posts. With our examples, GPT-4 was the best performer and GPT 3.5 followed closely behind, and Cohere and Claude were better suited for more straightforward tasks. Remember, the success of a model depends on the nature and complexity of the tasks it's assigned, and it's always recommended to test the models in the context of your specific use case to determine the best fit. Code generation typically works best with an existing codebase on which to draw, such as a project or company's repository. Unfortunately, many LLM context windows are too small to accommodate a complete project, which is where Deep Lake comes in. 
By storing an indexed version of the code, LangChain, Deep Lake, and any LLM you choose can be used together to enable more accurate completions that can reduce development time and speed up the iterative design process.",activeloop Content Generation - Email Replies,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Semantic Similarity enables a fair evaluation of our LLMs' outputs by focusing on the quality and appropriacy of the content rather than its conformity to a predefined template . It also encourages creativity within the boundaries of the topic, ensuring that the responses are accurate but also engaging and insightful. It was fairly easy to trick the LLM into creating obscure content in this context for all models because all of these LLMs were specifically aligned to be helpful, so if we asked for odd things in the incoming email, the LLM addressed them with as much professionalism as possible. I receive emails about various things daily, so I'm more impressed than worried. If this were considered part of a production-ready application, I would strongly recommend adding input and output validation systems to catch these rather than rely on models to figure it out. For context, Claude performed the best with our examples, only slightly beating out GPT-3.5. We could attribute part of Claude's success to its fine-tuning process, as it was specifically designed to adapt its output based on dynamic instructions and guidance. This particular example has the most variables of any we've used, highlighting areas where Claude's capabilities truly excel.",activeloop What are Large Language Models (LLMs)?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Large language models are trained on vast amounts of text data. They are models capable of providing output across varioous tasks such as natural language generation, classification, translation, and more. Examples include GPT-4 by OpenAI, or Claude by Anthropic.",activeloop How do Large Language Models work?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"LLMs use self-attention and artificial neural networks to analyze large datasets, learn language patterns, and generate new text. They develop an understanding of semantics, context, and syntax to complete language tasks. Performance depends on size of datasets, computing power, and model architecture.",activeloop What is Anthropic Claude?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Anthropic's LLM, trained using Constitutional AI. It can adapt responses based on feedback and guidelines. Performed well on email reply generation.",activeloop What is Constitutional AI?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"An approach used to train Claude, Anthropic's LLM. It employs Reinforcement Learning from AI Feedback instead of human feedback. This allows Claude to consider dynamic instruction and adjust outputs accordingly.",activeloop What is Vellum?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,Vellum is a developer platform for building and deploying production LLM applications through simultaneously evaluating various LLMs' performance across the same set of tasks. 
Used in this study to compare LLM responses to prompts. Provides metrics like semantic similarity to evaluate generated content.,activeloop What metrics were used to evaluate the LLMs?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Several metrics that can be used to evaluate Large Language Models. These include accuracy (percentage of correct classifications or solutions), semantic text similarity (meaning-based comparison of content), robustness: Performance on varying, challenging inputs, latency (response time) and cost (based on number of tokens processed).",activeloop What are some limitations of current LLMs?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,Small context windows limit tasks like code generation. Subjective tasks are influenced by inherent biases. Models can generate 'buggy' outputs. More data and compute could continue improving performance.,activeloop What are promising future directions for LLMs?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Applications in various domains like healthcare, business, and education. Techniques like Constitutional AI for refinement. Integrating multiple models. Continued progress in model capabilities and mitigating limitations.",activeloop Concluding Remarks: No ‘One Size Fits All' LLMs for Now,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"In our quest to understand the performances of various LLMs in the context of various real-world applications, our goal was to underscore the importance of considering the performance metrics relevant to each use case and the unique attributes of the LLMs, including their respective training methodologies and characteristics. Offensive Language Detection proved to be a nuanced task, highlighting the influences of subjectivity and inherent biases on the models. While GPT-4 performed best on this task, the experiment emphasized the importance of providing clear guidelines to LLMs to refine their understanding of offensive content. The semantic similarity was crucial in evaluating the generated content regarding email replies. Claude excelled in this area, marginally outperforming GPT-3.5, by providing responses that adhered closely to the semantic structure of the sample emails, most likely due to its nature of taking in rules (constitution) and adjusting its output as needed. This test also exposed GPT-4's slower processing speed, which may be a significant consideration for applications requiring faster response times. In the Mathematical Problem-Solving task, GPT-4 demonstrated superior performance, albeit at a slower speed than the others. The experiment underscored the LLM's potential in understanding problem-solving processes and generating a coherent ‘chain of thought,' though Cohere struggled a bit in this area. This table gives a snapshot of our LLMs' performance across the tasks. Each task's performance is rated based on the combined evaluation of the specific metric applied (accuracy, semantic similarity, logical reasoning + regex match) and the inherent characteristics of the LLM (response speed, ability to follow instructions, creativity).Ultimately, the key takeaway from our comparative study is clearly understanding your needs and how different models can best meet those requirements. 
As we've seen, each model has strengths and nuances that can be effectively leveraged if appropriately understood. As the dynamic landscape of AI continues to evolve, mastering the ability to discern and compare the capabilities of different LLMs given specific tasks will become a vital skill for navigating this exciting field.If you see yourself trying to figure out which model to use for which task and would like to use Vellum to experiment with prompts and compare one modal to another, you can sign up here. For any feedback or questions about Vellum, please contact Akash at akash at vellum.ai.",activeloop What is the difference between OpenAI's GPT-4 and GPT-3.5?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"GPT-4 is OpenAI's latest, most advanced model, while GPT-3.5 (ChatGPT) is an older version. GPT-4 has greater parameters, training data, and compute, leading to better performance on complex tasks. However, GPT-4 also has higher latency (slower response time) and cost. GPT-3.5 has broader use but more limited capabilities.",activeloop How does OpenAI's GPT-4 compare to GPT-3.5?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"GPT-4 performed best in offensive language detection, math problem-solving, and code generation but was slower and more expensive. GPT-3.5 performed second-best in email reply generation and math reasoning, with faster responses but more limited abilities. Overall, GPT-4 excelled at complex, nuanced tasks but at the cost of speed and expense. GPT-3.5 provides a more balanced capability and performance.",activeloop Question Answering with Logical Reasoning,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Considering the speed, I might lean towards GPT-3.5 with some additional prompting.Content generation is a task that benefits from having source data from which to tailor a model's output . It is also typically quite difficult to feed in context without driving up the cost of the model and sometimes even the quality, as not all contexts may be useful. For this reason, we recommend exploring dedicated databases you can use to accurately retrieve the right information for the task at hand. One such offering is Activeloop's Deep Lake, which allows you to store and retrieve text, images, and their embeddings all in the same table.Considering robustness in the specific case of math/logical reasoning might look like asking the LLM to solve a problem rather oddly, as seen in the following figure.Constitutional AI models like Anthropic's Claude (right) can be more likely to follow dynamic instructions to a fault. In this case, I asked it to limit reasoning to a single line in Turkish, and Claude followed my instruction and then got the answer wrong. On the other hand, OpenAI's GPT-4 (left) blatantly ignored the instruction of only offering a single line of reasoning to get the answer right, and its Turkish isn't half bad either. For context, GPT-4 won this one out of the water, with Claude and GPT 3.5 being a relatively close second on a sample of GSM 8K, but again GPT-4 was easily 4-5x slower than its colleagues in generating results. 
It's known that GPT-4 and GPT-3.5 were graded on their ability to reason through problems with human feedback, so it's no surprise that one of these models excelled in our chain of thought example.",activeloop What advice is offered for developing LLM applications?,https://activeloop.ai/resources/how-to-compare-large-language-models-gpt-4-3-5-vs-anthropic-claude-vs-cohere/,"Understand your needs, constraints and budget. Test models in your context. Consider metrics beyond accuracy, e.g. speed and cost. Provide guidelines and feedback to improve performance. Use tools to overcome limitations. Choose models balancing performance, latency, and cost.",activeloop The ABC of LLMs: What are Large Language Models?,https://activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/,"Large language models are a type of artificial intelligence system that utilize deep learning algorithms to generate and interpret human language. They are trained on massive amounts of text (more recently - multimodal) data and use that information to answer questions, summarize text, translate languages (Open GPT-3, Cohere, and Anthropic), and generate videos (Microsoft X-Clip), audio (OpenAI Whisper), or images (Stable Diffusion by Stability AI, Open AI's Dall-E).LLMs are not all-powerful. You can think of Chat-GPT as a studious English language student. They might have mastered the art of expression but not causal reasoning. Their performance is average (compared to other people), on average. Nonetheless, these models can generate high-quality natural language output, making them highly valuable for various applications in multimedia, healthcare, marketing, and beyond.However, their size and computational requirements make them more challenging to deploy, and there are concerns about the ethical and societal implications of using these models. For instance, they may hallucinate a truthful-looking answer (deceitful enough to get overlooked by even the Google Marketing team). In the next section, we will cover the common hurdles for LLM adoption and how to fix them by setting up a 'data flywheel' to fine-tune and correct mistakes.",activeloop What is a data flywheel?,https://activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/,"A data flywheel refers to a self-reinforcing cycle in which data is acquired and ingested into the machine learning model training process, and the subsequent model performance is improved. This concept is associated with successful machine learning implementations at companies like Tesla and Netflix.The process starts with collecting and curating large amounts of high-quality machine learning datasets (e.g., Tesla car cameras recording the car surroundings), which are then used to train ML models (e.g., for lane detection). These models are deployed to automate decision-making and make predictions. As they are used after deployment, they create more real-world data, which can be used to refine the models and enhance their performance. This leads to better predictions and more data through more usage, thereby completing the 'data flywheel.' 
A next-gen data flywheel might also move beyond annotation: the more high-quality data you acquire and process, and the faster you do it, the better your LLMs will be. For more information on the data flywheel, check out Andrej Karpathy's talk on it and the following work from Amazon's team on continual learning.",activeloop What issues arise when training LLMs?,https://activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/," Data storage and retrieval bottleneck: It's no secret that LLMs require massive amounts of data. This data needs to be stored and retrieved efficiently to ensure that the model can be trained in a reasonable amount of time. Sadly, some of the status quo storage and retrieval methods are too outdated to keep up with the demands of these models. The challenge is further exacerbated if the data is stored in multiple locations, is in different formats, or is simply too large to be efficiently stored in memory… Don't take my word for it. Here's what a fellow YC founder in the generative AI space had to say about this bottleneck. On the retrieval side of the issue, it's essential to be able to kill a run quickly rather than train a model on buggy data for weeks, only to hit a snag after you've burnt through many compute credits. Both issues can be resolved with a data lake optimized for deep learning (or Deep Lake, as we call it). Deep Lake maintains the benefits of a vanilla data lake with one key difference: it stores complex data, such as images, videos, and annotations, as well as tabular data, in the form of tensors and rapidly streams the data over the network to (a) the Tensor Query Language, (b) the in-browser visualization engine, or (c) deep learning frameworks without sacrificing GPU utilization. Datasets stored in Deep Lake can be accessed from PyTorch and TensorFlow and integrated with numerous MLOps tools. As a result, you can store data on the cloud and stream it efficiently to test the model performance locally before scaling it to the cloud. Secondly, thanks to streaming, you can unify data from disparate sources (in a standardized format explicitly built for deep learning). Lastly, you can immediately start the training process and kill buggy processes, not wasting money on idle GPU time. Data quality bottleneck. In a tale as old as time, your ML model is only as good as your data (who needs a… rusty… CLIP?!). Language models can be trained on data heavily biased towards a particular demographic or group (thus possibly making unfair predictions when dealing with individuals from underrepresented groups). This leads to incorrect predictions and perpetuates harmful societal stereotypes and biases. For instance, if you were to prompt Midjourney to generate images of people in an educational setting, women would show up more than men. This would also be the case when asked to portray someone doing household chores. In contrast, men would be more often portrayed in office settings. Another example of poor data quality impairing language models is data that is not diverse enough (or 'narrow'). 
While it is OK to train a use case-specific model, insufficient data may lead to the model not being able to generalize well to new and diverse examples",activeloop What issues arise when training LLMs?,https://activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/,"OpenAI's operating costs could be at least $100,000 per day, or $3 million per month, for serving 1M users, and roughly 100 times more for an (estimated) 100M users. Serving LLMs is computationally very expensive. It is unlikely that services based on LLMs will stay free the way search has (especially with the estimated capital expenditure costs nearing $100B for a Google-like deployment working on real-time data). The reason why OpenAI would prefer to keep the service free of charge as long as financially possible is to collect an immense amount of valuable data to re-train models. LLM deployment requires collecting data and storing it in a ready-to-train format to close the data flywheel. Deep Lake's Tensor Storage format with native multi-branch version control provides the necessary data infrastructure to store and operate on the data. Sometimes LLMs need to interact with a database on the fly before producing output. To serve real-time data, running an embedding search is necessary to retrieve relevant data sources so that the LLM can summarize the output comprehensively. Deep Lake's Tensor Query Language can quickly search for pertinent information during the production or fine-tuning stage of LLMs.",activeloop What issues arise when training LLMs?,https://activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/,"This can lead to the model missing essential patterns and relationships in the data, leading to faulty predictions. Next, and arguably the most difficult to fix, is the poor data quality arising from simply incorrect data. For instance, if the training data contains wrong labels and inconsistent annotations, it will negatively impact the model's performance. In sum, data quality has a significant impact, but it's often neglected by teams (or considered low-prestige, low-hanging fruit to solve). Deep Lake-powered machine learning infrastructures help teams quickly address all these problems. It empowers the implementation of a flywheel-based setup for teams that do not have Tesla-grade infrastructure-building capabilities. One can utilize Deep Lake to complete the first training loop, ingest new data, visualize it with all the metadata, and query to explore under-performing samples (as well as create a complete data lineage to track the dataset evolution as you try to evaluate and improve model performance). Multimodality. DALL-E, Stable Diffusion, and Whisper all deal with multimodal data. Multimodal data comes from multiple sources and can be represented in different modalities, such as text, images, and audio. This data type can be challenging for language models, as they often struggle to process and integrate information from multiple modalities effectively. For example, consider a scenario where you want to classify an image of a dog as either a 'Golden Retriever' or a 'Labrador Retriever.' A language model trained on text data might struggle with this task, as it has little understanding of the visual characteristics that distinguish these two breeds. On the other hand, a computer vision model trained on image data would perform well. 
However, if the task requires image and text data, such as labeling an image with a caption, combining the two models might perform better than desired. Further work on extending scaling laws for generative mixed-modal language models has been done by Aghajanyan et al. from Meta AI. Compute resource shortage. The recent surge of deploying both large language and stable diffusion models caused GPU shortage across major clouds. It fails to allocate resources whenever you try to spin up a p4d.24xlarge with 8xA100 GPUs on AWS. Sometimes you can find resources across different states compared to where your data is stored. That's where streaming data across regions using Deep Lake shines the most. Deep lake lets you save compute cycles on copying the data, especially when Activeloop-managed software has no egress fees. Deployment and maintenance costs: Once you're finished training your model, the cost of deployment becomes an issue. For example, Azure charges $3 per hour for one A100 GPU; to operate on one GPT 3.5 model, at least eight GPUs are required. So, when ChatGPT generates a response with an average length of 30 words, it will cost almost 1 cent for the company",activeloop Step-by-step training guide for Large Language Models with Deep Lake,https://activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/,"Let's get hands-on. Full code is available here.First of all, we define the necessary functions for constructing the dataloader.OpenWebText dataset creationTo construct the OpenWebText dataset, we create ./data/deeplake/prepare.py1# define the dataset 2ds = deeplake.empty(path, overwrite=True) 3 4ds.create_tensor('text', htype='text', chunk_compression='lz4') 5ds.create_tensor('tokens', dtype=np.uint16, chunk_compression='lz4') 6 7@deeplake.compute 8def tokenize(example, ds): 9 ids = enc.encode_ordinary(example) # encode_ordinary ignores any special tokens 10 ids.append(enc.eot_token) # add the end of text token, e.g. 50256 for gpt2 bpe 11 ds.append({'text': example, 'tokens': np.array(ids).astype(np.uint16)}) 12 13# we now want to tokenize the dataset. first define the encoding function (gpt2 bpe) 14 15tokenize().eval(split_dataset[split]['text'], ds, num_workers=num_proc, scheduler='processed') 16ds.commit() 17ds.summary() 18CopyYou can explore the OpenWebText training dataset on your own in our UI.Replacing the dataloaderWe will proceed by replacing the dataloader with Deep Lake.For starters, let's define collate_fn that would do the batching and local sampling. To avoid undersampling, we add a coefficient to fetch twice more data.1def collate_fn(data: List[np.ndarray]) -> Tuple[torch.Tensor, torch.Tensor]: 2 ''' Collate function samples from a batch of documents ''' 3 #concatenate all the tokens from the batch 4 data = [d['tokens'] for d in data] 5 data = np.concatenate(data, axis=0) 6 7 #sample a random block of from concatenated documents 8 ix = torch.randint(max(len(data) - block_size, 1), (batch_size,)) 9 local_block_size = min(block_size, len(data)-1) 10 11 x = torch.stack( 12[torch.from_numpy((data[i:i+local_block_size]).astype(np.int64)) for i in ix]) 13 y = torch.stack( 14[torch.from_numpy((data[i+1:i+1+local_block_size]).astype(np.int64)) for i in ix]) 15 return x, y 16 17def get_dataloader(split: deeplake.Dataset, shuffle: bool = False, coef: float = 2, num_workers: int = 1): 18''' Returns a dataloader for the given split. 
Uses fast enterprise dataloader if available''' 19return dataloader(split)\ 20 .batch(int(coef*batch_size), drop_last=True)\ 21 .shuffle(shuffle)\ 22 .pytorch(num_workers=num_workers, tensors=['tokens'], collate_fn=collate_fn, distributed=ddp) 23CopyWe now load the dataset, checkout to a specific branch if necessary, where token tensor is available, then create train-val split and define iterators get_batch function.1ds = deeplake.load(dataset, read_only=True, token=token) 2ds.checkout(branch) 3 4meta_vocab_size = None 5 6n_tokens = sum(ds._tokens_shape.numpy()) 7print(f'There are ~{n_tokens[0]//10**9}B tokens in the dataset') 8 9split = int(len(ds)*train_split_ratio) 10dl = { 11 'train': get_dataloader(ds[:split], shuffle=shuffle, num_workers=num_workers), 12 'val': get_dataloader(ds[split:], shuffle=False, num_workers=1) 13} 14dl_iter = {'train': dl['train'].__iter__(), 'val': dl['val'].__iter__()} 15CopyAnd finally, run with1$ python3 train.py --dataset='hub://activeloop/openwebtext-train' 2CopyOr with Distributed Data Parallel1$ torchrun --standalone --nproc_per_node=8 train.py --dataset='hub://activeloop/openwebtext-train' 2CopyAs we can see below, GPUs are 99-100% utilized. A much better metric to track is the energy consumption at full capacity (sometimes 456W/400W 👀). It achieves an avg. of 914MBit/s data streaming from Deep Lake storage into a single 8xA100 machine, preserving the same Model Flops Utilization of 41% from peak bfloat16 as from directly reading from system memory. Each GPU just gets two CPU workers to preload the data. While streaming from cloud providers, we should be careful of egress costs; Activeloop managed storage comes without egress fees.Despite the shortage, we thank Lambda Cloud for providing GPUs to run the experiments. We can observe at least three times cost efficiency compared to major clouds while combining Deep Lake with Lambda Cloud for training large language models.",activeloop Concluding remarks,https://activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/,"In summary, we discussed the increasing popularity of Large Language Models. As we expect an influx of new multi-modal models in 2023, it is essential to realize that most companies will gain from training their domain-specific LLMs. Doing so would allow for a more accurate and effective interpretation of specialized language, which should routinely beat more general LLMs at their predictions. More importantly, cost-efficiency, UX, and faster go-to-market for LLM-based solutions will be defining for many companies (thus, maintaining a lean training and dataset budget with the help of streaming is in your best interest).One should also remember that data, not size, is now the primary constraint on language modeling performance. To address this, companies must build a scalable data flywheel to efficiently acquire, retrain, and evaluate data to improve LLM performance. The article also highlighted common issues with LLM training, including data storage and retrieval bottlenecks, and suggested using a Deep Lake - the data lake optimized for deep learning as a solution. 
Finally, the article acknowledges the ethical and societal implications of using LLMs and calls for their responsible use.",activeloop NanoGPT architecture breakdown,https://activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/,"Recently, Andrej Karpathy released the easiest, swiftest repository for training & fine-tuning medium-sized GPTs called NanoGPT. All-in-all, the code to train the model is a 300-line boilerplate training loop and a 300-line GPT model definition completing GPT2. The model is relatively small, with 126M parameters. Thus, it is mainly bottlenecked by data feeding into GPUs. In contrast, data feeding becomes less of a bottleneck for larger models. Andrej implements the dataloader by loading memory-mapped NumPy arrays of preprocessed tokens from the local file system on top of NVMe into system memory. Then get_batch function randomly samples from it. In short, there is no way to make it faster aside from directly shipping the data to the GPUs (looking at you, GPUDirect!).1 # poor man's data loader 2data_dir = os.path.join('data', dataset) 3train_data = np.memmap(os.path.join(data_dir, 'train.bin'), dtype=np.uint16, mode='r') 4val_data = np.memmap(os.path.join(data_dir, 'val.bin'), dtype=np.uint16, mode='r') 5def get_batch(split): 6 data = train_data if split == 'train' else val_data 7 ix = torch.randint(len(data) - block_size, (batch_size,)) 8 x = torch.stack([torch.from_numpy((data[i:i+block_size]).astype(np.int64)) for i in ix]) 9 y = torch.stack([torch.from_numpy((data[i+1:i+1+block_size]).astype(np.int64)) for i in ix]) 10 if device_type == 'cuda': 11 # pin arrays x,y, which allows us to move them to GPU asynchronously (non_blocking=True) 12 x, y = x.pin_memory().to(device, non_blocking=True), y.pin_memory().to(device, non_blocking=True) 13 else: 14 x, y = x.to(device), y.to(device) 15 return x, y 16CopyIn this practical example, we'll achieve the same performance while streaming the data over the network 🤯. We replace the (ultrafast) local 'poor man's data loader' with Deep Lake's streamable data loader in NanoGPT while maintaining the full GPU power utilization when data is streamed from US-East to US-West instead of reading from a local file.OK, you may say, technically, the replacement works. But why should you care? Well, thanks to Deep Lake, you can continuously feed data into the data flywheel by: Ensuring that preprocessing for 5 hours is avoided after one team member shares the dataset and everyone has a single view of the data. Enabling dataset querying for curation purposes and troubleshooting model performance on edge cases such as car detection in rainy conditions. Being able to conduct advanced just-in-time transformations (i.e., normalizing images) since loading is optimized for large volumes of data to leave sufficient bandwidth for that action. Handling different data locations and computing with no GPU time is spent moving the data around (in case computing power becomes limited on a given platform). Finally, as data grows, you are no longer limited to local NVMe storage, which is very expensive.",activeloop Introduction to Detecting Diabetic Retinopathy with Machine Learning,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"Over the past decade, deep learning algorithms have greatly evolved. 2021 introduced large models like GPT-3 from OpenAI and GoPher from DeepMind, capable of performing billions of operations. 
However, they require extensive training time, often weeks, and can be costly and complex to configure. Furthermore, they can be slow during inference without GPUs. However, not all tasks require such large models for accuracy. Some, particularly within supervised learning, can use smaller yet effective models.This article introduces EfficientNet, a smaller and faster model ideal for quick-result applications. It explains how EfficientNet operates and how it can be fine-tuned to detect early signs of Diabetic Retinopathy.",activeloop What is Diabetic Retinopathy?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"Retina functionality depends on a steady blood supply. In diabetes patients, the blood vessels carry high sugar levels, which over time can damage the retina and lead to vision loss. Often, retinopathy is only detectable through a retinal exam, as it typically has no noticeable impact on vision. Diabetic retinopathy can be prevented if it can be detected at the early stages of development. Once detected doctors can start appropriate treatment. This poses a challenge for doctors in identifying diabetic retinopathy before it progresses and causes irreversible issues.",activeloop What are the Diabetic Retinopathy Stages?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"Diabetic retinopathy progresses through four stages: Mild nonproliferative: Early stage with minor fluid leakage from tiny retinal vessel swellings, usually without vision impact. Moderate nonproliferative: Progressed swelling of vessels hinders nourishing blood flow to the retina, potentially causing blurry vision. Severe nonproliferative: Significant blockage in retinal blood vessels prompts fragile new vessels' growth, causing swelling, blurry vision, dark spots, and possible vision loss. Proliferative: Advanced stage with continual growth of fragile vessels leading to scar tissue, possibly causing retinal detachment, vision blurriness, and even permanent blindness.",activeloop What is Neural Architecture Search?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"Before applying compound scaling, it's essential to establish a baseline network. The researchers achieved this using a technique called Neural Architecture Search (NAS). This approach automatically designs the neural network.A key element in the NAS method is the 'search space'. In essence, the search space defines the set of possible architectures that NAS can generate. This might include different building blocks or operations such as convolutions and pooling, or pre-established architectures that these operations are arranged into.For EfficientNet, the search space was based on the MobileNet architecture, specifically the inverted residual structure. This choice served as the backbone or foundation for the networks created by NAS.The resulting model from NAS then underwent compound scaling, a method of uniformly increasing the depth (the number of layers), width (the number of neurons per layer), and resolution (the size of the input) of the network. 
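Before moving on, here is a small numerical sketch of what that compound scaling looks like. The coefficients alpha=1.2, beta=1.1, gamma=1.15 are the values reported in the EfficientNet paper for its compound coefficient (written as φ in the paper), while the baseline depth and resolution below are only illustrative placeholders.

# Compound scaling: depth, width, and resolution all grow with one compound coefficient phi,
# subject to the constraint alpha * beta**2 * gamma**2 ~= 2 (roughly doubling FLOPs per +1 of phi).
alpha, beta, gamma = 1.2, 1.1, 1.15  # grid-searched on the small baseline network, per the paper

def scaled_dimensions(phi, base_depth=16, base_width=1.0, base_resolution=224):
    depth = base_depth * alpha ** phi            # number of layers
    width = base_width * beta ** phi             # channel (width) multiplier
    resolution = base_resolution * gamma ** phi  # input image size
    return round(depth), round(width, 2), round(resolution)

for phi in range(4):
    print(phi, scaled_dimensions(phi))

print(alpha * beta ** 2 * gamma ** 2)  # ~1.92, close to the constraint of 2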
This carefully balanced scaling process resulted in a high-performance, computationally efficient network, thus giving birth to the EfficientNet family of models.",activeloop What is Compound Scaling?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"Compound scaling, a key aspect of the EfficientNet model, involves a scaling factor to proportionally increase the depth, width, and resolution of the network. This scaling factor is known as a compound coefficient. The depth of the network (α^N), the width of the network (β^N), and the image size (γ^N) are all scaled in accordance with this compound coefficient. In other words, the depth, width, and resolution of the network are all increased by a certain factor to ensure a balanced scale-up of the model. The values of the constants α, β, and γ are determined through a grid search on the original, smaller model. A grid search is a method used to perform hyperparameter optimization, an important step in machine learning model building. In the case of EfficientNet, this grid search was used to find the most effective values for α, β, and γ, which then set the scaling factor for the network's depth, width, and resolution. So in a nutshell, compound scaling in EfficientNet involves a thoughtful scaling of the model's architecture to maintain a balance between the network's depth (number of layers), width (number of neurons per layer), and resolution (input size), which leads to a better performing, yet still computationally efficient, model. The image above shows the systematic study of how the model is scaled up. As you can see in the last image (e), the network finds a balanced relationship between the different scaling dimensions such as (a) baseline network, (b) width, (c) depth, and (d) resolution that can lead to better performance. This is known as the compound scaling method. The aim of this method is to uniformly scale the network in all dimensions to increase efficiency.",activeloop How does EfficientNet compare to other ML models?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"EfficientNet offers better performance compared to other state-of-the-art models due to its unique scaling methodology. By scaling up all dimensions of the network (depth, width, and resolution), EfficientNet ensures a balanced and efficient improvement in performance. In various benchmarks, EfficientNet has shown superior accuracy with fewer parameters and less computational complexity, outperforming larger models like ResNet and DenseNet. You can view the different models' performance on the ImageNet dataset below. We also provide more statistics in the FAQ section.",activeloop What is EfficientNet?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"EfficientNet is a family of Convolutional Neural Networks (CNN) which was introduced by Mingxing Tan and Quoc V. Le from Google Brain. The authors used neural architecture search (NAS) to design an appropriate baseline neural network for image classification. NAS uses techniques like search space, search strategy, and performance estimation strategy which allows it to automatically design a neural network from scratch given the appropriate data. Once NAS yields a baseline architecture it can then be scaled up using a method called compound scaling. EfficientNet and TransUNet are two distinct models used in computer vision tasks. 
EfficientNet is designed to balance depth, width, and resolution to achieve high performance while being computationally efficient, making it suitable for image classification. On the other hand, TransUNet combines UNet and Transformers to enhance image segmentation, particularly in medical imaging, by leveraging the power of Transformers to capture global context and long-range dependencies.",activeloop What is the Messidor Dataset?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"The Messidor dataset, an acronym for 'Methods to Evaluate Segmentation and Indexing Techniques in the field of Retinal Ophthalmology,' focuses on Diabetic Retinopathy, a condition that is often difficult to identify manually. Ophthalmologists require significant skill and time to detect it, which can cause delays and miscommunication in treatment. Diabetic retinopathy is identified by the presence of lesions, often related to vascular abnormalities. The dataset offers two medical diagnoses: Retinopathy grade: Ranging from 0 (Normal) to 3 (More than 15 microaneurysms). Risk of macular edema: Ranging from 0 (No risk) to 2 (Shortest distance between the macula and hard exudates is equal to or less than one papilla diameter). Given the high expense of the equipment needed and the urgent requirement in high-diabetes regions, like rural India, automated methods using machine learning techniques are essential. These techniques can handle tasks such as image classification, pattern recognition, segmentation, and object detection.",activeloop Using Deep Lake for Healthcare Machine Learning Data,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/," Deep Lake provides a vast range of publicly accessible datasets across different domains and tasks, updated daily by engineers. Deep Lake is built to make ML data management easy, and is compatible with third-party libraries for preprocessing. Data Loading: The deeplake library provides functionality for loading and interacting with large-scale datasets in an efficient manner. Here, we're loading a Diabetic Retinopathy dataset hosted on Deep Lake. You do not need to create an API key to access this publicly hosted dataset.

import deeplake

ds = deeplake.load('hub://activeloop/messidor-diabetic-retinopathy')
ds.summary()

The ds.summary() function provides a concise summary of the dataset, detailing the number of samples and other relevant information. You can also run ds.visualize() to initiate a visualizer that lets you interact with the data. Optional Data Exploration: Deep Lake provides an amazing feature that allows you to save your query and reuse it over and over again. This allows data uniformity across various users. For version control, all you need to do is follow these four steps: Run, Save, View, Load. Run the query: execute the desired query in the UI. Save the query: once the query has executed, click on Save query result to save it. View the query: you can view the query results by clicking on the query history. Load the query: copy-paste the query id into the load_view function, as shown below, to access the query (this is similar to what we saw earlier). Here we are using the hash for the first query commit. ds.load_view('637cdeeb022d9e1237c9c364', optimize = True) For more details you can check out this video: What is Deep Lake? Activeloop Deep Lake Product Demo. Data Querying and Filtering: Data querying is very simple. 
Since the dataset contains five different classes, you can either use all the classes to train the model or constrain them to two, three, or even four classes. To query the dataset, just mention the classes that you want to work with. You can find the names of the classes by running this code:

ds.labels.info['class_names']
# ['g0', 'g3', 'g1', 'g2']

Deep Lake allows you to perform querying and filtering using the Tensor Query Language (TQL) query method. Using the TQL query method:

g0 = ds.query("SELECT * WHERE CONTAINS(labels, 'g0')")

The TQL query can be further modified as well. Let's say you first want to visualise the frequency of each class and, based on the frequency, select the classes for training.

import numpy as np
import matplotlib.pyplot as plt

class_names = ds.labels.info.class_names
num_classes = len(class_names)
class_count_raw = np.bincount(np.concatenate(ds.labels.numpy(aslist = True), axis=0))
plt.bar(np.arange(num_classes), class_count_raw, tick_label = class_names)
plt.xlabel('Grade', weight='bold')
plt.xticks(rotation='horizontal')
plt.ylabel('Number of Instances', weight='bold')
plt.title('Occurrence of grades', weight='bold')
plt.show()

Let's assume that we want to use grade 0 and grade 3 as they have greater frequency compared to the other two. But they are still imbalanced",activeloop Using Deep Lake for Healthcare Machine Learning Data,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"We can use the following code to essentially filter the data based on the class weightage.

balanced_view = ds.query("select * sample by max_weight(contains(labels, 'g0'): 7, contains(labels, 'g3'): 5, contains(labels, 'g1'): 10, contains(labels, 'g2'): 10)")
class_count_balanced = np.bincount(np.concatenate(balanced_view.labels.numpy(aslist = True), axis=0))
X_axis = np.arange(len(class_names))
plt.figure(figsize=(8, 4))
plt.bar(X_axis - 0.2, class_count_raw, 0.4, label = 'Raw Data')
plt.bar(X_axis + 0.2, class_count_balanced, 0.4, label = 'Sampled Data')
plt.xticks(X_axis, class_names)
plt.xlabel('Condition', weight='bold')
plt.xticks(rotation='vertical')
plt.ylabel('Number of Instances', weight='bold')
plt.title('Frequency per Condition', weight='bold')
plt.legend()
plt.show()

As you can see, grades g0 and g3 are now almost balanced. Once you are satisfied, we can move forward and perform data transformation. Saving the view: once the view is finalized, you can save it. Saving the view is an essential practice because you can then share the view with your teammates and reuse it whenever you are iterating on the model. The following syntax will help you save the view:

balanced_view.save_view(message = 'balanced_view')

After the view is saved, you can view it in the UI as well. Loading the view: in order to download the view, you can use the following syntax:

new_view = ds.load_view('c1f647bf6eaa49c6258dac584a556bd3ec49b4f5c0ba588843f3840b3154e0f6', optimize = True)  # hash comes from the query ID

Once downloaded, you can visualise the view using ds.visualize().",activeloop Defining the Model - EfficientNet,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"We are using the EfficientNet model, which is a powerful convolutional neural network for image classification tasks. In the line model._fc = nn.Linear(in_features=1280, out_features=5, bias=True), we replace the final layer of the pretrained EfficientNet model to match our number of classes (5). 
We then transfer the model to the designated device (GPU if available, else CPU) for computation.

import torch
from torch import nn
from efficientnet_pytorch import EfficientNet

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

model = EfficientNet.from_pretrained('efficientnet-b0')
model._fc = nn.Linear(in_features=1280, out_features=5, bias=True)
model = model.to(device)
",activeloop Image Transformations,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"Image transformations are an essential part of deep learning pipelines. They help to augment the data, introduce variability, and reduce overfitting. Here is a sample input image: With albumentations, we specify a series of transformations that will be applied to our images.

import albumentations as A
from albumentations.pytorch import ToTensorV2
from torchvision import datasets, transforms, models

tform = A.Compose([
    A.Rotate(limit=20, p=0.5),       # Rotate the image within a range of ±20 degrees with a probability of 50%
    A.Resize(224, 224),              # Resize the image to 224x224 pixels
    A.HorizontalFlip(p=0.5),         # Flip the image horizontally with a probability of 50%
    A.Normalize(mean=(0), std=(1)),  # Normalize the image with mean=0 and std=1
    ToTensorV2()                     # Convert the image from numpy array to PyTorch tensor
])
",activeloop "Loss Function, Optimizer, and Scheduler",https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"We use the CrossEntropyLoss, a popular loss function for multi-class classification problems. The Adam optimizer is used as it adapts the learning rate for each weight individually and is generally more efficient. We also define a learning rate scheduler to decrease the learning rate as the training progresses to fine-tune the model.

from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR
import torchmetrics

optimizer = Adam(model.parameters(), lr=0.01)
scheduler = StepLR(optimizer, step_size=4, gamma=0.5)
loss_fn = nn.CrossEntropyLoss()
train_accuracy = torchmetrics.Accuracy(task='multiclass', num_classes=4).to(device)
n_epochs = 2
",activeloop Training EfficientNet for Healthcare ML task,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"The model is trained using a standard PyTorch training loop. For each epoch, we load a batch of data, perform a forward pass through the model, calculate the loss, perform backpropagation to compute gradients, and then update the model parameters with the optimizer. We also compute the accuracy metric for monitoring purposes.

from tqdm import tqdm

for epoch in range(1, n_epochs + 1):
    total_acc = 0
    with tqdm(data_loader, unit='iteration') as train_epoch:
        train_epoch.set_description(f'Epoch {epoch}')
        for i, (data, target) in enumerate(train_epoch):
            data = data.to(device)
            target = target.to(device)

            optimizer.zero_grad()
            logits = model(data)
            _, pred = torch.max(logits, dim=1)

            loss = loss_fn(logits, target.squeeze(0))
            loss.backward()
            optimizer.step()
            scheduler.step()

            acc = train_accuracy(pred, target.squeeze(0))
            total_acc += acc.item()
            train_epoch.set_postfix(loss=loss.item(), accuracy=total_acc / (i + 1))

And here is the optimized model's performance during training for each of the epochs: Each of these steps contributes to creating an effective pipeline for diagnosing Diabetic Retinopathy. 
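One piece the training loop above references but never defines is data_loader. Below is a hedged sketch of how it could be built from the balanced Deep Lake view and the albumentations pipeline (tform) defined earlier. The tensor names 'images' and 'labels' and the exact .pytorch() arguments are assumptions based on the Deep Lake dataloader API used elsewhere in this guide, not the exact code behind the linked notebook.

def transform_sample(sample):
    # sample is a dict keyed by tensor name; apply the albumentations pipeline to the image.
    image = tform(image=sample['images'])['image']
    label = int(sample['labels'][0])
    return image, label

# Hypothetical construction of the data_loader used in the training loop above.
data_loader = balanced_view.pytorch(
    tensors=['images', 'labels'],
    transform=transform_sample,
    batch_size=32,
    shuffle=True,
    num_workers=2,
)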
Careful configuration and tuning of these components can lead to high-performance models that can greatly aid in the medical diagnosis process.",activeloop Concluding remarks,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"In conclusion, this article has explored three key areas: Diabetic Retinopathy detection, EfficientNet, and the application of EfficientNet on the Messidor dataset. Diabetic retinopathy, a progressive eye disorder, often goes unnoticed until it begins to affect vision. It can be effectively managed with prompt diagnosis and proper treatment, but the current manual methods for diagnosis and screening are time-consuming and require substantial expertise. EfficientNet, an advanced model architecture for image classification, was introduced as a solution. It balances depth, width, and resolution to provide optimal performance and computational efficiency. We demonstrated how to train EfficientNet on the Messidor dataset to alleviate the burden of manual diabetic retinopathy diagnosis. These techniques significantly enhance the extraction of crucial information, particularly when dealing with limited data and computational resources. Automated systems, like those powered by AI and deep learning algorithms, play a crucial role in addressing the challenges associated with diabetic retinopathy diagnosis, underscoring the need for their wider adoption in the medical imaging sector. Keep in mind that we trained and tested two implementations. This notebook is more comprehensive, but suffers from poor performance, while this notebook shows the performance we're capable of achieving.",activeloop Evaluating EfficientNet Performance in a Healthcare Machine Learning Task,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"Now let's evaluate our model's performance. We will be using the improved model trained in this notebook. We will evaluate it on the following metrics: AUROC, Confusion Matrix, and F1 Score.

from torchmetrics import AUROC, ConfusionMatrix, PrecisionRecallCurve, F1Score

auc = AUROC('multiclass', num_classes=4)
print(auc(pred, y).item())
# >> 0.79

This AUROC score is pretty good. How about its F1 score?

f1 = F1Score(task='multiclass', num_classes=4).to('cuda')
f1(logits, y).item()
# >> 73.45

Let's take a look at our model's confusion matrix, or, in other words, which classes it confuses.

import seaborn as sns

cm = ConfusionMatrix('multiclass', num_classes=4).to('cuda')
mat = cm(pred, y).detach().cpu().numpy()
sns.heatmap(mat)

This is an example of the confusion matrix at epoch 7. As you can see, the model classifies the 1st, 3rd, and 4th classes correctly. A possible reason why classes 1 and 2 are confused is that the features of the two classes are very similar. This is quite normal in deep neural networks. To overcome this issue, you can train for a longer period of time.",activeloop What are some EfficientNet benchmarks?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"The table compares key parameters of several deep learning model families: ResNet, ResNeXt, SENet, NASNet, GPipe, DenseNet, Xception, and EfficientNet. Each is evaluated on accuracy and model complexity (parameters in millions). Notably, GPipe presents the highest complexity with 556 million parameters and 84.3% accuracy. 
EfficientNet models, while more efficient in terms of parameters, manage to achieve comparable accuracy, with EfficientNet-B7 hitting 84.3% with 66 million parameters.

Model | Accuracy | Parameters (Million)
ResNet-152 | 77.8% | 60
ResNeXt-101 | 80.9% | 84
SENet | 82.7% | 146
NASNet-A | 82.7% | 89
GPipe | 84.3% | 556
DenseNet-201 | ~77% | ~20
Xception | ~79% | ~20.1
EfficientNet-B0 | ~77% | ~7
EfficientNet-B1 | 79.1% | 7.8
EfficientNet-B2 | ~81% | ~7.9
EfficientNet-B3 | 81.6% | 12
EfficientNet-B4 | 82.9% | 19
EfficientNet-B5 | ~83% | ~38
EfficientNet-B6 | ~84% | ~42
EfficientNet-B7 | 84.3% | 66
",activeloop Practical Guide to Diabetic Retinopathy Detection,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"Now with all of the theory out of the way, let's take a look at how to implement this model in practice. We will focus on image classification using the retinopathy dataset. The complete process is documented in this linked notebook, which can be modified based on your objectives. It should be noted that we ran multiple experiments with training this model on this dataset, and we achieved better results using this notebook. Both training and testing datasets are available from Activeloop.",activeloop References,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,https://github.com/Nielspace/Custom-NN/blob/main/EffNet/effnet.py https://www.sciencedirect.com/science/article/pii/S1319157821001397 https://www.nhs.uk/conditions/diabetic-retinopathy/ https://en.wikipedia.org/wiki/Diabetic_retinopathy https://en.wikipedia.org/wiki/Neural_architecture_search https://www.infoworld.com/article/3648408/what-is-neural-architecture-search.html https://arxiv.org/pdf/1704.04861.pdf https://towardsdatascience.com/mobilenetv2-inverted-residuals-and-linear-bottlenecks-8a4362f4ffd5 https://towardsdatascience.com/introduction-to-squeeze-excitation-networks-f22ce3a43348 https://python.plainenglish.io/implementing-efficientnet-in-pytorch-part-3-mbconv-squeeze-and-excitation-and-more-4ca9fd62d302 https://amaarora.github.io/2020/08/13/efficientnet.html#inverted-bottleneck-mbconv https://arxiv.org/pdf/1905.11946.pdf https://analyticsindiamag.com/implementing-efficientnet-a-powerful-convolutional-neural-network/,activeloop What are the features of the 4 Diabetic Retinopathy stages?,https://activeloop.ai/resources/efficient-net-for-diabetic-retinopathy-healthcare-ml-models/,"There are 4 distinct stages of Diabetic Retinopathy. The various differences are summarized in the following table:

Grade | Clinical Features | Category
0 | No symptoms observed | No Diabetic Retinopathy
1 | Presence of microaneurysms in one of the four quadrants | Mild NPDR
2 | Microaneurysms, dot and blot hemorrhages, and cotton wool spots | Moderate NPDR
3 | Intraretinal microvascular abnormalities (in 1 quadrant), definite venous beading (in 2 quadrants), intraretinal hemorrhages (>= 20 in each quadrant), neovascularization | Severe NPDR
4 | Advanced stage with continual growth of fragile vessels (neovascularization) leading to scar tissue, possibly causing retinal detachment, vision blurriness, and even permanent blindness | Proliferative DR
",activeloop Introduction,https://activeloop.ai/resources/deep-lake-hnsw-index-rapidly-query-35-m-vectors-save-80/,"When selecting a Vector Database for production-grade Retrieval Augmented Generation (RAG) applications, two things matter most: speed and affordability. 
Deep Lake 3.7.1 introduces a unique and performant implementation of the HNSW Approximate Nearest Neighbor (ANN) search algorithm that improves the speed of index creation, reduces RAM usage, and integrates Deep Lake's Query Engine for fast filtering based on metadata, text, or other attributes. The new index implementation pushes the limit for sub-second vector search from 1 million to >35 million embeddings, while significantly reducing the cost of running a vector database in production.",activeloop Scalability and Performance in Deep Lake 3.7.1,https://activeloop.ai/resources/deep-lake-hnsw-index-rapidly-query-35-m-vectors-save-80/,"Deep Lake's prior versions utilized a high-performance implementation of linear search for computing embedding similarity. While it is effective for smaller vector stores, this method is not suitable for vector stores exceeding 1 million embeddings. With the introduction of Deep Lake 3.7.1, we've added an advanced implementation of Approximate Nearest Neighbor (ANN) search, supercharging search speeds to under one second for 35 million embeddings. For smaller databases under 100,000 embeddings, linear search remains the preferred method due to its maximum accuracy, while ANN search is recommended at larger scales.",activeloop Deep Lake's HNSW Implementation,https://activeloop.ai/resources/deep-lake-hnsw-index-rapidly-query-35-m-vectors-save-80/,"Hierarchical Navigable Small World (HNSW) graphs are among the best-performing and most reliable indexes for vector similarity search. Deep Lake has made the HNSW index even more powerful by adding enhancements such as intelligent memory utilization and multithreading during index creation. By distributing data in the Deep Lake Vector Store between object storage, attached storage (on-disk), and RAM, Deep Lake minimizes the usage of costly memory while maintaining high performance. This is a must-have for building RAG-based Large Language Model (LLM) applications at scale.",activeloop Conclusion,https://activeloop.ai/resources/deep-lake-hnsw-index-rapidly-query-35-m-vectors-save-80/,"As the world of Large Language Model (LLM) applications grows and matures, scalability without burning through your budget is paramount for taking your prototypes to production. Deep Lake stands out by delivering fast, sub-second vector search capabilities for datasets with up to 35 million embeddings, at a cost that's 80% more affordable than other vector databases on the market. Your search for a powerful, budget-friendly Vector Database solution ends with Deep Lake. Try out Deep Lake Index today.",activeloop "Top-tier Performance, Without the Hefty Price Tag, Unlike Other Vector Databases",https://activeloop.ai/resources/deep-lake-hnsw-index-rapidly-query-35-m-vectors-save-80/,"Most vector databases were originally designed for applications such as recommendation engines, which require real-time search and millions of requests per day. As a result, they are typically implemented in-memory, relying heavily on RAM for data storage. Since LLM responses may take several seconds, implementing an in-memory real-time vector database is unnecessary, and it significantly increases costs without meaningfully improving the end-to-end user experience. Thanks to Deep Lake's efficient memory architecture, we've slashed storage costs by over 80% compared to many leading competitors without sacrificing the performance of your LLM apps. 
Our lower costs combined with industry-leading ease of use offer customers a risk-free journey for scaling projects from prototyping to production.",activeloop "Use LangChain, OpenAI GPT, & Deep Lake to Chat with CSVs, PDFs, JSONs, GitHub Repos, URLs, & More",https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"We've previously explored chatting with PDFs or understanding GitHub repos with LangChain. Many apps are popping up here and there inspired by those use cases, but DataChad, created by our community member Gustav von Zitzewitz, takes it several steps further: it works both locally and in the cloud, and it allows chatting with multiple data sources of various types (PDFs, Excel sheets, etc.) at the same time. DataChad is an open-source project that allows users to ask questions about any data source by leveraging embeddings, Deep Lake as a vector database, large language models like GPT-3.5-turbo or GPT-4, and LangChain. The data source can be anything from a local file like a PDF or CSV to a website URL, a GitHub repository, or even the path to a directory, scanned recursively if the app is deployed locally. The app now supports Local Mode, where all data is processed only locally and no API calls are made. This is made possible by leveraging pre-trained open-source LLMs like GPT4All, and creating Deep Lake-powered embedding storage on the local disk instead of in the Deep Lake cloud. The app works by uploading any file or entering any path or URL (or pointing to the location of your files using Local Mode). Subsequently, the app detects and loads the data source into text documents, embeds the text documents using OpenAI embeddings, and then stores the embeddings as a vector dataset in Activeloop's Deep Lake Cloud. A LangChain chain is established, comprising an LLM and the embedding database index as a retriever. This chain serves as the context for answering user queries over any data they upload.",activeloop Why Do You Need a Chat With Any Data App?,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"DataChad is designed to serve as an indispensable tool for individuals who require swift and precise data querying from any source. Whether you're seeking a comprehensive understanding of a complete project or looking for swift answers from a single data source without manually sifting through the material (say, a Wikipedia article, codebase, or an academic paper you're cramming), with DataChad, users can ask natural language questions and get relevant answers in seconds without writing complex SQL queries or using other data querying tools. Finally, the app can be hosted and used from anywhere, like in the demo, or deployed locally to enable querying local directories. This makes it essential to be able to run these types of solutions locally, without the need to send companies like OpenAI your data (in that case, you'd need to use an open-source large language model).",activeloop Editorial Note on OpenAI Embeddings,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"Costs can become a factor for extensive OpenAI API usage. To have full transparency and control of this critical factor, DataChad will display the app's usage of tokens and total costs in $. 
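For readers curious how such a token-and-cost readout can be computed, here is a minimal sketch using tiktoken. The per-1K-token price below is an illustrative assumption (pricing changes over time), and this helper is not DataChad's actual implementation.

from typing import Tuple
import tiktoken

PRICE_PER_1K_TOKENS = 0.002  # assumed illustrative price for gpt-3.5-turbo; check current pricing

def estimate_usage(prompt: str, completion: str, model: str = 'gpt-3.5-turbo') -> Tuple[int, float]:
    encoder = tiktoken.encoding_for_model(model)
    n_tokens = len(encoder.encode(prompt)) + len(encoder.encode(completion))
    return n_tokens, n_tokens / 1000 * PRICE_PER_1K_TOKENS

tokens, usd = estimate_usage('Summarize the uploaded PDF section about pricing.', 'Here is a short summary...')
print(f'{tokens} tokens, ~${usd:.6f}')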
To get a feeling for the scale, even prompts that use the maximum of 4096 tokens still come in well below a single cent of total usage costs.",activeloop Vector Database,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"DataChad uses Deep Lake, the vector database for all AI data, to store the embeddings generated from the text documents. Vector databases are designed to store and search vectors efficiently and are optimized for large-scale datasets. Deep Lake stands out from various vector databases in its multi-modality (i.e., its ability to support multiple data types and store embedding metadata). It is highly relevant if you're looking to build an all-in-one chat-with-your-data app like DataChad.",activeloop OpenAI Embeddings,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"DataChad uses OpenAI Embeddings to convert text documents into vectors that can be indexed and searched efficiently. OpenAI's embeddings are instrumental in evaluating the semantic similarity between two or more text fragments or the relevance of extensive documents to a concise query. They are especially valuable for tasks like search or classification. OpenAI embeddings employ the cosine similarity method to calculate the similarity between documents and a question.",activeloop Large Language Models (LLMs),https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,DataChad uses large language models like GPT-3.5 Turbo to generate responses to user questions. LLMs are powerful models trained on massive amounts of text data that can generate natural language responses to a wide range of questions.,activeloop LangChain,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,DataChad uses LangChain to combine the embeddings and LLMs into a single retrieval chain that can be used to answer user questions. LangChain is a powerful framework for integrating natural language processing tools into a single pipeline. Read this ultimate LangChain guide if you want to understand the power of LangChain.,activeloop The Attention Mechanism of LLMs,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"DataChad taps into the attention mechanism offered by LLMs, such as GPT-3, using the OpenAI API. This attention mechanism enables the model to weigh the importance of different words or tokens in a text sequence, capturing contextual relationships and semantic nuances. By leveraging LLMs, DataChad benefits from their ability to generate rich and accurate representations of textual data.",activeloop "Factors to Consider as You Build a LangChain & Large Language Model-based app (k. arg, chunks, etc.)",https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"The DataChad project is built upon the fusion of two critical natural language processing (NLP) technologies: it leverages the attention mechanism of Large Language Models (LLMs) like GPT-4 through the OpenAI API and employs vector similarity for efficient embedding comparison when querying the vector database. This combination allows for robust analysis and retrieval of information from textual data. 
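As a tiny illustration of the vector-similarity comparison described above, the snippet below ranks made-up document embeddings against a query embedding by cosine similarity; real OpenAI embeddings would be 1536-dimensional rather than these toy 4-dimensional vectors.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for real ones produced by an embedding model.
query = np.array([0.1, 0.3, 0.7, 0.2])
documents = {
    'chunk_a': np.array([0.1, 0.25, 0.8, 0.1]),
    'chunk_b': np.array([0.9, 0.1, 0.05, 0.3]),
}
ranked = sorted(documents, key=lambda name: cosine_similarity(query, documents[name]), reverse=True)
print(ranked)  # the most semantically similar chunk comes first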
Let's delve into the details, focusing on the querying parameters of the vector database within DataChad.",activeloop Streamlit,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"DataChad is implemented as a Streamlit app, a framework for quickly building demo apps in Python. It takes away the pain of implementing a UI and hosting the app properly, and lets you focus on the backend work.",activeloop Vector Similarity for Embedding Comparison,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"When querying the vector database, DataChad employs vector similarity to compare document embeddings. This technique measures the geometric similarity between embeddings, allowing for the efficient retrieval of similar documents. Vector similarity provides a simple yet effective method for identifying related content in large-scale datasets.",activeloop chunk_size,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,chunk_size in LangChain-based apps determines the size at which the text is divided into smaller chunks before being embedded. This parameter ensures the efficient processing of large documents and controls the granularity of the resulting embeddings. The DataChad default is 1000.,activeloop Parameters for Querying the Vector Database and the LLM,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,DataChad's querying process involves several important parameters that influence the retrieval and analysis of document embeddings. What are those parameters?,activeloop fetch_k,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,fetch_k in LangChain-based apps specifies the number of documents to pull from the vector database. This parameter determines the scope of the search and influences the relevance of the retrieved documents. The DataChad default is 20.,activeloop max_tokens,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,The max_tokens parameter limits the documents returned from the vector store based on tokens before building the context to query the LLM. This parameter ensures that DataChad does not run into the LLM's prompt limit (4096 for gpt-3.5-turbo). The DataChad default is 3375.,activeloop temperature,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"LLM temperature controls the randomness of the LLM output. A temperature of 0 means the response is deterministic: it always returns the same completion (making it significantly less prone to hallucination). A temperature greater than zero results in increasing variation in the completion. The DataChad default is 0.7. By carefully tuning these parameters, DataChad optimizes the trade-off between computational efficiency and the quality of results obtained from both the vector database and LLM-based querying. 
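To show where these parameters typically plug in, here is a hedged sketch of a LangChain retrieval chain backed by Deep Lake, wired up with the defaults listed above. The import paths and argument names reflect the LangChain API of that period as best understood here and should be read as an approximation of how an app like DataChad uses them, not its exact source code; chunk_size applies earlier, when splitting documents before embedding.

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

# Defaults mirrored from the parameter descriptions above.
K, FETCH_K, MAX_TOKENS, TEMPERATURE = 10, 20, 3375, 0.7

db = DeepLake(dataset_path='./chat_with_data_store', embedding_function=OpenAIEmbeddings(), read_only=True)
retriever = db.as_retriever()
retriever.search_kwargs['k'] = K              # embeddings used to build the prompt context
retriever.search_kwargs['fetch_k'] = FETCH_K  # candidates pulled from the vector store

chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model_name='gpt-3.5-turbo', temperature=TEMPERATURE),
    retriever=retriever,
    max_tokens_limit=MAX_TOKENS,  # trims retrieved context so the prompt stays under the model's limit
)
result = chain({'question': 'What is this document about?', 'chat_history': []})
print(result['answer'])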
By ticking the Advanced Options checkbox in the app, experienced users can further modify these parameters.",activeloop Issue 1: Running into errors related to the prompt length,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"Solution: Decrease one or more of k, chunk_size, and max_tokens.",activeloop How to Solve Most Common Issues When Building With LangChain,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"The previous section discussed the importance of selecting appropriate parameters for querying the vector database and the language model within the DataChad project. However, even though the default values were carefully chosen and tested, you may still encounter challenges or want to improve the overall query experience. In this section, we will address some common issues you may face as you build your app and provide suggested solutions to overcome them.",activeloop Issue 3: The answers are not relevant enough,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"Solution: Increase chunk_size, or if this leads to running into issue 1, increase k and fetch_k while decreasing chunk_size.",activeloop k,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,The k in LangChain-based apps is the number of most similar embeddings selected to build the context for the LLM prompt in the LangChain chain. This parameter affects the contextual understanding and response generation of the LLM when querying the OpenAI API. The DataChad default is 10.,activeloop Practical Guide: Building an All-In-One Chat with Anything App,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"The code is split into three parts. First, we build out the Streamlit app defined in app.py. The second part, utils.py, contains all processing functionality and API calls. 
Final part is constants.py, where all project-specific paths, names, and descriptions are defined.",activeloop app.py,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"1import streamlit as st 2from streamlit_chat import message 3 4from constants import ( 5 ACTIVELOOP_HELP, 6 APP_NAME, 7 AUTHENTICATION_HELP, 8 CHUNK_SIZE, 9 DEFAULT_DATA_SOURCE, 10 ENABLE_ADVANCED_OPTIONS, 11 FETCH_K, 12 MAX_TOKENS, 13 OPENAI_HELP, 14 PAGE_ICON, 15 REPO_URL, 16 TEMPERATURE, 17 USAGE_HELP, 18 K, 19) 20from utils import ( 21 advanced_options_form, 22 authenticate, 23 delete_uploaded_file, 24 generate_response, 25 logger, 26 save_uploaded_file, 27 update_chain, 28) 29 30# Page options and header 31st.set_option('client.showErrorDetails', True) 32st.set_page_config( 33 page_title=APP_NAME, page_icon=PAGE_ICON, initial_sidebar_state='expanded' 34) 35st.markdown( 36 f'<h1 style='text-align: center;'>{APP_NAME} {PAGE_ICON} <br> I know all about your data!</h1>', 37 unsafe_allow_html=True, 38) 39 40# Initialise session state variables 41# Chat and Data Source 42if 'past' not in st.session_state: 43 st.session_state['past'] = [] 44if 'usage' not in st.session_state: 45 st.session_state['usage'] = {} 46if 'chat_history' not in st.session_state: 47 st.session_state['chat_history'] = [] 48if 'generated' not in st.session_state: 49 st.session_state['generated'] = [] 50if 'data_source' not in st.session_state: 51 st.session_state['data_source'] = DEFAULT_DATA_SOURCE 52if 'uploaded_file' not in st.session_state: 53 st.session_state['uploaded_file'] = None 54# Authentication and Credentials 55if 'auth_ok' not in st.session_state: 56 st.session_state['auth_ok'] = False 57if 'openai_api_key' not in st.session_state: 58 st.session_state['openai_api_key'] = None 59if 'activeloop_token' not in st.session_state: 60 st.session_state['activeloop_token'] = None 61if 'activeloop_org_name' not in st.session_state: 62 st.session_state['activeloop_org_name'] = None 63# Advanced Options 64if 'k' not in st.session_state: 65 st.session_state['k'] = K 66if 'fetch_k' not in st.session_state: 67 st.session_state['fetch_k'] = FETCH_K 68if 'chunk_size' not in st.session_state: 69 st.session_state['chunk_size'] = CHUNK_SIZE 70if 'temperature' not in st.session_state: 71 st.session_state['temperature'] = TEMPERATURE 72if 'max_tokens' not in st.session_state: 73 st.session_state['max_tokens'] = MAX_TOKENS 74 75# Sidebar with Authentication 76# Only start App if authentication is OK 77with st.sidebar: 78 st.title('Authentication', help=AUTHENTICATION_HELP) 79 with st.form('authentication'): 80 openai_api_key = st.text_input( 81 'OpenAI API Key', 82 type='password', 83 help=OPENAI_HELP, 84 placeholder='This field is mandatory', 85 ) 86 activeloop_token = st.text_input( 87 'ActiveLoop Token', 88 type='password', 89 help=ACTIVELOOP_HELP, 90 placeholder='Optional, using ours if empty', 91 ) 92 activeloop_org_name = st.text_input( 93 'ActiveLoop Organisation Name', 94 type='password', 95 help=ACTIVELOOP_HELP, 96 placeholder='Optional, using ours if empty', 97 ) 98 submitted = st.form_submit_button('Submit') 99 if submitted: 100 authenticate(openai_api_key, activeloop_token, activeloop_org_name) 101 102 st.info(f'Learn how it works [here]({REPO_URL})') 103 if not st.session_state['auth_ok']: 104 st.stop() 105 106 # Clear button to reset all chat communication 107 clear_button = st.button('Clear Conversation', key='clear') 108 109 # Advanced Options 110 if ENABLE_ADVANCED_OPTIONS: 111 
advanced_options_form() 112 113# the chain can only be initialized after authentication is OK 114if 'chain' not in st.session_state: 115 update_chain() 116 117if clear_button: 118 # resets all chat history related caches 119 st.session_state['past'] = [] 120 st.session_state['generated'] = [] 121 st.session_state['chat_history'] = [] 122 123# file upload and data source inputs 124uploaded_file = st.file_uploader('Upload a file') 125data_source = st.text_input( 126 'Enter any data source', 127 placeholder='Any path or url pointing to a file or directory of files', 128) 129 130# generate new chain for new data source / uploaded file 131# make sure to do this only once per input / on change 132if data_source and data_source != st.session_state['data_source']: 133 logger.info(f'Data source provided: '{data_source}'') 134 st.session_state['data_source'] = data_source 135 update_chain() 136 137if uploaded_file and uploaded_file != st.session_state['uploaded_file']: 138 logger.info(f'Uploaded file: '{uploaded_file.name}'') 139 st.session_state['uploaded_file'] = uploaded_file 140 data_source = save_uploaded_file(uploaded_file) 141 st.session_state['data_source'] = data_source 142 update_chain() 143 delete_uploaded_file(uploaded_file) 144 145# container for chat history 146response_container = st.container() 147# container for text box 148container = st.container() 149 150# As streamlit reruns the whole script on each change 151# it is necessary to repopulate the chat containers 152with container: 153 with st.form(key='prompt_input', clear_on_submit=True): 154 user_input = st.text_area('You:', key='input', height=100) 155 submit_button = st.form_submit_button(label='Send') 156 157 if submit_button and user_input: 158 output = generate_response(user_input) 159 st.session_state['past'].append(user_input) 160 st.session_state['generated'].append(output) 161 162if st.session_state['generated']: 163 with response_container: 164 for i in range(len(st.session_state['generated'])): 165 message(st.session_state['past'][i], is_user=True, key=str(i) + '_user') 166 message(st.session_state['generated'][i], key=str(i)) 167 168# Usage sidebar with total used tokens and costs 169# We put this at the end to be able to show usage starting with the first response 170with st.sidebar: 171 if st.session_state['usage']: 172 st.divider() 173 st.title('Usage', help=USAGE_HELP) 174 col1, col2 = st.columns(2) 175 col1.metric('Total Tokens', st.session_state['usage']['total_tokens']) 176 col2.metric('Total Costs in $', st.session_state['usage']['total_cost']) 177 178Copy ",activeloop utils.py,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"Consider contributing: {REPO_URL}!' 
194 error_msg = f'Failed to load {st.session_state['data_source']} with Error:\n{e}' 195 st.error(error_msg, icon=PAGE_ICON) 196 logger.info(error_msg) 197 st.stop() 198 199def load_git(data_source: str, chunk_size: int = CHUNK_SIZE) -> List[Document]: 200 # We need to try both common main branches 201 # Thank you github for the 'master' to 'main' switch 202 repo_name = data_source.split('/')[-1].split('.')[0] 203 repo_path = str(DATA_PATH / repo_name) 204 text_splitter = RecursiveCharacterTextSplitter( 205 chunk_size=chunk_size, chunk_overlap=0 206 ) 207 branches = ['main', 'master'] 208 for branch in branches: 209 if os.path.exists(repo_path): 210 data_source = None 211 try: 212 docs = GitLoader(repo_path, data_source, branch).load_and_split( 213 text_splitter 214 ) 215 break 216 except Exception as e: 217 logger.info(f'Error loading git: {e}') 218 if os.path.exists(repo_path): 219 # cleanup repo afterwards 220 shutil.rmtree(repo_path) 221 try: 222 return docs 223 except Exception as e: 224 handle_load_error() 225 226def load_any_data_source( 227 data_source: str, chunk_size: int = CHUNK_SIZE 228) -> List[Document]: 229 # Ugly thing that decides how to load data 230 # It aint much, but it's honest work 231 is_text = data_source.endswith('.txt') 232 is_web = data_source.startswith('http') 233 is_pdf = data_source.endswith('.pdf') 234 is_csv = data_source.endswith('csv') 235 is_html = data_source.endswith('.html') 236 is_git = data_source.endswith('.git') 237 is_notebook = data_source.endswith('.ipynb') 238 is_doc = data_source.endswith('.doc') 239 is_py = data_source.endswith('.py') 240 is_dir = os.path.isdir(data_source) 241 is_file = os.path.isfile(data_source) 242 243 loader = None 244 if is_dir: 245 loader = DirectoryLoader(data_source, recursive=True, silent_errors=True) 246 elif is_git: 247 return load_git(data_source, chunk_size) 248 elif is_web: 249 if is_pdf: 250 loader = OnlinePDFLoader(data_source) 251 else: 252 loader = WebBaseLoader(data_source) 253 elif is_file: 254 if is_text: 255 loader = TextLoader(data_source) 256 elif is_notebook: 257 loader = NotebookLoader(data_source) 258 elif is_pdf: 259 loader = UnstructuredPDFLoader(data_source) 260 elif is_html: 261 loader = UnstructuredHTMLLoader(data_source) 262 elif is_doc: 263 loader = UnstructuredWordDocumentLoader(data_source) 264 elif is_csv: 265 loader = CSVLoader(data_source, encoding='utf-8') 266 elif is_py: 267 loader = PythonLoader(data_source) 268 else: 269 loader = UnstructuredFileLoader(data_source) 270 try: 271 # Chunk size is a major trade-off parameter to control result accuracy over computaion 272 text_splitter = RecursiveCharacterTextSplitter( 273 chunk_size=chunk_size, chunk_overlap=0 274 ) 275 docs = loader.load_and_split(text_splitter) 276 logger.info(f'Loaded: {len(docs)} document chucks') 277 return docs 278 except Exception as e: 279 handle_load_error(e if loader else None) 280 281def clean_data_source_string(data_source_string: str) -> str: 282 # replace all non-word characters with dashes 283 # to get a string that can be used to create a new dataset 284 dashed_string = re.sub(r'\W+', '-', data_source_string) 285 cleaned_string = re.sub(r'--+', '- ', dashed_string).strip('-') 286 return cleaned_string 287 288def setup_vector_store(data_source: str, chunk_size: int = CHUNK_SIZE) -> VectorStore: 289 # either load existing vector store or upload a new one to the hub 290 embeddings = OpenAIEmbeddings( 291 disallowed_special=(), openai_api_key=st.session_state['openai_api_key'] 292 ) 293 data_source_name = 
clean_data_source_string(data_source) 294 dataset_path = f'hub://{st.session_state['activeloop_org_name']}/{data_source_name}-{chunk_size}' 295 if deeplake.exists(dataset_path, token=st.session_state['activeloop_token']): 296 with st.spinner('Loading vector store...'): 297 logger.info(f'Dataset '{dataset_path}' exists -> loading') 298 vector_store = DeepLake( 299 dataset_path=dataset_path, 300 read_only=True, 301 embedding_function=embeddings, 302 token=st.session_state['activeloop_token'], 303 ) 304 else: 305 with st.spinner('Reading, embedding and uploading data to hub...'): 306 logger.info(f'Dataset '{dataset_path}' does not exist -> uploading') 307 docs = load_any_data_source(data_source, chunk_size) 308 vector_store = DeepLake.from_documents( 309 docs, 310 embeddings, 311 dataset_path=dataset_path, 312 token=st.session_state['activeloop_token'], 313 ) 314 return vector_store 315 316def build_chain( 317 data_source: str, 318 k: int = K, 319 fetch_k: int = FETCH_K, 320 chunk_size: int = CHUNK_SIZE, 321 temperature: float = TEMPERATURE, 322 max_tokens: int = MAX_TOKENS, 323) -> ConversationalRetrievalChain: 324 # create the langchain that will be called to generate responses 325 vector_store = setup_vector_store(data_source, chunk_size) 326 retriever = vector_store.as_retriever() 327 # Search params 'fetch_k' and 'k' define how many documents are pulled from the hub 328 # and selected after the document matching to build the context 329 # that is fed to the model together with your prompt 330 search_kwargs = { 331 'maximal_marginal_relevance': True, 332 'distance_metric': 'cos', 333 'fetch_k': fetch_k, 334 'k': k, 335 } 336 retriever.search_kwargs.update(search_kwargs) 337 model = ChatOpenAI( 338 model_name=MODEL, 339 temperature=temperature, 340 openai_api_key=st.session_state['openai_api_key'], 341 ) 342 chain = ConversationalRetrievalChain.from_llm( 343 model, 344 retriever=retriever, 345 chain_type='stuff', 346 verbose=True, 347 # we limit the maximum number of used tokens 348 # to prevent running into the models token limit of 4096 349 max_tokens_limit=max_tokens, 350 ) 351 logger.info(f'Data source '{data_source}' is ready to go!') 352 return chain 353 354def update_chain() -> None: 355 # Build chain with parameters from session state and store it back 356 # Also delete chat history to not confuse the bot with old context 357 try: 358 st.session_state['chain'] = build_chain( 359 data_source=st.session_state['data_source'], 360 k=st.session_state['k'], 361 fetch_k=st.session_state['fetch_k'], 362 chunk_size=st.session_state['chunk_size'], 363 temperature=st.session_state['temperature'], 364 max_tokens=st.session_state['max_tokens'], 365 ) 366 st.session_state['chat_history'] = [] 367 except Exception as e: 368 msg = f'Failed to build chain for data source {st.session_state['data_source']} with error: {e}' 369 logger.error(msg) 370 st.error(msg, icon=PAGE_ICON) 371 372def update_usage(cb: OpenAICallbackHandler) -> None: 373 # Accumulate API call usage via callbacks 374 logger.info(f'Usage: {cb}') 375 callback_properties = [ 376 'total_tokens', 377 'prompt_tokens', 378 'completion_tokens', 379 'total_cost', 380 ] 381 for prop in callback_properties: 382 value = getattr(cb, prop, 0) 383 st.session_state['usage'].setdefault(prop, 0) 384 st.session_state['usage'][prop] += value 385 386def generate_response(prompt: str) -> str: 387 # call the chain to generate responses and add them to the chat history 388 with st.spinner('Generating response'), get_openai_callback() as cb: 389 response = 
st.session_state['chain']( 390 {'question': prompt, 'chat_history': st.session_state['chat_history']} 391 ) 392 update_usage(cb) 393 logger.info(f'Response: '{response}'') 394 st.session_state['chat_history'].append((prompt, response['answer'])) 395 return response['answer'] 396 397Copy ",activeloop Issue 2: The answers contain hallucinations or do not match the true data content,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,Solution: Decrease the temperature. Set it to 0 for the most conservative answers that are unlikely to deviate from the sources.,activeloop utils.py,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"1import logging 2import os 3import re 4import shutil 5import sys 6from typing import List 7 8import deeplake 9import openai 10import streamlit as st 11from dotenv import load_dotenv 12from langchain.callbacks import OpenAICallbackHandler, get_openai_callback 13from langchain.chains import ConversationalRetrievalChain 14from langchain.chat_models import ChatOpenAI 15from langchain.document_loaders import ( 16 CSVLoader, 17 DirectoryLoader, 18 GitLoader, 19 NotebookLoader, 20 OnlinePDFLoader, 21 PythonLoader, 22 TextLoader, 23 UnstructuredFileLoader, 24 UnstructuredHTMLLoader, 25 UnstructuredPDFLoader, 26 UnstructuredWordDocumentLoader, 27 WebBaseLoader, 28) 29from langchain.embeddings.openai import OpenAIEmbeddings 30from langchain.schema import Document 31from langchain.text_splitter import RecursiveCharacterTextSplitter 32from langchain.vectorstores import DeepLake, VectorStore 33from streamlit.runtime.uploaded_file_manager import UploadedFile 34 35from constants import ( 36 APP_NAME, 37 CHUNK_SIZE, 38 DATA_PATH, 39 FETCH_K, 40 MAX_TOKENS, 41 MODEL, 42 PAGE_ICON, 43 REPO_URL, 44 TEMPERATURE, 45 K, 46) 47 48# loads environment variables 49load_dotenv() 50 51logger = logging.getLogger(APP_NAME) 52 53def configure_logger(debug: int = 0) -> None: 54 # boilerplate code to enable logging in the streamlit app console 55 log_level = logging.DEBUG if debug == 1 else logging.INFO 56 logger.setLevel(log_level) 57 58 stream_handler = logging.StreamHandler(stream=sys.stdout) 59 stream_handler.setLevel(log_level) 60 61 formatter = logging.Formatter('%(message)s') 62 63 stream_handler.setFormatter(formatter) 64 65 logger.addHandler(stream_handler) 66 logger.propagate = False 67 68configure_logger(0) 69 70def authenticate( 71 openai_api_key: str, activeloop_token: str, activeloop_org_name: str 72) -> None: 73 # Validate all credentials are set and correct 74 # Check for env variables to enable local dev and deployments with shared credentials 75 openai_api_key = ( 76 openai_api_key 77 or os.environ.get('OPENAI_API_KEY') 78 or st.secrets.get('OPENAI_API_KEY') 79 ) 80 activeloop_token = ( 81 activeloop_token 82 or os.environ.get('ACTIVELOOP_TOKEN') 83 or st.secrets.get('ACTIVELOOP_TOKEN') 84 ) 85 activeloop_org_name = ( 86 activeloop_org_name 87 or os.environ.get('ACTIVELOOP_ORG_NAME') 88 or st.secrets.get('ACTIVELOOP_ORG_NAME') 89 ) 90 if not (openai_api_key and activeloop_token and activeloop_org_name): 91 st.session_state['auth_ok'] = False 92 st.error('Credentials neither set nor stored', icon=PAGE_ICON) 93 return 94 try: 95 # Try to access openai and deeplake 96 with st.spinner('Authentifying...'): 97 openai.api_key = openai_api_key 98 openai.Model.list() 99 deeplake.exists( 100 f'hub://{activeloop_org_name}/DataChad-Authentication-Check', 101 token=activeloop_token, 102 ) 103 
except Exception as e: 104 logger.error(f'Authentication failed with {e}') 105 st.session_state['auth_ok'] = False 106 st.error('Authentication failed', icon=PAGE_ICON) 107 return 108 # store credentials in the session state 109 st.session_state['auth_ok'] = True 110 st.session_state['openai_api_key'] = openai_api_key 111 st.session_state['activeloop_token'] = activeloop_token 112 st.session_state['activeloop_org_name'] = activeloop_org_name 113 logger.info('Authentification successful!') 114 115def advanced_options_form() -> None: 116 # Input Form that takes advanced options and rebuilds chain with them 117 advanced_options = st.checkbox( 118 'Advanced Options', help='Caution! This may break things!' 119 ) 120 if advanced_options: 121 with st.form('advanced_options'): 122 temperature = st.slider( 123 'temperature', 124 min_value=0.0, 125 max_value=1.0, 126 value=TEMPERATURE, 127 help='Controls the randomness of the language model output', 128 ) 129 col1, col2 = st.columns(2) 130 fetch_k = col1.number_input( 131 'k_fetch', 132 min_value=1, 133 max_value=1000, 134 value=FETCH_K, 135 help='The number of documents to pull from the vector database', 136 ) 137 k = col2.number_input( 138 'k', 139 min_value=1, 140 max_value=100, 141 value=K, 142 help='The number of most similar documents to build the context from', 143 ) 144 chunk_size = col1.number_input( 145 'chunk_size', 146 min_value=1, 147 max_value=100000, 148 value=CHUNK_SIZE, 149 help=( 150 'The size at which the text is divided into smaller chunks ' 151 'before being embedded.\n\nChanging this parameter makes re-embedding ' 152 'and re-uploading the data to the database necessary ' 153 ), 154 ) 155 max_tokens = col2.number_input( 156 'max_tokens', 157 min_value=1, 158 max_value=4069, 159 value=MAX_TOKENS, 160 help='Limits the documents returned from database based on number of tokens', 161 ) 162 applied = st.form_submit_button('Apply') 163 if applied: 164 st.session_state['k'] = k 165 st.session_state['fetch_k'] = fetch_k 166 st.session_state['chunk_size'] = chunk_size 167 st.session_state['temperature'] = temperature 168 st.session_state['max_tokens'] = max_tokens 169 update_chain() 170 171def save_uploaded_file(uploaded_file: UploadedFile) -> str: 172 # streamlit uploaded files need to be stored locally 173 # before embedded and uploaded to the hub 174 if not os.path.exists(DATA_PATH): 175 os.makedirs(DATA_PATH) 176 file_path = str(DATA_PATH / uploaded_file.name) 177 uploaded_file.seek(0) 178 file_bytes = uploaded_file.read() 179 file = open(file_path, 'wb') 180 file.write(file_bytes) 181 file.close() 182 logger.info(f'Saved: {file_path}') 183 return file_path 184 185def delete_uploaded_file(uploaded_file: UploadedFile) -> None: 186 # cleanup locally stored files 187 file_path = DATA_PATH / uploaded_file.name 188 if os.path.exists(DATA_PATH): 189 os.remove(file_path) 190 logger.info(f'Removed: {file_path}') 191 192def handle_load_error(e: str = None) -> None: 193 e = e or 'No Loader found for your data source ",activeloop constants.py,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"1from pathlib import Path 2 3APP_NAME = 'DataChad' 4MODEL = 'gpt-3.5-turbo' 5PAGE_ICON = '🤖' 6 7K = 10 8FETCH_K = 20 9CHUNK_SIZE = 1000 10TEMPERATURE = 0.7 11MAX_TOKENS = 3357 12ENABLE_ADVANCED_OPTIONS = True 13 14DATA_PATH = Path.cwd() / 'data' 15DEFAULT_DATA_SOURCE = 'git@github.com:gustavz/DataChad.git' 16 17REPO_URL = 'https://github.com/gustavz/DataChad' 18 19AUTHENTICATION_HELP = f''' 20Your 
credentials are only stored in your session state.\n 21The keys are neither exposed nor made visible or stored permanently in any way.\n 22Feel free to check out [the code base]({REPO_URL}) to validate how things work. 23''' 24 25USAGE_HELP = f''' 26These are the accumulated OpenAI API usage metrics.\n 27The app uses '{MODEL}' for chat and 'text-embedding-ada-002' for embeddings.\n 28Learn more about OpenAI's pricing [here](https://openai.com/pricing#language-models) 29''' 30 31OPENAI_HELP = ''' 32You can sign-up for OpenAI's API [here](https://openai.com/blog/openai-api).\n 33Once you are logged in, you find the API keys [here](https://platform.openai.com/account/api-keys) 34''' 35 36ACTIVELOOP_HELP = ''' 37You can create an Activeloop account (including 200GB of free database storage) [here](https://www.activeloop.ai/).\n 38Once you are logged in, you find the API token [here](https://app.activeloop.ai/profile/gustavz/apitoken).\n 39The organisation name is your username, or you can create new organisations [here](https://app.activeloop.ai/organization/new/create) 40''' 41 42Copy",activeloop "Concluding Remarks: Build your Chat with Data Tool, or Use DataChad",https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,"DataChad elevates conversing with CSVs, PDFs, JSONs, GitHub repositories, local paths or web URLs to a completely new level. If you've read this far, consider giving DataChad a try.By harnessing the power of embeddings, Deep Lake's vector database for all AI data, large language models (LLMs), and LangChain, DataChad enables users to query any data source easily. DataChad seamlessly transforms any data into text documents, embeds them using OpenAI embeddings, and stores the embeddings as a vector dataset in Activeloop's Deep Lake Cloud. And creates a LangChain, which serves as the context for generating precise responses to user queries. Whether the task at hand is understanding a complex project or seeking quick answers from a single data source, DataChad allows users to pose natural language questions and receive relevant answers in seconds.",activeloop How can I deploy a ChatGPT for my Data fully locally?,https://activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/,Local Mode,activeloop What is SalesCopilot?,https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"SalesCopilot is a sales call assistant that transcribes audio in real-time and connects the user to a chatbot with full knowledge of the transcript, powered by GPT-3.5 or GPT-4. This live chat allows for highly relevant assistance to be provided within seconds upon the user's request.Additionally, SalesCopilot is able to detect potential objections from the customer (e.g. 'It's too expensive' or 'The product doesn't work for us') and provide well-informed recommendations to the salesperson on how best to handle them. Relying solely on the LLM to come up with these recommendations has some flaws - ChatGPT isn't fine tuned to be a great salesperson, and it may give recommendations that don't align with your personal approach. 
Integrating it with Deep Lake and a custom knowledge base is the perfect solution - let's dive into how it works!",activeloop What Did and Didn't Work while Building an Open-Source Conversational Intelligence Assistant,https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"Before we look at the exact solution we eventually decided on, let's take a glance at the approaches that didn't work, and what we learned from them:",activeloop Didn't Work: Naively Splitting the Custom Knowledge Base,https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"To leverage our custom knowledge base, our initial approach was to split the knowledge base into chunks of equal length using LangChain's built-in text splitters. Then we took the detected customer objection, embedded it, and searched the database for those chuhnks that were most relevant. This allowed us to pass relevant excerpts from our knowledge base to the LLM every time we wanted a response, which improved the quality of the responses and made the prompts to the LLM shorter and more efficient. However, our 'naive' approach to splitting the custom knowledge base had a major flaw.To illustrate the issue we faced, let's look at an example. Say we have the following text:1Objection: 'There's no money.' 2It could be that your prospect's business simply isn't big enough or generating enough cash right now to afford a product like yours. Track their growth and see how you can help your prospect get to a place where your offering would fit into their business. 3 4Objection: 'We don't have any budget left this year.' 5A variation of the 'no money' objection, what your prospect's telling you here is that they're having cash flow issues. But if there's a pressing problem, it needs to get solved eventually. Either help your prospect secure a budget from executives to buy now or arrange a follow-up call for when they expect funding to return. 6 7Objection: 'We need to use that budget somewhere else.' 8Prospects sometimes try to earmark resources for other uses. It's your job to make your product/service a priority that deserves budget allocation now. Share case studies of similar companies that have saved money, increased efficiency, or had a massive ROI with you. 9CopyIf we naively split this text, we might end up with individual sections that look like this:1A variation of the 'no money' objection, what your prospect's telling you here is that they're having cash flow issues. But if there's a pressing problem, it needs to get solved eventually. Either help your prospect secure a budget from executives to buy now or arrange a follow-up call for when they expect funding to return. 2 3Objection: 'We need to use that budget somewhere else.' 4CopyHere, we see that the advice does not match the objection. When we try to retrieve the most relevant chunk for the objection 'We need to use that budget somewhere else', this will likely be our top result, which isn't what we want. When we pass it to the LLM, it might be confusing.What we really need to do is split the text in a more sophisticated way, that maintains semantic boundaries between each chunk. 
This will improve retrieval performance and keep the LLM responses higher quality.",activeloop Didn't Work: No Custom Knowledge Base,https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"This was the first solution we tried - instead of using a custom knowledge base, we could completely rely on the LLM.Unfortunately, we ran into some issues: GPT-4 is awesome, but way too slow: To get the highest quality responses without a custom knowledge base, using GPT-4 is the best choice. However, the time it takes for the API to return a response mean that by the time the user gets the advice they need, it's too late. This means we have to use GPT-3.5, which comes with a noticeable drop in quality. Response quality is inconsistent: Relying solely on the LLM, sometimes we get great responses that are exactly what we're looking for. However, without any additional info, guidelines, or domain-specific information, we also get bad responses that aren't very on-topic. Cramming information into the prompt is not token-efficient: Using the OpenAI API, cost is a consideration, as you pay for the amount of tokens in your prompt + the completion. To ground the LLM and keep the responses high-quality, we could fill the prompt with tons of relevant information, how we'd like it to respond in different situations, etc. This solution isn't ideal, because it means every time we query the LLM we have to pass all that information, and all those tokens, to the API. The costs can add up, and GPT-3.5 can get confused if you give it too much info at once. That didn't work - the main issue is that we need a way to efficiently ground the LLM's response. The next thing we tried was to use a custom knowledge base combined with a vector database to pass the LLM relevant info for each individual customer objection.",activeloop "Creating, Loading, and Querying Our Database for AI",https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"We're going to define a class that handles the database creation, database loading, and database querying.1import os 2import re 3from langchain.embeddings import OpenAIEmbeddings 4from langchain.vectorstores import DeepLake 5 6class DeepLakeLoader: 7 def __init__(self, source_data_path): 8 self.source_data_path = source_data_path 9 self.file_name = os.path.basename(source_data_path) # What we'll name our database 10 self.data = self.split_data() 11 if self.check_if_db_exists(): 12 self.db = self.load_db() 13 else: 14 self.db = self.create_db() 15CopyThere's a few things happening here. First, the data is being processed by a method called split_data:1def split_data(self): 2 ''' 3 Preprocess the data by splitting it into passages. 4 5 If using a different data source, this function will need to be modified. 6 7 Returns: 8 split_data (list): List of passages. 9 ''' 10 with open(self.source_data_path, 'r') as f: 11 content = f.read() 12 split_data = re.split(r'(?=\d+\. )', content) 13 if split_data[0] == '': 14 split_data.pop(0) 15 split_data = [entry for entry in split_data if len(entry) >= 30] 16 return split_data 17CopySince we know the structure of our knowledge base, we use this method to split it into individual entries, each representing an example of a customer objection. When we run our similarity search using the detected customer objection, this will improve the results, as outlined above.After preprocessing the data, we check if we've already created a database for this data. 
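The check_if_db_exists method referenced in __init__ isn't shown in the article; here is a minimal sketch of what it might look like, assuming the local deeplake/<file_name> dataset path that load_db and create_db below read and write:

```python
import os

def check_if_db_exists(self):
    # Sketch only (not the SalesCopilot source): a dataset created by
    # create_db below lives in the local 'deeplake/<file_name>' directory,
    # so its presence tells us whether the database already exists.
    return os.path.exists(f'deeplake/{self.file_name}')
```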
One of the great things about Deep Lake is that it provides us with persistent storage, so we only need to create the database once. If you restart the app, the database doesn't disappear!Creating and loading the database is super easy:1def load_db(self): 2 ''' 3 Load the database if it already exists. 4 5 Returns: 6 DeepLake: DeepLake object. 7 ''' 8 return DeepLake(dataset_path=f'deeplake/{self.file_name}', embedding_function=OpenAIEmbeddings(), read_only=True) 9 10def create_db(self): 11 ''' 12 Create the database if it does not already exist. 13 14 Databases are stored in the deeplake directory. 15 16 Returns: 17 DeepLake: DeepLake object. 18 ''' 19 return DeepLake.from_texts(self.data, OpenAIEmbeddings(), dataset_path=f'deeplake/{self.file_name}') 20CopyJust like that, our knowledge base becomes a vector database that we can now query.1def query_db(self, query): 2 ''' 3 Query the database for passages that are similar to the query. 4 5 Args: 6 query (str): Query string. 7 8 Returns: 9 content (list): List of passages that are similar to the query. 10 ''' 11 results = self.db.similarity_search(query, k=3) 12 content = [] 13 for result in results: 14 content.append(result.page_content) 15 return content 16CopyWe don't want the metadata to be passed to the LLM, so we take the results of our similarity search and pull just the content from them. And that's it! We now have our custom knowledge base stored in a Deep Lake vector database and ready to be queried!",activeloop Integrating SalesCopilot with Deep Lake,https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"By using Deep Lake as a vector database, we can quickly and easily retrieve only the most relevant info to provide to the LLM. We can also persist the vector database, so we don't have to re-create it every time we load the app. The knowledge base we're using here is this list of common customer objections. Before we get into the code, here's a rough overview of how it works:First, we take our knowledge base and embed it, storing the embeddings in a Deep Lake vector database. Then, when we detect an objection in the transcript, we embed the objection and use it to search our database, retrieving the most similar guidelines. We then pass those guidelines along with the objection to the LLM and send the result to the user.",activeloop Did Work: Intelligent Splitting,https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"In our example text, there is a set structure to each individual objection and its recommended response. Rather than split the text based on size, why don't we split the text based on its structure? We want each chunk to begin with the objection, and end before the 'Objection' of the next chunk. Here's how we could do it:1text = ''' 2Objection: 'There's no money.' 3It could be that your prospect's business simply isn't big enough or generating enough cash right now to afford a product like yours. Track their growth and see how you can help your prospect get to a place where your offering would fit into their business. 4 5Objection: 'We don't have any budget left this year.' 6A variation of the 'no money' objection, what your prospect's telling you here is that they're having cash flow issues. But if there's a pressing problem, it needs to get solved eventually. Either help your prospect secure a budget from executives to buy now or arrange a follow-up call for when they expect funding to return. 
7 8Objection: 'We need to use that budget somewhere else.' 9Prospects sometimes try to earmark resources for other uses. It's your job to make your product/service a priority that deserves budget allocation now. Share case studies of similar companies that have saved money, increased efficiency, or had a massive ROI with you. 10''' 11 12# Split the text into a list using the keyword 'Objection: ' 13objections_list = text.split('Objection: ')[1:] # We ignore the first split as it is empty 14 15# Now, prepend 'Objection: ' to each item as splitting removed it 16objections_list = ['Objection: ' + objection for objection in objections_list] 17CopyThis gave us the best results. Nailing the way we split and embed our knowledge base means more relevant documents are retrieved and the LLM gets the best possible context to generate a response from. Now let's see how we integrated this solution with Deep Lake and SalesCopilot!",activeloop Leveraging Custom Knowledge Bases with Deep Lake,https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"Integrating SalesCopilot with Deep Lake allows for a significant enhancement of its capabilities, with immediate and relevant responses based on a custom knowledge base. The beauty of this solution is its adaptability. You can curate your knowledge base according to your own unique sales techniques and customer scenarios, ensuring SalesCopilot's responses are perfectly suited for your situation.An efficient vector storage solution is essential to working with large knowledge bases and connecting them to LLM's, allowing the LLM to offer knowledgeable, situation-specific advice. On top of that, Deep Lake's persistent storage means we only create the database once, which saves computational resources and time.In conclusion, the integration of SalesCopilot with Deep Lake creates a powerful tool that combines the speed and intelligence of LLM's with the rapid, precise information retrieval of a vector database. This hybrid system offers a highly adaptable, efficient, and effective solution to handling customer objections. The efficiency and simplicity Deep Lake brings to applications like this alongside its seamless integration make it a top choice for vector storage solutions.",activeloop Connecting Our Database to GPT-4,https://activeloop.ai/resources/conversation-intelligence-gong-io-open-source-alternative-ai-sales-assistant/,"Now, all we need to do is connect our LLM to the database. First, we need to create a DeepLakeLoader instance with the path to our data.1db = DeepLakeLoader('data/salestesting.txt') 2CopyNext, we take the detected objection and use it to query the database:1results = db.query_db(detected_objection) 2CopyTo have our LLM generate a message based off these results and the objection, we use LangChain. In this example, we use a placeholder for the prompt - if you want to check out the prompts used in SalesCopilot, check out the prompts.py file.1from langchain.chat_models import ChatOpenAI 2from langchain.schema import SystemMessage, HumanMessage, AIMessage 3 4chat = ChatOpenAI() 5system_message = SystemMessage(content=objection_prompt) 6human_message = HumanMessage(content=f'Customer objection: {detected_objection} | Relevant guidelines: {results}') 7 8response = chat([system_message, human_message]) 9CopyTo print the response:1print(response.content) 2CopyAnd we're done! 
In just a few lines of code, we've got our response from the LLM, informed by our own custom knowledge base.If you want to check out the full code for SalesCopilot, click here to visit the GitHub repo.",activeloop "Announcing Gen AI 360: Activeloop Collaborates with Towards AI to Launch 3 Free Skill Certification Courses on LangChain, Vector Databases, & Foundational Models",https://activeloop.ai/resources/announcing-gen-ai-360-foundational-model-certification-in-collaboration-with-intel-towards-ai/,"Mountain View, CA. June 20 2023 - Activeloop is proud to announce the launch of industry-pioneering educational certifications on Large Language Models, Deep Lake, the vector database for all AI data, and LangChain, a framework for developing applications powered by language models. The certification program, available at no cost, is designed as part of the Gen AI 360: Foundational Model Certification program in collaboration with Intel and Towards AI. The first installment of the course combines a comprehensive overview of foundational model theory and practical projects with LangChain and Deep Lake, equipping AI developers with the tools they need to apply Large Language Models across industries.'Reaching over 385,000 AI developers monthly, we're passionate about educating and upskilling engineers in this rapidly growing field. That is why we designed a practical course engineers can take to implement AI into their company processes or use Foundational Models to build entirely new products,' said Louie Peters, CEO of Towards AI.'Every company will be adding Foundational Models and vector databases to their day-to- day operations and the products they build very soon. Upon course completion, Deep Lake Certified developers can harness the full potential of Large Language Models and advanced AI technologies like Deep Lake and LangChain. Companies that want to achieve a competitive advantage should include these frameworks in their toolset. We're already seeing solutions powered by LangChain and Deep Lake rapidly gaining adoption,' said Davit Buniatyan, CEO of Activeloop.'I believe engineers and technology executives could greatly benefit from taking this course to stay at the forefront of AI,' said Arijit Bandyopadhyay, CTO – Enterprise Analytics & AI, Head of Strategy – Cloud & Enterprise, DCAI Group at Intel Corporation. 'Intel continues to be at the vanguard of AI and new technology adoption. This Foundational Model Certification could help better equip the next generation of innovators with what they need to succeed with Generative AI and Large Language Models. It could also contribute to the broader adoption of AI applications and solutions across various industries.'Activeloop's Deep Lake is designed to combine the best of both data lakes and vector databases, enabling companies to build their own data flywheels to fine-tune their Large Language Models on complex data beyond embeddings - such as text, image, video, or audio, and power AI products. LangChain, in its turn, seamlessly connects Deep Lake datasets with Foundational Models in diverse use cases - from understanding GitHub repositories to analyzing financial statements.Created with support from industry leaders, the certification will foster the adoption of cutting-edge AI technology within any engineering organization across industries. 
To learn more about the Deep Lake Foundational Model Certification and to sign up for free, visit the Gen AI 360 Certification page.",activeloop About Activeloop,https://activeloop.ai/resources/announcing-gen-ai-360-foundational-model-certification-in-collaboration-with-intel-towards-ai/,"Activeloop is building Deep Lake, a vector database for all AI data. With Deep Lake, ML teams can store and manage complex data, such as embeddings, text, images, audio, videos, annotations, or tabular data, in a deep learning-native format and stream them to ML frameworks in real time. Matterport, Yale University, MILA Quebec, & others use Deep Lake.",activeloop About Towards AI,https://activeloop.ai/resources/announcing-gen-ai-360-foundational-model-certification-in-collaboration-with-intel-towards-ai/,Towards AI simplifies AI learning & AI product building with 30-50 weekly blogs & tutorials for 385k followers and their 60K-member Discord community. Towards AI addresses AI students' and practitioners' challenges through community-driven practical courses and a Jobs board.,activeloop Didn't Work: Instructing GPT-4 To Generate Stable Diffusion Prompts,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"Initially, it seemed like it might be possible to send the LLM the text of our book and tell it to generate a prompt for each page. However, this didn't work for a few reasons: Stable Diffusion was released in 2022: While it might seem like Stable Diffusion is already 'old news', to GPT-3.5 and GPT-4 it's in the future. Look at GPT-4's response to the question, 'What is Stable Diffusion?': Teaching the LLM how to prompt is difficult: It's possible to instruct the LLM to generate prompts without the LLM knowing what Stable Diffusion is; giving it the exact format to generate a prompt with has decent results. Unfortunately, the LLM often injects plot details or non-visual content into the prompts, no matter how often you tell it not to. These details skew the relevance of the prompts and negatively impact the quality of the generated images.
If unknown, leave blank.', 18 }, 19 'key_elements': { 20 'type': 'string', 21 'description': 'The key visual elements of the passage, eg tall trees', 22 }, 23 'specific_details': { 24 'type': 'string', 25 'description': 'The specific visual details of the passage, eg moonlight', 26 } 27 }, 28 'required': ['setting', 'time_of_day', 'weather', 'key_elements', 'specific_details'] 29 } 30}] 31CopyWith this, we can send the chat model a page from our book, the function, and instructions to infer the details from the provided page. In return, we get structured data that we can use to form a great Stable Diffusion prompt!",activeloop "Meet FableForge, AI Picture Books Generator powered by OpenAI, LangChain, Stable Diffusion, & Deep Lake",https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"Imagine a world where children's picture books are created on-demand by children from a single prompt. With each generated image, the text and prompt pairs are stored for further finetuning if the child likes the story - to fit one human's imagination perfectly.This is the grand vision of FableForge.FableForge is an open-source app that generates children's picture books from a single prompt. First, GPT-3.5/4 is instructed to write a short children's book. Then, using the new function calling feature OpenAI just announced, the text from each book page is transformed into a prompt for Stable Diffusion. These prompts are sent to Replicate, corresponding images are generated, and all the elements are combined for a complete picture book. The matching images and prompts are stored in a Deep Lake vector database, allowing easy storing and visualizing of multimodal data (image and text pairs). Beyond that, the generated data can be streamed to machine learning frameworks in real time while training, to finetune our generative AI model. While the latter is beyond the scope of this example, we'd love to cover how it all works together.But first…",activeloop LangChain and OpenAI Function Calling,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"When we created FableForge, OpenAI announced the new function calling capabilities. Since then, LangChain - the open-source library we use to interact with OpenAI's Large Language Models - has added even better support for using functions. Our implementation of functions using LangChain is as follows: Define our function: First, we define our function, as we did above with get_visual_description_function. Give the chat model access to our function: Next, we call our chat model, including our function within the functions parameter, like so: 1 2response= self.chat([HumanMessage(content=f'{page}')],functions=get_visual_description_function) 3 4Copy Parse the JSON object: When the chat model uses our function, it provides the output as a JSON object. To convert the JSON object into a Python dictionary containing the function output, we can do the following: 1 2function_dict = json.loads(response.additional_kwargs['function_call']['arguments']) 3 4CopyIn the function, we defined earler, ‘setting' was one of the parameters. To access this, we can write:1setting = function_dict['setting'] 2CopyAnd we're done! 
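Putting the call, the JSON parsing, and the field access together, a small helper might look like the sketch below. It assumes chat and get_visual_description_function are defined as above; the helper name is ours, not FableForge's.

```python
import json
from langchain.schema import HumanMessage

def get_passage_visuals(chat, page):
    # Sketch of the full round trip: ask the chat model to fill in the
    # function arguments for one page, then parse the returned JSON.
    response = chat(
        [HumanMessage(content=page)],
        functions=get_visual_description_function,
    )
    function_dict = json.loads(
        response.additional_kwargs['function_call']['arguments']
    )
    # function_dict now holds 'setting', 'time_of_day', 'weather',
    # 'key_elements', and 'specific_details'.
    return function_dict
```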
We can follow the same steps for each of the other parameters to extract them.",activeloop What is OpenAI Function Calling?,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"On June 13th, OpenAI announced a huge update to the chat completions API - function calling! This means we can provide the chat model with a function, and the chat model will output a JSON object according to that function's parameters. Now, the chat models can interpret natural language input into a structured format suitable for external tools, APIs, or database queries. The chat models are designed to detect when a function needs to be called based on the user's input and can then respond with JSON that conforms to the described function's signature. In essence, function calling is a way to bridge the gap between unstructured language input and structured, actionable output that other systems, tools, or services can use.",activeloop Perfecting the Process: Using Deep Lake for Storage and Analysis,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"The final breakthrough in perfecting FableForge was using Deep Lake to store the generated images and text. With Deep Lake, we could store multiple modalities of data, such as images and text, in the cloud. The web-based UI provided by Deep Lake made it incredibly straightforward to display, analyze, and optimize the generated images and prompts, improving the quality of our picture book output. For future Stable Diffusion endeavors, we now have a decently-sized dataset showing us what prompts work, and what prompts don't!",activeloop Second Component: Combining Text and Images,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"Now that we have our text and images, we could open up MS Paint and copy-paste the text onto each corresponding image. That would be tedious and time-consuming; instead, let's do it programmatically. In pdf_gen_utils.py, we turn our ingredients into a proper book in these steps: Text Addition and Image Conversion: First, we take each image, resize it, and apply a fading mask to the bottom - a white space for us to place our text. We then add the text to the faded area, convert it into a PDF, and save it. Cover Generation: A book needs a cover that follows a different format than the rest of the pages. Instead of a fading mask, we take the cover image and place a white box over a portion for the title to be placed within. The other steps (resizing and saving as PDF) are the same as above. PDF Assembly: Once we have completed all the pages, we combine them into a single PDF and delete the files we no longer need.",activeloop Third Component: Saving to Deep Lake,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"Now that we have finalized our picture book, we want to store the images and prompts in Deep Lake. 
For this, we created a SaveToDeepLake class:1import deeplake 2 3class SaveToDeepLake: 4 def __init__(self, buildbook_instance, name=None, dataset_path=None): 5 self.dataset_path = dataset_path 6 try: 7 self.ds = deeplake.load(dataset_path, read_only=False) 8 self.loaded = True 9 except: 10 self.ds = deeplake.empty(dataset_path) 11 self.loaded = False 12 13 self.prompt_list = buildbook_instance.sd_prompts_list 14 self.images = buildbook_instance.source_files 15 16 def fill_dataset(self): 17 if not self.loaded: 18 self.ds.create_tensor('prompts', htype='text') 19 self.ds.create_tensor('images', htype='image', sample_compression='png') 20 for i, prompt in enumerate(self.prompt_list): 21 self.ds.append({'prompts': prompt, 'images': deeplake.read(self.images[i])}) 22 23CopyWhen initialized, the class first tries to load a Deep Lake dataset from the provided path. If the dataset doesn't exist, a new one is created.If the dataset already existed, we simply added the prompts and images. The images can be easily uploaded using deeplake.read(), as Deep Lake is built to handle multimodal data.If the dataset is empty, we must first create the tensors to store our data. In this case, we create a tensor ‘prompts' for our prompts and ‘images' for our images. Our images are in PNG format, so we set sample_compression to 'png'.Once uploaded, we can view them in the UI, as shown above.All code can be found in the deep_lake_utils.py file.",activeloop First Component: AI Book Generation,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"All code for this component can be found in the api_utils.py file. Text Generation: To generate the text for the children's book, we use LangChain and the ChatOpenAI chat model. 1def get_pages(self): 2 pages = self.chat([HumanMessage(content=f'{self.book_text_prompt} Topic: {self.input_text}')]).content 3 return pages 4 5Copyself.book_text_prompt is a simple prompt instructing the model to generate a children's story. We specify the number of pages inside the prompt and what format the text should come in. The full prompt can be found in the prompts.py file. Visual Prompts Generation: To produce the prompts we will use with Stable Diffusion, we use functions, as outlined above. First, we send the whole book to the model: 1 def get_prompts(self): 2 base_atmosphere = self.chat([HumanMessage(content=f'Generate a visual description of the overall lightning/atmosphere of this book using the function.' 3 f'{self.book_text}')], functions=get_lighting_and_atmosphere_function) 4 summary = self.chat([HumanMessage(content=f'Generate a concise summary of the setting and visual details of the book')]).content 5 6CopySince we want our book to have a consistent style throughout, we will take the contents of base_atmosphere and append it to each individual prompt we generate later on. To further ensure our visuals stay consistent, we generate a concise summary of the visuals of the book. This summary will be sent to the model later on, accompanying each individual page, to generate our Stable Diffusion prompts.1 def generate_prompt(page, base_dict): 2 prompt = self.chat([HumanMessage(content=f'General book info: {base_dict}. Passage: {page}. Infer details about passage if they are missing, ' 3 f'use function with inferred detailsm as if you were illustrating the passage.')], 4 functions=get_visual_description_function) 5CopyThis method will be called for each individual page of the book. 
We send the model the info we just gathered along with a page from the book, and give it access to the get_visual_description_function function. The output of this will be a JSON object containing all the elements we need to form our prompts!1 for i, prompt in enumerate(prompt_list): 2 entry = f'{prompt['setting']}, {prompt['time_of_day']}, {prompt['weather']}, {prompt['key_elements']}, {prompt['specific_details']}, ' \ 3 f'{base_dict['lighting']}, {base_dict['mood']}, {base_dict['color_palette']}, in the style of {style}' 4CopyHere, we combine everything. Now that we have our prompts, we can send them to Replicate's Stable Diffusion API and get our images. Once those are downloaded, we can move on to the next step.",activeloop Building FableForge,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"FableForge's open-sourced code is located here. FableForge consists of four main components: the generation of the text and images; the combining of the text and images to create the book; saving the images and prompts to the Deep Lake dataset; and the UI. Let's take a look at each component individually, starting with the generation of the text and images. Here's a high-level overview of the architecture:",activeloop Final Component: Streamlit UI,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"To create a quick and simple UI, we used Streamlit. The complete code can be found in main.py. Our UI has three main features: Prompt Format: In this text input box, we allow the user to specify the prompt to generate the book based on. This could be anything from a theme, a plot, a time, and so on. Book Generation: Once the user has input their prompt, they can click the Generate button to generate the book. The app will run through all of the steps outlined above until it completes the generation. The user will then have a button to download their finished book. Saving to Deep Lake: The user can click the Save to Deep Lake checkbox to save the prompts and images to their Deep Lake vector database. Once the book is generated, this will run in the background, filling the user's dataset with all their generated prompts and images. Streamlit is an excellent choice for quick prototyping and smaller projects like FableForge - the entire UI is less than 60 lines of code!",activeloop Conclusion: The Future of AI-Generated Picture Books with FableForge & Deep Lake,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"Developing FableForge was a perfect example of how new AI tools and methodologies can be combined to overcome hurdles. By leveraging the power of LangChain, OpenAI's function calling feature, Stable Diffusion's image generation abilities, and Deep Lake's multimodal dataset storage and analysis capabilities, we created an app that opens up a new frontier in children's picture book creation. Everyone can create an app like this - we did it, too. What will matter for you in the end, however, is having the data as the moat - and using the data you gather from your users to finetune models, providing them personal, curated experiences as they immerse themselves in fiction. This is where Deep Lake comes into play. 
Streamlit is an excellent choice for quick prototyping and smaller projects like FableForge - the entire UI is less than 60 lines of code!",activeloop
Conclusion: The Future of AI-Generated Picture Books with FableForge & Deep Lake,https://activeloop.ai/resources/ai-story-generator-open-ai-function-calling-lang-chain-stable-diffusion/,"Developing FableForge was a perfect example of how new AI tools and methodologies can be leveraged to overcome hurdles. By combining the power of LangChain, OpenAI's function calling feature, Stable Diffusion's image generation abilities, and Deep Lake's multimodal dataset storage and analysis capabilities, we created an app that opens up a new frontier in children's picture book creation. Anyone can create an app like this - we did, too. What will matter for you in the end, however, is having the data as your moat, and using the data you gather from your users to fine-tune models, providing them with personal, curated experiences as they immerse themselves in fiction. This is where Deep Lake comes into play. With its 'data lake' features - visualization of multimodal data and streaming - Deep Lake enables teams to fine-tune their LLMs or train entirely new ML models in a cost-effective manner.",activeloop
Getting the Data for the Song Recommendation Engine,https://activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/,"To get our songs, we scraped https://www.disneyclips.com/lyrics/, a website containing the lyrics of every Disney song ever made. The code is here, and it relies on asyncio to speed things up. We won't focus too much on it, since it's not central to our story (plays Encanto music: we don't talk about asyncio, no, no, no…). Then, we used the Spotify Python API to get the embed URL for each song in the 'Disney Hits' playlist, and removed every scraped song that was not in that playlist. This left us with 85 songs and a JSON file that looks like this:

{
    'Aladdin': [
        {
            'name': 'Arabian Nights',
            'text': 'Oh, I come from a land, from a faraway place. Where the caravan camels roam... ',
            'embed_url': 'https://open.spotify.com/embed/track/0CKmN3Wwk8W4zjU0pqq2cv?utm_source=generator'
        },
        ...
    ],
    ...
}",activeloop
Data Encoding for the Recommendation Engine,https://activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/,"We were looking for the best way to retrieve the songs and evaluated different approaches. We settled on the Activeloop Deep Lake vector database - more specifically, its implementation in LangChain. Creating the dataset is pretty straightforward. Given the previous JSON file, we embed the text field using langchain.embeddings.openai.OpenAIEmbeddings and add all the remaining keys/values as metadata:

import json

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

def create_db(dataset_path: str, json_filepath: str) -> DeepLake:
    with open(json_filepath, 'r') as f:
        data = json.load(f)

    texts = []
    metadatas = []

    for movie, lyrics in data.items():
        for lyric in lyrics:
            texts.append(lyric['text'])
            metadatas.append(
                {
                    'movie': movie,
                    'name': lyric['name'],
                    'embed_url': lyric['embed_url'],
                }
            )

    embeddings = OpenAIEmbeddings(model='text-embedding-ada-002')

    db = DeepLake.from_texts(
        texts, embeddings, metadatas=metadatas, dataset_path=dataset_path
    )

    return db

To load it, we can simply:

def load_db(dataset_path: str, *args, **kwargs) -> DeepLake:
    db = DeepLake(dataset_path, *args, **kwargs)
    return db
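For example, loading the store for querying could look like the snippet below; the organization and dataset names are placeholders, and we assume the LangChain DeepLake wrapper's embedding_function and read_only keyword arguments, which load_db simply forwards.

from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model='text-embedding-ada-002')
db = load_db(
    'hub://<ACTIVELOOP_ORGANIZATION_ID>/<DATASET_NAME>',  # placeholder path
    embedding_function=embeddings,
    read_only=True,  # we only query the store at recommendation time
)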
My dataset_path is hub://<ACTIVELOOP_ORGANIZATION_ID>/<DATASET_NAME>, but you can also store it locally. To store Deep Lake datasets locally, check out this doc here.",activeloop
Similarity Search of Direct Embeddings,https://activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/,"This approach was straightforward: we create embeddings for the lyrics and the user input with OpenAI embeddings (text-embedding-ada-002) and do a similarity search. Unfortunately, the suggestions were terrible, because we want to match the user's emotions to the songs rather than the literal meaning of their words. For example, if we search for similar songs using 'I am happy', we see very similar scores across all documents:

db.similarity_search_with_score('I am happy', distance_metric='cos', k=100)

If we plot the scores in a box plot, we see that they mostly sit around 0.74, and even the first ten songs are not a great match:

The World Es Mi Familia 0.7777353525161743
Go the Distance 0.7724394202232361
Waiting on a Miracle 0.7692896127700806
Happy Working Song 0.7679054141044617
In Summer 0.7620900273323059
So Close 0.7601353526115417
When I Am Older 0.7582702040672302
How Far I'll Go 0.7560539245605469
You're Welcome 0.7539903521537781
What Else Can I Do? 0.7535801529884338",activeloop
Using ChatGPT as a Retrieval System,https://activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/,"We also tried dumping all the lyrics into ChatGPT and asking it to return songs matching the user input. We first had to create a one-sentence summary of each lyric so that everything would fit into 4096 tokens; this resulted in around 3k tokens per request ($0.006). It follows the prompt template below, which is very simple but very long. The {songs} variable holds the JSON with all the songs.

You act like a song retrieval system. We want to propose three songs based on the user input. We provide you a list of songs with their themes in the format <MOVIE_NAME>;<SONG_TITLE>:<SONG_THEMES>. To match the user input to the song, try to find themes/emotions from it and imagine what emotions the user may have and what song may be lovely to listen to. Add a bit of randomness to your decision.
If you don't find a match, provide your best guess. Try to look at each song's themes to offer more variations in the match. Please only output songs contained in the following list.

{songs}

Given an input, output three songs as a list that goes well with the input. The list of songs will be used to retrieve them from our database. The type of reply is List[str, str, str]. Please follow the following example formats.

Examples:
Input: 'Today I am not feeling great.'
['<MOVIE_NAME>;<SONG_TITLE>', '<MOVIE_NAME>;<SONG_TITLE>', '<MOVIE_NAME>;<SONG_TITLE>']
Input: 'I am great today'
['<MOVIE_NAME>;<SONG_TITLE>', '<MOVIE_NAME>;<SONG_TITLE>', '<MOVIE_NAME>;<SONG_TITLE>']

The user input is {user_input}

This worked okay-ish but was overkill. Later on, we also tried the emotion encoding discussed in the next section, which had comparable performance.",activeloop
What Did Work: Similarity Search of Emotions Embeddings,https://activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/,"Finally, we arrived at an approach that is inexpensive to run and gives good results: we convert each lyric into a list of 8 emotions using ChatGPT. The prompt is the following:

I am building a retrieval system. Given the following song lyric

{song}

You are tasked to produce a list of 8 emotions that I will later use to retrieve the song.

Please provide only a list of comma-separated emotions.
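The linked script wires this prompt up with LangChain. As a rough sketch (an assumption, not the project's exact code), converting one lyric could look like the following, where EMOTION_PROMPT is assumed to hold the template above with {song} as its only input variable:

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

# Build a chain around the lyric-to-emotions prompt shown above
emotion_chain = LLMChain(
    llm=ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.7),
    prompt=PromptTemplate.from_template(EMOTION_PROMPT),
)

emotions = emotion_chain.run(song=lyric['text'])

The same pattern, applied to the user-input prompt shown further down, produces the chain used later in chain.run(user_input=user_input).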
For example, using 'Arabian Nights' from Aladdin (shown in the previous section), we obtained 'nostalgic, adventurous, exotic, intense, romantic, mysterious, whimsical, passionate'. We then embedded each song's emotion list (generated with GPT-3.5-turbo) and stored it with Deep Lake. The entire script is here.

Then, we need to convert the user input to a list of emotions. We used ChatGPT again with a custom prompt:

We have a simple song retrieval system. It accepts eight emotions. You are tasked to suggest between 1 and 4 emotions to match the users' feelings. Suggest more emotions for longer sentences and just one or two for small ones, trying to condense the central theme of the input.

Examples:

Input: 'I had a great day!'
'Joy'
Input: 'I am exhausted today and not feeling well.'
'Exhaustion, Discomfort, and Fatigue'
Input: 'I am in Love'
'Love'

Please, suggest emotions for input = '{user_input}', and reply ONLY with a list of emotions/feelings/vibes.

Here we tasked the model to provide between one and four emotions; empirically, this worked best, given that most inputs are short. Let's see some examples:

'I'm happy and sad today' -> 'Happiness, Sadness'
'hey, rock you' -> 'Energy, excitement, enthusiasm.'
'I need to cry' -> 'Sadness, Grief, Sorrow, Despair.'

Then we used these emotions to perform the similarity search on the vector database:

user_input = 'I am happy'
# We use ChatGPT to get emotions from the user's input
emotions = chain.run(user_input=user_input)
# We find the k most similar songs
matches = db.similarity_search_with_score(emotions, distance_metric='cos', k=k)

These are the scores obtained from that search (k=100). They are more spread apart, and the songs make more sense:

Down in New Orleans (Finale) 0.9068354368209839
Happy Working Song 0.9066014885902405
Love is an Open Door 0.8957026600837708
Circle of Life 0.8907418251037598
Where You Are 0.8890194892883301
In Summer 0.8889626264572144
Dig a Little Deeper 0.8887585401535034
When We're Human 0.8860496282577515
Hakuna Matata 0.8856213688850403
The World Es Mi Familia 0.884093165397644

We also implement some postprocessing. First, we filter out the low-scoring matches:

# Matches is a list of (document, score) tuples, i.e. List[Tuple[Document, float]]
def filter_scores(matches: Matches, th: float = 0.8) -> Matches:
    return [(doc, score) for (doc, score) in matches if score > th]

matches = filter_scores(matches, 0.8)

To add more variation - i.e., to not always recommend the top match - we need to sample from the list of candidate matches",activeloop
What Did Work: Similarity Search of Emotions Embeddings,https://activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/,"To do so, we first ensure the scores sum to one by dividing each by their sum:

def normalize_scores_by_sum(matches: Matches) -> Matches:
    scores = [score for _, score in matches]
    tot = sum(scores)
    return [(doc, (score / tot)) for doc, score in matches]

Then we sample n songs using a modified version of np.random.choice(..., p=scores): each time we draw an element, we remove it from the pool before the next draw, which ensures we never sample the same element twice.
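The exact weighted_random_sample lives in the project's repository; a minimal sketch of such a weighted draw without replacement (an interpretation of the description above, not the project's actual code) could be:

import numpy as np

def weighted_random_sample(items: np.ndarray, weights: np.ndarray, n: int) -> np.ndarray:
    # Draw n distinct items with probability proportional to their weights,
    # removing each picked item from the pool before the next draw.
    indices = np.arange(len(items))
    picked = []
    for _ in range(min(n, len(items))):
        p = weights / weights.sum()            # re-normalize the remaining weights
        choice = np.random.choice(indices, p=p)
        picked.append(choice)
        keep = indices != choice               # drop the sampled element
        indices, weights = indices[keep], weights[keep]
    return items[picked]

We then call it on the candidate matches: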
:)",activeloop Conclusion: Technology Choice Matters When Building a Recommendation Engine with LangChain and Deep Lake,https://activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/,"While we explained how to mix these technologies to create a song recommendation system, you can apply the same principles to more use cases. With Deep Lake's multi-modality, you can embed store multiple embeddings to the same set of lyrics, or even incorporate additional factors such as embeddings based on song tempo, instruments used, and more!The main takeaway is understanding how to leverage LLMs to make the data work for you by transforming it to fit your task better. This was crucial for us since only after we converted both users' inputs and songs' lyrics to a list of emotions were we able to have suitable matches.That's all, folks 🎉Thanks for reading, and see you in the next one 💜 Francesco",activeloop 3 Approaches to Matching Moods to Songs,https://activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/,"The next step was to find a way to match our songs with a given user input. In this tutorial, we tried 3 approaches so you don't have to! Ultimately, we found a cheap way that worked qualitatively well. So let's start with the failures 😅",activeloop