def set_page_config(
    page_title: str | None = None,
    page_icon: PageIcon | None = None,
    layout: Layout = "centered",
    initial_sidebar_state: InitialSideBarState = "auto",
    menu_items: MenuItems | None = None,
) -> None:
    """Configures the default settings of the page.

    .. note::
        This must be the first Streamlit command used on an app page, and
        must only be set once per page.

    Parameters
    ----------
    page_title: str or None
        The page title, shown in the browser tab. If None, defaults to the
        filename of the script ("app.py" would show "app • Streamlit").
    page_icon : Anything supported by st.image or str or None
        The page favicon. Besides the types supported by `st.image` (like
        URLs or numpy arrays), you can pass in an emoji as a string ("🦈")
        or a shortcode (":shark:"). If you're feeling lucky, try "random"
        for a random emoji! Emoji icons are courtesy of Twemoji and loaded
        from MaxCDN.
    layout: "centered" or "wide"
        How the page content should be laid out. Defaults to "centered",
        which constrains the elements into a centered column of fixed
        width; "wide" uses the entire screen.
    initial_sidebar_state: "auto", "expanded", or "collapsed"
        How the sidebar should start out. Defaults to "auto", which hides
        the sidebar on small devices and shows it otherwise. "expanded"
        shows the sidebar initially; "collapsed" hides it. In most cases,
        you should just use "auto", otherwise the app will look bad when
        embedded and viewed on mobile.
    menu_items: dict
        Configure the menu that appears on the top-right side of this app.
        The keys in this dict denote the menu item you'd like to configure:

        - "Get help": str or None
            The URL this menu item should point to. If None, hides this
            menu item.
        - "Report a Bug": str or None
            The URL this menu item should point to. If None, hides this
            menu item.
        - "About": str or None
            A markdown string to show in the About dialog. If None, only
            shows Streamlit's default About text.

        The URL may also refer to an email address e.g.
        ``mailto:[email protected]``.

    Example
    -------
    >>> import streamlit as st
    >>>
    >>> st.set_page_config(
    ...     page_title="Ex-stream-ly Cool App",
    ...     page_icon="🧊",
    ...     layout="wide",
    ...     initial_sidebar_state="expanded",
    ...     menu_items={
    ...         'Get Help': 'https://www.extremelycoolapp.com/help',
    ...         'Report a bug': "https://www.extremelycoolapp.com/bug",
    ...         'About': "# This is a header. This is an *extremely* cool app!"
    ...     }
    ... )
    """
    msg = ForwardProto()

    if page_title is not None:
        msg.page_config_changed.title = page_title

    if page_icon is not None:
        msg.page_config_changed.favicon = _get_favicon_string(page_icon)

    pb_layout: PageConfigProto.Layout.ValueType
    if layout == "centered":
        pb_layout = PageConfigProto.CENTERED
    elif layout == "wide":
        pb_layout = PageConfigProto.WIDE
    else:
        raise StreamlitAPIException(
            f'`layout` must be "centered" or "wide" (got "{layout}")'
        )
    msg.page_config_changed.layout = pb_layout

    pb_sidebar_state: PageConfigProto.SidebarState.ValueType
    if initial_sidebar_state == "auto":
        pb_sidebar_state = PageConfigProto.AUTO
    elif initial_sidebar_state == "expanded":
        pb_sidebar_state = PageConfigProto.EXPANDED
    elif initial_sidebar_state == "collapsed":
        pb_sidebar_state = PageConfigProto.COLLAPSED
    else:
        raise StreamlitAPIException(
            "`initial_sidebar_state` must be "
            '"auto" or "expanded" or "collapsed" '
            f'(got "{initial_sidebar_state}")'
        )
    msg.page_config_changed.initial_sidebar_state = pb_sidebar_state

    if menu_items is not None:
        lowercase_menu_items = cast(MenuItems, lower_clean_dict_keys(menu_items))
        validate_menu_items(lowercase_menu_items)
        menu_items_proto = msg.page_config_changed.menu_items
        set_menu_items_proto(lowercase_menu_items, menu_items_proto)

    ctx = get_script_run_ctx()
    if ctx is None:
        return
    ctx.enqueue(msg)
def marshall(
    proto: ArrowTableProto, data: Any, default_uuid: str | None = None
) -> None:
    """Marshall data into an ArrowTable proto.

    Parameters
    ----------
    proto : proto.ArrowTable
        Output. The protobuf for a Streamlit ArrowTable proto.
    data : pandas.DataFrame, pandas.Styler, numpy.ndarray, Iterable, dict, or None
        Something that is or can be converted to a dataframe.
    """
    if type_util.is_pandas_styler(data):
        pandas_styler_utils.marshall_styler(proto, data, default_uuid)  # type: ignore

    df = type_util.convert_anything_to_df(data)
    _marshall_index(proto, df.index)
    _marshall_columns(proto, df.columns)
    _marshall_data(proto, df)
def _marshall_index(proto: ArrowTableProto, index: Index) -> None:
    """Marshall pandas.DataFrame index into an ArrowTable proto.

    Parameters
    ----------
    proto : proto.ArrowTable
        Output. The protobuf for a Streamlit ArrowTable proto.
    index : pd.Index
        Index to use for resulting frame. Will default to RangeIndex
        (0, 1, 2, ..., n) if no index is provided.
    """
    import pandas as pd

    index = map(type_util.maybe_tuple_to_list, index.values)
    index_df = pd.DataFrame(index)
    proto.index = type_util.data_frame_to_bytes(index_df)
def _marshall_columns(proto: ArrowTableProto, columns: Series) -> None:
    """Marshall pandas.DataFrame columns into an ArrowTable proto.

    Parameters
    ----------
    proto : proto.ArrowTable
        Output. The protobuf for a Streamlit ArrowTable proto.
    columns : Series
        Column labels to use for resulting frame. Will default to RangeIndex
        (0, 1, 2, ..., n) if no column labels are provided.
    """
    import pandas as pd

    columns = map(type_util.maybe_tuple_to_list, columns.values)
    columns_df = pd.DataFrame(columns)
    proto.columns = type_util.data_frame_to_bytes(columns_df)
def _marshall_data(proto: ArrowTableProto, df: DataFrame) -> None:
    """Marshall pandas.DataFrame data into an ArrowTable proto.

    Parameters
    ----------
    proto : proto.ArrowTable
        Output. The protobuf for a Streamlit ArrowTable proto.
    df : pandas.DataFrame
        A dataframe to marshall.
    """
    proto.data = type_util.data_frame_to_bytes(df)
def arrow_proto_to_dataframe(proto: ArrowTableProto) -> DataFrame:
    """Convert ArrowTable proto to pandas.DataFrame.

    Parameters
    ----------
    proto : proto.ArrowTable
        Output.

    Returns
    -------
    pandas.DataFrame
    """
    if type_util.is_pyarrow_version_less_than("14.0.1"):
        raise RuntimeError(
            "The installed pyarrow version is not compatible with this "
            "component. Please upgrade to 14.0.1 or higher: "
            "pip install -U pyarrow"
        )

    import pandas as pd

    data = type_util.bytes_to_data_frame(proto.data)
    index = type_util.bytes_to_data_frame(proto.index)
    columns = type_util.bytes_to_data_frame(proto.columns)

    return pd.DataFrame(
        data.values, index=index.values.T.tolist(), columns=columns.values.T.tolist()
    )
def declare_component(
    name: str,
    path: str | None = None,
    url: str | None = None,
) -> CustomComponent:
    """Create and register a custom component.

    Parameters
    ----------
    name: str
        A short, descriptive name for the component. Like, "slider".
    path: str or None
        The path to serve the component's frontend files from. Either
        `path` or `url` must be specified, but not both.
    url: str or None
        The URL that the component is served from. Either `path` or `url`
        must be specified, but not both.

    Returns
    -------
    CustomComponent
        A CustomComponent that can be called like a function. Calling the
        component will create a new instance of the component in the
        Streamlit app.
    """
    # Get our stack frame.
    current_frame: FrameType | None = inspect.currentframe()
    assert current_frame is not None

    # Get the stack frame of our calling function.
    caller_frame = current_frame.f_back
    assert caller_frame is not None

    module_name = _get_module_name(caller_frame)

    # Build the component name.
    component_name = f"{module_name}.{name}"

    # Create our component object, and register it.
    component = CustomComponent(
        name=component_name, path=path, url=url, module_name=module_name
    )
    get_instance().component_registry.register_component(component)

    return component
def extract_from_dict(
    keys: Collection[str], source_dict: dict[str, Any]
) -> dict[str, Any]:
    """Extract the specified keys from source_dict and return them in a new dict.

    Parameters
    ----------
    keys : Collection[str]
        The keys to extract from source_dict.
    source_dict : Dict[str, Any]
        The dict to extract keys from. Note that this function mutates
        source_dict.

    Returns
    -------
    Dict[str, Any]
        A new dict containing the keys/values extracted from source_dict.
    """
    d = {}
    for k in keys:
        if k in source_dict:
            d[k] = source_dict.pop(k)
    return d
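Because the helper uses `pop()`, extraction is destructive: the keys move from the source dict into the result. A minimal standalone sketch (the sample parameter names are made up for illustration):

```python
from typing import Any, Collection


def extract_from_dict(
    keys: Collection[str], source_dict: dict[str, Any]
) -> dict[str, Any]:
    # pop() both returns the value and removes the key, so source_dict is
    # mutated: extracted keys no longer appear in it afterwards.
    d = {}
    for k in keys:
        if k in source_dict:
            d[k] = source_dict.pop(k)
    return d


# Hypothetical connection parameters, for illustration only.
params = {"account": "acme", "user": "jdoe", "warehouse": "wh1"}
creds = extract_from_dict(["account", "user"], params)
```

After the call, `creds` holds the extracted pairs and `params` retains only the keys that were not requested.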
def load_from_snowsql_config_file(connection_name: str) -> dict[str, Any]:
    """Loads the dictionary from snowsql config file."""
    snowsql_config_file = os.path.expanduser(SNOWSQL_CONNECTION_FILE)
    if not os.path.exists(snowsql_config_file):
        return {}

    # Lazy-load config parser for better import / startup performance
    import configparser

    config = configparser.ConfigParser(inline_comment_prefixes="#")
    config.read(snowsql_config_file)

    if f"connections.{connection_name}" in config:
        raw_conn_params = config[f"connections.{connection_name}"]
    elif "connections" in config:
        raw_conn_params = config["connections"]
    else:
        return {}

    conn_params = {
        k.replace("name", ""): v.strip('"') for k, v in raw_conn_params.items()
    }

    if "db" in conn_params:
        conn_params["database"] = conn_params["db"]
        del conn_params["db"]

    return conn_params
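The `k.replace("name", "")` comprehension is what maps snowsql's key style onto the connector's: `accountname` becomes `account`, `username` becomes `user`, and quoted values lose their quotes. A dict-only sketch of that normalization (the raw keys and values below are made-up stand-ins for what configparser would return, not read from a real file):

```python
# Hypothetical raw section values, mimicking a parsed snowsql config section.
raw_conn_params = {
    "accountname": '"my_account"',
    "username": '"jdoe"',
    "db": '"mydb"',
}

# Same normalization as in the function above: drop the "name" suffix that
# snowsql uses and strip the surrounding double quotes from values.
conn_params = {
    k.replace("name", ""): v.strip('"') for k, v in raw_conn_params.items()
}

# snowsql calls the database "db"; downstream code expects "database".
if "db" in conn_params:
    conn_params["database"] = conn_params.pop("db")
```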
def running_in_sis() -> bool:
    """Return whether this app is running in SiS."""
    try:
        from snowflake.snowpark._internal.utils import (  # type: ignore[import] # isort: skip
            is_in_stored_procedure,
        )

        return cast(bool, is_in_stored_procedure())
    except ModuleNotFoundError:
        return False
def marshall(proto: ArrowProto, data: Data, default_uuid: str | None = None) -> None:
    """Marshall pandas.DataFrame into an Arrow proto.

    Parameters
    ----------
    proto : proto.Arrow
        Output. The protobuf for Streamlit Arrow proto.
    data : pandas.DataFrame, pandas.Styler, pyarrow.Table, numpy.ndarray,
        pyspark.sql.DataFrame, snowflake.snowpark.DataFrame, Iterable,
        dict, or None
        Something that is or can be converted to a dataframe.
    default_uuid : str | None
        If pandas.Styler UUID is not provided, this value will be used.
        This attribute is optional and only used for pandas.Styler, other
        elements (e.g. charts) can ignore it.
    """
    import pyarrow as pa

    if type_util.is_pandas_styler(data):
        # default_uuid is a string only if the data is a `Styler`,
        # and `None` otherwise.
        assert isinstance(
            default_uuid, str
        ), "Default UUID must be a string for Styler data."
        marshall_styler(proto, data, default_uuid)

    if isinstance(data, pa.Table):
        proto.data = type_util.pyarrow_table_to_bytes(data)
    else:
        df = type_util.convert_anything_to_df(data)
        proto.data = type_util.data_frame_to_bytes(df)
def _is_date_column(df: pd.DataFrame, name: str | None) -> bool:
    """True if the column with the given name stores datetime.date values.

    This function just checks the first value in the given column, so
    it's meaningful only for columns whose values all share the same type.

    Parameters
    ----------
    df : pd.DataFrame
    name : str or None
        The column name

    Returns
    -------
    bool
    """
    if name is None:
        return False

    column = df[name]
    if column.size == 0:
        return False

    return isinstance(column.iloc[0], date)
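The first-value heuristic above is cheap but deliberately imprecise: a mixed-type column is still classified by its first element alone. A pure-Python sketch of the same check, using a plain list in place of a pandas column:

```python
from datetime import date


def first_value_is_date(column: list) -> bool:
    # Empty columns are never treated as date columns, and only the first
    # element's type is inspected (the same heuristic as _is_date_column).
    if len(column) == 0:
        return False
    return isinstance(column[0], date)


# Mixed-type column: the heuristic still reports True because only the
# first value is checked.
mixed = [date(2024, 1, 1), "not a date"]
```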
def _melt_data(
    df: pd.DataFrame,
    columns_to_leave_alone: list[str],
    columns_to_melt: list[str] | None,
    new_y_column_name: str,
    new_color_column_name: str,
) -> pd.DataFrame:
    """Converts a wide-format dataframe to a long-format dataframe."""
    import pandas as pd
    from pandas.api.types import infer_dtype

    melted_df = pd.melt(
        df,
        id_vars=columns_to_leave_alone,
        value_vars=columns_to_melt,
        var_name=new_color_column_name,
        value_name=new_y_column_name,
    )

    y_series = melted_df[new_y_column_name]
    if (
        y_series.dtype == "object"
        and "mixed" in infer_dtype(y_series)
        and len(y_series.unique()) > 100
    ):
        raise StreamlitAPIException(
            "The columns used for rendering the chart contain too many "
            "values with mixed types. Please select the columns manually "
            "via the y parameter."
        )

    # Arrow has problems with object types after melting two different dtypes
    # pyarrow.lib.ArrowTypeError: "Expected a <TYPE> object, got a object"
    fixed_df = type_util.fix_arrow_incompatible_column_types(
        melted_df,
        selected_columns=[
            *columns_to_leave_alone,
            new_color_column_name,
            new_y_column_name,
        ],
    )

    return fixed_df
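To make the wide-to-long transformation concrete without pandas, here is a framework-free sketch of what melting does (row ordering differs slightly from `pd.melt`, and the names `melt_rows`, `wide` etc. are illustrative):

```python
def melt_rows(
    rows: list[dict],
    id_vars: list[str],
    value_vars: list[str],
    var_name: str,
    value_name: str,
) -> list[dict]:
    # One long-format row per (input row, melted column) pair: id_vars are
    # repeated, and each value_var becomes a (name, value) pair.
    out = []
    for row in rows:
        for col in value_vars:
            melted = {k: row[k] for k in id_vars}
            melted[var_name] = col
            melted[value_name] = row[col]
            out.append(melted)
    return out


# Wide format: one column per series ("a" and "b"), shared x column.
wide = [{"x": 0, "a": 1, "b": 2}, {"x": 1, "a": 3, "b": 4}]
long = melt_rows(wide, ["x"], ["a", "b"], var_name="color", value_name="y")
```

The long format is what lets a single Vega-Lite color encoding distinguish the original columns.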
def prep_data(
    df: pd.DataFrame,
    x_column: str | None,
    y_column_list: list[str],
    color_column: str | None,
    size_column: str | None,
) -> tuple[pd.DataFrame, str | None, str | None, str | None, str | None]:
    """Prepares the data for charting. This is also used in add_rows.

    Returns the prepared dataframe and the new names of the x column
    (taking the index reset into consideration) and y, color, and size
    columns.
    """
    # If y is provided, but x is not, we'll use the index as x.
    # So we need to pull the index into its own column.
    x_column = _maybe_reset_index_in_place(df, x_column, y_column_list)

    # Drop columns we're not using.
    selected_data = _drop_unused_columns(
        df, x_column, color_column, size_column, *y_column_list
    )

    # Maybe convert color to Vega colors.
    _maybe_convert_color_column_in_place(selected_data, color_column)

    # Make sure all columns have string names.
    (
        x_column,
        y_column_list,
        color_column,
        size_column,
    ) = _convert_col_names_to_str_in_place(
        selected_data, x_column, y_column_list, color_column, size_column
    )

    # Maybe melt data from wide format into long format.
    melted_data, y_column, color_column = _maybe_melt(
        selected_data, x_column, y_column_list, color_column, size_column
    )

    # Return the data, but also the new names to use for x, y, and color.
    return melted_data, x_column, y_column, color_column, size_column
def _generate_chart(
    chart_type: ChartType,
    data: Data | None,
    x_from_user: str | None = None,
    y_from_user: str | Sequence[str] | None = None,
    color_from_user: str | Color | list[Color] | None = None,
    size_from_user: str | float | None = None,
    width: int = 0,
    height: int = 0,
) -> tuple[alt.Chart, AddRowsMetadata]:
    """Function to use the chart's type, data columns and indices to figure
    out the chart's spec.
    """
    import altair as alt

    df = type_util.convert_anything_to_df(data, ensure_copy=True)

    # From now on, use "df" instead of "data". Deleting "data" to guarantee
    # we follow this.
    del data

    # Convert arguments received from the user to things Vega-Lite understands.
    # Get name of column to use for x.
    x_column = _parse_x_column(df, x_from_user)
    # Get name of columns to use for y.
    y_column_list = _parse_y_columns(df, y_from_user, x_column)
    # Get name of column to use for color, or constant value to use.
    # Any/both could be None.
    color_column, color_value = _parse_generic_column(df, color_from_user)
    # Get name of column to use for size, or constant value to use.
    # Any/both could be None.
    size_column, size_value = _parse_generic_column(df, size_from_user)

    # Store some info so we can use it in add_rows.
    add_rows_metadata = AddRowsMetadata(
        # The last index of df so we can adjust the input df in add_rows:
        last_index=last_index_for_melted_dataframes(df),
        # This is the input to prep_data (except for the df):
        columns=dict(
            x_column=x_column,
            y_column_list=y_column_list,
            color_column=color_column,
            size_column=size_column,
        ),
    )

    # At this point, all foo_column variables are either None/empty or
    # contain actual columns that are guaranteed to exist.
    df, x_column, y_column, color_column, size_column = prep_data(
        df, x_column, y_column_list, color_column, size_column
    )

    # At this point, x_column is only None if user did not provide one AND
    # df is empty.

    # Create a Chart with x and y encodings.
    chart = alt.Chart(
        data=df,
        mark=chart_type.value["mark_type"],
        width=width,
        height=height,
    ).encode(
        x=_get_x_encoding(df, x_column, x_from_user, chart_type),
        y=_get_y_encoding(df, y_column, y_from_user),
    )

    # Set up opacity encoding.
    opacity_enc = _get_opacity_encoding(chart_type, color_column)
    if opacity_enc is not None:
        chart = chart.encode(opacity=opacity_enc)

    # Set up color encoding.
    color_enc = _get_color_encoding(
        df, color_value, color_column, y_column_list, color_from_user
    )
    if color_enc is not None:
        chart = chart.encode(color=color_enc)

    # Set up size encoding.
    size_enc = _get_size_encoding(chart_type, size_column, size_value)
    if size_enc is not None:
        chart = chart.encode(size=size_enc)

    # Set up tooltip encoding.
    if x_column is not None and y_column is not None:
        chart = chart.encode(
            tooltip=_get_tooltip_encoding(
                x_column,
                y_column,
                size_column,
                color_column,
                color_enc,
            )
        )

    return chart.interactive(), add_rows_metadata
def _drop_unused_columns(df: pd.DataFrame, *column_names: str | None) -> pd.DataFrame:
    """Returns a subset of df, selecting only column_names that aren't None."""
    # We can't just call set(col_names) because sets don't have stable
    # ordering, which means tests that depend on ordering will fail.
    # Performance-wise, it's not a problem, though, since this function is
    # only ever used on very small lists.
    seen = set()
    keep = []

    for x in column_names:
        if x is None:
            continue
        if x in seen:
            continue
        seen.add(x)
        keep.append(x)

    return df[keep]
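The order-preserving deduplication at the heart of this helper is independent of pandas, so it can be sketched (and tested) with plain lists; `unique_in_order` is an illustrative name, not part of the codebase:

```python
def unique_in_order(*names):
    # A plain set() would lose insertion order; tracking "seen" separately
    # keeps the first occurrence of each non-None name, in order.
    seen = set()
    keep = []
    for x in names:
        if x is None or x in seen:
            continue
        seen.add(x)
        keep.append(x)
    return keep


# None entries (unset columns) are skipped; repeats collapse to the first.
cols = unique_in_order("x", None, "y", "x", "size")
```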
def _maybe_convert_color_column_in_place(
    df: pd.DataFrame, color_column: str | None
) -> None:
    """If needed, convert color column to a format Vega understands."""
    if color_column is None or len(df[color_column]) == 0:
        return

    first_color_datum = df[color_column].iat[0]

    if is_hex_color_like(first_color_datum):
        # Hex is already CSS-valid.
        pass
    elif is_color_tuple_like(first_color_datum):
        # Tuples need to be converted to CSS-valid.
        df[color_column] = df[color_column].map(to_css_color)
    else:
        # Other kinds of color columns (i.e. pure numbers or nominal strings)
        # shouldn't be converted since they are treated by Vega-Lite as
        # sequential or categorical colors.
        pass
def _convert_col_names_to_str_in_place(
    df: pd.DataFrame,
    x_column: str | None,
    y_column_list: list[str],
    color_column: str | None,
    size_column: str | None,
) -> tuple[str | None, list[str], str | None, str | None]:
    """Converts column names to strings, since Vega-Lite does not accept ints, etc."""
    import pandas as pd

    # list() converts RangeIndex, etc, to a regular list.
    column_names = list(df.columns)
    str_column_names = [str(c) for c in column_names]
    df.columns = pd.Index(str_column_names)

    return (
        None if x_column is None else str(x_column),
        [str(c) for c in y_column_list],
        None if color_column is None else str(color_column),
        None if size_column is None else str(size_column),
    )
def _maybe_melt(
    df: pd.DataFrame,
    x_column: str | None,
    y_column_list: list[str],
    color_column: str | None,
    size_column: str | None,
) -> tuple[pd.DataFrame, str | None, str | None]:
    """If multiple columns are set for y, melt the dataframe into long format."""
    y_column: str | None

    if len(y_column_list) == 0:
        y_column = None
    elif len(y_column_list) == 1:
        y_column = y_column_list[0]
    elif x_column is not None:
        # Pick column names that are unlikely to collide with user-given names.
        y_column = MELTED_Y_COLUMN_NAME
        color_column = MELTED_COLOR_COLUMN_NAME

        columns_to_leave_alone = [x_column]
        if size_column:
            columns_to_leave_alone.append(size_column)

        df = _melt_data(
            df=df,
            columns_to_leave_alone=columns_to_leave_alone,
            columns_to_melt=y_column_list,
            new_y_column_name=y_column,
            new_color_column_name=color_column,
        )

    return df, y_column, color_column
def marshall(
    vega_lite_chart: ArrowVegaLiteChartProto,
    altair_chart: alt.Chart,
    use_container_width: bool = False,
    theme: None | Literal["streamlit"] = "streamlit",
    **kwargs: Any,
) -> None:
    """Marshall chart's data into proto."""
    import altair as alt

    # Normally altair_chart.to_dict() would transform the dataframe used by
    # the chart into an array of dictionaries. To avoid that, we install a
    # transformer that replaces datasets with a reference by the object id
    # of the dataframe. We then fill in the dataset manually later on.
    datasets = {}

    def id_transform(data) -> dict[str, str]:
        """Altair data transformer that returns a fake named dataset with
        the object id.
        """
        name = str(id(data))
        datasets[name] = data
        return {"name": name}

    alt.data_transformers.register("id", id_transform)  # type: ignore[attr-defined,unused-ignore]

    # The default altair theme has some width/height defaults defined
    # which are not useful for Streamlit. Therefore, we change the theme to
    # "none" to avoid those defaults.
    with alt.themes.enable("none") if alt.themes.active == "default" else nullcontext():  # type: ignore[attr-defined,unused-ignore]
        with alt.data_transformers.enable("id"):  # type: ignore[attr-defined,unused-ignore]
            chart_dict = altair_chart.to_dict()

    # Put datasets back into the chart dict but note how they weren't
    # transformed.
    chart_dict["datasets"] = datasets

    arrow_vega_lite.marshall(
        vega_lite_chart,
        chart_dict,
        use_container_width=use_container_width,
        theme=theme,
        **kwargs,
    )
def marshall(
    proto: ArrowVegaLiteChartProto,
    data: Data = None,
    spec: dict[str, Any] | None = None,
    use_container_width: bool = False,
    theme: None | Literal["streamlit"] = "streamlit",
    **kwargs,
):
    """Construct a Vega-Lite chart object.

    See DeltaGenerator.vega_lite_chart for docs.
    """
    # Support passing data inside spec['datasets'] and spec['data'].
    # (The data gets pulled out of the spec dict later on.)
    if isinstance(data, dict) and spec is None:
        spec = data
        data = None

    # Support passing no spec arg, but filling it with kwargs.
    # Example:
    #   marshall(proto, baz='boz')
    if spec is None:
        spec = dict()
    else:
        # Clone the spec dict, since we may be mutating it.
        spec = dict(spec)

    # Support passing in kwargs. Example:
    #   marshall(proto, {foo: 'bar'}, baz='boz')
    if len(kwargs):
        # Merge spec with unflattened kwargs, where kwargs take precedence.
        # This only works for string keys, but kwarg keys are strings anyways.
        spec = dict(spec, **dicttools.unflatten(kwargs, _CHANNELS))

    if len(spec) == 0:
        raise ValueError("Vega-Lite charts require a non-empty spec dict.")

    if "autosize" not in spec:
        # Autosize type "fit" does not work for many chart types. This change
        # focuses on vconcat with use_container_width=True, as there are
        # unintended consequences of changing the default autosize for all
        # charts. "fit-x" fits the width, while the height can be adjusted.
        if "vconcat" in spec and use_container_width:
            spec["autosize"] = {"type": "fit-x", "contains": "padding"}
        else:
            spec["autosize"] = {"type": "fit", "contains": "padding"}

    # Pull data out of spec dict when it's in a 'datasets' key:
    #   marshall(proto, {datasets: {foo: df1, bar: df2}, ...})
    if "datasets" in spec:
        for k, v in spec["datasets"].items():
            dataset = proto.datasets.add()
            dataset.name = str(k)
            dataset.has_name = True
            arrow.marshall(dataset.data, v)
        del spec["datasets"]

    # Pull data out of spec dict when it's in a top-level 'data' key:
    #   marshall(proto, {data: df})
    #   marshall(proto, {data: {values: df, ...}})
    #   marshall(proto, {data: {url: 'url'}})
    #   marshall(proto, {data: {name: 'foo'}})
    if "data" in spec:
        data_spec = spec["data"]

        if isinstance(data_spec, dict):
            if "values" in data_spec:
                data = data_spec["values"]
                del spec["data"]
        else:
            data = data_spec
            del spec["data"]

    proto.spec = json.dumps(spec)
    proto.use_container_width = use_container_width
    proto.theme = theme or ""

    if data is not None:
        arrow.marshall(proto.data, data)
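The spec handling can be followed without any Streamlit or Vega-Lite machinery: datasets are pulled out of the spec so only lightweight references remain in the serialized JSON, and inline `data.values` is extracted while url/name references are left for Vega-Lite to resolve. A framework-free sketch of that dict surgery, with plain lists standing in for dataframes:

```python
# A hypothetical spec with one named dataset and inline data values.
spec = {
    "mark": "line",
    "datasets": {"foo": [[1, 2], [3, 4]]},
    "data": {"values": [{"a": 1}, {"a": 2}]},
}

# Pull named datasets out of the spec so they can be marshalled separately;
# only the spec's lightweight structure remains to be JSON-serialized.
datasets = spec.pop("datasets", {})

# A top-level 'data' dict carrying inline 'values' is also pulled out;
# dicts with only url/name references would stay in the spec untouched.
data = None
data_spec = spec.get("data")
if isinstance(data_spec, dict) and "values" in data_spec:
    data = data_spec["values"]
    del spec["data"]
elif data_spec is not None and not isinstance(data_spec, dict):
    data = data_spec
    del spec["data"]
```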
def marshall(
    proto: BokehChartProto,
    figure: Figure,
    use_container_width: bool,
    element_id: str,
) -> None:
    """Construct a Bokeh chart object.

    See DeltaGenerator.bokeh_chart for docs.
    """
    from bokeh.embed import json_item

    data = json_item(figure)
    proto.figure = json.dumps(data)
    proto.use_container_width = use_container_width
    proto.element_id = element_id
def _assert_no_nested_dialogs() -> None:
    """Check the current stack for existing DeltaGenerators of type 'dialog'.

    Note that a check like this only works when the dialog is used as a
    context manager, as this populates the dg_stack in delta_generator
    correctly.

    This does not detect the edge case in which someone calls, for example,
    `with st.sidebar` inside of a dialog function and opens a dialog in
    there, as `with st.sidebar` pushes the new DeltaGenerator to the stack.
    In order to check for that edge case, we could try to check all
    DeltaGenerators in the stack, and not only the last one. Since we deem
    this to be an edge case, we lean towards simplicity here.

    Raises
    ------
    StreamlitAPIException
        Raised if the user tries to nest dialogs inside of each other.
    """
    last_dg_in_current_context = get_last_dg_added_to_context_stack()
    if last_dg_in_current_context and "dialog" in set(
        last_dg_in_current_context._ancestor_block_types
    ):
        raise StreamlitAPIException("Dialogs may not be nested inside other dialogs.")
def dialog_decorator(
    title: F | None | str = "", *, width: DialogWidth = "small"
) -> F | Callable[[F], F]:
    r"""Decorate a function to mark it as a Streamlit dialog. When the decorated function
    is called, a dialog element is inserted with the function's body as the content.

    The decorated function can hold multiple elements which are rendered inside of a
    modal when the decorated function is called.
    The decorated function uses `st.experimental_fragment`, which means that
    interacting with elements inside of the dialog will only re-run the dialog function.

    The decorated function can accept arguments that can be passed when it is called.

    Dismissing a dialog does not cause an app re-run.
    You can close the dialog programmatically by executing `st.rerun()` explicitly
    inside of the decorated function.

    In order to pass state from dialog widgets to the app, you can leverage
    `st.session_state`.

    .. warning::
        Currently, a dialog may not open another dialog.
        Also, only one dialog-decorated function may be called in a script run,
        which means that only one dialog can be open at any given time.

    Parameters
    ----------
    title : str
        A string that will be used as the dialog's title. It cannot be empty.
    width : "small" or "large"
        The width of the dialog. Defaults to "small".

    Returns
    -------
    A decorated function that, when called, inserts a dialog element context
    container. The container itself contains the decorated function's elements.

    Examples
    --------
    You can annotate a function to mark it as a Streamlit dialog function and pass
    arguments to it. You can either dismiss the dialog via the ESC key or the X button
    or close it programmatically and trigger a re-run by using `st.rerun()`.
    Leverage `st.session_state` if you want to pass dialog widget states to the
    overall app:

    >>> import streamlit as st
    >>>
    >>> @st.experimental_dialog("Streamlit Example Dialog")
    >>> def example_dialog(some_arg: str, some_other_arg: int):
    >>>     st.write(f"You passed the following args: {some_arg} | {some_other_arg}")
    >>>     # interacting with the text_input only re-runs `example_dialog`
    >>>     some_text_input = st.text_input("Type something:", key="example_dialog_some_text_input")
    >>>     # the following write is updated when changing the text_input inside the dialog
    >>>     st.write(f"You wrote '{some_text_input}' in the dialog")
    >>>     if st.button("Close the dialog"):
    >>>         st.rerun()
    >>>
    >>> if st.button("Open dialog"):
    >>>     example_dialog("Some string arg", 42)
    >>>
    >>> # the following write is updated with the dialog's text input when the dialog
    >>> # was opened, the text input was interacted with, and a re-run was triggered,
    >>> # e.g. by clicking the Close button defined in `example_dialog`
    >>> st.write(f"You wrote '{st.session_state.get('example_dialog_some_text_input', '')}' in the dialog")
    """

    func_or_title = title
    if func_or_title is None:
        # Support passing the params via function decorator
        def wrapper(f: F) -> F:
            return _dialog_decorator(non_optional_func=f, title="", width=width)

        return wrapper
    elif type(func_or_title) is str:
        # Support passing the params via function decorator
        def wrapper(f: F) -> F:
            title: str = func_or_title
            return _dialog_decorator(non_optional_func=f, title=title, width=width)

        return wrapper

    func: F = cast(F, func_or_title)
    return _dialog_decorator(func, "", width=width)
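The branching above implements the common "decorator with optional arguments" pattern. A minimal standalone sketch of the same dispatch, with hypothetical names (`with_title` is not a Streamlit API):

```python
# Sketch of the dispatch used by dialog_decorator: one factory that works
# both bare (@with_title) and parameterized (@with_title("My title")).
def with_title(arg=""):
    def decorate(func, title):
        func.title = title  # stand-in for wrapping the function in a dialog
        return func

    if callable(arg):  # used bare: @with_title
        return decorate(arg, "")
    return lambda f: decorate(f, arg or "")  # used as @with_title("My title")


@with_title("Example Dialog")
def show():
    pass


print(show.title)  # Example Dialog
```

The key idea is that when the decorator is applied without parentheses, Python passes the function itself as the first positional argument, so the factory must detect that case and decorate immediately.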
def _marshall(doc_string_proto: DocStringProto, obj: Any) -> None: """Construct a DocString object. See DeltaGenerator.help for docs. """ var_name = _get_variable_name() if var_name is not None: doc_string_proto.name = var_name obj_type = _get_type_as_str(obj) doc_string_proto.type = obj_type obj_docs = _get_docstring(obj) if obj_docs is not None: doc_string_proto.doc_string = obj_docs obj_value = _get_value(obj, var_name) if obj_value is not None: doc_string_proto.value = obj_value doc_string_proto.members.extend(_get_members(obj))
def _get_variable_name(): """Try to get the name of the variable in the current line, as set by the user. For example: foo = bar.Baz(123) st.help(foo) The name is "foo" """ code = _get_current_line_of_code_as_str() if code is None: return None return _get_variable_name_from_code_str(code)
def _is_stcommand(tree, command_name): """Checks whether the AST in tree is a call for command_name.""" root_node = tree.body[0].value if not type(root_node) is ast.Call: return False return ( # st call called without module. E.g. "help()" getattr(root_node.func, "id", None) == command_name or # st call called with module. E.g. "foo.help()" (where usually "foo" is "st") getattr(root_node.func, "attr", None) == command_name )
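A standalone illustration (stdlib only) of the AST check above, with an extra guard against non-expression statements that the original skips: parse a single statement and test whether it is a call to the given command, either bare (`help(x)`) or through an attribute (`st.help(x)`).

```python
import ast


def is_command_call(code: str, command_name: str) -> bool:
    # Parse one statement and look at its top-level node.
    node = ast.parse(code).body[0]
    if not isinstance(node, ast.Expr) or not isinstance(node.value, ast.Call):
        return False
    call = node.value
    return (
        # bare call, e.g. "help(foo)"
        getattr(call.func, "id", None) == command_name
        # attribute call, e.g. "st.help(foo)"
        or getattr(call.func, "attr", None) == command_name
    )


print(is_command_call("st.help(foo)", "help"))  # True
print(is_command_call("foo + bar", "help"))     # False
```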
def _get_stcommand_arg(tree): """Gets the argument node for the st command in tree (AST).""" root_node = tree.body[0].value if root_node.args: return root_node.args[0] return None
def marshall(exception_proto: ExceptionProto, exception: BaseException) -> None: """Marshalls an Exception.proto message. Parameters ---------- exception_proto : Exception.proto The Exception protobuf to fill out exception : BaseException The exception whose data we're extracting """ # If this is a StreamlitAPIException, we prune all Streamlit entries # from the exception's stack trace. is_api_exception = isinstance(exception, StreamlitAPIException) is_deprecation_exception = isinstance(exception, StreamlitDeprecationWarning) is_markdown_exception = isinstance(exception, MarkdownFormattedException) is_uncaught_app_exception = isinstance(exception, UncaughtAppException) stack_trace = ( [] if is_deprecation_exception else _get_stack_trace_str_list( exception, strip_streamlit_stack_entries=is_api_exception ) ) # Some exceptions (like UserHashError) have an alternate_name attribute so # we can pretend to the user that the exception is called something else. if getattr(exception, "alternate_name", None) is not None: exception_proto.type = getattr(exception, "alternate_name") else: exception_proto.type = type(exception).__name__ exception_proto.stack_trace.extend(stack_trace) exception_proto.is_warning = isinstance(exception, Warning) try: if isinstance(exception, SyntaxError): # SyntaxErrors have additional fields (filename, text, lineno, # offset) that we can use for a nicely-formatted message telling # the user what to fix. exception_proto.message = _format_syntax_error_message(exception) else: exception_proto.message = str(exception).strip() exception_proto.message_is_markdown = is_markdown_exception except Exception as str_exception: # Sometimes the exception's __str__/__unicode__ method itself # raises an error. exception_proto.message = "" _LOGGER.warning( """ Streamlit was unable to parse the data from an exception in the user's script. This is usually due to a bug in the Exception object itself. 
Here is some info about that Exception object, so you can report a bug to the original author: Exception type: %(etype)s Problem: %(str_exception)s Traceback: %(str_exception_tb)s """ % { "etype": type(exception).__name__, "str_exception": str_exception, "str_exception_tb": "\n".join(_get_stack_trace_str_list(str_exception)), } ) if is_uncaught_app_exception: uae = cast(UncaughtAppException, exception) exception_proto.message = _GENERIC_UNCAUGHT_EXCEPTION_TEXT type_str = str(type(uae.exc)) exception_proto.type = type_str.replace("<class '", "").replace("'>", "")
def _format_syntax_error_message(exception: SyntaxError) -> str: """Returns a nicely formatted SyntaxError message that emulates what the Python interpreter outputs, e.g.: > File "raven.py", line 3 > st.write('Hello world!!')) > ^ > SyntaxError: invalid syntax """ if exception.text: if exception.offset is not None: caret_indent = " " * max(exception.offset - 1, 0) else: caret_indent = "" return ( 'File "%(filename)s", line %(lineno)s\n' " %(text)s\n" " %(caret_indent)s^\n" "%(errname)s: %(msg)s" % { "filename": exception.filename, "lineno": exception.lineno, "text": exception.text.rstrip(), "caret_indent": caret_indent, "errname": type(exception).__name__, "msg": exception.msg, } ) # If a few edge cases, SyntaxErrors don't have all these nice fields. So we # have a fall back here. # Example edge case error message: encoding declaration in Unicode string return str(exception)
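The caret formatting can be exercised directly by triggering a real `SyntaxError` with `compile()`. A self-contained sketch of the same recipe (the filename `raven.py` is just the made-up example from the docstring):

```python
def format_syntax_error(exc: SyntaxError) -> str:
    # Indent the caret to sit under the offending character (offset is 1-based).
    caret_indent = " " * max((exc.offset or 1) - 1, 0)
    return (
        f'File "{exc.filename}", line {exc.lineno}\n'
        f"  {(exc.text or '').rstrip()}\n"
        f"  {caret_indent}^\n"
        f"{type(exc).__name__}: {exc.msg}"
    )


try:
    compile("st.write('Hello world!!'))", "raven.py", "exec")
except SyntaxError as err:
    formatted = format_syntax_error(err)

print(formatted)
```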
def _get_stack_trace_str_list(
    exception: BaseException, strip_streamlit_stack_entries: bool = False
) -> list[str]:
    """Get the stack trace for the given exception.

    Parameters
    ----------
    exception : BaseException
        The exception to extract the traceback from

    strip_streamlit_stack_entries : bool
        If True, all traceback entries that are in the Streamlit package
        will be removed from the list. We do this for exceptions that result
        from incorrect usage of Streamlit APIs, so that the user doesn't see
        a bunch of noise about ScriptRunner, DeltaGenerator, etc.

    Returns
    -------
    list
        The exception traceback as a list of strings

    """
    extracted_traceback: traceback.StackSummary | None = None
    if isinstance(exception, StreamlitAPIWarning):
        extracted_traceback = exception.tacked_on_stack
    elif hasattr(exception, "__traceback__"):
        extracted_traceback = traceback.extract_tb(exception.__traceback__)

    if isinstance(exception, UncaughtAppException):
        extracted_traceback = traceback.extract_tb(exception.exc.__traceback__)

    # Format the extracted traceback and add it to the protobuf element.
    if extracted_traceback is None:
        stack_trace_str_list = [
            "Cannot extract the stack trace for this exception. "
            "Try calling exception() within the `except` block."
        ]
    else:
        if strip_streamlit_stack_entries:
            extracted_frames = _get_nonstreamlit_traceback(extracted_traceback)
            stack_trace_str_list = traceback.format_list(extracted_frames)
        else:
            stack_trace_str_list = traceback.format_list(extracted_traceback)

        stack_trace_str_list = [item.strip() for item in stack_trace_str_list]

    return stack_trace_str_list
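The core extraction path (minus the Streamlit-specific branches) reduces to a few `traceback` calls. A standalone sketch:

```python
import traceback


def stack_trace_lines(exc: BaseException) -> list[str]:
    # Pull raw frames from the exception, format them, strip whitespace.
    frames = traceback.extract_tb(exc.__traceback__)
    return [line.strip() for line in traceback.format_list(frames)]


try:
    1 / 0
except ZeroDivisionError as err:
    lines = stack_trace_lines(err)

print(lines[-1])  # e.g. 'File "...", line N, in <module> ...'
```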
def _is_in_streamlit_package(file: str) -> bool: """True if the given file is part of the streamlit package.""" try: common_prefix = os.path.commonprefix([os.path.realpath(file), _STREAMLIT_DIR]) except ValueError: # Raised if paths are on different drives. return False return common_prefix == _STREAMLIT_DIR
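One property of `os.path.commonprefix` worth noting: it compares character by character, not path component by path component, so a sibling directory whose name merely starts with the package path also passes the prefix test (`os.path.commonpath` is the component-aware alternative). A sketch with a hypothetical install directory:

```python
import os.path

# Hypothetical package directory, for illustration only.
pkg = "/usr/lib/python3.11/site-packages/streamlit"


def shares_prefix(path: str) -> bool:
    # Same pattern as above, without the realpath normalization.
    return os.path.commonprefix([path, pkg]) == pkg


print(shares_prefix(pkg + "/elements/image.py"))  # True
print(shares_prefix(pkg + "-nightly/foo.py"))     # also True: string prefix only
print(shares_prefix("/tmp/app.py"))               # False
```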
def _current_form(this_dg: DeltaGenerator) -> FormData | None: """Find the FormData for the given DeltaGenerator. Forms are blocks, and can have other blocks nested inside them. To find the current form, we walk up the dg_stack until we find a DeltaGenerator that has FormData. """ # Avoid circular imports. from streamlit.delta_generator import dg_stack if not runtime.exists(): return None if this_dg._form_data is not None: return this_dg._form_data if this_dg == this_dg._main_dg: # We were created via an `st.foo` call. # Walk up the dg_stack to see if we're nested inside a `with st.form` statement. for dg in reversed(dg_stack.get()): if dg._form_data is not None: return dg._form_data else: # We were created via an `dg.foo` call. # Take a look at our parent's form data to see if we're nested inside a form. parent = this_dg._parent if parent is not None and parent._form_data is not None: return parent._form_data return None
def current_form_id(dg: DeltaGenerator) -> str: """Return the form_id for the current form, or the empty string if we're not inside an `st.form` block. (We return the empty string, instead of None, because this value is assigned to protobuf message fields, and None is not valid.) """ form_data = _current_form(dg) if form_data is None: return "" return form_data.form_id
def is_in_form(dg: DeltaGenerator) -> bool: """True if the DeltaGenerator is inside an st.form block.""" return current_form_id(dg) != ""
def marshall( proto: GraphVizChartProto, figure_or_dot: FigureOrDot, use_container_width: bool, element_id: str, ) -> None: """Construct a GraphViz chart object. See DeltaGenerator.graphviz_chart for docs. """ if type_util.is_graphviz_chart(figure_or_dot): dot = figure_or_dot.source engine = figure_or_dot.engine elif isinstance(figure_or_dot, str): dot = figure_or_dot engine = "dot" else: raise StreamlitAPIException( "Unhandled type for graphviz chart: %s" % type(figure_or_dot) ) proto.spec = dot proto.engine = engine proto.use_container_width = use_container_width proto.element_id = element_id
def marshall( proto: IFrameProto, src: str | None = None, srcdoc: str | None = None, width: int | None = None, height: int | None = None, scrolling: bool = False, ) -> None: """Marshalls data into an IFrame proto. These parameters correspond directly to <iframe> attributes, which are described in more detail at https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe. Parameters ---------- proto : IFrame protobuf The protobuf object to marshall data into. src : str The URL of the page to embed. srcdoc : str Inline HTML to embed. Overrides src. width : int The width of the frame in CSS pixels. Defaults to the app's default element width. height : int The height of the frame in CSS pixels. Defaults to 150. scrolling : bool If true, show a scrollbar when the content is larger than the iframe. Otherwise, never show a scrollbar. """ if src is not None: proto.src = src if srcdoc is not None: proto.srcdoc = srcdoc if width is not None: proto.width = width proto.has_width = True if height is not None: proto.height = height else: proto.height = 150 proto.scrolling = scrolling
def _validate_image_format_string( image_data: bytes | PILImage, format: str ) -> ImageFormat: """Return either "JPEG", "PNG", or "GIF", based on the input `format` string. - If `format` is "JPEG" or "JPG" (or any capitalization thereof), return "JPEG" - If `format` is "PNG" (or any capitalization thereof), return "PNG" - For all other strings, return "PNG" if the image has an alpha channel, "GIF" if the image is a GIF, and "JPEG" otherwise. """ format = format.upper() if format == "JPEG" or format == "PNG": return cast(ImageFormat, format) # We are forgiving on the spelling of JPEG if format == "JPG": return "JPEG" if isinstance(image_data, bytes): from PIL import Image pil_image = Image.open(io.BytesIO(image_data)) else: pil_image = image_data if _image_is_gif(pil_image): return "GIF" if _image_may_have_alpha_channel(pil_image): return "PNG" return "JPEG"
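The decision table above can be condensed into a PIL-free sketch, with the alpha-channel and GIF probes replaced by plain booleans for illustration:

```python
def normalize_format(fmt: str, has_alpha: bool = False, is_gif: bool = False) -> str:
    fmt = fmt.upper()
    if fmt in ("JPEG", "PNG"):
        return fmt
    if fmt == "JPG":  # forgiving spelling
        return "JPEG"
    if is_gif:
        return "GIF"
    # "auto" and everything else: PNG preserves transparency, JPEG otherwise.
    return "PNG" if has_alpha else "JPEG"


print(normalize_format("jpg"))                   # JPEG
print(normalize_format("auto", has_alpha=True))  # PNG
```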
def _PIL_to_bytes( image: PILImage, format: ImageFormat = "JPEG", quality: int = 100, ) -> bytes: """Convert a PIL image to bytes.""" tmp = io.BytesIO() # User must have specified JPEG, so we must convert it if format == "JPEG" and _image_may_have_alpha_channel(image): image = image.convert("RGB") image.save(tmp, format=format, quality=quality) return tmp.getvalue()
def _get_image_format_mimetype(image_format: ImageFormat) -> str: """Get the mimetype string for the given ImageFormat.""" return f"image/{image_format.lower()}"
def _ensure_image_size_and_format( image_data: bytes, width: int, image_format: ImageFormat ) -> bytes: """Resize an image if it exceeds the given width, or if exceeds MAXIMUM_CONTENT_WIDTH. Ensure the image's format corresponds to the given ImageFormat. Return the (possibly resized and reformatted) image bytes. """ from PIL import Image pil_image = Image.open(io.BytesIO(image_data)) actual_width, actual_height = pil_image.size if width < 0 and actual_width > MAXIMUM_CONTENT_WIDTH: width = MAXIMUM_CONTENT_WIDTH if width > 0 and actual_width > width: # We need to resize the image. new_height = int(1.0 * actual_height * width / actual_width) # pillow reexports Image.Resampling.BILINEAR as Image.BILINEAR for backwards # compatibility reasons, so we use the reexport to support older pillow # versions. The types don't seem to reflect this, though, hence the type: ignore # below. pil_image = pil_image.resize((width, new_height), resample=Image.BILINEAR) # type: ignore[attr-defined] return _PIL_to_bytes(pil_image, format=image_format, quality=90) if pil_image.format != image_format: # We need to reformat the image. return _PIL_to_bytes(pil_image, format=image_format, quality=90) # No resizing or reformatting necessary - return the original bytes. return image_data
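The resize preserves the aspect ratio; the height calculation on its own is just a proportion:

```python
def scaled_height(actual_width: int, actual_height: int, target_width: int) -> int:
    # Same formula as above: scale height by the width ratio, truncate to int.
    return int(1.0 * actual_height * target_width / actual_width)


print(scaled_height(800, 600, 400))  # 300
```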
def image_to_url( image: AtomicImage, width: int, clamp: bool, channels: Channels, output_format: ImageFormatOrAuto, image_id: str, ) -> str: """Return a URL that an image can be served from. If `image` is already a URL, return it unmodified. Otherwise, add the image to the MediaFileManager and return the URL. (When running in "raw" mode, we won't actually load data into the MediaFileManager, and we'll return an empty URL.) """ import numpy as np from PIL import Image, ImageFile image_data: bytes # Strings if isinstance(image, str): if not os.path.isfile(image) and url_util.is_url( image, allowed_schemas=("http", "https", "data") ): # If it's a url, return it directly. return image if image.endswith(".svg") and os.path.isfile(image): # Unpack local SVG image file to an SVG string with open(image) as textfile: image = textfile.read() # Following regex allows svg image files to start either via a "<?xml...>" tag # eventually followed by a "<svg...>" tag or directly starting with a "<svg>" tag if re.search(r"(^\s?(<\?xml[\s\S]*<svg\s)|^\s?<svg\s|^\s?<svg>\s)", image): if "xmlns" not in image: # The xmlns attribute is required for SVGs to render in an img tag. # If it's not present, we add to the first SVG tag: image = image.replace( "<svg", '<svg xmlns="http://www.w3.org/2000/svg" ', 1 ) # Convert to base64 to prevent issues with encoding: import base64 image_b64_encoded = base64.b64encode(image.encode("utf-8")).decode("utf-8") # Return SVG as data URI: return f"data:image/svg+xml;base64,{image_b64_encoded}" # Otherwise, try to open it as a file. try: with open(image, "rb") as f: image_data = f.read() except Exception: # When we aren't able to open the image file, we still pass the path to # the MediaFileManager - its storage backend may have access to files # that Streamlit does not. 
import mimetypes mimetype, _ = mimetypes.guess_type(image) if mimetype is None: mimetype = "application/octet-stream" url = runtime.get_instance().media_file_mgr.add(image, mimetype, image_id) caching.save_media_data(image, mimetype, image_id) return url # PIL Images elif isinstance(image, (ImageFile.ImageFile, Image.Image)): format = _validate_image_format_string(image, output_format) image_data = _PIL_to_bytes(image, format) # BytesIO # Note: This doesn't support SVG. We could convert to png (cairosvg.svg2png) # or just decode BytesIO to string and handle that way. elif isinstance(image, io.BytesIO): image_data = _BytesIO_to_bytes(image) # Numpy Arrays (ie opencv) elif isinstance(image, np.ndarray): image = _clip_image( _verify_np_shape(image), clamp, ) if channels == "BGR": if len(image.shape) == 3: image = image[:, :, [2, 1, 0]] else: raise StreamlitAPIException( 'When using `channels="BGR"`, the input image should ' "have exactly 3 color channels" ) # Depending on the version of numpy that the user has installed, the # typechecker may not be able to deduce that indexing into a # `npt.NDArray[Any]` returns a `npt.NDArray[Any]`, so we need to # ignore redundant casts below. image_data = _np_array_to_bytes( array=cast("npt.NDArray[Any]", image), # type: ignore[redundant-cast] output_format=output_format, ) # Raw bytes else: image_data = image # Determine the image's format, resize it, and get its mimetype image_format = _validate_image_format_string(image_data, output_format) image_data = _ensure_image_size_and_format(image_data, width, image_format) mimetype = _get_image_format_mimetype(image_format) if runtime.exists(): url = runtime.get_instance().media_file_mgr.add(image_data, mimetype, image_id) caching.save_media_data(image_data, mimetype, image_id) return url else: # When running in "raw mode", we can't access the MediaFileManager. return ""
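The SVG branch of `image_to_url` can be isolated into a small stdlib-only sketch: ensure an `xmlns` attribute is present (required for SVGs rendered in an `img` tag), then wrap the markup in a base64 data URI.

```python
import base64


def svg_to_data_uri(svg: str) -> str:
    if "xmlns" not in svg:
        # Inject the namespace into the first <svg tag, as the original does.
        svg = svg.replace("<svg", '<svg xmlns="http://www.w3.org/2000/svg" ', 1)
    encoded = base64.b64encode(svg.encode("utf-8")).decode("utf-8")
    return f"data:image/svg+xml;base64,{encoded}"


uri = svg_to_data_uri("<svg width='10' height='10'></svg>")
print(uri.startswith("data:image/svg+xml;base64,"))  # True
```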
def marshall_images(
    coordinates: str,
    image: ImageOrImageList,
    caption: str | npt.NDArray[Any] | list[str] | None,
    width: int | WidthBehaviour,
    proto_imgs: ImageListProto,
    clamp: bool,
    channels: Channels = "RGB",
    output_format: ImageFormatOrAuto = "auto",
) -> None:
    """Fill an ImageListProto with a list of images and their captions.

    The images will be resized and reformatted as necessary.

    Parameters
    ----------
    coordinates
        A string identifying the images' location in the frontend.
    image
        The image or images to include in the ImageListProto.
    caption
        Image caption. If displaying multiple images, caption should be a
        list of captions (one for each image).
    width
        The desired width of the image or images. This parameter will be
        passed to the frontend.
        Positive values set the image width explicitly. Negative values have
        special behavior; for details, see: `WidthBehaviour`
    proto_imgs
        The ImageListProto to fill in.
    clamp
        Clamp image pixel values to a valid range ([0-255] per channel).
        This is only meaningful for byte array images; the parameter is
        ignored for image URLs. If this is not set, and an image has an
        out-of-range value, an error will be thrown.
    channels
        If image is an nd.array, this parameter denotes the format used to
        represent color information. Defaults to 'RGB', meaning
        `image[:, :, 0]` is the red channel, `image[:, :, 1]` is green, and
        `image[:, :, 2]` is blue. For images coming from libraries like
        OpenCV you should set this to 'BGR', instead.
    output_format
        This parameter specifies the format to use when transferring the
        image data. Photos should use the JPEG format for lossy compression
        while diagrams should use the PNG format for lossless compression.
        Defaults to 'auto' which identifies the compression type based on
        the type and format of the image argument.
    """
    import numpy as np

    channels = cast(Channels, channels.upper())

    # Turn single image and caption into one element list.
images: Sequence[AtomicImage] if isinstance(image, list): images = image elif isinstance(image, np.ndarray) and len(image.shape) == 4: images = _4d_to_list_3d(image) else: images = [image] if type(caption) is list: captions: Sequence[str | None] = caption else: if isinstance(caption, str): captions = [caption] # You can pass in a 1-D Numpy array as captions. elif isinstance(caption, np.ndarray) and len(caption.shape) == 1: captions = caption.tolist() # If there are no captions then make the captions list the same size # as the images list. elif caption is None: captions = [None] * len(images) else: captions = [str(caption)] assert type(captions) == list, "If image is a list then caption should be as well" assert len(captions) == len(images), "Cannot pair %d captions with %d images." % ( len(captions), len(images), ) proto_imgs.width = int(width) # Each image in an image list needs to be kept track of at its own coordinates. for coord_suffix, (image, caption) in enumerate(zip(images, captions)): proto_img = proto_imgs.imgs.add() if caption is not None: proto_img.caption = str(caption) # We use the index of the image in the input image list to identify this image inside # MediaFileManager. For this, we just add the index to the image's "coordinates". image_id = "%s-%i" % (coordinates, coord_suffix) proto_img.url = image_to_url( image, width, clamp, channels, output_format, image_id )
def _ensure_serialization(o: object) -> str | list[Any]: """A repr function for json.dumps default arg, which tries to serialize sets as lists""" if isinstance(o, set): return list(o) return repr(o)
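This helper plugs into `json.dumps` via its `default=` parameter: sets are not JSON-serializable on their own, so the fallback converts them to lists (sorted here for deterministic output in the sketch; the real helper returns an unsorted list) and `repr()`s anything else.

```python
import json


def ensure_serialization(o):
    if isinstance(o, set):
        return sorted(o)  # sorted only for a deterministic example
    return repr(o)


print(json.dumps({"tags": {"b", "a"}}, default=ensure_serialization))
```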
def _get_lat_or_lon_col_name(
    data: DataFrame,
    human_readable_name: str,
    col_name_from_user: str | None,
    default_col_names: set[str],
) -> str:
    """Returns the column name to be used for latitude or longitude."""

    if isinstance(col_name_from_user, str) and col_name_from_user in data.columns:
        col_name = col_name_from_user

    else:
        # Try one of the default col_names:
        candidate_col_name = None

        for c in default_col_names:
            if c in data.columns:
                candidate_col_name = c
                break

        if candidate_col_name is None:
            formatted_allowed_col_name = ", ".join(map(repr, sorted(default_col_names)))
            formatted_col_names = ", ".join(map(repr, list(data.columns)))

            raise StreamlitAPIException(
                f"Map data must contain a {human_readable_name} column named: "
                f"{formatted_allowed_col_name}. Existing columns: {formatted_col_names}"
            )
        else:
            col_name = candidate_col_name

    # Check that the column is well-formed.
    # IMPLEMENTATION NOTE: We can't use isnull().values.any() because .values can return
    # ExtensionArrays, which don't have a .any() method.
    # (Read about ExtensionArrays here:
    # https://pandas.pydata.org/community/blog/extension-arrays.html)
    # However, after a performance test I found the solution below runs basically as
    # fast as .values.any().
    if any(data[col_name].isnull().array):
        raise StreamlitAPIException(
            f"Column {col_name} is not allowed to contain null values, such "
            "as NaN, NaT, or None."
        )

    return col_name
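The selection order (explicit user-supplied name wins, otherwise the first matching default) can be shown without pandas, using plain sequences and a hypothetical helper name (the original iterates a set, so its fallback order is unspecified; a list is used here for determinism):

```python
def pick_column(columns, user_name, default_names, kind):
    if isinstance(user_name, str) and user_name in columns:
        return user_name
    for name in default_names:  # first matching default wins
        if name in columns:
            return name
    raise ValueError(f"Map data must contain a {kind} column")


cols = ["city", "lat", "lon"]
print(pick_column(cols, None, ["latitude", "lat"], "latitude"))  # lat
```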
def _get_value_and_col_name( data: DataFrame, value_or_name: Any, default_value: Any, ) -> tuple[Any, str | None]: """Take a value_or_name passed in by the Streamlit developer and return a PyDeck argument and column name for that property. This is used for the size and color properties of the chart. Example: - If the user passes size=None, this returns the default size value and no column. - If the user passes size=42, this returns 42 and no column. - If the user passes size="my_col_123", this returns "@@=my_col_123" and "my_col_123". """ pydeck_arg: str | float if isinstance(value_or_name, str) and value_or_name in data.columns: col_name = value_or_name pydeck_arg = f"@@={col_name}" else: col_name = None if value_or_name is None: pydeck_arg = default_value else: pydeck_arg = value_or_name return pydeck_arg, col_name
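The three cases from the docstring are testable without pandas if the column names are given as a plain set:

```python
def value_and_col(columns, value_or_name, default_value):
    if isinstance(value_or_name, str) and value_or_name in columns:
        # A column name: emit PyDeck's "@@=" column-reference syntax.
        return f"@@={value_or_name}", value_or_name
    if value_or_name is None:
        return default_value, None
    return value_or_name, None


cols = {"my_col_123"}
print(value_and_col(cols, None, 10))          # (10, None)
print(value_and_col(cols, 42, 10))            # (42, None)
print(value_and_col(cols, "my_col_123", 10))  # ('@@=my_col_123', 'my_col_123')
```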
def _convert_color_arg_or_column( data: DataFrame, color_arg: str | Color, color_col_name: str | None, ) -> None | str | IntColorTuple: """Converts color to a format accepted by PyDeck. For example: - If color_arg is "#fff", then returns (255, 255, 255, 255). - If color_col_name is "my_col_123", then it converts everything in column my_col_123 to an accepted color format such as (0, 100, 200, 255). NOTE: This function mutates the data argument. """ color_arg_out: None | str | IntColorTuple = None if color_col_name is not None: # Convert color column to the right format. if len(data[color_col_name]) > 0 and is_color_like(data[color_col_name].iat[0]): # Use .loc[] to avoid a SettingWithCopyWarning in some cases. data.loc[:, color_col_name] = data.loc[:, color_col_name].map( to_int_color_tuple ) else: raise StreamlitAPIException( f'Column "{color_col_name}" does not appear to contain valid colors.' ) # This is guaranteed to be a str because of _get_value_and_col_name assert isinstance(color_arg, str) color_arg_out = color_arg elif color_arg is not None: color_arg_out = to_int_color_tuple(color_arg) return color_arg_out
def _get_viewport_details( data: DataFrame, lat_col_name: str, lon_col_name: str, zoom: int | None ) -> tuple[int, float, float]: """Auto-set viewport when not fully specified by user.""" min_lat = data[lat_col_name].min() max_lat = data[lat_col_name].max() min_lon = data[lon_col_name].min() max_lon = data[lon_col_name].max() center_lat = (max_lat + min_lat) / 2.0 center_lon = (max_lon + min_lon) / 2.0 range_lon = abs(max_lon - min_lon) range_lat = abs(max_lat - min_lat) if zoom is None: if range_lon > range_lat: longitude_distance = range_lon else: longitude_distance = range_lat zoom = _get_zoom_level(longitude_distance) return zoom, center_lat, center_lon
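The viewport math above reduces to min/max bookkeeping; a minimal sketch on plain coordinate lists (the real function reads pandas columns, and the sample points below are made up):

```python
# Sketch of the viewport centering logic: center on the midpoint of the
# bounding box and use the larger of the two spans to drive the zoom choice.
def viewport(lats, lons):
    center_lat = (max(lats) + min(lats)) / 2.0
    center_lon = (max(lons) + min(lons)) / 2.0
    spread = max(abs(max(lons) - min(lons)), abs(max(lats) - min(lats)))
    return center_lat, center_lon, spread

lat, lon, spread = viewport([37.76, 37.78], [-122.40, -122.30])
```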
def _get_zoom_level(distance: float) -> int: """Get the zoom level for a given distance in degrees. See https://wiki.openstreetmap.org/wiki/Zoom_levels for reference. Parameters ---------- distance : float How many degrees of longitude should fit in the map. Returns ------- int The zoom level, from 0 to 20. """ for i in range(len(_ZOOM_LEVELS) - 1): if _ZOOM_LEVELS[i + 1] < distance <= _ZOOM_LEVELS[i]: return i # For small number of points the default zoom level will be used. return _DEFAULT_ZOOM_LEVEL
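The lookup above scans a descending table of longitude spans. A runnable sketch with an illustrative table (roughly degrees of longitude per OSM zoom level; Streamlit's actual `_ZOOM_LEVELS` and `_DEFAULT_ZOOM_LEVEL` constants live elsewhere and may differ):

```python
# Hypothetical zoom table: each level halves the visible longitude span.
ZOOM_LEVELS = [360.0 / (2 ** z) for z in range(21)]  # 360, 180, 90, ...
DEFAULT_ZOOM = 12  # assumed close-in default for tiny extents

def zoom_for(distance):
    for i in range(len(ZOOM_LEVELS) - 1):
        if ZOOM_LEVELS[i + 1] < distance <= ZOOM_LEVELS[i]:
            return i
    # Very small extents fall through the table and get the default.
    return DEFAULT_ZOOM
```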
def _reshape_youtube_url(url: str) -> str | None: """Return whether URL is any kind of YouTube embed or watch link. If so, reshape URL into an embed link suitable for use in an iframe. If not a YouTube URL, return None. Parameters ---------- url : str Example ------- >>> print(_reshape_youtube_url('https://youtu.be/_T8LGqJtuGc')) .. output:: https://www.youtube.com/embed/_T8LGqJtuGc """ match = re.match(YOUTUBE_RE, url) if match: code = ( match.group("video_id_1") or match.group("video_id_2") or match.group("video_id_3") ) return f"https://www.youtube.com/embed/{code}" return None
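`YOUTUBE_RE` is defined elsewhere in the module; a simplified stand-in covering the two most common URL shapes shows the reshaping idea (the real pattern also handles `/embed/` links and extra query parameters, which this sketch does not):

```python
import re

# Simplified stand-in for YOUTUBE_RE: short youtu.be links and watch?v= links.
YT_RE = re.compile(
    r"""https?://
        (?:
            youtu\.be/(?P<video_id_1>[\w-]+)
          | (?:www\.)?youtube\.com/watch\?v=(?P<video_id_2>[\w-]+)
        )""",
    re.VERBOSE,
)

def reshape(url):
    m = YT_RE.match(url)
    if m:
        code = m.group("video_id_1") or m.group("video_id_2")
        return f"https://www.youtube.com/embed/{code}"
    return None  # not a YouTube URL
```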
def _marshall_av_media( coordinates: str, proto: AudioProto | VideoProto, data: MediaData, mimetype: str, ) -> None: """Fill audio or video proto based on contents of data. Given a string, check if it's a url; if so, send it out without modification. Otherwise assume strings are filenames and let any OS errors raise. Load data either from file or through bytes-processing methods into a MediaFile object. Pack proto with generated Tornado-based URL. (When running in "raw" mode, we won't actually load data into the MediaFileManager, and we'll return an empty URL.) """ # Audio and Video methods have already checked if this is a URL by this point. if data is None: # Allow empty values so media players can be shown without media. return data_or_filename: bytes | str if isinstance(data, (str, bytes)): # Pass strings and bytes through unchanged data_or_filename = data elif isinstance(data, io.BytesIO): data.seek(0) data_or_filename = data.getvalue() elif isinstance(data, io.RawIOBase) or isinstance(data, io.BufferedReader): data.seek(0) read_data = data.read() if read_data is None: return else: data_or_filename = read_data elif type_util.is_type(data, "numpy.ndarray"): data_or_filename = data.tobytes() else: raise RuntimeError("Invalid binary data format: %s" % type(data)) if runtime.exists(): file_url = runtime.get_instance().media_file_mgr.add( data_or_filename, mimetype, coordinates ) caching.save_media_data(data_or_filename, mimetype, coordinates) else: # When running in "raw mode", we can't access the MediaFileManager. file_url = "" proto.url = file_url
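The type-normalization branch above can be sketched standalone: coerce the accepted input types down to bytes, passing `str` and `bytes` through unchanged (numpy arrays, which the real code also handles via `.tobytes()`, are omitted here to keep the sketch stdlib-only):

```python
import io

# Sketch of the data-normalization step from _marshall_av_media.
def normalize_media(data):
    if data is None:
        return None  # empty player, no media
    if isinstance(data, (str, bytes)):
        return data  # URLs/filenames and raw bytes pass through
    if isinstance(data, io.BytesIO):
        data.seek(0)
        return data.getvalue()
    if isinstance(data, (io.RawIOBase, io.BufferedReader)):
        data.seek(0)
        return data.read()
    raise TypeError(f"Invalid binary data format: {type(data)}")
```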
def marshall_video( coordinates: str, proto: VideoProto, data: MediaData, mimetype: str = "video/mp4", start_time: int = 0, subtitles: SubtitleData = None, end_time: int | None = None, loop: bool = False, autoplay: bool = False, muted: bool = False, ) -> None: """Marshalls a video proto, using url processors as needed. Parameters ---------- coordinates : str proto : the proto to fill. Must have a string field called "data". data : str, bytes, BytesIO, numpy.ndarray, or file opened with io.open(). Raw video data or a string with a URL pointing to the video to load. Includes support for YouTube URLs. If passing the raw data, this must include headers and any other bytes required in the actual file. mimetype : str The mime type for the video file. Defaults to 'video/mp4'. See https://tools.ietf.org/html/rfc4281 for more info. start_time : int The time from which this element should start playing. (default: 0) subtitles: str, dict, or io.BytesIO Optional subtitle data for the video, supporting several input types: * None (default): No subtitles. * A string: File path to a subtitle file in '.vtt' or '.srt' formats, or the raw content of subtitles conforming to these formats. If providing raw content, the string must adhere to the WebVTT or SRT format specifications. * A dictionary: Pairs of labels and file paths or raw subtitle content in '.vtt' or '.srt' formats. Enables multiple subtitle tracks. The label will be shown in the video player. Example: {'English': 'path/to/english.vtt', 'French': 'path/to/french.srt'} * io.BytesIO: A BytesIO stream that contains valid '.vtt' or '.srt' formatted subtitle data. When provided, subtitles are displayed by default. For multiple tracks, the first one is displayed by default. Not supported for YouTube videos. end_time: int The time at which this element should stop playing loop: bool Whether the video should loop playback. autoplay: bool Whether the video should start playing automatically. 
        Browsers will not autoplay video files if the user has not interacted
        with the page yet, for example by clicking on the page while it loads.
        To enable autoplay without user interaction, you can set muted=True.
        Defaults to False.
    muted: bool
        Whether the video should play with the audio silenced. This can be
        used to enable autoplay without user interaction. Defaults to False.
    """
    if start_time < 0 or (end_time is not None and end_time <= start_time):
        raise StreamlitAPIException("Invalid start_time and end_time combination.")

    proto.start_time = start_time
    proto.muted = muted

    if end_time is not None:
        proto.end_time = end_time
    proto.loop = loop

    # "type" distinguishes between YouTube and non-YouTube links
    proto.type = VideoProto.Type.NATIVE
    if isinstance(data, str) and url_util.is_url(
        data, allowed_schemas=("http", "https", "data")
    ):
        if youtube_url := _reshape_youtube_url(data):
            proto.url = youtube_url
            proto.type = VideoProto.Type.YOUTUBE_IFRAME
            if subtitles:
                raise StreamlitAPIException(
                    "Subtitles are not supported for YouTube videos."
                )
        else:
            proto.url = data
    else:
        _marshall_av_media(coordinates, proto, data, mimetype)

    if subtitles:
        subtitle_items: list[tuple[str, str | Path | bytes | io.BytesIO]] = []

        # Single subtitle
        if isinstance(subtitles, (str, bytes, io.BytesIO, Path)):
            subtitle_items.append(("default", subtitles))
        # Multiple subtitles
        elif isinstance(subtitles, dict):
            subtitle_items.extend(subtitles.items())
        else:
            raise StreamlitAPIException(
                f"Unsupported data type for subtitles: {type(subtitles)}. "
                f"Only str, bytes, Path, io.BytesIO, and dict are supported."
            )

        for label, subtitle_data in subtitle_items:
            sub = proto.subtitles.add()
            sub.label = label or ""

            # Coordinates are used in the media_file_manager to identify the
            # element's place. For subtitles, we reuse the video coordinates
            # with a suffix. This is not aligned with the common coordinates
            # format, but the media_file_manager only uses it as a unique
            # identifier, so it is fine.
subtitle_coordinates = f"{coordinates}[subtitle{label}]" try: sub.url = process_subtitle_data( subtitle_coordinates, subtitle_data, label ) except (TypeError, ValueError) as original_err: raise StreamlitAPIException( f"Failed to process the provided subtitle: {label}" ) from original_err if autoplay: ctx = get_script_run_ctx() proto.autoplay = autoplay id = compute_widget_id( "video", url=proto.url, mimetype=mimetype, start_time=start_time, end_time=end_time, loop=loop, autoplay=autoplay, muted=muted, page=ctx.page_script_hash if ctx else None, ) proto.id = id
def _parse_start_time_end_time( start_time: MediaTime, end_time: MediaTime | None ) -> tuple[int, int | None]: """Parse start_time and end_time and return them as int.""" try: maybe_start_time = time_to_seconds(start_time, coerce_none_to_inf=False) if maybe_start_time is None: raise ValueError start_time = int(maybe_start_time) except (StreamlitAPIException, ValueError): error_msg = TIMEDELTA_PARSE_ERROR_MESSAGE.format( param_name="start_time", param_value=start_time ) raise StreamlitAPIException(error_msg) from None try: end_time = time_to_seconds(end_time, coerce_none_to_inf=False) if end_time is not None: end_time = int(end_time) except StreamlitAPIException: error_msg = TIMEDELTA_PARSE_ERROR_MESSAGE.format( param_name="end_time", param_value=end_time ) raise StreamlitAPIException(error_msg) from None return start_time, end_time
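`time_to_seconds` is Streamlit-internal and not shown in this file; a hedged stand-in accepting only `int` seconds and `datetime.timedelta` shows the shape of the parsing (the real helper also accepts strings like `"2m30s"`):

```python
from datetime import timedelta

# Simplified stand-in for time_to_seconds.
def to_seconds(value):
    if isinstance(value, timedelta):
        return int(value.total_seconds())
    if isinstance(value, int) and value >= 0:
        return value
    raise ValueError(f"cannot interpret {value!r} as a time offset")

def parse_window(start, end):
    """Normalize a (start, end) pair into integer seconds; end may be None."""
    start_s = to_seconds(start)
    end_s = None if end is None else to_seconds(end)
    return start_s, end_s
```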
def _validate_and_normalize(data: npt.NDArray[Any]) -> tuple[bytes, int]:
    """Validates and normalizes numpy array data.
    We validate numpy array shape (should be 1d or 2d)
    We normalize input data to int16 [-32768, 32767] range.

    Parameters
    ----------
    data : numpy array
        numpy array to be validated and normalized

    Returns
    -------
    Tuple of (bytes, int)
        (bytes, nchan) where
        - bytes : bytes of normalized numpy array converted to int16
        - nchan : number of channels for audio signal. 1 for mono, or 2 for stereo.
    """
    # we import numpy here locally to import it only when needed (when numpy array given
    # to st.audio data)
    import numpy as np

    data: npt.NDArray[Any] = np.array(data, dtype=float)

    if len(data.shape) == 1:
        nchan = 1
    elif len(data.shape) == 2:
        # In wave files, channels are interleaved. E.g.,
        # "L1R1L2R2..." for stereo. See
        # http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx
        # for channel ordering
        nchan = data.shape[0]
        data = data.T.ravel()
    else:
        raise StreamlitAPIException("Numpy array audio input must be a 1D or 2D array.")

    if data.size == 0:
        return data.astype(np.int16).tobytes(), nchan

    max_abs_value = np.max(np.abs(data))
    if max_abs_value == 0:
        # An all-zero (silent) signal: skip scaling to avoid dividing by zero.
        return data.astype(np.int16).tobytes(), nchan

    # 16-bit samples are stored as 2's-complement signed integers,
    # ranging from -32768 to 32767.
    # scaled_data is PCM 16 bit numpy array, that's why we multiply [-1, 1] float
    # values to 32_767 == 2 ** 15 - 1.
    np_array = (data / max_abs_value) * 32767
    scaled_data = np_array.astype(np.int16)
    return scaled_data.tobytes(), nchan
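The peak-normalization step can be sketched in pure Python (the real code vectorizes with numpy; note that `astype(np.int16)` and `int()` both truncate toward zero, so the two agree sample for sample):

```python
# Pure-Python sketch of the int16 scaling: divide by the peak amplitude,
# multiply by 32767, and truncate to int.
def scale_to_int16(samples):
    if not samples:
        return []
    peak = max(abs(s) for s in samples)
    if peak == 0:  # silence: avoid dividing by zero
        return [0] * len(samples)
    return [int(s / peak * 32767) for s in samples]
```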
def _make_wav(data: npt.NDArray[Any], sample_rate: int) -> bytes: """ Transform a numpy array to a PCM bytestring We use code from IPython display module to convert numpy array to wave bytes https://github.com/ipython/ipython/blob/1015c392f3d50cf4ff3e9f29beede8c1abfdcb2a/IPython/lib/display.py#L146 """ # we import wave here locally to import it only when needed (when numpy array given # to st.audio data) import wave scaled, nchan = _validate_and_normalize(data) with io.BytesIO() as fp, wave.open(fp, mode="wb") as waveobj: waveobj.setnchannels(nchan) waveobj.setframerate(sample_rate) waveobj.setsampwidth(2) waveobj.setcomptype("NONE", "NONE") waveobj.writeframes(scaled) return fp.getvalue()
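The stdlib `wave` module does the heavy lifting above; a runnable sketch on a short, hand-scaled int16 signal (mono only, to stay self-contained without numpy):

```python
import io
import struct
import wave

# Pack int16 samples into a mono WAV container, mirroring _make_wav.
def make_wav(samples, sample_rate):
    frames = struct.pack(f"<{len(samples)}h", *samples)  # little-endian int16
    with io.BytesIO() as fp, wave.open(fp, mode="wb") as w:
        w.setnchannels(1)           # mono
        w.setframerate(sample_rate)
        w.setsampwidth(2)           # 16-bit PCM
        w.writeframes(frames)       # also patches the RIFF header sizes
        return fp.getvalue()

wav = make_wav([0, 16383, -16384, 0], 44100)
```

Calling `getvalue()` inside the `with` block is safe here because `wave.Wave_write.writeframes` patches the header sizes immediately after writing.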
def _maybe_convert_to_wav_bytes(data: MediaData, sample_rate: int | None) -> MediaData: """Convert data to wav bytes if the data type is numpy array.""" if type_util.is_type(data, "numpy.ndarray") and sample_rate is not None: data = _make_wav(cast("npt.NDArray[Any]", data), sample_rate) return data
def marshall_audio( coordinates: str, proto: AudioProto, data: MediaData, mimetype: str = "audio/wav", start_time: int = 0, sample_rate: int | None = None, end_time: int | None = None, loop: bool = False, autoplay: bool = False, ) -> None: """Marshalls an audio proto, using data and url processors as needed. Parameters ---------- coordinates : str proto : The proto to fill. Must have a string field called "url". data : str, bytes, BytesIO, numpy.ndarray, or file opened with io.open() Raw audio data or a string with a URL pointing to the file to load. If passing the raw data, this must include headers and any other bytes required in the actual file. mimetype : str The mime type for the audio file. Defaults to "audio/wav". See https://tools.ietf.org/html/rfc4281 for more info. start_time : int The time from which this element should start playing. (default: 0) sample_rate: int or None Optional param to provide sample_rate in case of numpy array end_time: int The time at which this element should stop playing loop: bool Whether the audio should loop playback. autoplay : bool Whether the audio should start playing automatically. Browsers will not autoplay audio files if the user has not interacted with the page yet. """ proto.start_time = start_time if end_time is not None: proto.end_time = end_time proto.loop = loop if isinstance(data, str) and url_util.is_url( data, allowed_schemas=("http", "https", "data") ): proto.url = data else: data = _maybe_convert_to_wav_bytes(data, sample_rate) _marshall_av_media(coordinates, proto, data, mimetype) if autoplay: ctx = get_script_run_ctx() proto.autoplay = autoplay id = compute_widget_id( "audio", url=proto.url, mimetype=mimetype, start_time=start_time, sample_rate=sample_rate, end_time=end_time, loop=loop, autoplay=autoplay, page=ctx.page_script_hash if ctx else None, ) proto.id = id
def parse_selection_mode( selection_mode: SelectionMode | Iterable[SelectionMode], ) -> set[PlotlyChartProto.SelectionMode.ValueType]: """Parse and check the user provided selection modes.""" if isinstance(selection_mode, str): # Only a single selection mode was passed selection_mode_set = {selection_mode} else: # Multiple selection modes were passed selection_mode_set = set(selection_mode) if not selection_mode_set.issubset(_SELECTION_MODES): raise StreamlitAPIException( f"Invalid selection mode: {selection_mode}. " f"Valid options are: {_SELECTION_MODES}" ) parsed_selection_modes = [] for selection_mode in selection_mode_set: if selection_mode == "points": parsed_selection_modes.append(PlotlyChartProto.SelectionMode.POINTS) elif selection_mode == "lasso": parsed_selection_modes.append(PlotlyChartProto.SelectionMode.LASSO) elif selection_mode == "box": parsed_selection_modes.append(PlotlyChartProto.SelectionMode.BOX) return set(parsed_selection_modes)
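With plain strings standing in for the proto enum values, the validation above reduces to a set-difference check (the real code additionally maps each string to a `PlotlyChartProto.SelectionMode` constant):

```python
VALID_MODES = {"points", "box", "lasso"}

# Sketch of the single-vs-multiple normalization and subset validation.
def parse_modes(mode_or_modes):
    modes = {mode_or_modes} if isinstance(mode_or_modes, str) else set(mode_or_modes)
    bad = modes - VALID_MODES
    if bad:
        raise ValueError(f"Invalid selection mode(s): {sorted(bad)}")
    return modes
```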
def _check_float_between(value: float, low: float = 0.0, high: float = 1.0) -> bool:
    """Check that the given value lies within the bounds [low, high],
    treating values that are only off by floating-point noise as acceptable.

    Notes
    -----
    This check is required for handling values that are slightly above or
    below the acceptable range, for example -0.0000000000021 or
    1.0000000000000013. Such values fall a little outside the conventional
    0.0 <= x <= 1.0 condition due to floating point operations, but should
    still be considered acceptable input.

    Parameters
    ----------
    value : float
    low : float
    high : float
    """
    return (
        (low <= value <= high)
        or math.isclose(value, low, rel_tol=1e-9, abs_tol=1e-9)
        or math.isclose(value, high, rel_tol=1e-9, abs_tol=1e-9)
    )
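A standalone copy of the check shows which borderline inputs pass: values that overshoot a bound by float noise are accepted, while genuinely out-of-range values are not:

```python
import math

# Tolerant bounds check: strict comparison, rescued by isclose at the bounds.
def between(value, low=0.0, high=1.0):
    return (
        low <= value <= high
        or math.isclose(value, low, rel_tol=1e-9, abs_tol=1e-9)
        or math.isclose(value, high, rel_tol=1e-9, abs_tol=1e-9)
    )
```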
def spinner(text: str = "In progress...", *, _cache: bool = False) -> Iterator[None]:
    """Temporarily displays a message while executing a block of code.

    Parameters
    ----------
    text : str
        A message to display while executing that block

    Example
    -------
    >>> import time
    >>> import streamlit as st
    >>>
    >>> with st.spinner('Wait for it...'):
    ...     time.sleep(5)
    >>> st.success('Done!')
    """
    from streamlit.proto.Spinner_pb2 import Spinner as SpinnerProto
    from streamlit.string_util import clean_text

    message = st.empty()

    # Set the message 0.5 seconds in the future to avoid annoying
    # flickering if this spinner runs too quickly.
    DELAY_SECS = 0.5
    display_message = True
    display_message_lock = threading.Lock()

    try:

        def set_message():
            with display_message_lock:
                if display_message:
                    spinner_proto = SpinnerProto()
                    spinner_proto.text = clean_text(text)
                    spinner_proto.cache = _cache
                    message._enqueue("spinner", spinner_proto)

        add_script_run_ctx(threading.Timer(DELAY_SECS, set_message)).start()

        # Yield control back to the context.
        yield
    finally:
        with display_message_lock:
            display_message = False

        if "chat_message" in set(message._active_dg._ancestor_block_types):
            # Temporary stale element fix:
            # For chat messages, we are resetting the spinner placeholder to an
            # empty container instead of an empty placeholder (st.empty) to have
            # it removed from the delta path. Empty containers are ignored in the
            # frontend since they are configured with allow_empty=False. This
            # prevents issues with stale elements caused by the spinner being
            # rendered only in some situations (e.g. for caching).
            message.container()
        else:
            message.empty()
def check_cache_replay_rules() -> None: """Check if a widget is allowed to be used in the current context. More specifically, this checks if the current context is inside a cached function that disallows widget usage. If so, it raises a warning. If there are other similar checks in the future, we could extend this function to check for those as well. And rename it to check_widget_usage_rules. """ if runtime.exists(): from streamlit.runtime.scriptrunner.script_run_context import get_script_run_ctx ctx = get_script_run_ctx() if ctx and ctx.disallow_cached_widget_usage: # We use an exception here to show a proper stack trace # that indicates to the user where the issue is. streamlit.exception(CachedWidgetWarning())
def get_label_visibility_proto_value(
    label_visibility_string: type_util.LabelVisibility,
) -> LabelVisibilityMessage.LabelVisibilityOptions.ValueType:
    """Returns one of the LabelVisibilityMessage enum constants based on the
    given string value."""

    if label_visibility_string == "visible":
        return LabelVisibilityMessage.LabelVisibilityOptions.VISIBLE
    elif label_visibility_string == "hidden":
        return LabelVisibilityMessage.LabelVisibilityOptions.HIDDEN
    elif label_visibility_string == "collapsed":
        return LabelVisibilityMessage.LabelVisibilityOptions.COLLAPSED

    raise ValueError(f"Unknown label visibility value: {label_visibility_string}")
def maybe_coerce_enum(register_widget_result, options, opt_sequence):
    """Maybe coerce a RegisterWidgetResult with an Enum member value to
    RegisterWidgetResult[option] if option is an EnumType, otherwise just return
    the original RegisterWidgetResult."""

    # If the value is not an Enum, return early
    if not isinstance(register_widget_result.value, Enum):
        return register_widget_result

    coerce_class: EnumMeta | None
    if isinstance(options, EnumMeta):
        coerce_class = options
    else:
        coerce_class = _extract_common_class_from_iter(opt_sequence)
        if coerce_class is None:
            return register_widget_result

    return RegisterWidgetResult(
        type_util.coerce_enum(register_widget_result.value, coerce_class),
        register_widget_result.value_changed,
    )
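`type_util.coerce_enum` is not shown in this file; the coercion idea can be illustrated with a plausible name-based lookup between two structurally identical Enum classes (an assumption about its behavior, not Streamlit's actual implementation):

```python
from enum import Enum

# Two "generations" of the same Enum, e.g. after a script rerun redefines it.
class OldColor(Enum):
    RED = 1
    BLUE = 2

class NewColor(Enum):
    RED = 1
    BLUE = 2

# Hypothetical coercion: map a member onto the same-named member of the
# target class.
def coerce(member, target_cls):
    return target_cls[member.name]
```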
def maybe_coerce_enum_sequence(register_widget_result, options, opt_sequence):
    """Maybe coerce a RegisterWidgetResult with a sequence of Enum members as
    value to RegisterWidgetResult[Sequence[option]] if option is an EnumType,
    otherwise just return the original RegisterWidgetResult."""

    # If not all widget values are Enums, return early
    if not all(isinstance(val, Enum) for val in register_widget_result.value):
        return register_widget_result

    # Extract the class to coerce
    coerce_class: EnumMeta | None
    if isinstance(options, EnumMeta):
        coerce_class = options
    else:
        coerce_class = _extract_common_class_from_iter(opt_sequence)
        if coerce_class is None:
            return register_widget_result

    # Return a new RegisterWidgetResult with the coerced enum values sequence
    return RegisterWidgetResult(
        type(register_widget_result.value)(
            type_util.coerce_enum(val, coerce_class)
            for val in register_widget_result.value
        ),
        register_widget_result.value_changed,
    )
def _extract_common_class_from_iter(iterable: Iterable[Any]) -> Any:
    """Return the common class of all elements in an iterable if they share
    one. Otherwise, return None."""
    try:
        inner_iter = iter(iterable)
        first_class = type(next(inner_iter))
    except StopIteration:
        return None
    if all(type(item) is first_class for item in inner_iter):
        return first_class
    return None
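The helper is pure Python and easy to exercise; copied here so the snippet runs standalone. Note the exact-type check (`type(item) is first_class`), which deliberately treats subclasses as different, so `[1, True]` has no common class:

```python
# Standalone copy of _extract_common_class_from_iter for demonstration.
def common_class(iterable):
    it = iter(iterable)
    try:
        first_class = type(next(it))
    except StopIteration:
        return None  # empty iterable: no common class
    if all(type(item) is first_class for item in it):
        return first_class
    return None
```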
def is_type_compatible(column_type: ColumnType, data_kind: ColumnDataKind) -> bool: """Check if the column type is compatible with the underlying data kind. This check only applies to editable column types (e.g. number or text). Non-editable column types (e.g. bar_chart or image) can be configured for all data kinds (this might change in the future). Parameters ---------- column_type : ColumnType The column type to check. data_kind : ColumnDataKind The data kind to check. Returns ------- bool True if the column type is compatible with the data kind, False otherwise. """ if column_type not in _EDITING_COMPATIBILITY_MAPPING: return True return data_kind in _EDITING_COMPATIBILITY_MAPPING[column_type]
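A hypothetical shape for `_EDITING_COMPATIBILITY_MAPPING` makes the lookup concrete (the entries below are made up for illustration; the real mapping lives elsewhere in this module):

```python
# Made-up compatibility table: editable column types -> editable data kinds.
COMPAT = {
    "number": {"integer", "float"},
    "text": {"string"},
}

def compatible(column_type, data_kind):
    if column_type not in COMPAT:
        return True  # non-editable types accept any data kind
    return data_kind in COMPAT[column_type]
```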
def _determine_data_kind_via_arrow(field: pa.Field) -> ColumnDataKind: """Determine the data kind via the arrow type information. The column data kind refers to the shared data type of the values in the column (e.g. int, float, str, bool). Parameters ---------- field : pa.Field The arrow field from the arrow table schema. Returns ------- ColumnDataKind The data kind of the field. """ import pyarrow as pa field_type = field.type if pa.types.is_integer(field_type): return ColumnDataKind.INTEGER if pa.types.is_floating(field_type): return ColumnDataKind.FLOAT if pa.types.is_boolean(field_type): return ColumnDataKind.BOOLEAN if pa.types.is_string(field_type): return ColumnDataKind.STRING if pa.types.is_date(field_type): return ColumnDataKind.DATE if pa.types.is_time(field_type): return ColumnDataKind.TIME if pa.types.is_timestamp(field_type): return ColumnDataKind.DATETIME if pa.types.is_duration(field_type): return ColumnDataKind.TIMEDELTA if pa.types.is_list(field_type): return ColumnDataKind.LIST if pa.types.is_decimal(field_type): return ColumnDataKind.DECIMAL if pa.types.is_null(field_type): return ColumnDataKind.EMPTY # Interval does not seem to work correctly: # if pa.types.is_interval(field_type): # return ColumnDataKind.INTERVAL if pa.types.is_binary(field_type): return ColumnDataKind.BYTES if pa.types.is_struct(field_type): return ColumnDataKind.DICT return ColumnDataKind.UNKNOWN
def _determine_data_kind_via_pandas_dtype(
    column: Series | Index,
) -> ColumnDataKind:
    """Determine the data kind by using the pandas dtype.

    The column data kind refers to the shared data type of the values
    in the column (e.g. int, float, str, bool).

    Parameters
    ----------
    column : pd.Series, pd.Index
        The column for which the data kind should be determined.

    Returns
    -------
    ColumnDataKind
        The data kind of the column.
    """
    import pandas as pd

    column_dtype = column.dtype
    if pd.api.types.is_bool_dtype(column_dtype):
        return ColumnDataKind.BOOLEAN

    if pd.api.types.is_integer_dtype(column_dtype):
        return ColumnDataKind.INTEGER

    if pd.api.types.is_float_dtype(column_dtype):
        return ColumnDataKind.FLOAT

    if pd.api.types.is_datetime64_any_dtype(column_dtype):
        return ColumnDataKind.DATETIME

    if pd.api.types.is_timedelta64_dtype(column_dtype):
        return ColumnDataKind.TIMEDELTA

    if isinstance(column_dtype, pd.PeriodDtype):
        return ColumnDataKind.PERIOD

    if isinstance(column_dtype, pd.IntervalDtype):
        return ColumnDataKind.INTERVAL

    if pd.api.types.is_complex_dtype(column_dtype):
        return ColumnDataKind.COMPLEX

    if pd.api.types.is_object_dtype(
        column_dtype
    ) is False and pd.api.types.is_string_dtype(column_dtype):
        # is_string_dtype also reports True for object-dtype columns, so
        # object dtype is excluded explicitly here; object columns are
        # classified via type inference instead.
        return ColumnDataKind.STRING

    return ColumnDataKind.UNKNOWN
def _determine_data_kind_via_inferred_type( column: Series | Index, ) -> ColumnDataKind: """Determine the data kind by inferring it from the underlying data. The column data kind refers to the shared data type of the values in the column (e.g. int, float, str, bool). Parameters ---------- column : pd.Series, pd.Index The column to determine the data kind for. Returns ------- ColumnDataKind The data kind of the column. """ from pandas.api.types import infer_dtype inferred_type = infer_dtype(column) if inferred_type == "string": return ColumnDataKind.STRING if inferred_type == "bytes": return ColumnDataKind.BYTES if inferred_type in ["floating", "mixed-integer-float"]: return ColumnDataKind.FLOAT if inferred_type == "integer": return ColumnDataKind.INTEGER if inferred_type == "decimal": return ColumnDataKind.DECIMAL if inferred_type == "complex": return ColumnDataKind.COMPLEX if inferred_type == "boolean": return ColumnDataKind.BOOLEAN if inferred_type in ["datetime64", "datetime"]: return ColumnDataKind.DATETIME if inferred_type == "date": return ColumnDataKind.DATE if inferred_type in ["timedelta64", "timedelta"]: return ColumnDataKind.TIMEDELTA if inferred_type == "time": return ColumnDataKind.TIME if inferred_type == "period": return ColumnDataKind.PERIOD if inferred_type == "interval": return ColumnDataKind.INTERVAL if inferred_type == "empty": return ColumnDataKind.EMPTY # Unused types: mixed, unknown-array, categorical, mixed-integer return ColumnDataKind.UNKNOWN
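The inferred-type branch above is essentially a lookup from the strings returned by pandas' `infer_dtype` to data kinds. A minimal standalone sketch of that mapping (the kind names mirror the `ColumnDataKind` members used in this module; the table itself is a simplification for illustration, not the module's actual implementation):

```python
# Simplified lookup table mirroring the if-chain in
# _determine_data_kind_via_inferred_type. Kinds are plain strings here
# instead of ColumnDataKind enum members.
_INFERRED_TYPE_TO_KIND = {
    "string": "STRING",
    "bytes": "BYTES",
    "floating": "FLOAT",
    "mixed-integer-float": "FLOAT",
    "integer": "INTEGER",
    "decimal": "DECIMAL",
    "complex": "COMPLEX",
    "boolean": "BOOLEAN",
    "datetime64": "DATETIME",
    "datetime": "DATETIME",
    "date": "DATE",
    "timedelta64": "TIMEDELTA",
    "timedelta": "TIMEDELTA",
    "time": "TIME",
    "period": "PERIOD",
    "interval": "INTERVAL",
    "empty": "EMPTY",
}


def kind_for_inferred_type(inferred_type: str) -> str:
    # Anything not in the table (mixed, unknown-array, categorical,
    # mixed-integer) falls back to UNKNOWN, as in the function above.
    return _INFERRED_TYPE_TO_KIND.get(inferred_type, "UNKNOWN")
```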
def _determine_data_kind( column: Series | Index, field: pa.Field | None = None ) -> ColumnDataKind: """Determine the data kind of a column. The column data kind refers to the shared data type of the values in the column (e.g. int, float, str, bool). Parameters ---------- column : pd.Series, pd.Index The column to determine the data kind for. field : pa.Field, optional The arrow field from the arrow table schema. Returns ------- ColumnDataKind The data kind of the column. """ import pandas as pd if isinstance(column.dtype, pd.CategoricalDtype): # Categorical columns can have different underlying data kinds # depending on the categories. return _determine_data_kind_via_inferred_type(column.dtype.categories) if field is not None: data_kind = _determine_data_kind_via_arrow(field) if data_kind != ColumnDataKind.UNKNOWN: return data_kind if column.dtype.name == "object": # If dtype is object, we need to infer the type from the column return _determine_data_kind_via_inferred_type(column) return _determine_data_kind_via_pandas_dtype(column)
def determine_dataframe_schema( data_df: DataFrame, arrow_schema: pa.Schema ) -> DataframeSchema: """Determine the schema of a dataframe. Parameters ---------- data_df : pd.DataFrame The dataframe to determine the schema of. arrow_schema : pa.Schema The Arrow schema of the dataframe. Returns ------- DataframeSchema A mapping that contains the detected data type for the index and columns. The key is the column name in the underlying dataframe or ``_index`` for index columns. """ dataframe_schema: DataframeSchema = {} # Add type of index: # TODO(lukasmasuch): We need to apply changes here to support multiindex. dataframe_schema[INDEX_IDENTIFIER] = _determine_data_kind(data_df.index) # Add types for all columns: for i, column in enumerate(data_df.items()): column_name, column_data = column dataframe_schema[column_name] = _determine_data_kind( column_data, arrow_schema.field(i) ) return dataframe_schema
def process_config_mapping( column_config: ColumnConfigMappingInput | None = None, ) -> ColumnConfigMapping: """Transforms a user-provided column config mapping into a valid column config mapping that can be used by the frontend. Parameters ---------- column_config: dict or None The user-provided column config mapping. Returns ------- dict The transformed column config mapping. """ if column_config is None: return {} transformed_column_config: ColumnConfigMapping = {} for column, config in column_config.items(): if config is None: transformed_column_config[column] = ColumnConfig(hidden=True) elif isinstance(config, str): transformed_column_config[column] = ColumnConfig(label=config) elif isinstance(config, dict): transformed_column_config[column] = config else: raise StreamlitAPIException( f"Invalid column config for column `{column}`. " f"Expected `None`, `str` or `dict`, but got `{type(config)}`." ) return transformed_column_config
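The normalization rule implemented by `process_config_mapping` can be sketched standalone with plain dicts in place of the `ColumnConfig` TypedDict (a simplification for illustration): `None` becomes `{"hidden": True}`, a string becomes `{"label": ...}`, and a dict passes through unchanged.

```python
def normalize_column_config(column_config):
    # Mirrors process_config_mapping above, using plain dicts and a
    # TypeError instead of StreamlitAPIException.
    if column_config is None:
        return {}
    normalized = {}
    for column, config in column_config.items():
        if config is None:
            normalized[column] = {"hidden": True}      # None hides the column
        elif isinstance(config, str):
            normalized[column] = {"label": config}     # a string sets the label
        elif isinstance(config, dict):
            normalized[column] = config                # dicts pass through
        else:
            raise TypeError(
                f"Invalid column config for column `{column}`. "
                f"Expected `None`, `str` or `dict`, but got `{type(config)}`."
            )
    return normalized
```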
def update_column_config( column_config_mapping: ColumnConfigMapping, column: str, column_config: ColumnConfig ) -> None: """Updates the column config value for a single column within the mapping. Parameters ---------- column_config_mapping : ColumnConfigMapping The column config mapping to update. column : str The column to update the config value for. column_config : ColumnConfig The column config to update. """ if column not in column_config_mapping: column_config_mapping[column] = {} column_config_mapping[column].update(column_config)
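Note that `update_column_config` merges new keys into any existing config for the column rather than replacing it, so repeated calls accumulate settings. A quick illustration of the same merge semantics with plain dicts:

```python
mapping = {"price": {"label": "Price"}}


def update(mapping, column, config):
    # Same merge behavior as update_column_config above.
    if column not in mapping:
        mapping[column] = {}
    mapping[column].update(config)


update(mapping, "price", {"hidden": True})    # adds a key, keeps the label
update(mapping, "stock", {"required": True})  # creates a new entry
```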
def apply_data_specific_configs( columns_config: ColumnConfigMapping, data_df: DataFrame, data_format: DataFormat, check_arrow_compatibility: bool = False, ) -> None: """Apply data specific configurations to the provided dataframe. This will apply in-place changes to the dataframe and the column configurations, depending on the data format. Parameters ---------- columns_config : ColumnConfigMapping A mapping of column names/ids to column configurations. data_df : pd.DataFrame The dataframe to apply the configurations to. data_format : DataFormat The format of the data. check_arrow_compatibility : bool Whether to check if the data is compatible with arrow. """ import pandas as pd # Deactivate editing for columns that are not compatible with arrow if check_arrow_compatibility: for column_name, column_data in data_df.items(): if is_colum_type_arrow_incompatible(column_data): update_column_config(columns_config, column_name, {"disabled": True}) # Convert incompatible type to string data_df[column_name] = column_data.astype("string") # Pandas adds a range index by default to all data structures, but for most non-pandas data objects it is unnecessary to show this index to the user. Therefore, we hide it by default.
if data_format in [ DataFormat.SET_OF_VALUES, DataFormat.TUPLE_OF_VALUES, DataFormat.LIST_OF_VALUES, DataFormat.NUMPY_LIST, DataFormat.NUMPY_MATRIX, DataFormat.LIST_OF_RECORDS, DataFormat.LIST_OF_ROWS, DataFormat.COLUMN_VALUE_MAPPING, ]: update_column_config(columns_config, INDEX_IDENTIFIER, {"hidden": True}) # Rename the first column to "value" for some of the data formats if data_format in [ DataFormat.SET_OF_VALUES, DataFormat.TUPLE_OF_VALUES, DataFormat.LIST_OF_VALUES, DataFormat.NUMPY_LIST, DataFormat.KEY_VALUE_DICT, ]: # Pandas automatically names the first column "0" # We rename it to "value" in selected cases to make it more descriptive data_df.rename(columns={0: "value"}, inplace=True) if not isinstance(data_df.index, pd.RangeIndex): # If the index is not a range index, we will configure it as required # since the user is required to provide a (unique) value for editing. update_column_config(columns_config, INDEX_IDENTIFIER, {"required": True})
def marshall_column_config( proto: ArrowProto, column_config_mapping: ColumnConfigMapping ) -> None: """Marshall the column config into the Arrow proto. Parameters ---------- proto : ArrowProto The proto to marshall into. column_config_mapping : ColumnConfigMapping The column config to marshall. """ proto.columns = _convert_column_config_to_json(column_config_mapping)
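`_convert_column_config_to_json` is defined elsewhere in this module; a plausible minimal sketch, assuming its job is to serialize the mapping for the frontend and drop unset (`None`) options, might look like the following. The exact behavior and signature are assumptions for illustration, not the module's actual implementation.

```python
import json


def convert_column_config_to_json(column_config_mapping) -> str:
    # Hypothetical sketch: JSON-encode the mapping, omitting options that
    # were never set so the frontend only receives explicit settings.
    return json.dumps(
        {
            column: {k: v for k, v in config.items() if v is not None}
            for column, config in column_config_mapping.items()
        }
    )
```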
def Column( label: str | None = None, *, width: ColumnWidth | None = None, help: str | None = None, disabled: bool | None = None, required: bool | None = None, ) -> ColumnConfig: """Configure a generic column in ``st.dataframe`` or ``st.data_editor``. The type of the column will be automatically inferred from the data type. This command needs to be used in the ``column_config`` parameter of ``st.dataframe`` or ``st.data_editor``. To change the type of the column and enable type-specific configuration options, use one of the column types in the ``st.column_config`` namespace, e.g. ``st.column_config.NumberColumn``. Parameters ---------- label: str or None The label shown at the top of the column. If None (default), the column name is used. width: "small", "medium", "large", or None The display width of the column. Can be one of "small", "medium", or "large". If None (default), the column will be sized to fit the cell contents. help: str or None An optional tooltip that gets displayed when hovering over the column label. disabled: bool or None Whether editing should be disabled for this column. Defaults to False. required: bool or None Whether edited cells in the column need to have a value. If True, an edited cell can only be submitted if it has a value other than None. Defaults to False. Examples -------- >>> import pandas as pd >>> import streamlit as st >>> >>> data_df = pd.DataFrame( >>> { >>> "widgets": ["st.selectbox", "st.number_input", "st.text_area", "st.button"], >>> } >>> ) >>> >>> st.data_editor( >>> data_df, >>> column_config={ >>> "widgets": st.column_config.Column( >>> "Streamlit Widgets", >>> help="Streamlit **widget** commands 🎈", >>> width="medium", >>> required=True, >>> ) >>> }, >>> hide_index=True, >>> num_rows="dynamic", >>> ) .. output:: https://doc-column.streamlit.app/ height: 300px """ return ColumnConfig( label=label, width=width, help=help, disabled=disabled, required=required )
def NumberColumn( label: str | None = None, *, width: ColumnWidth | None = None, help: str | None = None, disabled: bool | None = None, required: bool | None = None, default: int | float | None = None, format: str | None = None, min_value: int | float | None = None, max_value: int | float | None = None, step: int | float | None = None, ) -> ColumnConfig: """Configure a number column in ``st.dataframe`` or ``st.data_editor``. This is the default column type for integer and float values. This command needs to be used in the ``column_config`` parameter of ``st.dataframe`` or ``st.data_editor``. When used with ``st.data_editor``, editing will be enabled with a numeric input widget. Parameters ---------- label: str or None The label shown at the top of the column. If None (default), the column name is used. width: "small", "medium", "large", or None The display width of the column. Can be one of "small", "medium", or "large". If None (default), the column will be sized to fit the cell contents. help: str or None An optional tooltip that gets displayed when hovering over the column label. disabled: bool or None Whether editing should be disabled for this column. Defaults to False. required: bool or None Whether edited cells in the column need to have a value. If True, an edited cell can only be submitted if it has a value other than None. Defaults to False. default: int, float, or None Specifies the default value in this column when a new row is added by the user. format : str or None A printf-style format string controlling how numbers are displayed. This does not impact the return value. Valid formatters: %d %e %f %g %i %u. You can also add prefixes and suffixes, e.g. ``"$ %.2f"`` to show a dollar prefix. min_value : int, float, or None The minimum value that can be entered. If None (default), there will be no minimum. max_value : int, float, or None The maximum value that can be entered. If None (default), there will be no maximum. 
step: int, float, or None The stepping interval. Specifies the precision of numbers that can be entered. If None (default), uses 1 for integers and unrestricted precision for floats. Examples -------- >>> import pandas as pd >>> import streamlit as st >>> >>> data_df = pd.DataFrame( >>> { >>> "price": [20, 950, 250, 500], >>> } >>> ) >>> >>> st.data_editor( >>> data_df, >>> column_config={ >>> "price": st.column_config.NumberColumn( >>> "Price (in USD)", >>> help="The price of the product in USD", >>> min_value=0, >>> max_value=1000, >>> step=1, >>> format="$%d", >>> ) >>> }, >>> hide_index=True, >>> ) .. output:: https://doc-number-column.streamlit.app/ height: 300px """ return ColumnConfig( label=label, width=width, help=help, disabled=disabled, required=required, default=default, type_config=NumberColumnConfig( type="number", min_value=min_value, max_value=max_value, format=format, step=step, ), )
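The `format` option uses printf-style formatters, rendered sprintf-style on the frontend. Python's `%` operator supports the same basic formatters, which makes it handy for previewing what a cell will show; the frontend's rendering may differ in edge cases, so treat this as an approximation:

```python
# Previewing NumberColumn format strings with Python's %-formatting.
assert "$%d" % 950 == "$950"        # dollar prefix, integer
assert "%.2f" % 0.5 == "0.50"       # two decimal places
assert "%g" % 1000000.0 == "1e+06"  # compact notation for large values
```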
def TextColumn( label: str | None = None, *, width: ColumnWidth | None = None, help: str | None = None, disabled: bool | None = None, required: bool | None = None, default: str | None = None, max_chars: int | None = None, validate: str | None = None, ) -> ColumnConfig: r"""Configure a text column in ``st.dataframe`` or ``st.data_editor``. This is the default column type for string values. This command needs to be used in the ``column_config`` parameter of ``st.dataframe`` or ``st.data_editor``. When used with ``st.data_editor``, editing will be enabled with a text input widget. Parameters ---------- label: str or None The label shown at the top of the column. If None (default), the column name is used. width: "small", "medium", "large", or None The display width of the column. Can be one of "small", "medium", or "large". If None (default), the column will be sized to fit the cell contents. help: str or None An optional tooltip that gets displayed when hovering over the column label. disabled: bool or None Whether editing should be disabled for this column. Defaults to False. required: bool or None Whether edited cells in the column need to have a value. If True, an edited cell can only be submitted if it has a value other than None. Defaults to False. default: str or None Specifies the default value in this column when a new row is added by the user. max_chars: int or None The maximum number of characters that can be entered. If None (default), there will be no maximum. validate: str or None A regular expression (JS flavor, e.g. ``"^[a-z]+$"``) that edited values are validated against. If the input is invalid, it will not be submitted. 
Examples -------- >>> import pandas as pd >>> import streamlit as st >>> >>> data_df = pd.DataFrame( >>> { >>> "widgets": ["st.selectbox", "st.number_input", "st.text_area", "st.button"], >>> } >>> ) >>> >>> st.data_editor( >>> data_df, >>> column_config={ >>> "widgets": st.column_config.TextColumn( >>> "Widgets", >>> help="Streamlit **widget** commands 🎈", >>> default="st.", >>> max_chars=50, >>> validate="^st\.[a-z_]+$", >>> ) >>> }, >>> hide_index=True, >>> ) .. output:: https://doc-text-column.streamlit.app/ height: 300px """ return ColumnConfig( label=label, width=width, help=help, disabled=disabled, required=required, default=default, type_config=TextColumnConfig( type="text", max_chars=max_chars, validate=validate ), )
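The `validate` pattern is specified in JS regex flavor; for simple patterns like the docstring's `^st\.[a-z_]+$`, Python's `re` behaves the same, so it can be used to preview which edits would pass validation:

```python
import re

# The docstring's TextColumn validation pattern.
pattern = re.compile(r"^st\.[a-z_]+$")

assert pattern.match("st.text_area") is not None  # accepted
assert pattern.match("st.Button") is None         # uppercase rejected
assert pattern.match("stselectbox") is None       # missing the dot
```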
def LinkColumn( label: str | None = None, *, width: ColumnWidth | None = None, help: str | None = None, disabled: bool | None = None, required: bool | None = None, default: str | None = None, max_chars: int | None = None, validate: str | None = None, display_text: str | None = None, ) -> ColumnConfig: """Configure a link column in ``st.dataframe`` or ``st.data_editor``. The cell values need to be string and will be shown as clickable links. This command needs to be used in the column_config parameter of ``st.dataframe`` or ``st.data_editor``. When used with ``st.data_editor``, editing will be enabled with a text input widget. Parameters ---------- label: str or None The label shown at the top of the column. If None (default), the column name is used. width: "small", "medium", "large", or None The display width of the column. Can be one of "small", "medium", or "large". If None (default), the column will be sized to fit the cell contents. help: str or None An optional tooltip that gets displayed when hovering over the column label. disabled: bool or None Whether editing should be disabled for this column. Defaults to False. required: bool or None Whether edited cells in the column need to have a value. If True, an edited cell can only be submitted if it has a value other than None. Defaults to False. default: str or None Specifies the default value in this column when a new row is added by the user. max_chars: int or None The maximum number of characters that can be entered. If None (default), there will be no maximum. validate: str or None A regular expression (JS flavor, e.g. ``"^https://.+$"``) that edited values are validated against. If the input is invalid, it will not be submitted. display_text: str or None The text that is displayed in the cell. Can be one of: * ``None`` (default) to display the URL itself. * A string that is displayed in every cell, e.g. ``"Open link"``. 
* A regular expression (JS flavor, detected by usage of parentheses) to extract a part of the URL via a capture group, e.g. ``"https://(.*?)\\.example\\.com"`` to extract the display text "foo" from the URL "https://foo.example.com". For more complex cases, you may use `Pandas Styler's format \ <https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.format.html>`_ function on the underlying dataframe. Note that this makes the app slow, doesn't work with editable columns, and might be removed in the future. Examples -------- >>> import pandas as pd >>> import streamlit as st >>> >>> data_df = pd.DataFrame( >>> { >>> "apps": [ >>> "https://roadmap.streamlit.app", >>> "https://extras.streamlit.app", >>> "https://issues.streamlit.app", >>> "https://30days.streamlit.app", >>> ], >>> "creator": [ >>> "https://github.com/streamlit", >>> "https://github.com/arnaudmiribel", >>> "https://github.com/streamlit", >>> "https://github.com/streamlit", >>> ], >>> } >>> ) >>> >>> st.data_editor( >>> data_df, >>> column_config={ >>> "apps": st.column_config.LinkColumn( >>> "Trending apps", >>> help="The top trending Streamlit apps", >>> validate="^https://[a-z]+\\.streamlit\\.app$", >>> max_chars=100, >>> display_text="https://(.*?)\\.streamlit\\.app" >>> ), >>> "creator": st.column_config.LinkColumn( >>> "App Creator", display_text="Open profile" >>> ), >>> }, >>> hide_index=True, >>> ) .. output:: https://doc-link-column.streamlit.app/ height: 300px """ return ColumnConfig( label=label, width=width, help=help, disabled=disabled, required=required, default=default, type_config=LinkColumnConfig( type="link", max_chars=max_chars, validate=validate, display_text=display_text, ), )
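When `display_text` contains a capture group, the cell shows the text captured from the URL. Python's `re` can preview the result for the docstring's pattern (JS and Python regex agree on a simple pattern like this one):

```python
import re

# The docstring's LinkColumn display_text pattern: the lazy group
# captures the subdomain between "https://" and ".streamlit.app".
display_pattern = re.compile(r"https://(.*?)\.streamlit\.app")

match = display_pattern.search("https://roadmap.streamlit.app")
assert match is not None and match.group(1) == "roadmap"
```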
def CheckboxColumn( label: str | None = None, *, width: ColumnWidth | None = None, help: str | None = None, disabled: bool | None = None, required: bool | None = None, default: bool | None = None, ) -> ColumnConfig: """Configure a checkbox column in ``st.dataframe`` or ``st.data_editor``. This is the default column type for boolean values. This command needs to be used in the ``column_config`` parameter of ``st.dataframe`` or ``st.data_editor``. When used with ``st.data_editor``, editing will be enabled with a checkbox widget. Parameters ---------- label: str or None The label shown at the top of the column. If None (default), the column name is used. width: "small", "medium", "large", or None The display width of the column. Can be one of "small", "medium", or "large". If None (default), the column will be sized to fit the cell contents. help: str or None An optional tooltip that gets displayed when hovering over the column label. disabled: bool or None Whether editing should be disabled for this column. Defaults to False. required: bool or None Whether edited cells in the column need to have a value. If True, an edited cell can only be submitted if it has a value other than None. Defaults to False. default: bool or None Specifies the default value in this column when a new row is added by the user. Examples -------- >>> import pandas as pd >>> import streamlit as st >>> >>> data_df = pd.DataFrame( >>> { >>> "widgets": ["st.selectbox", "st.number_input", "st.text_area", "st.button"], >>> "favorite": [True, False, False, True], >>> } >>> ) >>> >>> st.data_editor( >>> data_df, >>> column_config={ >>> "favorite": st.column_config.CheckboxColumn( >>> "Your favorite?", >>> help="Select your **favorite** widgets", >>> default=False, >>> ) >>> }, >>> disabled=["widgets"], >>> hide_index=True, >>> ) .. output:: https://doc-checkbox-column.streamlit.app/ height: 300px """ return ColumnConfig( label=label, width=width, help=help, disabled=disabled, required=required, default=default, type_config=CheckboxColumnConfig(type="checkbox"), )
def SelectboxColumn(
    label: str | None = None,
    *,
    width: ColumnWidth | None = None,
    help: str | None = None,
    disabled: bool | None = None,
    required: bool | None = None,
    default: str | int | float | None = None,
    options: Iterable[str | int | float] | None = None,
) -> ColumnConfig:
    """Configure a selectbox column in ``st.dataframe`` or ``st.data_editor``.

    This is the default column type for Pandas categorical values. This
    command needs to be used in the ``column_config`` parameter of
    ``st.dataframe`` or ``st.data_editor``. When used with ``st.data_editor``,
    editing will be enabled with a selectbox widget.

    Parameters
    ----------
    label: str or None
        The label shown at the top of the column. If None (default),
        the column name is used.

    width: "small", "medium", "large", or None
        The display width of the column. Can be one of "small", "medium", or
        "large". If None (default), the column will be sized to fit the cell
        contents.

    help: str or None
        An optional tooltip that gets displayed when hovering over the column
        label.

    disabled: bool or None
        Whether editing should be disabled for this column. Defaults to False.

    required: bool or None
        Whether edited cells in the column need to have a value. If True, an
        edited cell can only be submitted if it has a value other than None.
        Defaults to False.

    default: str, int, float, bool, or None
        Specifies the default value in this column when a new row is added by
        the user.

    options: Iterable of str or None
        The options that can be selected during editing. If None (default),
        this will be inferred from the underlying dataframe column if its
        dtype is "category"
        (`see Pandas docs on categorical data <https://pandas.pydata.org/docs/user_guide/categorical.html>`_).

    Examples
    --------

    >>> import pandas as pd
    >>> import streamlit as st
    >>>
    >>> data_df = pd.DataFrame(
    >>>     {
    >>>         "category": [
    >>>             "📊 Data Exploration",
    >>>             "📈 Data Visualization",
    >>>             "🤖 LLM",
    >>>             "📊 Data Exploration",
    >>>         ],
    >>>     }
    >>> )
    >>>
    >>> st.data_editor(
    >>>     data_df,
    >>>     column_config={
    >>>         "category": st.column_config.SelectboxColumn(
    >>>             "App Category",
    >>>             help="The category of the app",
    >>>             width="medium",
    >>>             options=[
    >>>                 "📊 Data Exploration",
    >>>                 "📈 Data Visualization",
    >>>                 "🤖 LLM",
    >>>             ],
    >>>             required=True,
    >>>         )
    >>>     },
    >>>     hide_index=True,
    >>> )

    .. output::
        https://doc-selectbox-column.streamlit.app/
        height: 300px
    """
    return ColumnConfig(
        label=label,
        width=width,
        help=help,
        disabled=disabled,
        required=required,
        default=default,
        type_config=SelectboxColumnConfig(
            type="selectbox", options=list(options) if options is not None else None
        ),
    )
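The docstring above notes that ``options`` can be inferred from a column whose dtype is "category". A minimal sketch of preparing such a column with plain pandas (the data values here are just stand-ins); with a categorical dtype in place, ``SelectboxColumn`` can be used without passing ``options`` at all:

```python
import pandas as pd

# A column with the pandas "category" dtype carries its allowed values in
# the dtype itself, which is what SelectboxColumn falls back on when
# options=None.
data_df = pd.DataFrame(
    {
        "category": pd.Categorical(
            ["📊 Data Exploration", "🤖 LLM", "📊 Data Exploration"],
            categories=[
                "📊 Data Exploration",
                "📈 Data Visualization",
                "🤖 LLM",
            ],
        )
    }
)

# The selectbox options would come from the dtype's categories:
options = list(data_df["category"].cat.categories)
```

Passing ``data_df`` to ``st.data_editor`` with ``st.column_config.SelectboxColumn("App Category")`` (no ``options``) should then offer exactly these three choices during editing.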
def BarChartColumn(
    label: str | None = None,
    *,
    width: ColumnWidth | None = None,
    help: str | None = None,
    y_min: int | float | None = None,
    y_max: int | float | None = None,
) -> ColumnConfig:
    """Configure a bar chart column in ``st.dataframe`` or ``st.data_editor``.

    Cells need to contain a list of numbers. Chart columns are not editable
    at the moment. This command needs to be used in the ``column_config``
    parameter of ``st.dataframe`` or ``st.data_editor``.

    Parameters
    ----------
    label: str or None
        The label shown at the top of the column. If None (default),
        the column name is used.

    width: "small", "medium", "large", or None
        The display width of the column. Can be one of "small", "medium", or
        "large". If None (default), the column will be sized to fit the cell
        contents.

    help: str or None
        An optional tooltip that gets displayed when hovering over the column
        label.

    y_min: int, float, or None
        The minimum value on the y-axis for all cells in the column. If None
        (default), every cell will use the minimum of its data.

    y_max: int, float, or None
        The maximum value on the y-axis for all cells in the column. If None
        (default), every cell will use the maximum of its data.

    Examples
    --------

    >>> import pandas as pd
    >>> import streamlit as st
    >>>
    >>> data_df = pd.DataFrame(
    >>>     {
    >>>         "sales": [
    >>>             [0, 4, 26, 80, 100, 40],
    >>>             [80, 20, 80, 35, 40, 100],
    >>>             [10, 20, 80, 80, 70, 0],
    >>>             [10, 100, 20, 100, 30, 100],
    >>>         ],
    >>>     }
    >>> )
    >>>
    >>> st.data_editor(
    >>>     data_df,
    >>>     column_config={
    >>>         "sales": st.column_config.BarChartColumn(
    >>>             "Sales (last 6 months)",
    >>>             help="The sales volume in the last 6 months",
    >>>             y_min=0,
    >>>             y_max=100,
    >>>         ),
    >>>     },
    >>>     hide_index=True,
    >>> )

    .. output::
        https://doc-barchart-column.streamlit.app/
        height: 300px
    """
    return ColumnConfig(
        label=label,
        width=width,
        help=help,
        type_config=BarChartColumnConfig(type="bar_chart", y_min=y_min, y_max=y_max),
    )
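Because each cell scales to its own data when ``y_min``/``y_max`` are None, bars in different rows are not directly comparable. A small sketch of computing shared bounds from the data itself (using the sample values from the docstring), so every cell renders on the same axis:

```python
# Nested lists of numbers, as BarChartColumn cells expect.
sales = [
    [0, 4, 26, 80, 100, 40],
    [80, 20, 80, 35, 40, 100],
    [10, 20, 80, 80, 70, 0],
    [10, 100, 20, 100, 30, 100],
]

# Shared axis bounds across all rows, suitable for passing as
# y_min=... and y_max=... to st.column_config.BarChartColumn.
y_min = min(min(row) for row in sales)
y_max = max(max(row) for row in sales)
```

The same pattern applies to ``LineChartColumn`` and ``AreaChartColumn`` below, which take identical ``y_min``/``y_max`` parameters.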
def LineChartColumn(
    label: str | None = None,
    *,
    width: ColumnWidth | None = None,
    help: str | None = None,
    y_min: int | float | None = None,
    y_max: int | float | None = None,
) -> ColumnConfig:
    """Configure a line chart column in ``st.dataframe`` or ``st.data_editor``.

    Cells need to contain a list of numbers. Chart columns are not editable
    at the moment. This command needs to be used in the ``column_config``
    parameter of ``st.dataframe`` or ``st.data_editor``.

    Parameters
    ----------
    label: str or None
        The label shown at the top of the column. If None (default),
        the column name is used.

    width: "small", "medium", "large", or None
        The display width of the column. Can be one of "small", "medium", or
        "large". If None (default), the column will be sized to fit the cell
        contents.

    help: str or None
        An optional tooltip that gets displayed when hovering over the column
        label.

    y_min: int, float, or None
        The minimum value on the y-axis for all cells in the column. If None
        (default), every cell will use the minimum of its data.

    y_max: int, float, or None
        The maximum value on the y-axis for all cells in the column. If None
        (default), every cell will use the maximum of its data.

    Examples
    --------

    >>> import pandas as pd
    >>> import streamlit as st
    >>>
    >>> data_df = pd.DataFrame(
    >>>     {
    >>>         "sales": [
    >>>             [0, 4, 26, 80, 100, 40],
    >>>             [80, 20, 80, 35, 40, 100],
    >>>             [10, 20, 80, 80, 70, 0],
    >>>             [10, 100, 20, 100, 30, 100],
    >>>         ],
    >>>     }
    >>> )
    >>>
    >>> st.data_editor(
    >>>     data_df,
    >>>     column_config={
    >>>         "sales": st.column_config.LineChartColumn(
    >>>             "Sales (last 6 months)",
    >>>             width="medium",
    >>>             help="The sales volume in the last 6 months",
    >>>             y_min=0,
    >>>             y_max=100,
    >>>         ),
    >>>     },
    >>>     hide_index=True,
    >>> )

    .. output::
        https://doc-linechart-column.streamlit.app/
        height: 300px
    """
    return ColumnConfig(
        label=label,
        width=width,
        help=help,
        type_config=LineChartColumnConfig(type="line_chart", y_min=y_min, y_max=y_max),
    )
def AreaChartColumn(
    label: str | None = None,
    *,
    width: ColumnWidth | None = None,
    help: str | None = None,
    y_min: int | float | None = None,
    y_max: int | float | None = None,
) -> ColumnConfig:
    """Configure an area chart column in ``st.dataframe`` or ``st.data_editor``.

    Cells need to contain a list of numbers. Chart columns are not editable
    at the moment. This command needs to be used in the ``column_config``
    parameter of ``st.dataframe`` or ``st.data_editor``.

    Parameters
    ----------
    label: str or None
        The label shown at the top of the column. If None (default),
        the column name is used.

    width: "small", "medium", "large", or None
        The display width of the column. Can be one of "small", "medium", or
        "large". If None (default), the column will be sized to fit the cell
        contents.

    help: str or None
        An optional tooltip that gets displayed when hovering over the column
        label.

    y_min: int, float, or None
        The minimum value on the y-axis for all cells in the column. If None
        (default), every cell will use the minimum of its data.

    y_max: int, float, or None
        The maximum value on the y-axis for all cells in the column. If None
        (default), every cell will use the maximum of its data.

    Examples
    --------

    >>> import pandas as pd
    >>> import streamlit as st
    >>>
    >>> data_df = pd.DataFrame(
    >>>     {
    >>>         "sales": [
    >>>             [0, 4, 26, 80, 100, 40],
    >>>             [80, 20, 80, 35, 40, 100],
    >>>             [10, 20, 80, 80, 70, 0],
    >>>             [10, 100, 20, 100, 30, 100],
    >>>         ],
    >>>     }
    >>> )
    >>>
    >>> st.data_editor(
    >>>     data_df,
    >>>     column_config={
    >>>         "sales": st.column_config.AreaChartColumn(
    >>>             "Sales (last 6 months)",
    >>>             width="medium",
    >>>             help="The sales volume in the last 6 months",
    >>>             y_min=0,
    >>>             y_max=100,
    >>>         ),
    >>>     },
    >>>     hide_index=True,
    >>> )

    .. output::
        https://doc-areachart-column.streamlit.app/
        height: 300px
    """
    return ColumnConfig(
        label=label,
        width=width,
        help=help,
        type_config=AreaChartColumnConfig(type="area_chart", y_min=y_min, y_max=y_max),
    )
def ImageColumn(
    label: str | None = None,
    *,
    width: ColumnWidth | None = None,
    help: str | None = None,
):
    """Configure an image column in ``st.dataframe`` or ``st.data_editor``.

    The cell values need to be one of:

    * A URL to fetch the image from. This can also be a relative URL of an
      image deployed via `static file serving
      <https://docs.streamlit.io/library/advanced-features/static-file-serving>`_.
      Note that you can NOT use an arbitrary local image if it is not
      available through a public URL.
    * A data URL containing an SVG XML like
      ``data:image/svg+xml;utf8,<svg xmlns=...</svg>``.
    * A data URL containing a Base64 encoded image like
      ``data:image/png;base64,iVBO...``.

    Image columns are not editable at the moment. This command needs to be
    used in the ``column_config`` parameter of ``st.dataframe`` or
    ``st.data_editor``.

    Parameters
    ----------
    label: str or None
        The label shown at the top of the column. If None (default),
        the column name is used.

    width: "small", "medium", "large", or None
        The display width of the column. Can be one of "small", "medium", or
        "large". If None (default), the column will be sized to fit the cell
        contents.

    help: str or None
        An optional tooltip that gets displayed when hovering over the column
        label.

    Examples
    --------

    >>> import pandas as pd
    >>> import streamlit as st
    >>>
    >>> data_df = pd.DataFrame(
    >>>     {
    >>>         "apps": [
    >>>             "https://storage.googleapis.com/s4a-prod-share-preview/default/st_app_screenshot_image/5435b8cb-6c6c-490b-9608-799b543655d3/Home_Page.png",
    >>>             "https://storage.googleapis.com/s4a-prod-share-preview/default/st_app_screenshot_image/ef9a7627-13f2-47e5-8f65-3f69bb38a5c2/Home_Page.png",
    >>>             "https://storage.googleapis.com/s4a-prod-share-preview/default/st_app_screenshot_image/31b99099-8eae-4ff8-aa89-042895ed3843/Home_Page.png",
    >>>             "https://storage.googleapis.com/s4a-prod-share-preview/default/st_app_screenshot_image/6a399b09-241e-4ae7-a31f-7640dc1d181e/Home_Page.png",
    >>>         ],
    >>>     }
    >>> )
    >>>
    >>> st.data_editor(
    >>>     data_df,
    >>>     column_config={
    >>>         "apps": st.column_config.ImageColumn(
    >>>             "Preview Image", help="Streamlit app preview screenshots"
    >>>         )
    >>>     },
    >>>     hide_index=True,
    >>> )

    .. output::
        https://doc-image-column.streamlit.app/
        height: 300px
    """
    return ColumnConfig(
        label=label, width=width, help=help, type_config=ImageColumnConfig(type="image")
    )
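Since arbitrary local images are not supported unless reachable through a URL, one workaround the docstring allows is embedding the image as a Base64 data URL. A minimal sketch of building such a cell value from raw PNG bytes (the byte payload here is a stand-in; in practice you would read it from a file with ``open(path, "rb").read()``):

```python
import base64

# Placeholder payload: the 8-byte PNG signature stands in for real image
# bytes read from disk.
png_bytes = b"\x89PNG\r\n\x1a\n"

# Base64-encode and prefix with the data-URL scheme that ImageColumn accepts.
encoded = base64.b64encode(png_bytes).decode("ascii")
data_url = f"data:image/png;base64,{encoded}"
```

``data_url`` can then be placed directly in a dataframe column rendered with ``st.column_config.ImageColumn``.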
def ListColumn(
    label: str | None = None,
    *,
    width: ColumnWidth | None = None,
    help: str | None = None,
):
    """Configure a list column in ``st.dataframe`` or ``st.data_editor``.

    This is the default column type for list-like values. List columns are
    not editable at the moment. This command needs to be used in the
    ``column_config`` parameter of ``st.dataframe`` or ``st.data_editor``.

    Parameters
    ----------
    label: str or None
        The label shown at the top of the column. If None (default),
        the column name is used.

    width: "small", "medium", "large", or None
        The display width of the column. Can be one of "small", "medium", or
        "large". If None (default), the column will be sized to fit the cell
        contents.

    help: str or None
        An optional tooltip that gets displayed when hovering over the column
        label.

    Examples
    --------

    >>> import pandas as pd
    >>> import streamlit as st
    >>>
    >>> data_df = pd.DataFrame(
    >>>     {
    >>>         "sales": [
    >>>             [0, 4, 26, 80, 100, 40],
    >>>             [80, 20, 80, 35, 40, 100],
    >>>             [10, 20, 80, 80, 70, 0],
    >>>             [10, 100, 20, 100, 30, 100],
    >>>         ],
    >>>     }
    >>> )
    >>>
    >>> st.data_editor(
    >>>     data_df,
    >>>     column_config={
    >>>         "sales": st.column_config.ListColumn(
    >>>             "Sales (last 6 months)",
    >>>             help="The sales volume in the last 6 months",
    >>>             width="medium",
    >>>         ),
    >>>     },
    >>>     hide_index=True,
    >>> )

    .. output::
        https://doc-list-column.streamlit.app/
        height: 300px
    """
    return ColumnConfig(
        label=label, width=width, help=help, type_config=ListColumnConfig(type="list")
    )
def DatetimeColumn(
    label: str | None = None,
    *,
    width: ColumnWidth | None = None,
    help: str | None = None,
    disabled: bool | None = None,
    required: bool | None = None,
    default: datetime.datetime | None = None,
    format: str | None = None,
    min_value: datetime.datetime | None = None,
    max_value: datetime.datetime | None = None,
    step: int | float | datetime.timedelta | None = None,
    timezone: str | None = None,
) -> ColumnConfig:
    """Configure a datetime column in ``st.dataframe`` or ``st.data_editor``.

    This is the default column type for datetime values. This command needs
    to be used in the ``column_config`` parameter of ``st.dataframe`` or
    ``st.data_editor``. When used with ``st.data_editor``, editing will be
    enabled with a datetime picker widget.

    Parameters
    ----------
    label: str or None
        The label shown at the top of the column. If None (default),
        the column name is used.

    width: "small", "medium", "large", or None
        The display width of the column. Can be one of "small", "medium", or
        "large". If None (default), the column will be sized to fit the cell
        contents.

    help: str or None
        An optional tooltip that gets displayed when hovering over the column
        label.

    disabled: bool or None
        Whether editing should be disabled for this column. Defaults to False.

    required: bool or None
        Whether edited cells in the column need to have a value. If True, an
        edited cell can only be submitted if it has a value other than None.
        Defaults to False.

    default: datetime.datetime or None
        Specifies the default value in this column when a new row is added by
        the user.

    format: str or None
        A momentJS format string controlling how datetimes are displayed. See
        `momentJS docs <https://momentjs.com/docs/#/displaying/format/>`_ for
        available formats. If None (default), uses ``YYYY-MM-DD HH:mm:ss``.

    min_value: datetime.datetime or None
        The minimum datetime that can be entered. If None (default), there
        will be no minimum.

    max_value: datetime.datetime or None
        The maximum datetime that can be entered. If None (default), there
        will be no maximum.

    step: int, float, datetime.timedelta, or None
        The stepping interval in seconds. If None (default), the step will be
        1 second.

    timezone: str or None
        The timezone of this column. If None (default), the timezone is
        inferred from the underlying data.

    Examples
    --------

    >>> from datetime import datetime
    >>> import pandas as pd
    >>> import streamlit as st
    >>>
    >>> data_df = pd.DataFrame(
    >>>     {
    >>>         "appointment": [
    >>>             datetime(2024, 2, 5, 12, 30),
    >>>             datetime(2023, 11, 10, 18, 0),
    >>>             datetime(2024, 3, 11, 20, 10),
    >>>             datetime(2023, 9, 12, 3, 0),
    >>>         ]
    >>>     }
    >>> )
    >>>
    >>> st.data_editor(
    >>>     data_df,
    >>>     column_config={
    >>>         "appointment": st.column_config.DatetimeColumn(
    >>>             "Appointment",
    >>>             min_value=datetime(2023, 6, 1),
    >>>             max_value=datetime(2025, 1, 1),
    >>>             format="D MMM YYYY, h:mm a",
    >>>             step=60,
    >>>         ),
    >>>     },
    >>>     hide_index=True,
    >>> )

    .. output::
        https://doc-datetime-column.streamlit.app/
        height: 300px
    """
    return ColumnConfig(
        label=label,
        width=width,
        help=help,
        disabled=disabled,
        required=required,
        default=None if default is None else default.isoformat(),
        type_config=DatetimeColumnConfig(
            type="datetime",
            format=format,
            min_value=None if min_value is None else min_value.isoformat(),
            max_value=None if max_value is None else max_value.isoformat(),
            step=step.total_seconds() if isinstance(step, datetime.timedelta) else step,
            timezone=timezone,
        ),
    )
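As the function body shows, ``DatetimeColumn`` serializes its ``datetime`` arguments with ``isoformat()`` and converts a ``timedelta`` step to seconds before building the column config. A minimal stdlib sketch of that mapping, useful when reasoning about what the frontend actually receives:

```python
import datetime

# Arguments as a caller might pass them.
min_value = datetime.datetime(2023, 6, 1)
step = datetime.timedelta(minutes=1)

# What the column config carries after serialization:
serialized_min = min_value.isoformat()  # ISO 8601 string
step_seconds = (
    step.total_seconds() if isinstance(step, datetime.timedelta) else step
)
```

The same ``isoformat()``/``total_seconds()`` pattern appears again in ``TimeColumn`` below.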
def TimeColumn(
    label: str | None = None,
    *,
    width: ColumnWidth | None = None,
    help: str | None = None,
    disabled: bool | None = None,
    required: bool | None = None,
    default: datetime.time | None = None,
    format: str | None = None,
    min_value: datetime.time | None = None,
    max_value: datetime.time | None = None,
    step: int | float | datetime.timedelta | None = None,
) -> ColumnConfig:
    """Configure a time column in ``st.dataframe`` or ``st.data_editor``.

    This is the default column type for time values. This command needs to be
    used in the ``column_config`` parameter of ``st.dataframe`` or
    ``st.data_editor``. When used with ``st.data_editor``, editing will be
    enabled with a time picker widget.

    Parameters
    ----------
    label: str or None
        The label shown at the top of the column. If None (default),
        the column name is used.

    width: "small", "medium", "large", or None
        The display width of the column. Can be one of "small", "medium", or
        "large". If None (default), the column will be sized to fit the cell
        contents.

    help: str or None
        An optional tooltip that gets displayed when hovering over the column
        label.

    disabled: bool or None
        Whether editing should be disabled for this column. Defaults to False.

    required: bool or None
        Whether edited cells in the column need to have a value. If True, an
        edited cell can only be submitted if it has a value other than None.
        Defaults to False.

    default: datetime.time or None
        Specifies the default value in this column when a new row is added by
        the user.

    format: str or None
        A momentJS format string controlling how times are displayed. See
        `momentJS docs <https://momentjs.com/docs/#/displaying/format/>`_ for
        available formats. If None (default), uses ``HH:mm:ss``.

    min_value: datetime.time or None
        The minimum time that can be entered. If None (default), there will
        be no minimum.

    max_value: datetime.time or None
        The maximum time that can be entered. If None (default), there will
        be no maximum.

    step: int, float, datetime.timedelta, or None
        The stepping interval in seconds. If None (default), the step will be
        1 second.

    Examples
    --------

    >>> from datetime import time
    >>> import pandas as pd
    >>> import streamlit as st
    >>>
    >>> data_df = pd.DataFrame(
    >>>     {
    >>>         "appointment": [
    >>>             time(12, 30),
    >>>             time(18, 0),
    >>>             time(9, 10),
    >>>             time(16, 25),
    >>>         ]
    >>>     }
    >>> )
    >>>
    >>> st.data_editor(
    >>>     data_df,
    >>>     column_config={
    >>>         "appointment": st.column_config.TimeColumn(
    >>>             "Appointment",
    >>>             min_value=time(8, 0, 0),
    >>>             max_value=time(19, 0, 0),
    >>>             format="hh:mm a",
    >>>             step=60,
    >>>         ),
    >>>     },
    >>>     hide_index=True,
    >>> )

    .. output::
        https://doc-time-column.streamlit.app/
        height: 300px
    """
    return ColumnConfig(
        label=label,
        width=width,
        help=help,
        disabled=disabled,
        required=required,
        default=None if default is None else default.isoformat(),
        type_config=TimeColumnConfig(
            type="time",
            format=format,
            min_value=None if min_value is None else min_value.isoformat(),
            max_value=None if max_value is None else max_value.isoformat(),
            step=step.total_seconds() if isinstance(step, datetime.timedelta) else step,
        ),
    )
def DateColumn( label: str | None = None, *, width: ColumnWidth | None = None, help: str | None = None, disabled: bool | None = None, required: bool | None = None, default: datetime.date | None = None, format: str | None = None, min_value: datetime.date | None = None, max_value: datetime.date | None = None, step: int | None = None, ) -> ColumnConfig: """Configure a date column in ``st.dataframe`` or ``st.data_editor``. This is the default column type for date values. This command needs to be used in the ``column_config`` parameter of ``st.dataframe`` or ``st.data_editor``. When used with ``st.data_editor``, editing will be enabled with a date picker widget. Parameters ---------- label: str or None The label shown at the top of the column. If None (default), the column name is used. width: "small", "medium", "large", or None The display width of the column. Can be one of "small", "medium", or "large". If None (default), the column will be sized to fit the cell contents. help: str or None An optional tooltip that gets displayed when hovering over the column label. disabled: bool or None Whether editing should be disabled for this column. Defaults to False. required: bool or None Whether edited cells in the column need to have a value. If True, an edited cell can only be submitted if it has a value other than None. Defaults to False. default: datetime.date or None Specifies the default value in this column when a new row is added by the user. format: str or None A momentJS format string controlling how dates are displayed. See `momentJS docs <https://momentjs.com/docs/#/displaying/format/>`_ for available formats. If None (default), uses ``YYYY-MM-DD``. min_value: datetime.date or None The minimum date that can be entered. If None (default), there will be no minimum. max_value: datetime.date or None The maximum date that can be entered. If None (default), there will be no maximum. step: int or None The stepping interval in days. 
If None (default), the step will be 1 day. Examples -------- >>> from datetime import date >>> import pandas as pd >>> import streamlit as st >>> >>> data_df = pd.DataFrame( >>> { >>> "birthday": [ >>> date(1980, 1, 1), >>> date(1990, 5, 3), >>> date(1974, 5, 19), >>> date(2001, 8, 17), >>> ] >>> } >>> ) >>> >>> st.data_editor( >>> data_df, >>> column_config={ >>> "birthday": st.column_config.DateColumn( >>> "Birthday", >>> min_value=date(1900, 1, 1), >>> max_value=date(2005, 1, 1), >>> format="DD.MM.YYYY", >>> step=1, >>> ), >>> }, >>> hide_index=True, >>> ) .. output:: https://doc-date-column.streamlit.app/ height: 300px """ return ColumnConfig( label=label, width=width, help=help, disabled=disabled, required=required, default=None if default is None else default.isoformat(), type_config=DateColumnConfig( type="date", format=format, min_value=None if min_value is None else min_value.isoformat(), max_value=None if max_value is None else max_value.isoformat(), step=step, ), )
def ProgressColumn( label: str | None = None, *, width: ColumnWidth | None = None, help: str | None = None, format: str | None = None, min_value: int | float | None = None, max_value: int | float | None = None, ) -> ColumnConfig: """Configure a progress column in ``st.dataframe`` or ``st.data_editor``. Cells need to contain a number. Progress columns are not editable at the moment. This command needs to be used in the ``column_config`` parameter of ``st.dataframe`` or ``st.data_editor``. Parameters ---------- label : str or None The label shown at the top of the column. If None (default), the column name is used. width : "small", "medium", "large", or None The display width of the column. Can be one of "small", "medium", or "large". If None (default), the column will be sized to fit the cell contents. help : str or None An optional tooltip that gets displayed when hovering over the column label. format : str or None A printf-style format string controlling how numbers are displayed. Valid formatters: %d %e %f %g %i %u. You can also add prefixes and suffixes, e.g. ``"$ %.2f"`` to show a dollar prefix. min_value : int, float, or None The minimum value of the progress bar. If None (default), will be 0. max_value : int, float, or None The maximum value of the progress bar. If None (default), will be 100 for integer values and 1 for float values. Examples -------- >>> import pandas as pd >>> import streamlit as st >>> >>> data_df = pd.DataFrame( >>> { >>> "sales": [200, 550, 1000, 80], >>> } >>> ) >>> >>> st.data_editor( >>> data_df, >>> column_config={ >>> "sales": st.column_config.ProgressColumn( >>> "Sales volume", >>> help="The sales volume in USD", >>> format="$%f", >>> min_value=0, >>> max_value=1000, >>> ), >>> }, >>> hide_index=True, >>> ) .. 
output:: https://doc-progress-column.streamlit.app/ height: 300px """ return ColumnConfig( label=label, width=width, help=help, type_config=ProgressColumnConfig( type="progress", format=format, min_value=min_value, max_value=max_value, ), )
def _process_dialog_width_input( width: DialogWidth, ) -> BlockProto.Dialog.DialogWidth.ValueType: """Map the user-provided literal to a value of the DialogWidth proto enum. Returns LARGE when the user passes "large"; any other value falls back to SMALL. """ if width == "large": return BlockProto.Dialog.DialogWidth.LARGE return BlockProto.Dialog.DialogWidth.SMALL
def _assert_first_dialog_to_be_opened(should_open: bool) -> None: """Check whether a dialog has already been opened in the same script run. Only one dialog is supposed to be opened. The check ensures that, within a single script run, the open function can be called at most once. Allowing only one dialog at a time is a product decision, not a technical one. Raises ------ StreamlitAPIException Raised when a dialog has already been opened in the current script run. """ script_run_ctx = get_script_run_ctx() # We don't reset the ctx.has_dialog_opened when the flag is False because # it is reset in a new scriptrun anyways. If the execution model ever changes, # this might need to change. if should_open and script_run_ctx: if script_run_ctx.has_dialog_opened: raise StreamlitAPIException( "Only one dialog is allowed to be opened at the same time. Please make sure to not call a dialog-decorated function more than once in a script run." ) script_run_ctx.has_dialog_opened = True
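The one-dialog-per-run guard above can be sketched in isolation. The `RunCtx` class and `assert_first_dialog` name below are hypothetical stand-ins for Streamlit's script-run context and the real helper; the sketch only shows the flag-based check, using a plain `RuntimeError` in place of `StreamlitAPIException`:

```python
class RunCtx:
    """Hypothetical stand-in for the script-run context object."""

    def __init__(self) -> None:
        self.has_dialog_opened = False


def assert_first_dialog(ctx: RunCtx, should_open: bool) -> None:
    # Opening is only allowed once per run; a second attempt raises.
    if should_open and ctx.has_dialog_opened:
        raise RuntimeError("Only one dialog is allowed per script run.")
    if should_open:
        ctx.has_dialog_opened = True


ctx = RunCtx()
assert_first_dialog(ctx, True)  # first dialog: allowed, flag is set
# A second assert_first_dialog(ctx, True) would now raise RuntimeError.
```

The flag is never reset here, mirroring the real code: a fresh context is created for each script run, so resetting is unnecessary.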
def _unflatten_single_dict(flat_dict: dict[Any, Any]) -> dict[Any, Any]: """Convert a flat dict of key-value pairs to dict tree. Example ------- _unflatten_single_dict({ foo_bar_baz: 123, foo_bar_biz: 456, x_bonks: 'hi', }) # Returns: # { # foo: { # bar: { # baz: 123, # biz: 456, # }, # }, # x: { # bonks: 'hi' # } # } Parameters ---------- flat_dict : dict A one-level dict where keys are fully-qualified paths separated by underscores. Returns ------- dict A tree made of dicts inside of dicts. """ out: dict[str, Any] = dict() for pathstr, v in flat_dict.items(): path = pathstr.split("_") prev_dict: dict[str, Any] | None = None curr_dict = out for k in path: if k not in curr_dict: curr_dict[k] = dict() prev_dict = curr_dict curr_dict = curr_dict[k] if prev_dict is not None: prev_dict[k] = v return out
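The nesting behavior of `_unflatten_single_dict` can be reproduced with a compact standalone sketch (the name `unflatten_single` is an illustrative stand-in, not Streamlit API):

```python
from typing import Any


def unflatten_single(flat: dict[str, Any]) -> dict[str, Any]:
    """Turn underscore-separated keys into a tree of nested dicts."""
    out: dict[str, Any] = {}
    for pathstr, v in flat.items():
        # All path segments except the last become nested dict levels.
        *parents, leaf = pathstr.split("_")
        curr = out
        for k in parents:
            curr = curr.setdefault(k, {})
        curr[leaf] = v
    return out


result = unflatten_single({"foo_bar_baz": 123, "foo_bar_biz": 456, "x_bonks": "hi"})
# result == {"foo": {"bar": {"baz": 123, "biz": 456}}, "x": {"bonks": "hi"}}
```

`dict.setdefault` replaces the explicit prev/curr bookkeeping of the original while producing the same tree for well-formed inputs.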
def unflatten( flat_dict: dict[Any, Any], encodings: set[str] | None = None ) -> dict[Any, Any]: """Converts a flat dict of key-value pairs to a spec tree. Example ------- unflatten({ foo_bar_baz: 123, foo_bar_biz: 456, x_bonks: 'hi', }, ['x']) # Returns: # { # foo: { # bar: { # baz: 123, # biz: 456, # }, # }, # encoding: { # This gets added automatically # x: { # bonks: 'hi' # } # } # } Args ---- flat_dict: dict A flat dict where keys are fully-qualified paths separated by underscores. encodings: set Key names that should be automatically moved into the 'encoding' key. Returns ------- A tree made of dicts inside of dicts. """ if encodings is None: encodings = set() out_dict = _unflatten_single_dict(flat_dict) for k, v in list(out_dict.items()): # Unflatten child dicts: if isinstance(v, dict): v = unflatten(v, encodings) elif hasattr(v, "__iter__"): for i, child in enumerate(v): if isinstance(child, dict): v[i] = unflatten(child, encodings) # Move items into 'encoding' if needed: if k in encodings: if "encoding" not in out_dict: out_dict["encoding"] = dict() out_dict["encoding"][k] = v out_dict.pop(k) return out_dict
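The encoding-hoisting step of `unflatten` can be demonstrated with a self-contained sketch (names `_nest` and `unflatten_spec` are illustrative, not Streamlit API; recursion into child lists is omitted for brevity):

```python
from typing import Any


def _nest(flat: dict[str, Any]) -> dict[str, Any]:
    """Nest underscore-separated keys into a dict tree."""
    out: dict[str, Any] = {}
    for pathstr, v in flat.items():
        *parents, leaf = pathstr.split("_")
        curr = out
        for k in parents:
            curr = curr.setdefault(k, {})
        curr[leaf] = v
    return out


def unflatten_spec(flat: dict[str, Any], encodings: set[str]) -> dict[str, Any]:
    """Nest keys, then move encoding channels under an 'encoding' key."""
    out = _nest(flat)
    for k in list(out):
        if k in encodings:
            # Hoist the channel dict into the 'encoding' sub-dict.
            out.setdefault("encoding", {})[k] = out.pop(k)
    return out


spec = unflatten_spec({"mark": "circle", "x_field": "a", "y_field": "b"}, {"x", "y"})
# spec == {"mark": "circle",
#          "encoding": {"x": {"field": "a"}, "y": {"field": "b"}}}
```

This mirrors how chart-spec shorthand like `x_field="a"` ends up under Vega-Lite's `encoding` block.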
def remove_none_values(input_dict: Mapping[Any, Any]) -> dict[Any, Any]: """Remove all keys with None values from a dict.""" new_dict = {} for key, val in input_dict.items(): if isinstance(val, dict): val = remove_none_values(val) if val is not None: new_dict[key] = val return new_dict
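The recursive None-stripping above can be sketched standalone (the name `strip_none` is illustrative, not Streamlit API). Note that, like the original, an all-None child dict is kept as an empty dict rather than dropped:

```python
def strip_none(d):
    """Recursively drop keys whose value is None."""
    out = {}
    for k, v in d.items():
        if isinstance(v, dict):
            v = strip_none(v)  # clean nested dicts first
        if v is not None:
            out[k] = v
    return out


cfg = strip_none({"a": 1, "b": None, "c": {"d": None, "e": 2}})
# cfg == {"a": 1, "c": {"e": 2}}
```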
def marshall_styler(proto: ArrowProto, styler: Styler, default_uuid: str) -> None: """Marshall pandas.Styler into an Arrow proto. Parameters ---------- proto : proto.Arrow Output. The protobuf for Streamlit Arrow proto. styler : pandas.Styler Helps style a DataFrame or Series according to the data with HTML and CSS. default_uuid : str If pandas.Styler uuid is not provided, this value will be used. """ import pandas as pd styler_data_df: pd.DataFrame = styler.data if styler_data_df.size > int(pd.options.styler.render.max_elements): raise StreamlitAPIException( f"The dataframe has `{styler_data_df.size}` cells, but the maximum number " "of cells allowed to be rendered by Pandas Styler is configured to " f"`{pd.options.styler.render.max_elements}`. To allow more cells to be " 'styled, you can change the `"styler.render.max_elements"` config. For example: ' f'`pd.set_option("styler.render.max_elements", {styler_data_df.size})`' ) # pandas.Styler uuid should be set before _compute is called. _marshall_uuid(proto, styler, default_uuid) # We're using protected members of pandas.Styler to get styles, # which is not ideal and could break if the interface changes. styler._compute() pandas_styles = styler._translate(False, False) _marshall_caption(proto, styler) _marshall_styles(proto, styler, pandas_styles) _marshall_display_values(proto, styler_data_df, pandas_styles)
def _marshall_uuid(proto: ArrowProto, styler: Styler, default_uuid: str) -> None: """Marshall pandas.Styler uuid into an Arrow proto. Parameters ---------- proto : proto.Arrow Output. The protobuf for Streamlit Arrow proto. styler : pandas.Styler Helps style a DataFrame or Series according to the data with HTML and CSS. default_uuid : str If pandas.Styler uuid is not provided, this value will be used. """ if styler.uuid is None: styler.set_uuid(default_uuid) proto.styler.uuid = str(styler.uuid)
def _marshall_caption(proto: ArrowProto, styler: Styler) -> None: """Marshall pandas.Styler caption into an Arrow proto. Parameters ---------- proto : proto.Arrow Output. The protobuf for Streamlit Arrow proto. styler : pandas.Styler Helps style a DataFrame or Series according to the data with HTML and CSS. """ if styler.caption is not None: proto.styler.caption = styler.caption
def _marshall_styles( proto: ArrowProto, styler: Styler, styles: Mapping[str, Any] ) -> None: """Marshall pandas.Styler styles into an Arrow proto. Parameters ---------- proto : proto.Arrow Output. The protobuf for Streamlit Arrow proto. styler : pandas.Styler Helps style a DataFrame or Series according to the data with HTML and CSS. styles : dict pandas.Styler translated styles. """ css_rules = [] if "table_styles" in styles: table_styles = styles["table_styles"] table_styles = _trim_pandas_styles(table_styles) for style in table_styles: # styles in "table_styles" have a space # between the uuid and selector. rule = _pandas_style_to_css( "table_styles", style, styler.uuid, separator=" " ) css_rules.append(rule) if "cellstyle" in styles: cellstyle = styles["cellstyle"] cellstyle = _trim_pandas_styles(cellstyle) for style in cellstyle: rule = _pandas_style_to_css("cell_style", style, styler.uuid) css_rules.append(rule) if len(css_rules) > 0: proto.styler.styles = "\n".join(css_rules)
def _trim_pandas_styles(styles: list[M]) -> list[M]: """Filter out empty styles. Every cell will have a class, but the list of props may just be [['', '']]. Parameters ---------- styles : list pandas.Styler translated styles. """ return [x for x in styles if any(any(y) for y in x["props"])]
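The filtering predicate in `_trim_pandas_styles` can be illustrated with plain data shaped like what pandas' `_translate()` emits (the name `trim_styles` is an illustrative stand-in): each style entry has a `"props"` list of `[property, value]` pairs, and entries whose pairs are all empty strings carry no styling:

```python
def trim_styles(styles):
    """Keep only entries with at least one non-empty prop pair."""
    # any([...]) is False for a pair of empty strings, so [['', '']] is dropped.
    return [s for s in styles if any(any(pair) for pair in s["props"])]


styles = [
    {"selector": "row0_col0", "props": [["", ""]]},          # empty -> dropped
    {"selector": "row0_col1", "props": [["color", "red"]]},  # kept
]
trimmed = trim_styles(styles)
# trimmed == [{"selector": "row0_col1", "props": [["color", "red"]]}]
```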