# How to run the UI

We set this up so it is hosted as a Hugging Face Space. Each commit to `main` triggers a push and a rebuild on their servers.
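The push-on-commit mechanism is typically a small GitHub Actions workflow; a sketch of what it might look like (the Space path and secret name here are placeholders, not the actual values used in this repo):

```yaml
# .github/workflows/deploy-hf.yml (illustrative)
name: Sync to Hugging Face Space
on:
  push:
    branches: [main]
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history, required for a plain git push
      - name: Push to the Space
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: |
          git push https://user:$HF_TOKEN@huggingface.co/spaces/OWNER/SPACE main
```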

For local testing, assuming you have all the required packages installed in a
conda env or virtualenv, and that env is activated:

```bash
cd src
streamlit run main.py
```
Then use a web browser to view the site indicated, by default: http://localhost:8501

# How to build and view docs locally

We have a CI action to present the docs on github.io. 
To validate locally, you need the deps listed in `requirements.txt` installed. 

Run
```bash
mkdocs serve
```

And navigate to the server running locally, by default: http://127.0.0.1:8888/

This automatically watches for changes in the markdown files, but if you edit
something else, like the docstrings in the py files, triggering a rebuild in another
terminal refreshes the site without having to quit and restart the server.

```bash
mkdocs build -c
```



# Set up a venv

(standard stuff)
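For reference, a minimal sketch (assuming Python 3 is on the PATH and the requirements files are as named elsewhere in this document):

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -r tests/requirements.txt  # optional: test-runner deps
```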

# Set up a conda env

(standard stuff)
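Equivalently with conda (the env name and Python version here are placeholders; pick whatever the project actually pins):

```bash
conda create -n whale-ui python=3.11 -y
conda activate whale-ui
pip install -r requirements.txt
```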


# Testing

## use of markers

The CI runs with `--strict-markers` so any new marker must be registered in
`pytest.ini`. 
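Registering a marker looks something like this in `pytest.ini` (the descriptions here are illustrative, not the exact ones in the repo):

```ini
[pytest]
markers =
    slow: tests that take a long time to run
    visual: seleniumbase tests of the rendered UI
```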

- the basic CI action runs the fast tests only, skipping all tests marked
  `visual` and `slow`
- the CI action on PR runs the `slow` tests, but still excluding `visual`. 
- a second action for the visual tests runs on PR.

Check that all tests are marked correctly, and that they are filtered as expected by the
groupings used in CI:
```bash
pytest --collect-only -m "not slow and not visual" --strict-markers --ignore=tests/visual_selenium
pytest --collect-only -m "not visual" --strict-markers --ignore=tests/visual_selenium
pytest --collect-only -m "visual" --strict-markers tests/visual_selenium/ -s --demo
```



## local testing
To run the tests locally, you need the standard dependencies of the project, plus the test-runner dependencies.

```bash
pip install -r tests/requirements.txt
```

(If we migrate to using toml config, the test reqs could be consolidated into an optional section)
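That consolidation might look something like this in `pyproject.toml` (the group name and package list here are illustrative):

```toml
[project.optional-dependencies]
test = [
    "pytest",
    "pytest-cov",
]
```

Then the install becomes `pip install -e ".[test]"`.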


**Running tests**
From the project root, simply run:

```bash
pytest
# or pick a specific test file to run
pytest tests/test_whale_viewer.py
```

To print a coverage report to the screen (this also runs the tests):
```bash
pytest --cov=src 
```

To write reports on pass rate and coverage to files:
```bash
pytest --junit-xml=test-results.xml
pytest --cov-report=lcov --cov=src
```

## local testing for visual tests 

We use seleniumbase to test the visual appearance of the app, including the
presence of elements that appear through the workflow.  This testing takes quite
a long time to execute. It is configured in a separate CI action
(`python-visualtests.yml`).

```bash
# install packages for app and for visual testing
pip install -r requirements.txt
pip install -r tests/visual_selenium/requirements_visual.txt
```

**Running tests**
These tests require that the site/app is already running; this is handled by a
fixture that starts the app in another thread.
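The fixture follows a common pattern: start the server in the background, then poll until it responds. A self-contained sketch of that pattern, using a stand-in HTTP server (the real fixture launches `streamlit run src/main.py` in a subprocess instead, and the names below are illustrative):

```python
import threading
import time
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler


def wait_for_server(url: str, timeout: float = 30.0) -> bool:
    """Poll `url` until it responds, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=1):
                return True
        except OSError:
            time.sleep(0.2)  # not up yet; retry shortly
    return False


def start_app_in_background(port: int = 0) -> HTTPServer:
    # port=0 asks the OS for a free port; the real fixture would instead
    # launch the streamlit app in a subprocess or thread.
    server = HTTPServer(("127.0.0.1", port), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```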

Alternatively, in one tab, run: 
```bash
streamlit run src/main.py
```

In another tab, run:
```bash
# run just the visual tests
pytest -m "visual" --strict-markers
# run in demo mode, using firefox (default is chrome)
pytest -m "visual" --strict-markers -s --browser=firefox --demo

# the inverse set:
pytest -m "not slow and not visual" --strict-markers --ignore=tests/visual_selenium

```



## CI testing

Initially, we have an action set up that runs all tests in the `tests` directory, within the `test/tests` branch.

TODO: Add some test report & coverage badges to the README.


## Environment flags used in development 

- `DEBUG_AUTOPOPULATE_METADATA=True` : Set this env variable to have the text
  inputs autopopulated, to make stepping through the workflow faster during
  development work.

Typical usage:

```bash
DEBUG_AUTOPOPULATE_METADATA=True streamlit run src/main.py
```
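Inside the app, the flag can be checked with a pattern like this (a sketch; the actual parsing in the source may differ):

```python
import os


def autopopulate_enabled() -> bool:
    # Treat "True", "true", "1", "yes" as enabled; anything else as disabled.
    value = os.environ.get("DEBUG_AUTOPOPULATE_METADATA", "")
    return value.strip().lower() in {"true", "1", "yes"}
```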