| column | dtype | values |
|---|---|---|
| repo | stringclasses | 856 values |
| pull_number | int64 | 3 - 127k |
| instance_id | stringlengths | 12 - 58 |
| issue_numbers | sequencelengths | 1 - 5 |
| base_commit | stringlengths | 40 - 40 |
| patch | stringlengths | 67 - 1.54M |
| test_patch | stringlengths | 0 - 107M |
| problem_statement | stringlengths | 3 - 307k |
| hints_text | stringlengths | 0 - 908k |
| created_at | timestamp[s] | |
zulip/zulip | 14,942 | zulip__zulip-14942 | [
"14963"
] | 937930cc90bc815a5e8c125b3e99aaf17abe30c7 | diff --git a/zerver/data_import/slack.py b/zerver/data_import/slack.py
--- a/zerver/data_import/slack.py
+++ b/zerver/data_import/slack.py
@@ -1033,6 +1033,8 @@ def do_convert_data(slack_zip_file: str, output_dir: str, token: str, threads: i
realm_id = 0
domain_name = settings.EXTERNAL_HOST
+ log_token_warning(token)
+
slack_data_dir = slack_zip_file.replace('.zip', '')
if not os.path.exists(slack_data_dir):
os.makedirs(slack_data_dir)
@@ -1100,15 +1102,16 @@ def get_data_file(path: str) -> Any:
data = ujson.load(fp)
return data
+def log_token_warning(token: str) -> None:
+ if not token.startswith("xoxp-"):
+ logging.info('Not a Slack legacy token.\n'
+ ' This token might not have all the needed scopes. We need the following scopes:\n'
+ ' - emoji:read\n - users:read\n - users:read.email\n - team:read')
+
+
def get_slack_api_data(slack_api_url: str, get_param: str, **kwargs: Any) -> Any:
if not kwargs.get("token"):
raise AssertionError("Slack token missing in kwargs")
- token = kwargs["token"]
- if not token.startswith("xoxp-"):
- raise Exception('Invalid Slack legacy token.\n'
- ' You must pass a Slack "legacy token" starting with "xoxp-".\n'
- ' Create one at https://api.slack.com/custom-integrations/legacy-tokens')
-
data = requests.get("{}?{}".format(slack_api_url, urlencode(kwargs)))
if data.status_code == requests.codes.ok:
| diff --git a/zerver/tests/test_slack_importer.py b/zerver/tests/test_slack_importer.py
--- a/zerver/tests/test_slack_importer.py
+++ b/zerver/tests/test_slack_importer.py
@@ -96,11 +96,6 @@ def test_get_slack_api_data(self, mock_get: mock.Mock) -> None:
get_slack_api_data(slack_user_list_url, "members", token=token)
self.assertEqual(invalid.exception.args, ('Error accessing Slack API: invalid_auth',),)
- token = 'xoxe-invalid-token'
- with self.assertRaises(Exception) as invalid:
- get_slack_api_data(slack_user_list_url, "members", token=token)
- self.assertTrue(invalid.exception.args[0].startswith("Invalid Slack legacy token.\n"))
-
with self.assertRaises(Exception) as invalid:
get_slack_api_data(slack_user_list_url, "members")
self.assertEqual(invalid.exception.args, ('Slack token missing in kwargs',),)
| Redirect portico pages to the root domain
Redirect almost all pages affected by #14898 to `ROOT_DOMAIN_URI` with permanent redirects:
* /api
* /apps
* /atlassian
* /features
* /for/companies
* /for/mystery-hunt
* /for-open-source
* /for/working-groups-and-communities
* /hello
* /help
* /history
* /integrations
* /privacy
* /security
* /team
* /terms
* /why-zulip
Not /plans, /plans is up to something weird.
**Testing Plan:** Dev server (where we actually redirect to `localhost:9991` rather than `ROOT_DOMAIN_URI` under the same logic we used for `STATIC_ROOT`).
| 2020-05-11T18:40:21 |
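A minimal, framework-free sketch of the redirect table described in the issue above (in Zulip itself this would be Django `RedirectView.as_view(..., permanent=True)` URL entries; the `ROOT_DOMAIN_URI` value and function name here are placeholders, not the actual implementation):

```python
# Hypothetical root domain; Zulip reads this from settings.ROOT_DOMAIN_URI.
ROOT_DOMAIN_URI = "https://zulip.example.com"

# Portico paths to redirect permanently (list taken from the issue above);
# /plans is deliberately excluded, per the issue.
PORTICO_PATHS = {
    "/api", "/apps", "/atlassian", "/features", "/for/companies",
    "/for/mystery-hunt", "/for-open-source",
    "/for/working-groups-and-communities", "/hello", "/help", "/history",
    "/integrations", "/privacy", "/security", "/team", "/terms", "/why-zulip",
}

def portico_redirect(path: str):
    """Return (status, Location) for a permanent redirect, or None."""
    if path in PORTICO_PATHS:
        return (301, ROOT_DOMAIN_URI + path)
    return None
```

A 301 (rather than 302) matches the "permanent redirects" requirement in the issue, so browsers and crawlers update their cached URLs.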
|
zulip/zulip | 15,125 | zulip__zulip-15125 | [
"15087"
] | 4d2b1673f8ce9925cf786fec16f4cff4ab036a4f | diff --git a/zerver/forms.py b/zerver/forms.py
--- a/zerver/forms.py
+++ b/zerver/forms.py
@@ -1,6 +1,6 @@
from django import forms
from django.conf import settings
-from django.contrib.auth import authenticate
+from django.contrib.auth import authenticate, password_validation
from django.contrib.auth.forms import SetPasswordForm, AuthenticationForm, \
PasswordResetForm
from django.core.exceptions import ValidationError
@@ -194,6 +194,20 @@ class RealmCreationForm(forms.Form):
email_is_not_disposable])
class LoggingSetPasswordForm(SetPasswordForm):
+ new_password1 = forms.CharField(
+ label=_("New password"),
+ widget=forms.PasswordInput(attrs={'autocomplete': 'new-password'}),
+ strip=False,
+ help_text=password_validation.password_validators_help_text_html(),
+ max_length=RegistrationForm.MAX_PASSWORD_LENGTH,
+ )
+ new_password2 = forms.CharField(
+ label=_("New password confirmation"),
+ strip=False,
+ widget=forms.PasswordInput(attrs={'autocomplete': 'new-password'}),
+ max_length=RegistrationForm.MAX_PASSWORD_LENGTH,
+ )
+
def clean_new_password1(self) -> str:
new_password = self.cleaned_data['new_password1']
if not check_password_strength(new_password):
| Limitation on password length?
I tried to use a 128 characters password on my Zulip profile. The password was accepted when I set it up, but then I was constantly getting rejected logins. I reset it three times via the 'forgot password' functionality.
When shortening it to 49 characters, it was accepted (like the 128-chars one) but then also functional (I could log in).
This would suggest a limit on the password length, which is a red light from a security perspective (there should be no limit, because there is no reason for one when the password is stored correctly). Is that the case?
| I hope you discussed this in the [forum](https://chat.zulip.org/) first, @wsw70?
@PawBud no I did not. Is it expected to raise a bug / issue in the community first before (or instead) of here?
If so I will move it there. (Not sure what you mean when you say that you hope.)
Well keep this issue open and go to the community page. thank you 😄
Hello @zulip/server-authentication members, this issue was labeled with the "area: authentication" label, so you may want to check it out!
<!-- areaLabelAddition -->
It would appear we set a maximum password length of 100 characters.
```
class RegistrationForm(forms.Form):
MAX_PASSWORD_LENGTH = 100
full_name = forms.CharField(max_length=UserProfile.MAX_NAME_LENGTH)
# The required-ness of the password field gets overridden if it isn't
# actually required for a realm
password = forms.CharField(widget=forms.PasswordInput, max_length=MAX_PASSWORD_LENGTH)
realm_subdomain = forms.CharField(max_length=Realm.MAX_REALM_SUBDOMAIN_LENGTH, required=False)
```
From https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#maximum-password-lengths:
```
Maximum Password Lengths
Some hashing algorithms such as Bcrypt have a maximum length for the input, which is 72 characters for most implementations (there are some reports that other implementations have lower maximum lengths, but none have been identified at the time of writing). Where Bcrypt is used, a maximum length of 64 characters should be enforced on the input, as this provides a sufficiently high limit, while still allowing for string termination issues and not revealing that the application uses Bcrypt.
Additionally, due to how computationally expensive modern hashing functions are, if a user can supply very long passwords then there is a potential denial of service vulnerability, such as the one published in Django in 2013.
In order to protect against both of these issues, a maximum password length should be enforced. This should be 64 characters for Bcrypt (due to limitations in the algorithm and implementations), and between 64 and 128 characters for other algorithms.
```
It seems like 100 characters is potentially reasonable; maybe 128 would feel more standard. But I think there's still a bug here; the login and registration flows seem to be using a different truncation approach for too-long passwords (or something). Needs investigation.
Thanks @timabbott for the response and the reminder about bcrypt (I was pre-hashing passwords in the implementation above 72 chars, which is a good or bad idea depending on the population and their passwords, but anyways).
> It seems like 100 characters is potentially reasonable
It is. What is indeed problematic as you are saying is that a >100 chars password is accepted but truncated - and that truncated version is then stored as the actual password. Which means that a check at login time will always fail for such passwords.
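The bug described here (a >100-char password silently truncated when stored, but compared untruncated at login) can be avoided by validating length up front at every entry point instead of truncating; a stdlib-only sketch (the function name is illustrative, not Zulip's form code, though the limit mirrors `RegistrationForm.MAX_PASSWORD_LENGTH` quoted above):

```python
MAX_PASSWORD_LENGTH = 100  # mirrors RegistrationForm.MAX_PASSWORD_LENGTH

def validate_password_length(password: str) -> str:
    # Reject, never truncate: a silently truncated password produces a
    # stored hash that can never match what the user types at login.
    if len(password) > MAX_PASSWORD_LENGTH:
        raise ValueError(
            f"Password must be at most {MAX_PASSWORD_LENGTH} characters"
        )
    return password
```

The actual fix in the patch takes the equivalent Django route: setting `max_length` on every password form field, so the same limit is enforced consistently at registration, login, and reset.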
@zulipbot claim
On a sidenote, I've noticed that after I reset the password for a user in the development environment, if I try to reset it again the email containing the reset link won't be logged to `http://localhost:9991/emails/`. Is this intended? | 2020-05-28T09:28:44 |
|
zulip/zulip | 15,179 | zulip__zulip-15179 | [
"10271",
"10271"
] | 6950d8d76965371df70a735b499d17377e6db42b | diff --git a/zerver/lib/push_notifications.py b/zerver/lib/push_notifications.py
--- a/zerver/lib/push_notifications.py
+++ b/zerver/lib/push_notifications.py
@@ -3,7 +3,7 @@
import logging
import re
import time
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence, Tuple, Union
import gcm
import lxml.html
@@ -610,6 +610,19 @@ def get_apns_alert_subtitle(message: Message) -> str:
# For group PMs, or regular messages to a stream, just use a colon to indicate this is the sender.
return message.sender.full_name + ":"
+def get_apns_badge_count(user_profile: UserProfile, read_messages_ids: Optional[Sequence[int]]=[]) -> int:
+ return UserMessage.objects.filter(
+ user_profile=user_profile
+ ).extra(
+ where=[UserMessage.where_active_push_notification()]
+ ).exclude(
+ # If we've just marked some messages as read, they're still
+ # marked as having active notifications; we'll clear that flag
+ # only after we've sent that update to the devices. So we need
+ # to exclude them explicitly from the count.
+ message_id__in=read_messages_ids
+ ).count()
+
def get_message_payload_apns(user_profile: UserProfile, message: Message) -> Dict[str, Any]:
'''A `message` payload for iOS, via APNs.'''
zulip_data = get_message_payload(user_profile, message)
@@ -625,7 +638,7 @@ def get_message_payload_apns(user_profile: UserProfile, message: Message) -> Dic
'body': content,
},
'sound': 'default',
- 'badge': 0, # TODO: set badge count in a better way
+ 'badge': get_apns_badge_count(user_profile),
'custom': {'zulip': zulip_data},
}
return apns_data
@@ -664,6 +677,18 @@ def get_remove_payload_gcm(
gcm_options = {'priority': 'normal'}
return gcm_payload, gcm_options
+def get_remove_payload_apns(user_profile: UserProfile, message_ids: List[int]) -> Dict[str, Any]:
+ zulip_data = get_base_payload(user_profile)
+ zulip_data.update({
+ 'event': 'remove',
+ 'zulip_message_ids': ','.join(str(id) for id in message_ids),
+ })
+ apns_data = {
+ 'badge': get_apns_badge_count(user_profile, message_ids),
+ 'custom': {'zulip': zulip_data},
+ }
+ return apns_data
+
def handle_remove_push_notification(user_profile_id: int, message_ids: List[int]) -> None:
"""This should be called when a message that had previously had a
mobile push executed is read. This triggers a mobile push notification
@@ -674,17 +699,22 @@ def handle_remove_push_notification(user_profile_id: int, message_ids: List[int]
user_profile = get_user_profile_by_id(user_profile_id)
message_ids = bulk_access_messages_expect_usermessage(user_profile_id, message_ids)
gcm_payload, gcm_options = get_remove_payload_gcm(user_profile, message_ids)
+ apns_payload = get_remove_payload_apns(user_profile, message_ids)
if uses_notification_bouncer():
send_notifications_to_bouncer(user_profile_id,
- {},
+ apns_payload,
gcm_payload,
gcm_options)
else:
android_devices = list(PushDeviceToken.objects.filter(
user=user_profile, kind=PushDeviceToken.GCM))
+ apple_devices = list(PushDeviceToken.objects.filter(
+ user=user_profile, kind=PushDeviceToken.APNS))
if android_devices:
send_android_push_notification(android_devices, gcm_payload, gcm_options)
+ if apple_devices:
+ send_apple_push_notification(user_profile_id, apple_devices, apns_payload)
UserMessage.objects.filter(
user_profile_id=user_profile_id,
| diff --git a/zerver/tests/test_push_notifications.py b/zerver/tests/test_push_notifications.py
--- a/zerver/tests/test_push_notifications.py
+++ b/zerver/tests/test_push_notifications.py
@@ -24,12 +24,14 @@
do_delete_messages,
do_mark_stream_messages_as_read,
do_regenerate_api_key,
+ do_update_message_flags,
)
from zerver.lib.push_notifications import (
DeviceToken,
absolute_avatar_url,
b64_to_hex,
datetime_to_timestamp,
+ get_apns_badge_count,
get_apns_client,
get_display_recipient,
get_message_payload_apns,
@@ -923,15 +925,28 @@ def test_send_remove_notifications_to_bouncer(self) -> None:
with self.settings(PUSH_NOTIFICATION_BOUNCER_URL=True), \
mock.patch('zerver.lib.push_notifications'
- '.send_notifications_to_bouncer') as mock_send_android, \
- mock.patch('zerver.lib.push_notifications.get_base_payload',
- return_value={'gcm': True}):
+ '.send_notifications_to_bouncer') as mock_send:
handle_remove_push_notification(user_profile.id, [message.id])
- mock_send_android.assert_called_with(
+ mock_send.assert_called_with(
user_profile.id,
- {},
{
- 'gcm': True,
+ 'badge': 0,
+ 'custom': {
+ 'zulip': {
+ 'server': 'testserver',
+ 'realm_id': self.sender.realm.id,
+ 'realm_uri': 'http://zulip.testserver',
+ 'user_id': self.user_profile.id,
+ 'event': 'remove',
+ 'zulip_message_ids': str(message.id),
+ },
+ },
+ },
+ {
+ 'server': 'testserver',
+ 'realm_id': self.sender.realm.id,
+ 'realm_uri': 'http://zulip.testserver',
+ 'user_id': self.user_profile.id,
'event': 'remove',
'zulip_message_ids': str(message.id),
'zulip_message_id': message.id,
@@ -955,20 +970,41 @@ def test_non_bouncer_push_remove(self) -> None:
PushDeviceToken.objects.filter(user=self.user_profile,
kind=PushDeviceToken.GCM))
+ apple_devices = list(
+ PushDeviceToken.objects.filter(user=self.user_profile,
+ kind=PushDeviceToken.APNS))
+
with mock.patch('zerver.lib.push_notifications'
'.send_android_push_notification') as mock_send_android, \
- mock.patch('zerver.lib.push_notifications.get_base_payload',
- return_value={'gcm': True}):
+ mock.patch('zerver.lib.push_notifications'
+ '.send_apple_push_notification') as mock_send_apple:
handle_remove_push_notification(self.user_profile.id, [message.id])
mock_send_android.assert_called_with(
android_devices,
{
- 'gcm': True,
+ 'server': 'testserver',
+ 'realm_id': self.sender.realm.id,
+ 'realm_uri': 'http://zulip.testserver',
+ 'user_id': self.user_profile.id,
'event': 'remove',
'zulip_message_ids': str(message.id),
'zulip_message_id': message.id,
},
{'priority': 'normal'})
+ mock_send_apple.assert_called_with(
+ self.user_profile.id,
+ apple_devices,
+ {'badge': 0,
+ 'custom': {
+ 'zulip': {
+ 'server': 'testserver',
+ 'realm_id': self.sender.realm.id,
+ 'realm_uri': 'http://zulip.testserver',
+ 'user_id': self.user_profile.id,
+ 'event': 'remove',
+ 'zulip_message_ids': str(message.id),
+ }
+ }})
user_message = UserMessage.objects.get(user_profile=self.user_profile,
message=message)
self.assertEqual(user_message.flags.active_mobile_push_notification, False)
@@ -1150,12 +1186,39 @@ def test_modernize_apns_payload(self) -> None:
self.assertEqual(
modernize_apns_payload(
{'alert': 'Message from Hamlet',
- 'message_ids': [3]}),
+ 'message_ids': [3],
+ 'badge': 0}),
payload)
self.assertEqual(
modernize_apns_payload(payload),
payload)
+ @mock.patch('zerver.lib.push_notifications.push_notifications_enabled', return_value = True)
+ def test_apns_badge_count(self, mock_push_notifications: mock.MagicMock) -> None:
+ user_profile = self.example_user('othello')
+ # Test APNs badge count for personal messages.
+ message_ids = [self.send_personal_message(self.sender,
+ user_profile,
+ 'Content of message')
+ for i in range(3)]
+ self.assertEqual(get_apns_badge_count(user_profile), 3)
+ # Similarly, test APNs badge count for stream mention.
+ stream = self.subscribe(user_profile, "Denmark")
+ message_ids += [self.send_stream_message(self.sender,
+ stream.name,
+ 'Hi, @**Othello, the Moor of Venice**')
+ for i in range(2)]
+ self.assertEqual(get_apns_badge_count(user_profile), 5)
+
+ num_messages = len(message_ids)
+ # Mark the messages as read and test whether
+ # the count decreases correctly.
+ for i, message_id in enumerate(message_ids):
+ do_update_message_flags(user_profile, get_client("website"), 'add', 'read', [message_id])
+ self.assertEqual(get_apns_badge_count(user_profile), num_messages - i - 1)
+
+ mock_push_notifications.assert_called()
+
class TestGetAPNsPayload(PushNotificationTest):
def test_get_message_payload_apns_personal_message(self) -> None:
user_profile = self.example_user("othello")
@@ -1206,7 +1269,7 @@ def test_get_message_payload_apns_huddle_message(self, mock_push_notifications:
'body': message.content,
},
'sound': 'default',
- 'badge': 0,
+ 'badge': 1,
'custom': {
'zulip': {
'message_ids': [message.id],
| Add support for setting counts in iOS push notifications
Following da8f4bc0e956e29b140fa89e963b67e3d23581df, we now have support for clearing old GCM push notifications as users read messages. We should be able to use this same infrastructure for iOS. It's possible that we can just grab the count from the `len(get_mobile_push_notification_ids(user_profile))` function (needs some refactoring if we do that, since it's a test function) and set `count` to that. But testing is required, and we might still need to trigger some sort of notification on-read (otherwise, how do counts go down?).
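The counting approach described above (count messages that still carry an active mobile notification, excluding messages just marked as read whose flag has not yet been cleared) can be sketched in plain Python; this is an illustration of the logic, not the Django ORM query the eventual patch uses:

```python
def apns_badge_count(active_notification_message_ids, just_read_ids=()):
    # Messages just marked as read may still carry the
    # active_mobile_push_notification flag until the update has been sent
    # to the devices, so exclude them from the count explicitly.
    read = frozenset(just_read_ids)
    return sum(1 for mid in active_notification_message_ids if mid not in read)
```

Passing the just-read IDs from the "remove" payload path is what makes the badge count go down when messages are read elsewhere.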
| Would this approach include actually clearing the push notification (e.g the message from the Notification Center) or is this just about the badge count?
This approach will actually clear the push notifications.
For actually clearing the notifications, we have more details now in zulip/zulip-mobile#3119 . That will actually turn out to require some more work for iOS, so we might or might not ship it at the same time as we start setting badge counts. It's true that the essential information in the database for them is the same (and the same as we'll use for Android via GCM.)
I just closed https://github.com/zulip/zulip/pull/10683, which was the essentially useless seed for a possible backend change to set the badge counts. @gnprice, I'm assuming you're driving the question of when to invest in doing the notification counts stuff for iOS.
Is there a version that has badge icon unread message counts that we can use on mobile? I read a lot of issues about this, but no conclusions on if it will be implemented.
That's been fully implemented and works well on Android. I'm not sure how much plumbing is required to make this work on iOS; @gnprice would know. But probably not a ton.
Hello Greg @gnprice - would you have any insights here, or something I could try? My users are desperate for this feature.
Hi @maltokyo -- this isn't something we've worked on recently.
One thing we'll need to figure out before we build it is what exactly the count should be a count of. To help us with that, I'd be curious to understand better how you and your users would use the badge count. When you say y'all are desperate for it: what are you missing without it? What do you find yourself trying to do, and can't do because it isn't there?
Or: is there an app or two that manages badge counts really well, and in a similar way to how you'd like to see Zulip do so? What are those apps; and how do you use their badge counts?
@zulipbot claim
Hello Greg, thank you for replying.
Currently, most of my users are on iOS, and a few on android. They want
the same sort of unread message count that the desktop apps seem to have.
So when any stream has unread topics, then a counter for each topic would
be good.
| 2020-06-02T16:22:30 |
zulip/zulip | 15,265 | zulip__zulip-15265 | [
"15207"
] | fb2aae1c02512368743e16d87c255ca7ae02ae2a | diff --git a/zerver/views/auth.py b/zerver/views/auth.py
--- a/zerver/views/auth.py
+++ b/zerver/views/auth.py
@@ -381,14 +381,15 @@ def remote_user_sso(
def remote_user_jwt(request: HttpRequest) -> HttpResponse:
subdomain = get_subdomain(request)
try:
- auth_key = settings.JWT_AUTH_KEYS[subdomain]
+ key = settings.JWT_AUTH_KEYS[subdomain]["key"]
+ algorithms = settings.JWT_AUTH_KEYS[subdomain]["algorithms"]
except KeyError:
raise JsonableError(_("Auth key for this subdomain not found."))
try:
json_web_token = request.POST["json_web_token"]
options = {'verify_signature': True}
- payload = jwt.decode(json_web_token, auth_key, options=options)
+ payload = jwt.decode(json_web_token, key, algorithms=algorithms, options=options)
except KeyError:
raise JsonableError(_("No JSON web token passed in request"))
except jwt.InvalidTokenError:
diff --git a/zproject/default_settings.py b/zproject/default_settings.py
--- a/zproject/default_settings.py
+++ b/zproject/default_settings.py
@@ -2,6 +2,7 @@
if TYPE_CHECKING:
from django_auth_ldap.config import LDAPSearch
+ from typing_extensions import TypedDict
from .config import PRODUCTION, DEVELOPMENT, get_secret
if PRODUCTION:
@@ -316,7 +317,14 @@
STATSD_HOST = ''
# Configuration for JWT auth.
-JWT_AUTH_KEYS: Dict[str, str] = {}
+if TYPE_CHECKING:
+ class JwtAuthKey(TypedDict):
+ key: str
+ # See https://pyjwt.readthedocs.io/en/latest/algorithms.html for a list
+ # of supported algorithms.
+ algorithms: List[str]
+
+JWT_AUTH_KEYS: Dict[str, "JwtAuthKey"] = {}
# https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SERVER_EMAIL
# Django setting for what from address to use in error emails.
| diff --git a/zerver/tests/test_auth_backends.py b/zerver/tests/test_auth_backends.py
--- a/zerver/tests/test_auth_backends.py
+++ b/zerver/tests/test_auth_backends.py
@@ -3356,11 +3356,12 @@ class TestJWTLogin(ZulipTestCase):
def test_login_success(self) -> None:
payload = {'user': 'hamlet', 'realm': 'zulip.com'}
- with self.settings(JWT_AUTH_KEYS={'zulip': 'key'}):
+ with self.settings(JWT_AUTH_KEYS={'zulip': {'key': 'key', 'algorithms': ['HS256']}}):
email = self.example_email("hamlet")
realm = get_realm('zulip')
- auth_key = settings.JWT_AUTH_KEYS['zulip']
- web_token = jwt.encode(payload, auth_key).decode('utf8')
+ key = settings.JWT_AUTH_KEYS['zulip']['key']
+ [algorithm] = settings.JWT_AUTH_KEYS['zulip']['algorithms']
+ web_token = jwt.encode(payload, key, algorithm).decode('utf8')
user_profile = get_user_by_delivery_email(email, realm)
data = {'json_web_token': web_token}
@@ -3370,18 +3371,20 @@ def test_login_success(self) -> None:
def test_login_failure_when_user_is_missing(self) -> None:
payload = {'realm': 'zulip.com'}
- with self.settings(JWT_AUTH_KEYS={'zulip': 'key'}):
- auth_key = settings.JWT_AUTH_KEYS['zulip']
- web_token = jwt.encode(payload, auth_key).decode('utf8')
+ with self.settings(JWT_AUTH_KEYS={'zulip': {'key': 'key', 'algorithms': ['HS256']}}):
+ key = settings.JWT_AUTH_KEYS['zulip']['key']
+ [algorithm] = settings.JWT_AUTH_KEYS['zulip']['algorithms']
+ web_token = jwt.encode(payload, key, algorithm).decode('utf8')
data = {'json_web_token': web_token}
result = self.client_post('/accounts/login/jwt/', data)
self.assert_json_error_contains(result, "No user specified in JSON web token claims", 400)
def test_login_failure_when_realm_is_missing(self) -> None:
payload = {'user': 'hamlet'}
- with self.settings(JWT_AUTH_KEYS={'zulip': 'key'}):
- auth_key = settings.JWT_AUTH_KEYS['zulip']
- web_token = jwt.encode(payload, auth_key).decode('utf8')
+ with self.settings(JWT_AUTH_KEYS={'zulip': {'key': 'key', 'algorithms': ['HS256']}}):
+ key = settings.JWT_AUTH_KEYS['zulip']['key']
+ [algorithm] = settings.JWT_AUTH_KEYS['zulip']['algorithms']
+ web_token = jwt.encode(payload, key, algorithm).decode('utf8')
data = {'json_web_token': web_token}
result = self.client_post('/accounts/login/jwt/', data)
self.assert_json_error_contains(result, "No organization specified in JSON web token claims", 400)
@@ -3392,12 +3395,12 @@ def test_login_failure_when_key_does_not_exist(self) -> None:
self.assert_json_error_contains(result, "Auth key for this subdomain not found.", 400)
def test_login_failure_when_key_is_missing(self) -> None:
- with self.settings(JWT_AUTH_KEYS={'zulip': 'key'}):
+ with self.settings(JWT_AUTH_KEYS={'zulip': {'key': 'key', 'algorithms': ['HS256']}}):
result = self.client_post('/accounts/login/jwt/')
self.assert_json_error_contains(result, "No JSON web token passed in request", 400)
def test_login_failure_when_bad_token_is_passed(self) -> None:
- with self.settings(JWT_AUTH_KEYS={'zulip': 'key'}):
+ with self.settings(JWT_AUTH_KEYS={'zulip': {'key': 'key', 'algorithms': ['HS256']}}):
result = self.client_post('/accounts/login/jwt/')
self.assert_json_error_contains(result, "No JSON web token passed in request", 400)
data = {'json_web_token': 'bad token'}
@@ -3406,9 +3409,10 @@ def test_login_failure_when_bad_token_is_passed(self) -> None:
def test_login_failure_when_user_does_not_exist(self) -> None:
payload = {'user': 'nonexisting', 'realm': 'zulip.com'}
- with self.settings(JWT_AUTH_KEYS={'zulip': 'key'}):
- auth_key = settings.JWT_AUTH_KEYS['zulip']
- web_token = jwt.encode(payload, auth_key).decode('utf8')
+ with self.settings(JWT_AUTH_KEYS={'zulip': {'key': 'key', 'algorithms': ['HS256']}}):
+ key = settings.JWT_AUTH_KEYS['zulip']['key']
+ [algorithm] = settings.JWT_AUTH_KEYS['zulip']['algorithms']
+ web_token = jwt.encode(payload, key, algorithm).decode('utf8')
data = {'json_web_token': web_token}
result = self.client_post('/accounts/login/jwt/', data)
self.assertEqual(result.status_code, 200) # This should ideally be not 200.
@@ -3416,11 +3420,12 @@ def test_login_failure_when_user_does_not_exist(self) -> None:
def test_login_failure_due_to_wrong_subdomain(self) -> None:
payload = {'user': 'hamlet', 'realm': 'zulip.com'}
- with self.settings(JWT_AUTH_KEYS={'acme': 'key'}):
+ with self.settings(JWT_AUTH_KEYS={'acme': {'key': 'key', 'algorithms': ['HS256']}}):
with mock.patch('zerver.views.auth.get_subdomain', return_value='acme'), \
mock.patch('logging.warning'):
- auth_key = settings.JWT_AUTH_KEYS['acme']
- web_token = jwt.encode(payload, auth_key).decode('utf8')
+ key = settings.JWT_AUTH_KEYS['acme']['key']
+ [algorithm] = settings.JWT_AUTH_KEYS['acme']['algorithms']
+ web_token = jwt.encode(payload, key, algorithm).decode('utf8')
data = {'json_web_token': web_token}
result = self.client_post('/accounts/login/jwt/', data)
@@ -3429,11 +3434,12 @@ def test_login_failure_due_to_wrong_subdomain(self) -> None:
def test_login_failure_due_to_empty_subdomain(self) -> None:
payload = {'user': 'hamlet', 'realm': 'zulip.com'}
- with self.settings(JWT_AUTH_KEYS={'': 'key'}):
+ with self.settings(JWT_AUTH_KEYS={'': {'key': 'key', 'algorithms': ['HS256']}}):
with mock.patch('zerver.views.auth.get_subdomain', return_value=''), \
mock.patch('logging.warning'):
- auth_key = settings.JWT_AUTH_KEYS['']
- web_token = jwt.encode(payload, auth_key).decode('utf8')
+ key = settings.JWT_AUTH_KEYS['']['key']
+ [algorithm] = settings.JWT_AUTH_KEYS['']['algorithms']
+ web_token = jwt.encode(payload, key, algorithm).decode('utf8')
data = {'json_web_token': web_token}
result = self.client_post('/accounts/login/jwt/', data)
@@ -3442,10 +3448,11 @@ def test_login_failure_due_to_empty_subdomain(self) -> None:
def test_login_success_under_subdomains(self) -> None:
payload = {'user': 'hamlet', 'realm': 'zulip.com'}
- with self.settings(JWT_AUTH_KEYS={'zulip': 'key'}):
+ with self.settings(JWT_AUTH_KEYS={'zulip': {'key': 'key', 'algorithms': ['HS256']}}):
with mock.patch('zerver.views.auth.get_subdomain', return_value='zulip'):
- auth_key = settings.JWT_AUTH_KEYS['zulip']
- web_token = jwt.encode(payload, auth_key).decode('utf8')
+ key = settings.JWT_AUTH_KEYS['zulip']['key']
+ [algorithm] = settings.JWT_AUTH_KEYS['zulip']['algorithms']
+ web_token = jwt.encode(payload, key, algorithm).decode('utf8')
data = {'json_web_token': web_token}
result = self.client_post('/accounts/login/jwt/', data)
| JWT auth: deprecated calls to jwt.decode with no algorithms argument
`python3 -Wd tools/test-backend zerver.tests.test_auth_backends.TestJWTLogin` shows these warnings:
```
/srv/zulip-py3-venv/lib/python3.6/site-packages/jwt/api_jwt.py:81: DeprecationWarning: It is strongly recommended that you pass in a value for the "algorithms" argument when calling decode(). This argument will be mandatory in a future version.
DeprecationWarning
/srv/zulip-py3-venv/lib/python3.6/site-packages/jwt/api_jws.py:145: DeprecationWarning: It is strongly recommended that you pass in a value for the "algorithms" argument when calling decode(). This argument will be mandatory in a future version.
DeprecationWarning
```
See https://github.com/jpadilla/pyjwt/blob/1.5.1/CHANGELOG.md#fixed-1. We’ll probably need to make this a mandatory setting for JWT auth users; this is for protecting against symmetric/asymmetric key confusion attacks.
| Hello @zulip/server-authentication members, this issue was labeled with the "area: authentication" label, so you may want to check it out!
<!-- areaLabelAddition -->
There are no current JWT auth users, so can freely change this.
(AFAIK this system has not been used since 2014). | 2020-06-08T20:45:57 |
zulip/zulip | 15,277 | zulip__zulip-15277 | [
"14498"
] | a0a7170f48a42500c1f0af980467592c466995a3 | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -4521,6 +4521,29 @@ def do_update_message(user_profile: UserProfile, message: Message,
event["new_stream_id"] = new_stream.id
event["propagate_mode"] = propagate_mode
+ # When messages are moved from one stream to another, some
+ # users may lose access to those messages, including guest
+ # users and users not subscribed to the new stream (if it is a
+ # private stream). For those users, their experience is as
+ # though the messages were deleted, and we should send a
+ # delete_message event to them instead.
+
+ subscribers = get_active_subscriptions_for_stream_id(
+ stream_id).select_related("user_profile")
+ subs_to_new_stream = list(get_active_subscriptions_for_stream_id(
+ new_stream.id).select_related("user_profile"))
+
+ new_stream_sub_ids = [user.user_profile_id for user in subs_to_new_stream]
+
+ # Get guest users who aren't subscribed to the new_stream.
+ guest_subs_losing_access = [
+ sub for sub in subscribers
+ if sub.user_profile.is_guest
+ and sub.user_profile_id not in new_stream_sub_ids
+ ]
+ ums = ums.exclude(user_profile_id__in=[
+ sub.user_profile_id for sub in guest_subs_losing_access])
+
if topic_name is not None:
topic_name = truncate_topic(topic_name)
message.set_topic_name(topic_name)
@@ -4542,6 +4565,29 @@ def do_update_message(user_profile: UserProfile, message: Message,
)
changed_messages += messages_list
+ if new_stream is not None:
+ assert stream_being_edited is not None
+
+ message_ids = [msg.id for msg in changed_messages]
+ # Delete UserMessage objects from guest users who will no
+ # longer have access to these messages. Note: This could be
+ # very expensive, since it's N guest users x M messages.
+ UserMessage.objects.filter(
+ user_profile_id__in=[sub.user_profile_id for sub in
+ guest_subs_losing_access],
+ message_id__in=message_ids,
+ ).delete()
+
+ delete_event: DeleteMessagesEvent = {
+ 'type': 'delete_message',
+ 'message_ids': message_ids,
+ 'message_type': 'stream',
+ 'stream_id': stream_being_edited.id,
+ 'topic': orig_topic_name,
+ }
+ delete_event_notify_user_ids = [sub.user_profile_id for sub in guest_subs_losing_access]
+ send_event(user_profile.realm, delete_event, delete_event_notify_user_ids)
+
if message.edit_history is not None:
edit_history = ujson.loads(message.edit_history)
edit_history.insert(0, edit_history_event)
@@ -4586,8 +4632,31 @@ def user_info(um: UserMessage) -> Dict[str, Any]:
# Remove duplicates by excluding the id of users already in users_to_be_notified list.
# This is the case where a user both has a UserMessage row and is a current Subscriber
subscribers = subscribers.exclude(user_profile_id__in=[um.user_profile_id for um in ums])
+
+ if new_stream is not None:
+ assert guest_subs_losing_access is not None
+ # Exclude guest users who are not subscribed to the new stream from receiving this event.
+ subscribers = subscribers.exclude(user_profile_id__in=[sub.user_profile_id for sub in guest_subs_losing_access])
+
# All users that are subscribed to the stream must be notified when a message is edited
subscribers_ids = [user.user_profile_id for user in subscribers]
+
+ if new_stream is not None:
+ # TODO: Guest users don't see the new moved topic unless breadcrumb message for
+ # new stream is enabled. Excluding these users from receiving this event helps
+ # us avoid an error traceback for our clients. We should figure out a way to
+ # inform the guest users of this new topic if sending a 'message' event for these messages
+ # is not an option.
+ # Don't send this event to guest subs who are not subscribed to the old stream but
+ # are subscribed to the new stream
+ old_stream_unsubed_guests = [
+ sub for sub in subs_to_new_stream
+ if sub.user_profile.is_guest
+ and sub.user_profile_id not in subscribers_ids
+ ]
+ subscribers = subscribers.exclude(user_profile_id__in=[sub.user_profile_id for sub in old_stream_unsubed_guests])
+ subscribers_ids = [user.user_profile_id for user in subscribers]
+
users_to_be_notified += list(map(subscriber_info, subscribers_ids))
send_event(user_profile.realm, event, users_to_be_notified)
| diff --git a/zerver/tests/test_messages.py b/zerver/tests/test_messages.py
--- a/zerver/tests/test_messages.py
+++ b/zerver/tests/test_messages.py
@@ -56,6 +56,7 @@
get_first_visible_message_id,
get_raw_unread_data,
get_recent_private_conversations,
+ has_message_access,
maybe_update_first_visible_message_id,
messages_for_ids,
render_markdown,
@@ -3559,7 +3560,7 @@ def test_move_message_to_stream_and_topic(self) -> None:
'propagate_mode': 'change_all',
'topic': 'new topic',
})
- self.assertEqual(len(queries), 49)
+ self.assertEqual(len(queries), 52)
messages = get_topic_messages(user_profile, old_stream, "test")
self.assertEqual(len(messages), 1)
@@ -3570,6 +3571,29 @@ def test_move_message_to_stream_and_topic(self) -> None:
self.assertEqual(messages[3].content, f"This topic was moved here from #**test move stream>test** by @_**Iago|{user_profile.id}**")
self.assert_json_success(result)
+ def test_inaccessible_msg_after_stream_change(self) -> None:
+ """Simulates the case where a message is moved to a stream the user is not subscribed to"""
+ (user_profile, old_stream, new_stream, msg_id, msg_id_lt) = self.prepare_move_topics(
+ "iago", "test move stream", "new stream", "test")
+
+ guest_user = self.example_user('polonius')
+ self.subscribe(guest_user, old_stream.name)
+ msg_id_to_test_access = self.send_stream_message(user_profile, old_stream.name,
+ topic_name='test', content="fourth")
+
+ self.assertEqual(has_message_access(guest_user, Message.objects.get(id=msg_id_to_test_access), None), True)
+
+ result = self.client_patch("/json/messages/" + str(msg_id), {
+ 'message_id': msg_id,
+ 'stream_id': new_stream.id,
+ 'propagate_mode': 'change_all',
+ 'topic': 'new topic'
+ })
+ self.assert_json_success(result)
+
+ self.assertEqual(has_message_access(guest_user, Message.objects.get(id=msg_id_to_test_access), None), False)
+ self.assertEqual(has_message_access(self.example_user('iago'), Message.objects.get(id=msg_id_to_test_access), None), True)
+
def test_no_notify_move_message_to_stream(self) -> None:
(user_profile, old_stream, new_stream, msg_id, msg_id_lt) = self.prepare_move_topics(
"iago", "test move stream", "new stream", "test")
| Fix live update for moving messages to a stream the user isn't able to see
Following 7990676583b5094e1a12572a782ece7ed57b401e (Part of #6427 / #13912), we have nicely working real-time sync for moving a group of messages in a topic to a new stream. (The UI hasn't quite merged yet, but I don't intend to block merging it on finishing this).
There's a bug where if a client receives an event notifying it to move messages to a stream ID it doesn't recognize, it'll throw an exception, because the client doesn't know anything about the new stream ID.
The right fix probably involves somewhat tricky changes in the `stream_changed` code paths inside `message_events.js` to basically just discard the messages if the target stream ID is unknown (rather than moving them to a stream_id the client knows nothing about and/or trying to renarrow). Hopefully we can find a clean way to do this.
Fortunately, it's pretty easy to test this by just using the feature with a second browser logged in as Polonius and narrowed to the messages about to be moved; the one thing to be careful about when reproducing it is to make sure the public stream is one the guest user has never been subscribed to in the past (I'd just make a new stream).
Once we've fixed this issue, which sounds like it's only for guest users, we can look at relaxing the rules that currently limit moving messages to another stream to just:
(1) Organization administrators can do it.
(2) Where both streams are public.
Which are almost certainly too strict, but are necessary because of these implementation limitations (and changing them also requires some thought on policy).
Tagging as a release-blocker since it's a known exception for installations using guest users.
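The bookkeeping the fix needs (which guest subscribers lose access when messages move to a stream they cannot see) can be sketched with plain dicts standing in for Zulip's Subscription rows; the field names here are illustrative, not the real ORM fields:

```python
def guest_ids_losing_access(old_stream_subs, new_stream_sub_ids):
    """Guests subscribed to the old stream but not the new one lose access
    when the messages move, so they should get delete_message events."""
    return {
        sub["user_id"]
        for sub in old_stream_subs
        if sub["is_guest"] and sub["user_id"] not in new_stream_sub_ids
    }


def delete_event_for(message_ids, old_stream_id, topic):
    # Mirrors the shape of the delete_message event the server sends.
    return {
        "type": "delete_message",
        "message_ids": message_ids,
        "message_type": "stream",
        "stream_id": old_stream_id,
        "topic": topic,
    }
```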
| Hello @zulip/server-message-view members, this issue was labeled with the "area: message-editing" label, so you may want to check it out!
<!-- areaLabelAddition -->
Steve mentions that the right fix for this is probably to check if the stream ID is unknown to the client, and if so, treat it as a deletion of the group of message IDs, rather than going through the edit code path at all. This should work pretty well, be clean, and avoid code duplication. | 2020-06-09T18:12:17 |
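The client-side approach described in that hint, treating a move to an unknown stream as a deletion, can be sketched as a small dispatcher (in Python rather than the web app's JavaScript; the client state shown is hypothetical):

```python
def dispatch_update_message(event, known_stream_ids, message_store):
    """If messages moved to a stream this client knows nothing about,
    discard them as if they were deleted, instead of editing in place."""
    new_stream_id = event.get("new_stream_id")
    if new_stream_id is not None and new_stream_id not in known_stream_ids:
        for message_id in event["message_ids"]:
            message_store.pop(message_id, None)
        return "deleted"
    return "edited"
```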
zulip/zulip | 15,413 | zulip__zulip-15413 | [
"14828"
] | 4576742b2fbdb8523d9a41cd77e21913f000395c | diff --git a/zerver/views/auth.py b/zerver/views/auth.py
--- a/zerver/views/auth.py
+++ b/zerver/views/auth.py
@@ -283,9 +283,9 @@ def finish_desktop_flow(request: HttpRequest, user_profile: UserProfile,
key = bytes.fromhex(otp)
iv = os.urandom(12)
desktop_data = (iv + AESGCM(key).encrypt(iv, token.encode(), b"")).hex()
- browser_url = user_profile.realm.uri + reverse('zerver.views.auth.log_into_subdomain', args=[token])
context = {'desktop_data': desktop_data,
- 'browser_url': browser_url,
+ 'browser_url': reverse('zerver.views.auth.login_page',
+ kwargs = {'template_name': 'zerver/login.html'}),
'realm_icon_url': realm_icon_url(user_profile.realm)}
return render(request, 'zerver/desktop_redirect.html', context=context)
| diff --git a/zerver/tests/test_auth_backends.py b/zerver/tests/test_auth_backends.py
--- a/zerver/tests/test_auth_backends.py
+++ b/zerver/tests/test_auth_backends.py
@@ -664,10 +664,10 @@ def verify_desktop_flow_end_page(self, response: HttpResponse, email: str,
desktop_data = soup.find("input", value=True)["value"]
browser_url = soup.find("a", href=True)["href"]
+ self.assertEqual(browser_url, '/login/')
decrypted_key = self.verify_desktop_data_and_return_key(desktop_data, desktop_flow_otp)
- self.assertEqual(browser_url, f'http://zulip.testserver/accounts/login/subdomain/{decrypted_key}')
- result = self.client_get(browser_url)
+ result = self.client_get(f'http://zulip.testserver/accounts/login/subdomain/{decrypted_key}')
self.assertEqual(result.status_code, 302)
realm = get_realm("zulip")
user_profile = get_user_by_delivery_email(email, realm)
| Fix error handling for expired tokens in desktop app login flow
Following https://github.com/zulip/zulip/pull/14747, if you complete the desktop app login flow to log in to the desktop app, and on the landing page (see https://github.com/zulip/zulip/issues/14827) click "Continue in browser", we'll try reusing the login token, which is forbidden. This results in the following broken page:

(This can happen both in the desktop app and the server).
We could clean up the page, but @mateuszmandera is there a reason we don't just want to redirect back to /login in this expired case? (@andersk FYI).
A few other questions to think a bit about:
* How long do we want to give users to use the token?
* Does it make sense to allow the token to be used more than once? (Or the token to be used once and the link to be used once?) Then a user who wants to log in to both the desktop app and browser could do that in one login transaction.
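One way to reason about the token-lifetime questions above is a minimal single-use token store with a TTL. This is an illustrative in-memory sketch, not how Zulip actually stores these tokens:

```python
import time


class OneTimeTokenStore:
    """Tokens redeem at most once and expire after ttl_seconds."""

    def __init__(self, ttl_seconds=60, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._tokens = {}  # token -> (payload, issued_at)

    def issue(self, token, payload):
        self._tokens[token] = (payload, self.clock())

    def redeem(self, token):
        entry = self._tokens.pop(token, None)  # pop enforces single use
        if entry is None:
            return None
        payload, issued_at = entry
        if self.clock() - issued_at > self.ttl:
            return None  # expired
        return payload
```

Under this model, letting both the app and the "continue in browser" link work amounts to issuing two tokens per login attempt, which is the trade-off discussed below.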
> but @mateuszmandera is there a reason we don't just want to redirect back to /login in this expired case?
I'd say that it's much nicer to get the error page and a link to click to try to log in again, than be taken to the login page immediately not knowing what happened :thinking:
For the main issue, let's just allow multiple uses of the token then? And a time limit of - I'm not sure what the right amount is - let's say 1 minute sounds reasonable?
I think 1 minute is reasonable, and I think given the 1-minute time limit, the main concern one would have with multiple uses of the token would be that a MITM attacker could use it to get a duplicate login session, perhaps?
@andersk what are your thoughts?
Yes, that's the main concern. We could generate two one-time tokens, one for the app and one for the "continue in browser" link.
My view is that opening the app just to log in the browser is a convoluted use case that nobody would think of if the “continue in browser” link wasn’t specifically presented to them. It’s good to have it so that if they *didn’t* open the app and are instead getting phished for the token, they have an obvious safe alternative. We should make sure it doesn’t feel broken, but we don’t need to go out of our way to make it unexpectedly extra-convenient. Toward that end, “continue in browser” could just link directly to the login page with no token magic.
> It’s good to have it so that if they didn’t open the app and are instead getting phished for the token, they have an obvious safe alternative
In this scenario, the browser link with one-time token will work, so things won't feel broken. To get the "broken" feel one would have to first log in the app and then use the browser link.
Making two one-time tokens to make everything convenient and reliable should be as simple as:
```diff
diff --git a/zerver/views/auth.py b/zerver/views/auth.py
index a11dc9a363..75623ebe49 100644
--- a/zerver/views/auth.py
+++ b/zerver/views/auth.py
@@ -269,12 +269,15 @@ def finish_desktop_flow(request: HttpRequest, user_profile: UserProfile,
of being created, as nothing more powerful is needed for the desktop flow
and this ensures the key can only be used for completing this authentication attempt.
"""
- result = ExternalAuthResult(user_profile=user_profile)
- token = result.store_data()
+ result_for_desktop = ExternalAuthResult(user_profile=user_profile)
+ token_for_desktop = result_for_desktop.store_data()
key = bytes.fromhex(otp)
iv = os.urandom(12)
- desktop_data = (iv + AESGCM(key).encrypt(iv, token.encode(), b"")).hex()
- browser_url = user_profile.realm.uri + reverse('zerver.views.auth.log_into_subdomain', args=[token])
+ desktop_data = (iv + AESGCM(key).encrypt(iv, token_for_desktop.encode(), b"")).hex()
+
+ result_for_browser = ExternalAuthResult(user_profile=user_profile)
+ token_for_browser = result_for_browser.store_data()
+ browser_url = user_profile.realm.uri + reverse('zerver.views.auth.log_into_subdomain', args=[token_for_browser])
context = {'desktop_data': desktop_data,
'browser_url': browser_url,
'realm_icon_url': realm_icon_url(user_profile.realm)}
```
so perhaps we should go with that?
Yeah, but *not* making two one-time tokens is even simpler, so it’s reasonable to ask what use case they’re serving and whether they actually convenience anyone. Wouldn’t someone who wanted to log in with the browser start by opening the browser, not by opening the app and starting a flow that clearly indicates an intent to log in with the app?
Yeah, I think I agree simpler is better. I think redirecting you back to /login in the event the token is expired is a totally reasonable answer.
zulip/zulip | 15,419 | zulip__zulip-15419 | [
"14484"
] | 508ba663dcca51d51786b3d3aa602aa2f5762455 | diff --git a/zerver/lib/transfer.py b/zerver/lib/transfer.py
--- a/zerver/lib/transfer.py
+++ b/zerver/lib/transfer.py
@@ -45,9 +45,8 @@ def _transfer_message_files_to_s3(attachment: Attachment) -> int:
file_path = os.path.join(settings.LOCAL_UPLOADS_DIR, "files", attachment.path_id)
try:
with open(file_path, 'rb') as f:
- bucket_name = settings.S3_AUTH_UPLOADS_BUCKET
guessed_type = guess_type(attachment.file_name)[0]
- upload_image_to_s3(bucket_name, attachment.path_id, guessed_type, attachment.owner, f.read())
+ upload_image_to_s3(s3backend.uploads_bucket, attachment.path_id, guessed_type, attachment.owner, f.read())
logging.info("Uploaded message file in path %s", file_path)
except FileNotFoundError: # nocoverage
pass
diff --git a/zerver/lib/upload.py b/zerver/lib/upload.py
--- a/zerver/lib/upload.py
+++ b/zerver/lib/upload.py
@@ -32,14 +32,7 @@
from zerver.lib.avatar_hash import user_avatar_path
from zerver.lib.exceptions import ErrorCode, JsonableError
from zerver.lib.utils import generate_random_token
-from zerver.models import (
- Attachment,
- Message,
- Realm,
- RealmEmoji,
- UserProfile,
- get_user_profile_by_id,
-)
+from zerver.models import Attachment, Message, Realm, RealmEmoji, UserProfile
DEFAULT_AVATAR_SIZE = 100
MEDIUM_AVATAR_SIZE = 500
@@ -281,14 +274,12 @@ def get_bucket(session: Session, bucket_name: str) -> ServiceResource:
return bucket
def upload_image_to_s3(
- bucket_name: str,
+ # See https://github.com/python/typeshed/issues/2706
+ bucket: ServiceResource,
file_name: str,
content_type: Optional[str],
user_profile: UserProfile,
contents: bytes) -> None:
-
- session = boto3.Session(settings.S3_KEY, settings.S3_SECRET_KEY)
- bucket = get_bucket(session, bucket_name)
key = bucket.Object(file_name)
metadata = {
"user_profile_id": str(user_profile.id),
@@ -341,23 +332,18 @@ def get_signed_upload_url(path: str) -> str:
ExpiresIn=SIGNED_UPLOAD_URL_DURATION,
HttpMethod='GET')
-def get_realm_for_filename(path: str) -> Optional[int]:
- session = boto3.Session(settings.S3_KEY, settings.S3_SECRET_KEY)
- bucket = get_bucket(session, settings.S3_AUTH_UPLOADS_BUCKET)
- key = bucket.Object(path)
-
- try:
- user_profile_id = key.metadata['user_profile_id']
- except botocore.exceptions.ClientError:
- return None
- return get_user_profile_by_id(user_profile_id).realm_id
-
class S3UploadBackend(ZulipUploadBackend):
def __init__(self) -> None:
self.session = boto3.Session(settings.S3_KEY, settings.S3_SECRET_KEY)
- def delete_file_from_s3(self, path_id: str, bucket_name: str) -> bool:
- bucket = get_bucket(self.session, bucket_name)
+ self.avatar_bucket = get_bucket(self.session, settings.S3_AVATAR_BUCKET)
+ network_location = urllib.parse.urlparse(
+ self.avatar_bucket.meta.client.meta.endpoint_url).netloc
+ self.avatar_bucket_url = f"https://{self.avatar_bucket.name}.{network_location}"
+
+ self.uploads_bucket = get_bucket(self.session, settings.S3_AUTH_UPLOADS_BUCKET)
+
+ def delete_file_from_s3(self, path_id: str, bucket: ServiceResource) -> bool:
key = bucket.Object(path_id)
try:
@@ -372,7 +358,6 @@ def delete_file_from_s3(self, path_id: str, bucket_name: str) -> bool:
def upload_message_file(self, uploaded_file_name: str, uploaded_file_size: int,
content_type: Optional[str], file_data: bytes,
user_profile: UserProfile, target_realm: Optional[Realm]=None) -> str:
- bucket_name = settings.S3_AUTH_UPLOADS_BUCKET
if target_realm is None:
target_realm = user_profile.realm
s3_file_name = "/".join([
@@ -383,7 +368,7 @@ def upload_message_file(self, uploaded_file_name: str, uploaded_file_size: int,
url = f"/user_uploads/{s3_file_name}"
upload_image_to_s3(
- bucket_name,
+ self.uploads_bucket,
s3_file_name,
content_type,
user_profile,
@@ -394,14 +379,12 @@ def upload_message_file(self, uploaded_file_name: str, uploaded_file_size: int,
return url
def delete_message_image(self, path_id: str) -> bool:
- return self.delete_file_from_s3(path_id, settings.S3_AUTH_UPLOADS_BUCKET)
+ return self.delete_file_from_s3(path_id, self.uploads_bucket)
def write_avatar_images(self, s3_file_name: str, target_user_profile: UserProfile,
image_data: bytes, content_type: Optional[str]) -> None:
- bucket_name = settings.S3_AVATAR_BUCKET
-
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name + ".original",
content_type,
target_user_profile,
@@ -411,7 +394,7 @@ def write_avatar_images(self, s3_file_name: str, target_user_profile: UserProfil
# custom 500px wide version
resized_medium = resize_avatar(image_data, MEDIUM_AVATAR_SIZE)
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name + "-medium.png",
"image/png",
target_user_profile,
@@ -420,7 +403,7 @@ def write_avatar_images(self, s3_file_name: str, target_user_profile: UserProfil
resized_data = resize_avatar(image_data)
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name,
'image/png',
target_user_profile,
@@ -443,18 +426,15 @@ def upload_avatar_image(self, user_file: File,
def delete_avatar_image(self, user: UserProfile) -> None:
path_id = user_avatar_path(user)
- bucket_name = settings.S3_AVATAR_BUCKET
- self.delete_file_from_s3(path_id + ".original", bucket_name)
- self.delete_file_from_s3(path_id + "-medium.png", bucket_name)
- self.delete_file_from_s3(path_id, bucket_name)
+ self.delete_file_from_s3(path_id + ".original", self.avatar_bucket)
+ self.delete_file_from_s3(path_id + "-medium.png", self.avatar_bucket)
+ self.delete_file_from_s3(path_id, self.avatar_bucket)
def get_avatar_key(self, file_name: str) -> ServiceResource:
# See https://github.com/python/typeshed/issues/2706
# for why this return type is a `ServiceResource`.
- bucket = get_bucket(self.session, settings.S3_AVATAR_BUCKET)
-
- key = bucket.Object(file_name)
+ key = self.avatar_bucket.Object(file_name)
return key
def copy_avatar(self, source_profile: UserProfile, target_profile: UserProfile) -> None:
@@ -468,27 +448,24 @@ def copy_avatar(self, source_profile: UserProfile, target_profile: UserProfile)
self.write_avatar_images(s3_target_file_name, target_profile, image_data, content_type)
def get_avatar_url(self, hash_key: str, medium: bool=False) -> str:
- bucket = settings.S3_AVATAR_BUCKET
medium_suffix = "-medium.png" if medium else ""
# ?x=x allows templates to append additional parameters with &s
- return f"https://{bucket}.s3.amazonaws.com/{hash_key}{medium_suffix}?x=x"
+ return f"{self.avatar_bucket_url}/{hash_key}{medium_suffix}?x=x"
def get_export_tarball_url(self, realm: Realm, export_path: str) -> str:
- bucket = settings.S3_AVATAR_BUCKET
# export_path has a leading /
- return f"https://{bucket}.s3.amazonaws.com{export_path}"
+ return f"{self.avatar_bucket_url}{export_path}"
def realm_avatar_and_logo_path(self, realm: Realm) -> str:
return os.path.join(str(realm.id), 'realm')
def upload_realm_icon_image(self, icon_file: File, user_profile: UserProfile) -> None:
content_type = guess_type(icon_file.name)[0]
- bucket_name = settings.S3_AVATAR_BUCKET
s3_file_name = os.path.join(self.realm_avatar_and_logo_path(user_profile.realm), 'icon')
image_data = icon_file.read()
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name + ".original",
content_type,
user_profile,
@@ -497,7 +474,7 @@ def upload_realm_icon_image(self, icon_file: File, user_profile: UserProfile) ->
resized_data = resize_avatar(image_data)
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name + ".png",
'image/png',
user_profile,
@@ -507,14 +484,12 @@ def upload_realm_icon_image(self, icon_file: File, user_profile: UserProfile) ->
# that users use gravatar.)
def get_realm_icon_url(self, realm_id: int, version: int) -> str:
- bucket = settings.S3_AVATAR_BUCKET
# ?x=x allows templates to append additional parameters with &s
- return f"https://{bucket}.s3.amazonaws.com/{realm_id}/realm/icon.png?version={version}"
+ return f"{self.avatar_bucket_url}/{realm_id}/realm/icon.png?version={version}"
def upload_realm_logo_image(self, logo_file: File, user_profile: UserProfile,
night: bool) -> None:
content_type = guess_type(logo_file.name)[0]
- bucket_name = settings.S3_AVATAR_BUCKET
if night:
basename = 'night_logo'
else:
@@ -523,7 +498,7 @@ def upload_realm_logo_image(self, logo_file: File, user_profile: UserProfile,
image_data = logo_file.read()
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name + ".original",
content_type,
user_profile,
@@ -532,7 +507,7 @@ def upload_realm_logo_image(self, logo_file: File, user_profile: UserProfile,
resized_data = resize_logo(image_data)
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name + ".png",
'image/png',
user_profile,
@@ -542,26 +517,23 @@ def upload_realm_logo_image(self, logo_file: File, user_profile: UserProfile,
# that users use gravatar.)
def get_realm_logo_url(self, realm_id: int, version: int, night: bool) -> str:
- bucket = settings.S3_AVATAR_BUCKET
# ?x=x allows templates to append additional parameters with &s
if not night:
file_name = 'logo.png'
else:
file_name = 'night_logo.png'
- return f"https://{bucket}.s3.amazonaws.com/{realm_id}/realm/{file_name}?version={version}"
+ return f"{self.avatar_bucket_url}/{realm_id}/realm/{file_name}?version={version}"
def ensure_medium_avatar_image(self, user_profile: UserProfile) -> None:
file_path = user_avatar_path(user_profile)
s3_file_name = file_path
- bucket_name = settings.S3_AVATAR_BUCKET
- bucket = get_bucket(self.session, bucket_name)
- key = bucket.Object(file_path + ".original")
+ key = self.avatar_bucket.Object(file_path + ".original")
image_data = key.get()['Body'].read()
resized_medium = resize_avatar(image_data, MEDIUM_AVATAR_SIZE)
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name + "-medium.png",
"image/png",
user_profile,
@@ -574,14 +546,12 @@ def ensure_basic_avatar_image(self, user_profile: UserProfile) -> None: # nocov
# Also TODO: Migrate to user_avatar_path(user_profile) + ".png".
s3_file_name = file_path
- bucket_name = settings.S3_AVATAR_BUCKET
- bucket = get_bucket(self.session, bucket_name)
- key = bucket.Object(file_path + ".original")
+ key = self.avatar_bucket.Object(file_path + ".original")
image_data = key.get()['Body'].read()
resized_avatar = resize_avatar(image_data)
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
s3_file_name,
"image/png",
user_profile,
@@ -591,7 +561,6 @@ def ensure_basic_avatar_image(self, user_profile: UserProfile) -> None: # nocov
def upload_emoji_image(self, emoji_file: File, emoji_file_name: str,
user_profile: UserProfile) -> None:
content_type = guess_type(emoji_file.name)[0]
- bucket_name = settings.S3_AVATAR_BUCKET
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=user_profile.realm_id,
emoji_file_name=emoji_file_name,
@@ -600,14 +569,14 @@ def upload_emoji_image(self, emoji_file: File, emoji_file_name: str,
image_data = emoji_file.read()
resized_image_data = resize_emoji(image_data)
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
".".join((emoji_path, "original")),
content_type,
user_profile,
image_data,
)
upload_image_to_s3(
- bucket_name,
+ self.avatar_bucket,
emoji_path,
content_type,
user_profile,
@@ -615,21 +584,18 @@ def upload_emoji_image(self, emoji_file: File, emoji_file_name: str,
)
def get_emoji_url(self, emoji_file_name: str, realm_id: int) -> str:
- bucket = settings.S3_AVATAR_BUCKET
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(realm_id=realm_id,
emoji_file_name=emoji_file_name)
- return f"https://{bucket}.s3.amazonaws.com/{emoji_path}"
+ return f"{self.avatar_bucket_url}/{emoji_path}"
def upload_export_tarball(self, realm: Optional[Realm], tarball_path: str) -> str:
def percent_callback(bytes_transferred: Any) -> None:
sys.stdout.write('.')
sys.stdout.flush()
- session = boto3.Session(settings.S3_KEY, settings.S3_SECRET_KEY)
# We use the avatar bucket, because it's world-readable.
- bucket = get_bucket(session, settings.S3_AVATAR_BUCKET)
- key = bucket.Object(os.path.join("exports", generate_random_token(32),
- os.path.basename(tarball_path)))
+ key = self.avatar_bucket.Object(os.path.join("exports", generate_random_token(32),
+ os.path.basename(tarball_path)))
key.upload_file(tarball_path, Callback=percent_callback)
@@ -639,7 +605,7 @@ def percent_callback(bytes_transferred: Any) -> None:
public_url = session.create_client('s3', config=config).generate_presigned_url(
'get_object',
Params={
- 'Bucket': bucket.name,
+ 'Bucket': self.avatar_bucket.name,
'Key': key.key,
},
ExpiresIn=0,
@@ -647,7 +613,7 @@ def percent_callback(bytes_transferred: Any) -> None:
return public_url
def delete_export_tarball(self, path_id: str) -> Optional[str]:
- if self.delete_file_from_s3(path_id, settings.S3_AVATAR_BUCKET):
+ if self.delete_file_from_s3(path_id, self.avatar_bucket):
return path_id
return None
| diff --git a/zerver/tests/test_upload.py b/zerver/tests/test_upload.py
--- a/zerver/tests/test_upload.py
+++ b/zerver/tests/test_upload.py
@@ -53,7 +53,6 @@
delete_export_tarball,
delete_message_image,
exif_rotate,
- get_realm_for_filename,
resize_avatar,
resize_emoji,
sanitize_name,
@@ -1771,19 +1770,6 @@ def test_delete_avatar_image(self) -> None:
with self.assertRaises(botocore.exceptions.ClientError):
bucket.Object(avatar_medium_path_id).load()
- @use_s3_backend
- def test_get_realm_for_filename(self) -> None:
- create_s3_buckets(settings.S3_AUTH_UPLOADS_BUCKET)
-
- user_profile = self.example_user('hamlet')
- uri = upload_message_file('dummy.txt', len(b'zulip!'), 'text/plain', b'zulip!', user_profile)
- path_id = re.sub('/user_uploads/', '', uri)
- self.assertEqual(user_profile.realm_id, get_realm_for_filename(path_id))
-
- @use_s3_backend
- def test_get_realm_for_filename_when_key_doesnt_exist(self) -> None:
- self.assertIsNone(get_realm_for_filename('non-existent-file-path'))
-
@use_s3_backend
def test_upload_realm_icon_image(self) -> None:
bucket = create_s3_buckets(settings.S3_AVATAR_BUCKET)[0]
| boto3: Fix support for using non-S3 providers
In #14378, we made the migration from boto to boto3, which generally makes life easier, but it breaks the pattern we'd been using of relying on `self.connection.DefaultHost` to get the "S3" URL for the bucket in use based on the boto configuration. See 7e0ea61b004ed12e653ed085d75c2508882862e2 for context; we'll want to re-apply that with the right boto3 protocol for doing so.
Relatedly, we'll want to figure out what (if anything) should replace `/etc/zulip/boto.cfg`.
Tagging as a release blocker since it's a regression, but it's not so important to block moving #14378 to master as we can fix forward.
We'll also want to look at:
* Updating the discussion in f1f60bc9bb916276ca5643f21971b40cba6a0b0d; I bet with boto3 the signature part is different / can be deleted?
* See if we can remove the block in `zproject/settings.py` related to `/etc/zulip/boto.cfg` that exists as a GCE workaround.
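The eventual fix derives the public bucket URL from whatever endpoint boto3 is configured with, instead of hard-coding `s3.amazonaws.com`; stripped of boto3 itself, that reduces to a small string operation (the endpoint values below are made-up examples):

```python
from urllib.parse import urlparse


def bucket_base_url(bucket_name, endpoint_url):
    """Build a virtual-hosted-style URL for any S3-compatible provider."""
    network_location = urlparse(endpoint_url).netloc
    return f"https://{bucket_name}.{network_location}"
```

In the real patch, `endpoint_url` comes from `bucket.meta.client.meta.endpoint_url`, so non-AWS providers configured via boto3 get correct URLs automatically.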
| Hello @zulip/server-misc members, this issue was labeled with the "area: uploads" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2020-06-17T04:22:51 |
zulip/zulip | 15,683 | zulip__zulip-15683 | [
"14101"
] | dbd1b563628e3f8a951fb3bcf5ec88aef68abfb0 | diff --git a/zerver/lib/events.py b/zerver/lib/events.py
--- a/zerver/lib/events.py
+++ b/zerver/lib/events.py
@@ -672,6 +672,13 @@ def name(sub: Dict[str, Any]) -> str:
# We don't return messages in /register, so we don't need to
# do anything for content updates, but we may need to update
# the unread_msgs data if the topic of an unread message changed.
+ if 'new_stream_id' in event:
+ stream_dict = state['raw_unread_msgs']['stream_dict']
+ stream_id = event['new_stream_id']
+ for message_id in event['message_ids']:
+ if message_id in stream_dict:
+ stream_dict[message_id]['stream_id'] = stream_id
+
if TOPIC_NAME in event:
stream_dict = state['raw_unread_msgs']['stream_dict']
topic = event[TOPIC_NAME]
| diff --git a/zerver/tests/test_events.py b/zerver/tests/test_events.py
--- a/zerver/tests/test_events.py
+++ b/zerver/tests/test_events.py
@@ -479,6 +479,54 @@ def get_checker(check_gravatar: Validator[Optional[str]]) -> Validator[Dict[str,
)
schema_checker('events[0]', events[0])
+ # Verify move topic to different stream.
+ schema_checker = check_events_dict([
+ ('type', equals('update_message')),
+ ('flags', check_list(None)),
+ ('edit_timestamp', check_int),
+ ('message_id', check_int),
+ ('message_ids', check_list(check_int)),
+ (ORIG_TOPIC, check_string),
+ ('propagate_mode', check_string),
+ ('stream_id', check_int),
+ ('new_stream_id', check_int),
+ ('stream_name', check_string),
+ (TOPIC_NAME, check_string),
+ (TOPIC_LINKS, check_list(None)),
+ ('user_id', check_int),
+ ])
+
+ # Send 2 messages in "test" topic.
+ self.send_stream_message(self.user_profile, "Verona")
+ message_id = self.send_stream_message(self.user_profile, "Verona")
+ message = Message.objects.get(id=message_id)
+ topic = 'new_topic'
+ stream = get_stream("Denmark", self.user_profile.realm)
+ propagate_mode = 'change_all'
+ prior_mention_user_ids = set()
+
+ events = self.verify_action(
+ lambda: do_update_message(
+ self.user_profile,
+ message,
+ stream,
+ topic,
+ propagate_mode,
+ True,
+ True,
+ None,
+ None,
+ set(),
+ set(),
+ None),
+ state_change_expected=True,
+ # There are 3 events generated for this action
+ # * update_message: For updating existing messages
+ # * 2 new message events: Breadcrumb messages in the new and old topics.
+ num_events=3,
+ )
+ schema_checker('events[0]', events[0])
+
def test_update_message_flags(self) -> None:
# Test message flag update events
schema_checker = check_events_dict([
| Add zerver/lib/events / test_events support for moving messages between streams
This is a planned follow-up to #13912; I'm opening it now so we don't lose track of it. We should add a case to `test_stream_send_message_events` (or similar, for the message editing piece of it) that tests moving a message between streams using the `propagate` feature, and verifies that our `zerver/lib/events.py` code does the right thing.
I'm pretty sure it doesn't today and might be fairly tricky to fix, and it's a very small race, so I'm OK with merging #13912 without a proper fix for it.
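As a sketch of what the events-handling side needs to do (mirroring the `zerver/lib/events.py` hunk in the patch above; the dict layout follows `raw_unread_msgs['stream_dict']`):

```python
from typing import Any, Dict

def apply_stream_move(state: Dict[str, Any], event: Dict[str, Any]) -> None:
    """Update cached unread-message state after messages move to a new stream.

    Only messages actually present (i.e. unread) in stream_dict get their
    stream_id rewritten; already-read message IDs in the event are ignored.
    """
    if "new_stream_id" not in event:
        return
    stream_dict = state["raw_unread_msgs"]["stream_dict"]
    new_stream_id = event["new_stream_id"]
    for message_id in event["message_ids"]:
        if message_id in stream_dict:
            stream_dict[message_id]["stream_id"] = new_stream_id

state = {"raw_unread_msgs": {"stream_dict": {
    7: {"stream_id": 1, "topic": "test"},
    9: {"stream_id": 1, "topic": "test"},
}}}
event = {"type": "update_message", "new_stream_id": 2, "message_ids": [7, 8, 9]}
apply_stream_move(state, event)
```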
| Hello @zulip/server-api, @zulip/server-message-view members, this issue was labeled with the "area: api", "area: message-editing" labels, so you may want to check it out!
<!-- areaLabelAddition --> | 2020-07-06T07:48:30 |
zulip/zulip | 15,697 | zulip__zulip-15697 | [
"15442"
] | f8d1e0f86a896488573047cba42c0d320bedece9 | diff --git a/zerver/lib/markdown/__init__.py b/zerver/lib/markdown/__init__.py
--- a/zerver/lib/markdown/__init__.py
+++ b/zerver/lib/markdown/__init__.py
@@ -1936,7 +1936,7 @@ def build_inlinepatterns(self) -> markdown.util.Registry:
reg.register(StreamTopicPattern(get_compiled_stream_topic_link_regex(), self), 'topic', 87)
reg.register(StreamPattern(get_compiled_stream_link_regex(), self), 'stream', 85)
reg.register(Avatar(AVATAR_REGEX, self), 'avatar', 80)
- reg.register(Timestamp(r'!time\((?P<time>[^)]*)\)'), 'timestamp', 75)
+ reg.register(Timestamp(r'<time:(?P<time>[^>]*?)>'), 'timestamp', 75)
# Note that !gravatar syntax should be deprecated long term.
reg.register(Avatar(GRAVATAR_REGEX, self), 'gravatar', 70)
reg.register(UserGroupMentionPattern(mention.user_group_mentions, self), 'usergroupmention', 65)
| diff --git a/frontend_tests/node_tests/composebox_typeahead.js b/frontend_tests/node_tests/composebox_typeahead.js
--- a/frontend_tests/node_tests/composebox_typeahead.js
+++ b/frontend_tests/node_tests/composebox_typeahead.js
@@ -1302,14 +1302,14 @@ run_test('begins_typeahead', () => {
assert_typeahead_equals("#**Sweden>totally new topic", sweden_topics_to_show);
// time_jump
- assert_typeahead_equals("!tim", false);
- assert_typeahead_equals("!timerandom", false);
- assert_typeahead_equals("!time", ['translated: Mention a timezone-aware time']);
- assert_typeahead_equals("!time(", ['translated: Mention a timezone-aware time']);
- assert_typeahead_equals("!time(something", ['translated: Mention a timezone-aware time']);
- assert_typeahead_equals("!time(something", ") ", ['translated: Mention a timezone-aware time']);
- assert_typeahead_equals("!time(something)", false);
- assert_typeahead_equals("!time(something) ", false); // Already completed the mention
+ assert_typeahead_equals("<tim", false);
+ assert_typeahead_equals("<timerandom", false);
+ assert_typeahead_equals("<time", ['translated: Mention a timezone-aware time']);
+ assert_typeahead_equals("<time:", ['translated: Mention a timezone-aware time']);
+ assert_typeahead_equals("<time:something", ['translated: Mention a timezone-aware time']);
+ assert_typeahead_equals("<time:something", "> ", ['translated: Mention a timezone-aware time']);
+ assert_typeahead_equals("<time:something>", ['translated: Mention a timezone-aware time']);
+ assert_typeahead_equals("<time:something> ", false); // Already completed the mention
// Following tests place the cursor before the second string
assert_typeahead_equals("#test", "ing", false);
@@ -1317,7 +1317,7 @@ run_test('begins_typeahead', () => {
assert_typeahead_equals(":test", "ing", false);
assert_typeahead_equals("```test", "ing", false);
assert_typeahead_equals("~~~test", "ing", false);
- const terminal_symbols = ',.;?!()[] "\'\n\t';
+ const terminal_symbols = ',.;?!()[]> "\'\n\t';
terminal_symbols.split().forEach((symbol) => {
assert_stream_list("#test", symbol);
assert_typeahead_equals("@test", symbol, all_mentions);
diff --git a/frontend_tests/node_tests/rendered_markdown.js b/frontend_tests/node_tests/rendered_markdown.js
--- a/frontend_tests/node_tests/rendered_markdown.js
+++ b/frontend_tests/node_tests/rendered_markdown.js
@@ -69,6 +69,7 @@ const get_content_element = () => {
$content.set_find_results('a.stream', $array([]));
$content.set_find_results('a.stream-topic', $array([]));
$content.set_find_results('time', $array([]));
+ $content.set_find_results('span.timestamp-error', $array([]));
$content.set_find_results('.emoji', $array([]));
$content.set_find_results('div.spoiler-header', $array([]));
return $content;
@@ -164,6 +165,7 @@ run_test('timestamp', () => {
const $timestamp_invalid = $.create('timestamp(invalid)');
$timestamp_invalid.attr('datetime', 'invalid');
$content.set_find_results('time', $array([$timestamp, $timestamp_invalid]));
+ blueslip.expect('error', 'Moment could not parse datetime supplied by backend: invalid');
// Initial asserts
assert.equal($timestamp.text(), 'never-been-set');
@@ -174,7 +176,23 @@ run_test('timestamp', () => {
// Final asserts
assert.equal($timestamp.text(), 'Thu, Jan 1 1970, 12:00 AM');
assert.equal($timestamp.attr('title'), "This time is in your timezone. Original text was 'never-been-set'.");
- assert.equal($timestamp_invalid.text(), 'translated: Could not parse timestamp.');
+ assert.equal($timestamp_invalid.text(), 'never-been-set');
+});
+
+run_test('timestamp-error', () => {
+ // Setup
+ const $content = get_content_element();
+ const $timestamp_error = $.create('timestamp-error');
+ $timestamp_error.text('Invalid time format: the-time-format');
+ $content.set_find_results('span.timestamp-error', $array([$timestamp_error]));
+
+ // Initial assert
+ assert.equal($timestamp_error.text(), 'Invalid time format: the-time-format');
+
+ rm.update_elements($content);
+
+ // Final assert
+ assert.equal($timestamp_error.text(), 'translated: Invalid time format: the-time-format');
});
run_test('emoji', () => {
@@ -205,7 +223,7 @@ run_test('spoiler-header', () => {
const $header = $.create('div.spoiler-header');
$content.set_find_results('div.spoiler-header', $array([$header]));
- // Test that button gets appened to a spoiler header
+ // Test that the show/hide button gets added to a spoiler header.
const label = 'My Spoiler Header';
const toggle_button_html = '<span class="spoiler-button" aria-expanded="false"><span class="spoiler-arrow"></span></span>';
$header.html(label);
@@ -219,7 +237,7 @@ run_test('spoiler-header-empty-fill', () => {
const $header = $.create('div.spoiler-header');
$content.set_find_results('div.spoiler-header', $array([$header]));
- // Test that an empty header gets the default text applied (through i18n filter)
+ // Test that an empty header gets the default text applied (through i18n filter).
const toggle_button_html = '<span class="spoiler-button" aria-expanded="false"><span class="spoiler-arrow"></span></span>';
$header.html('');
rm.update_elements($content);
diff --git a/frontend_tests/node_tests/timerender.js b/frontend_tests/node_tests/timerender.js
--- a/frontend_tests/node_tests/timerender.js
+++ b/frontend_tests/node_tests/timerender.js
@@ -2,6 +2,7 @@ set_global('$', global.make_zjquery());
set_global('page_params', {
twenty_four_hour_time: true,
});
+set_global('moment', require('moment-timezone'));
set_global('XDate', zrequire('XDate', 'xdate'));
zrequire('timerender');
@@ -130,6 +131,16 @@ run_test('get_full_time', () => {
assert.equal(expected, actual);
});
+run_test('get_timestamp_for_flatpickr', () => {
+ const unix_timestamp = 1495091573000; // 5/18/2017 7:12:53 AM (UTC+0)
+ const iso_timestamp = '2017-05-18T07:12:53Z'; // ISO 8601 date format
+ const func = timerender.get_timestamp_for_flatpickr;
+ // Invalid timestamps should show current time.
+ assert.equal(func("random str").valueOf(), moment().valueOf());
+ // Valid ISO timestamps should return Date objects.
+ assert.equal(func(iso_timestamp).valueOf(), moment(unix_timestamp).valueOf());
+});
+
run_test('absolute_time_12_hour', () => {
set_global('page_params', {
twenty_four_hour_time: false,
diff --git a/zerver/tests/fixtures/markdown_test_cases.json b/zerver/tests/fixtures/markdown_test_cases.json
--- a/zerver/tests/fixtures/markdown_test_cases.json
+++ b/zerver/tests/fixtures/markdown_test_cases.json
@@ -748,34 +748,36 @@
},
{
"name": "timestamp_backend_markdown_only",
- "input": "!time(Jun 5th 2017, 10:30PM)",
+ "input": "<time:Jun 5th 2017, 10:30PM>",
"expected_output": "<p><time datetime=\"2017-06-05T22:30:00Z\">Jun 5th 2017, 10:30PM</time></p>",
- "marked_expected_output": "<p><span class=\"timestamp-error\">Invalid time format: Jun 5th 2017, 10:30PM</span></p>"
+ "marked_expected_output": "<p><span>Jun 5th 2017, 10:30PM</span></p>"
},
{
"name": "timestamp_backend_markdown_and_marked",
- "input": "!time(31 Dec 2017)",
+ "input": "<time:31 Dec 2017>",
"expected_output": "<p><time datetime=\"2017-12-31T00:00:00Z\">31 Dec 2017</time></p>"
},
{
"name": "timestamp_invalid_input",
- "input": "!time(<alert(1)>)",
- "expected_output": "<p><span class=\"timestamp-error\">Invalid time format: <alert(1</span>>)</p>"
+ "input": "<time:<alert(1)>>",
+ "expected_output": "<p><span class=\"timestamp-error\">Invalid time format: <alert(1)</span>></p>",
+ "marked_expected_output": "<p><span><alert(1)</span>></p>"
},
{
"name": "timestamp_timezone",
- "input": "!time(31 Dec 2017 5:30 am IST)",
+ "input": "<time:31 Dec 2017 5:30 am IST>",
"expected_output": "<p><time datetime=\"2017-12-31T00:00:00Z\">31 Dec 2017 5:30 am IST</time></p>",
- "marked_expected_output": "<p><span class=\"timestamp-error\">Invalid time format: 31 Dec 2017 5:30 am IST</span></p>"
+ "marked_expected_output": "<p><span>31 Dec 2017 5:30 am IST</span></p>"
},
{
"name": "timestamp_incorrect",
- "input": "!time(**hello world**)",
- "expected_output": "<p><span class=\"timestamp-error\">Invalid time format: **hello world**</span></p>"
+ "input": "<time:**hello world**>",
+ "expected_output": "<p><span class=\"timestamp-error\">Invalid time format: **hello world**</span></p>",
+ "marked_expected_output": "<p><span>**hello world**</span></p>"
},
{
"name": "timestamp_unix",
- "input": "!time(1496701800)",
+ "input": "<time:1496701800>",
"expected_output": "<p><time datetime=\"2017-06-05T22:30:00Z\">1496701800</time></p>"
},
{
| markdown: change syntax from !time(...) to <time:...>
Suggested by @andersk [on CZO](https://chat.zulip.org/#narrow/stream/101-design/topic/time.20mentions/near/906857).
It's better to build on established syntax constructs instead of inventing our own. With Markdown you can usually format links with `<http://example.com>`; since we already support rich links for Twitter, it makes sense to expand on the *rich link* metaphor.
This change should probably be made after #15431 to avoid conflicts.
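For reference, the backend-side change is essentially just swapping the inline pattern's regex; the new one (taken from the patch) can be exercised standalone:

```python
import re

# Regex from the backend inline pattern registration for <time:...>;
# parsing/validating the captured time itself happens separately.
TIMESTAMP_RE = re.compile(r"<time:(?P<time>[^>]*?)>")

old_syntax = TIMESTAMP_RE.search("!time(31 Dec 2017)")
new_syntax = TIMESTAMP_RE.search("Meet at <time:2017-12-31T00:00:00Z>!")
captured = new_syntax.group("time") if new_syntax else None
```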
| These changes can be done simultaneously as well, and should involve minimal changes to the code anyway. We just need to decide on a syntax.
Hello @zulip/server-markdown members, this issue was labeled with the "area: markdown" label, so you may want to check it out!
<!-- areaLabelAddition -->
I'm a little skeptical of this change concept, mainly from thinking about the spoiler syntax; I wouldn't want to ask folks to do `<
Another direction we could go is `@**time:isotime**`; that would be consistent with all of our existing mentions.
I don’t see any world in which this is related to spoiler syntax. Spoilers are blocks and contain markdown for human consumption. Times are inline and contain a specific machine-parsed format.
Yeah, that's fair.
I think the other big question is whether we want the typeahead PR: https://github.com/zulip/zulip/pull/15052 as well as a button, or just one or the other.
We definitely want typeahead because it's very convenient, especially if we also trigger it when entering something that looks like a date, e.g. `2020-` (we would need to take care to jump directly to the entered year/month/day in the datetime picker).
A button would also be a good idea to make the feature more discoverable, since users unaware of it usually won't communicate dates in a form for which we could reasonably trigger the typeahead (e.g. `Saturday 3 PM`, `tomorrow 10h`).
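The "looks like a date" trigger could be as simple as a prefix regex; this is purely an illustrative sketch (the real trigger would live in the frontend typeahead code, and the exact pattern is an assumption):

```python
import re

# Sketch: a prefix like "2020-" or "2020-07-1" looks like an ISO date in
# progress and could reasonably open a date-picker typeahead.
DATE_PREFIX_RE = re.compile(r"\b\d{4}-(?:\d{1,2}(?:-\d{0,2})?)?$")

looks_like_date = [bool(DATE_PREFIX_RE.search(s)) for s in
                   ["2020-", "2020-07-1", "call at 2020-07", "hello", "123-"]]
```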
| 2020-07-06T20:11:45 |
zulip/zulip | 15,713 | zulip__zulip-15713 | [
"26249"
] | 13311ef91d20bbcabbb5715154c652bc0a852c5d | diff --git a/zerver/views/documentation.py b/zerver/views/documentation.py
--- a/zerver/views/documentation.py
+++ b/zerver/views/documentation.py
@@ -9,6 +9,9 @@
from django.http import HttpRequest, HttpResponse, HttpResponseNotFound
from django.template import loader
from django.views.generic import TemplateView
+from lxml import html
+from lxml.etree import Element, SubElement, XPath, _Element
+from markupsafe import Markup
from zerver.context_processors import zulip_default_context
from zerver.decorator import add_google_analytics_context
@@ -65,6 +68,9 @@ def get_context_data(self, **kwargs: Any) -> Dict[str, str]:
return context
+sidebar_links = XPath("//a[@href=$url]")
+
+
class MarkdownDirectoryView(ApiURLView):
path_template = ""
policies_view = False
@@ -215,7 +221,6 @@ def get_context_data(self, **kwargs: Any) -> Dict[str, Any]:
)
context["PAGE_DESCRIPTION"] = request_notes.placeholder_open_graph_description
- context["sidebar_index"] = sidebar_index
# An "article" might require the api_url_context to be rendered
api_url_context: Dict[str, Any] = {}
add_api_url_context(api_url_context, self.request)
@@ -223,6 +228,28 @@ def get_context_data(self, **kwargs: Any) -> Dict[str, Any]:
context["api_url_context"] = api_url_context
if endpoint_name and endpoint_method:
context["api_url_context"]["API_ENDPOINT_NAME"] = endpoint_name + ":" + endpoint_method
+
+ sidebar_html = render_markdown_path(sidebar_index)
+ tree = html.fragment_fromstring(sidebar_html, create_parent=True)
+ if not context.get("page_is_policy_center", False):
+ home_h1 = Element("h1")
+ home_link = SubElement(home_h1, "a")
+ home_link.attrib["class"] = "no-underline"
+ home_link.attrib["href"] = context["doc_root"]
+ home_link.text = context["doc_root_title"] + " home"
+ tree.insert(0, home_h1)
+ url = context["doc_root"] + article
+ # Highlight current article link
+ links = sidebar_links(tree, url=url)
+ assert isinstance(links, list)
+ for a in links:
+ assert isinstance(a, _Element)
+ old_class = a.attrib.get("class", "")
+ assert isinstance(old_class, str)
+ a.attrib["class"] = old_class + " highlighted"
+ sidebar_html = "".join(html.tostring(child, encoding="unicode") for child in tree)
+ context["sidebar_html"] = Markup(sidebar_html)
+
add_google_analytics_context(context)
return context
| scrollToHash can raise an exception if the browser does not support it
If the browser is Safari (or Google's webcrawler) and the page was loaded with an anchor of the form `#:~:text=something`, then calling `scrollToHash` will raise `Error: Syntax error, unrecognized expression` here:
https://github.com/zulip/zulip/blob/1676d0b638d91680badc474190a08a52c8dc7a36/web/src/portico/help.js#L112
We should catch it and do nothing, in that case.
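The guard itself is tiny; here is a Python sketch of the check (the actual fix belongs in `web/src/portico/help.js`, so treat the function below purely as an illustration of the idea):

```python
def scroll_target(hash_fragment: str):
    # Text-fragment directives like "#:~:text=foo" are browser features,
    # not element ids; passing one to a selector API is what raised
    # "Syntax error, unrecognized expression", so just bail out on them.
    if hash_fragment.startswith("#:~:"):
        return None
    return hash_fragment.lstrip("#")

ignored = scroll_target("#:~:text=something")
normal = scroll_target("#configure-notifications")
```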
| 2020-07-07T23:00:00 |
||
zulip/zulip | 15,746 | zulip__zulip-15746 | [
"12868"
] | 57d3ef42b890e99b9a726de2cd82a1113443976d | diff --git a/scripts/lib/hash_reqs.py b/scripts/lib/hash_reqs.py
--- a/scripts/lib/hash_reqs.py
+++ b/scripts/lib/hash_reqs.py
@@ -2,29 +2,20 @@
import argparse
import hashlib
import os
+import subprocess
import sys
-from typing import Iterable, List, MutableSet
+from typing import Iterable, List
-def expand_reqs_helper(fpath: str, visited: MutableSet[str]) -> List[str]:
- if fpath in visited:
- return []
- else:
- visited.add(fpath)
-
- curr_dir = os.path.dirname(fpath)
+def expand_reqs_helper(fpath: str) -> List[str]:
result = [] # type: List[str]
for line in open(fpath):
- if line.startswith('#'):
+ if line.strip().startswith(('#', '--hash')):
continue
- dep = line.split(" #", 1)[0].strip() # remove comments and strip whitespace
+ dep = line.split(" \\", 1)[0].strip()
if dep:
- if dep.startswith('-r'):
- child = os.path.join(curr_dir, dep[3:])
- result += expand_reqs_helper(child, visited)
- else:
- result.append(dep)
+ result.append(dep)
return result
def expand_reqs(fpath: str) -> List[str]:
@@ -34,11 +25,17 @@ def expand_reqs(fpath: str) -> List[str]:
`fpath` can be either an absolute path or a relative path.
"""
absfpath = os.path.abspath(fpath)
- output = expand_reqs_helper(absfpath, set())
+ output = expand_reqs_helper(absfpath)
return sorted(set(output))
+def python_version() -> str:
+ """
+ Returns the Python version as string 'Python major.minor.patchlevel'
+ """
+ return subprocess.check_output(["/usr/bin/python3", "-VV"], universal_newlines=True)
+
def hash_deps(deps: Iterable[str]) -> str:
- deps_str = "\n".join(deps) + "\n"
+ deps_str = "\n".join(deps) + "\n" + python_version()
return hashlib.sha1(deps_str.encode('utf-8')).hexdigest()
def main() -> int:
diff --git a/scripts/lib/setup_venv.py b/scripts/lib/setup_venv.py
--- a/scripts/lib/setup_venv.py
+++ b/scripts/lib/setup_venv.py
@@ -4,7 +4,7 @@
import subprocess
from typing import List, Optional, Set, Tuple
-from scripts.lib.hash_reqs import expand_reqs
+from scripts.lib.hash_reqs import expand_reqs, python_version
from scripts.lib.zulip_tools import ENDC, WARNING, os_families, run, run_as_root
ZULIP_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
@@ -163,6 +163,7 @@ def try_to_copy_venv(venv_path: str, new_packages: Set[str]) -> bool:
if not os.path.exists(VENV_CACHE_PATH):
return False
+ desired_python_version = python_version()
venv_name = os.path.basename(venv_path)
overlaps = [] # type: List[Tuple[int, str, Set[str]]]
@@ -173,6 +174,15 @@ def try_to_copy_venv(venv_path: str, new_packages: Set[str]) -> bool:
not os.path.exists(get_index_filename(curr_venv_path))):
continue
+ # Check the Python version in the venv matches the version we want to use.
+ venv_python3 = os.path.join(curr_venv_path, "bin", "python3")
+ if not os.path.exists(venv_python3):
+ continue
+ venv_python_version = subprocess.check_output([
+ venv_python3, "-VV"], universal_newlines=True)
+ if desired_python_version != venv_python_version:
+ continue
+
old_packages = get_venv_packages(curr_venv_path)
# We only consider using using old virtualenvs that only
# contain packages that we want in our new virtualenv.
@@ -262,16 +272,19 @@ def do_patch_activate_script(venv_path: str) -> None:
with open(script_path, 'w') as f:
f.write("".join(lines))
+def generate_hash(requirements_file: str) -> str:
+ path = os.path.join(ZULIP_PATH, 'scripts', 'lib', 'hash_reqs.py')
+ output = subprocess.check_output([path, requirements_file], universal_newlines=True)
+ return output.split()[0]
+
def setup_virtualenv(
target_venv_path: Optional[str],
requirements_file: str,
patch_activate_script: bool = False,
) -> str:
+ sha1sum = generate_hash(requirements_file)
# Check if a cached version already exists
- path = os.path.join(ZULIP_PATH, 'scripts', 'lib', 'hash_reqs.py')
- output = subprocess.check_output([path, requirements_file], universal_newlines=True)
- sha1sum = output.split()[0]
if target_venv_path is None:
cached_venv_path = os.path.join(VENV_CACHE_PATH, sha1sum, 'venv')
else:
| diff --git a/tools/tests/test_hash_reqs.py b/tools/tests/test_hash_reqs.py
new file mode 100644
--- /dev/null
+++ b/tools/tests/test_hash_reqs.py
@@ -0,0 +1,25 @@
+import unittest
+
+import mock
+
+from scripts.lib.hash_reqs import expand_reqs, hash_deps
+from tools.setup.setup_venvs import DEV_REQS_FILE
+
+
+class TestHashCreation(unittest.TestCase):
+
+ def test_diff_hash_for_diff_python_version(self) -> None:
+ with mock.patch('scripts.lib.hash_reqs.python_version', return_value='Python 3.6.9'):
+ deps = expand_reqs(DEV_REQS_FILE)
+ hash1 = hash_deps(deps)
+
+ with mock.patch('scripts.lib.hash_reqs.python_version', return_value='Python 3.6.9'):
+ deps = expand_reqs(DEV_REQS_FILE)
+ hash2 = hash_deps(deps)
+
+ with mock.patch('scripts.lib.hash_reqs.python_version', return_value='Python 3.8.2'):
+ deps = expand_reqs(DEV_REQS_FILE)
+ hash3 = hash_deps(deps)
+
+ assert hash1 == hash2
+ assert hash1 != hash3
| Improve virtualenv-clone hashing logic
Our `virtualenv` management logic is designed to have a few properties:
* After running `provision` or the equivalent production operation, you always have a correct virtualenv as though you'd built it fresh
* We avoid the performance pain of redownloading and/or rebuilding all of our packages fresh for the common operations of "adding a new package" or "upgrading a minor version of a package", both in the development environment and in production.
* https://zulip.readthedocs.io/en/latest/subsystems/dependencies.html is a useful reference.
There are a few issues we need to fix with the current implementation:
* We don't currently compare the `Python` version when deciding whether we need to build a new virtualenv from scratch.
* Our index of packages included in the virtualenv doesn't have package versions; we should include those and only allow package upgrades to use the `virtualenv-clone` plus cache approach (downgrades are often not tested upstream).
We should be able to test whether the implementation is working manually through inspection of `/srv/zulip-venv-cache/` before and after adding a new python dependency (see the docs for how to do this using `update-locked-requirements`), downgrading in `dev.txt`, and upgrading/downgrading the Python version itself. We don't currently have unit tests of any this logic, though it wouldn't be a bad idea to write some simple tests of the logic functions from `scripts/lib/setup_venv.py` in `tools/tests/` somewhere (`tools/test-tools` is the runner for those).
@andersk FYI; we might want to fix this before we merge https://github.com/zulip/zulip/pull/12837.
| Hello @zulip/server-production, @zulip/server-tooling members, this issue was labeled with the "area: production", "area: tooling" labels, so you may want to check it out!
<!-- areaLabelAddition --> | 2020-07-10T05:55:17 |
zulip/zulip | 15,773 | zulip__zulip-15773 | [
"15518"
] | 46c966576d65eaec0e9a185d62a2082060b129dc | diff --git a/zerver/lib/markdown/__init__.py b/zerver/lib/markdown/__init__.py
--- a/zerver/lib/markdown/__init__.py
+++ b/zerver/lib/markdown/__init__.py
@@ -94,6 +94,12 @@ class FullNameInfo(TypedDict):
email: str
full_name: str
+class LinkInfo(TypedDict):
+ parent: Element
+ title: Optional[str]
+ index: Optional[int]
+ remove: Optional[Element]
+
DbData = Dict[str, Any]
# Format version of the markdown rendering; stored along with rendered
@@ -988,25 +994,31 @@ def get_url_data(self, e: Element) -> Optional[Tuple[str, Optional[str]]]:
return (url, e.text)
return None
- def handle_image_inlining(
+ def get_inlining_information(
self,
root: Element,
found_url: ResultWithFamily[Tuple[str, Optional[str]]],
- ) -> None:
+ ) -> LinkInfo:
+
grandparent = found_url.family.grandparent
parent = found_url.family.parent
ahref_element = found_url.family.child
(url, text) = found_url.result
- actual_url = self.get_actual_image_url(url)
# url != text usually implies a named link, which we opt not to remove
url_eq_text = text is None or url == text
title = None if url_eq_text else text
+ info: LinkInfo = {
+ 'parent': root,
+ 'title': title,
+ 'index': None,
+ 'remove': None,
+ }
if parent.tag == 'li':
- self.add_a(parent, self.get_actual_image_url(url), url, title=title)
+ info['parent'] = parent
if not parent.text and not ahref_element.tail and url_eq_text:
- parent.remove(ahref_element)
+ info['remove'] = ahref_element
elif parent.tag == 'p':
assert grandparent is not None
@@ -1016,25 +1028,50 @@ def handle_image_inlining(
parent_index = index
break
- if parent_index is not None:
- ins_index = self.find_proper_insertion_index(grandparent, parent, parent_index)
- self.add_a(grandparent, actual_url, url, title=title, insertion_index=ins_index)
+ # Append to end of list of grandparent's children as normal
+ info['parent'] = grandparent
- else:
- # We're not inserting after parent, since parent not found.
- # Append to end of list of grandparent's children as normal
- self.add_a(grandparent, actual_url, url, title=title)
-
- # If link is alone in a paragraph, delete paragraph containing it
if (len(parent) == 1 and
(not parent.text or parent.text == "\n") and
not ahref_element.tail and
url_eq_text):
- grandparent.remove(parent)
+ info['remove'] = parent
+
+ if parent_index is not None:
+ info['index'] = self.find_proper_insertion_index(grandparent, parent, parent_index)
+ return info
+
+ def handle_image_inlining(
+ self,
+ root: Element,
+ found_url: ResultWithFamily[Tuple[str, Optional[str]]],
+ ) -> None:
+ info = self.get_inlining_information(root, found_url)
+ (url, text) = found_url.result
+ actual_url = self.get_actual_image_url(url)
+ self.add_a(info['parent'], actual_url, url, title=info['title'], insertion_index=info['index'])
+ if info['remove'] is not None:
+ info['parent'].remove(info['remove'])
+
+ def handle_tweet_inlining(
+ self,
+ root: Element,
+ found_url: ResultWithFamily[Tuple[str, Optional[str]]],
+ twitter_data: Element,
+ ) -> None:
+ info = self.get_inlining_information(root, found_url)
+
+ if info['index'] is not None:
+ div = Element("div")
+ root.insert(info['index'], div)
else:
- # If none of the above criteria match, fall back to old behavior
- self.add_a(root, actual_url, url, title=title)
+ div = SubElement(root, "div")
+
+ div.set("class", "inline-preview-twitter")
+ div.insert(0, twitter_data)
+ if info['remove'] is not None:
+ info['parent'].remove(info['remove'])
def find_proper_insertion_index(self, grandparent: Element, parent: Element,
parent_index_in_grandparent: int) -> int:
@@ -1051,7 +1088,7 @@ def find_proper_insertion_index(self, grandparent: Element, parent: Element,
return insertion_index
uncle = grandparent[insertion_index]
- inline_image_classes = ['message_inline_image', 'message_inline_ref']
+ inline_image_classes = ['message_inline_image', 'message_inline_ref', 'inline-preview-twitter']
if (
uncle.tag != 'div' or
'class' not in uncle.keys() or
@@ -1155,9 +1192,7 @@ def run(self, root: Element) -> None:
# This link is not actually a tweet known to twitter
continue
rendered_tweet_count += 1
- div = SubElement(root, "div")
- div.set("class", "inline-preview-twitter")
- div.insert(0, twitter_data)
+ self.handle_tweet_inlining(root, found_url, twitter_data)
continue
youtube = self.youtube_image(url)
if youtube is not None:
| diff --git a/zerver/tests/test_markdown.py b/zerver/tests/test_markdown.py
--- a/zerver/tests/test_markdown.py
+++ b/zerver/tests/test_markdown.py
@@ -790,21 +790,21 @@ def make_inline_twitter_preview(url: str, tweet_html: str, image_html: str='') -
converted = markdown_convert_wrapper(msg)
self.assertEqual(converted, '<p>{}</p>'.format(make_link('http://www.twitter.com/wdaher/status/999999999999999999')))
- msg = 'http://www.twitter.com/wdaher/status/287977969287315456'
+ msg = 'Tweet: http://www.twitter.com/wdaher/status/287977969287315456'
converted = markdown_convert_wrapper(msg)
- self.assertEqual(converted, '<p>{}</p>\n{}'.format(
+ self.assertEqual(converted, '<p>Tweet: {}</p>\n{}'.format(
make_link('http://www.twitter.com/wdaher/status/287977969287315456'),
make_inline_twitter_preview('http://www.twitter.com/wdaher/status/287977969287315456', normal_tweet_html)))
- msg = 'https://www.twitter.com/wdaher/status/287977969287315456'
+ msg = 'Tweet: https://www.twitter.com/wdaher/status/287977969287315456'
converted = markdown_convert_wrapper(msg)
- self.assertEqual(converted, '<p>{}</p>\n{}'.format(
+ self.assertEqual(converted, '<p>Tweet: {}</p>\n{}'.format(
make_link('https://www.twitter.com/wdaher/status/287977969287315456'),
make_inline_twitter_preview('https://www.twitter.com/wdaher/status/287977969287315456', normal_tweet_html)))
- msg = 'http://twitter.com/wdaher/status/287977969287315456'
+ msg = 'Tweet: http://twitter.com/wdaher/status/287977969287315456'
converted = markdown_convert_wrapper(msg)
- self.assertEqual(converted, '<p>{}</p>\n{}'.format(
+ self.assertEqual(converted, '<p>Tweet: {}</p>\n{}'.format(
make_link('http://twitter.com/wdaher/status/287977969287315456'),
make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315456', normal_tweet_html)))
@@ -837,19 +837,29 @@ def make_inline_twitter_preview(url: str, tweet_html: str, image_html: str='') -
make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315457', normal_tweet_html),
make_inline_twitter_preview('https://twitter.com/wdaher/status/287977969287315456', normal_tweet_html)))
+ # Test smart in-place inlining behavior:
+ msg = ('Paragraph 1: http://twitter.com/wdaher/status/287977969287315456\n\n'
+ 'Paragraph 2. Below paragraph will be removed.\n\n'
+ 'http://twitter.com/wdaher/status/287977969287315457')
+ converted = markdown_convert_wrapper(msg)
+ self.assertEqual(converted, '<p>Paragraph 1: {}</p>\n{}<p>Paragraph 2. Below paragraph will be removed.</p>\n{}'.format(
+ make_link('http://twitter.com/wdaher/status/287977969287315456'),
+ make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315456', normal_tweet_html),
+ make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315457', normal_tweet_html)))
+
# Tweet has a mention in a URL, only the URL is linked
- msg = 'http://twitter.com/wdaher/status/287977969287315458'
+ msg = 'Tweet: http://twitter.com/wdaher/status/287977969287315458'
converted = markdown_convert_wrapper(msg)
- self.assertEqual(converted, '<p>{}</p>\n{}'.format(
+ self.assertEqual(converted, '<p>Tweet: {}</p>\n{}'.format(
make_link('http://twitter.com/wdaher/status/287977969287315458'),
make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315458', mention_in_link_tweet_html)))
# Tweet with an image
- msg = 'http://twitter.com/wdaher/status/287977969287315459'
+ msg = 'Tweet: http://twitter.com/wdaher/status/287977969287315459'
converted = markdown_convert_wrapper(msg)
- self.assertEqual(converted, '<p>{}</p>\n{}'.format(
+ self.assertEqual(converted, '<p>Tweet: {}</p>\n{}'.format(
make_link('http://twitter.com/wdaher/status/287977969287315459'),
make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315459',
media_tweet_html,
@@ -859,12 +869,21 @@ def make_inline_twitter_preview(url: str, tweet_html: str, image_html: str='') -
'</a>'
'</div>'))))
- msg = 'http://twitter.com/wdaher/status/287977969287315460'
+ msg = 'Tweet: http://twitter.com/wdaher/status/287977969287315460'
converted = markdown_convert_wrapper(msg)
- self.assertEqual(converted, '<p>{}</p>\n{}'.format(
+ self.assertEqual(converted, '<p>Tweet: {}</p>\n{}'.format(
make_link('http://twitter.com/wdaher/status/287977969287315460'),
make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315460', emoji_in_tweet_html)))
+ # Test twitter previews in spoiler tags.
+ msg = '```spoiler secret tweet\nTweet: http://twitter.com/wdaher/status/287977969287315456\n```'
+ converted = markdown_convert_wrapper(msg)
+
+ rendered_spoiler = "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p>secret tweet</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p>Tweet: {}</p>\n{}</div></div>"
+ self.assertEqual(converted, rendered_spoiler.format(
+ make_link('http://twitter.com/wdaher/status/287977969287315456'),
+ make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315456', normal_tweet_html)))
+
def test_fetch_tweet_data_settings_validation(self) -> None:
with self.settings(TEST_SUITE=False, TWITTER_CONSUMER_KEY=None):
self.assertIs(None, fetch_tweet_data('287977969287315459'))
| Twitter preview embeds show outside of spoiler blocks
When a tweet is linked inside a spoiler block, the embedded preview for the tweet shows up outside of the spoiler block (at the end of the post, as is typical for non-spoilered links to tweets). The preview should probably show up inside the spoiler block (perhaps at the end of the block's contents, if there are concerns about tweet previews breaking the flow of a post).
(Previews for uploaded images appear to respect spoiler blocks.)
| I agree. This may be somewhat messy to change because of how tweets are rendered in a post-processing step; I'm not sure. Ideally we'd just append them as the last element of the parent spoiler block (if any).
@Dylnuge @opheliasdaisies FYI.
Hello @zulip/server-markdown members, this issue was labeled with the "area: markdown" label, so you may want to check it out!
<!-- areaLabelAddition -->
If anyone else takes this on, @timabbott's comment on the PR might be useful: https://github.com/zulip/zulip/pull/15281#discussion_r437864539
I think fixing this correctly involves changing the logic around `handle_image_inlining` to set the root element for where we add image previews to be the end of the spoiler tag, not the top element. I haven't read the code closely enough to say how complex that will be.
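A hedged sketch of that approach, using a stand-in tree rather than Zulip's real markdown processor (all names here are illustrative): walk up from the link element that produced the preview and, if some ancestor is a spoiler-content container, make that the insertion root instead of the top element.

```python
# Illustrative sketch only -- not Zulip's actual handle_image_inlining code.
# Given the element for the link being previewed, pick where the preview
# should be appended: the enclosing spoiler-content div if there is one,
# otherwise the message root.
import xml.etree.ElementTree as ET

def insertion_root(root: ET.Element, link: ET.Element) -> ET.Element:
    # ElementTree has no parent pointers, so build a child -> parent map.
    parent_map = {child: parent for parent in root.iter() for child in parent}
    node = link
    while node in parent_map:
        node = parent_map[node]
        if node.get("class") == "spoiler-content":
            return node
    return root

spoilered = ET.fromstring(
    '<div><div class="spoiler-content"><p><a href="t">tweet</a></p></div></div>'
)
link = spoilered.find(".//a")
assert insertion_root(spoilered, link).get("class") == "spoiler-content"
```

With the root chosen this way, appending the rendered tweet preview would keep it inside the spoiler block rather than at the end of the message.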
@zulipbot claim
Hello @aero31aero, you claimed this issue to work on it, but this issue and any referenced pull requests haven't been updated for 10 days. Are you still working on this issue?
If so, please update this issue by leaving a comment on this issue to let me know that you're still working on it. Otherwise, I'll automatically remove you from this issue in 4 days.
If you've decided to work on something else, simply comment `@zulipbot abandon` so that someone else can claim it and continue from where you left off.
Thank you for your valuable contributions to Zulip!
<!-- inactiveWarning --> | 2020-07-13T17:04:21 |
zulip/zulip | 15,887 | zulip__zulip-15887 | [
"15866"
] | 19b1ef62d244e8db911f6fc3a706bd07d1f7b7ec | diff --git a/zerver/views/reactions.py b/zerver/views/reactions.py
--- a/zerver/views/reactions.py
+++ b/zerver/views/reactions.py
@@ -73,7 +73,7 @@ def add_reaction(request: HttpRequest, user_profile: UserProfile, message_id: in
# Otherwise, use the name provided in this request, but verify
# it is valid in the user's realm (e.g. not a deactivated
# realm emoji).
- check_emoji_request(message.sender.realm, emoji_name,
+ check_emoji_request(user_profile.realm, emoji_name,
emoji_code, reaction_type)
if user_message is None:
| Cannot react with a custom emoji to a bot message
Attempting to react to a bot message with a custom emoji appears to work, but if you refresh the page, the reaction is gone. Inspecting the network requests reveals that the request to add the reaction fails with a 400 Bad Request error `Reaction doesn't exist.`.
This can be easily reproduced with the Notification Bot. Note that the `zulip` reaction works since it's not really a custom emoji, but you can reproduce the problem with any other custom emoji.
| Hello @zulip/server-emoji members, this issue was labeled with the "area: emoji" label, so you may want to check it out!
<!-- areaLabelAddition -->
I think the key detail here is not that it's a bot message, but that it's a Notification bot message (i.e. we're dealing with a cross-realm user). We're incorrectly using `message.sender.realm` here rather than `user_profile.realm`, I think:
```
check_emoji_request(message.sender.realm, emoji_name,
emoji_code, reaction_type)
```
Longer-term, we will probably want to denormalize `Message.realm` and use that for queries like this.
@mateuszmandera would you be interested in working on that denormalization effort? I think it could work very similarly to the Recipient project you did a couple months back.
Tagging as a priority since this is a correctness issue.
@timabbott So should a temporary fix be filed or wait for `message.realm` to be denormalized?
There's a quick temporary fix we can do involving `user_profile.realm`, as mentioned above. I imagine that should close this issue and we'll make a separate one for denormalization of `realm`. | 2020-07-23T05:21:01 |
|
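The one-line fix discussed above can be illustrated with a toy model (the realm names and emoji registry below are made up for illustration, not Zulip's real data model):

```python
# Custom emoji are defined per realm; cross-realm bots such as Notification
# Bot live in a separate internal realm with no custom emoji of its own.
realm_emoji = {
    "dev": {"party_parrot"},   # the reacting user's realm
    "zulipinternal": set(),    # hypothetical internal realm for cross-realm bots
}

def check_emoji_request(realm: str, emoji_name: str) -> None:
    if emoji_name not in realm_emoji[realm]:
        raise ValueError("Reaction doesn't exist.")

# Old behavior: validate against message.sender.realm -> 400 for bot messages.
try:
    check_emoji_request("zulipinternal", "party_parrot")
except ValueError as e:
    print(e)  # Reaction doesn't exist.

# Fixed behavior: validate against user_profile.realm -> succeeds.
check_emoji_request("dev", "party_parrot")
```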
zulip/zulip | 15,923 | zulip__zulip-15923 | [
"15904"
] | 5b0b1efb155ac5692f1fb9059efe75ab8c0df12e | diff --git a/zproject/backends.py b/zproject/backends.py
--- a/zproject/backends.py
+++ b/zproject/backends.py
@@ -38,6 +38,7 @@
from lxml.etree import XMLSyntaxError
from onelogin.saml2.errors import OneLogin_Saml2_Error
from onelogin.saml2.response import OneLogin_Saml2_Response
+from onelogin.saml2.settings import OneLogin_Saml2_Settings
from requests import HTTPError
from social_core.backends.apple import AppleIdAuth
from social_core.backends.azuread import AzureADOAuth2
@@ -1780,8 +1781,7 @@ def get_data_from_redis(cls, key: str) -> Optional[Dict[str, Any]]:
return data
- @classmethod
- def get_issuing_idp(cls, SAMLResponse: str) -> Optional[str]:
+ def get_issuing_idp(self, SAMLResponse: str) -> Optional[str]:
"""
Given a SAMLResponse, returns which of the configured IdPs is declared as the issuer.
This value MUST NOT be trusted as the true issuer!
@@ -1792,11 +1792,12 @@ def get_issuing_idp(cls, SAMLResponse: str) -> Optional[str]:
of the configured IdPs' information to use for parsing and validating the response.
"""
try:
- resp = OneLogin_Saml2_Response(settings={}, response=SAMLResponse)
+ config = self.generate_saml_config()
+ saml_settings = OneLogin_Saml2_Settings(config, sp_validation_only=True)
+ resp = OneLogin_Saml2_Response(settings=saml_settings, response=SAMLResponse)
issuers = resp.get_issuers()
- except cls.SAMLRESPONSE_PARSING_EXCEPTIONS:
- logger = logging.getLogger(f"zulip.auth.{cls.name}")
- logger.info("Error while parsing SAMLResponse:", exc_info=True)
+ except self.SAMLRESPONSE_PARSING_EXCEPTIONS:
+ self.logger.info("Error while parsing SAMLResponse:", exc_info=True)
return None
for idp_name, idp_config in settings.SOCIAL_AUTH_SAML_ENABLED_IDPS.items():
| SAML: First call of /complete/saml/ fails, settings missing?
With the new version 3.0, the first call of /complete/saml/ (the callback from the IdP) fails with the following error. It seems that OneLogin_Saml2_Response is called with an empty settings object. But when I call "Login" again, everything works fine.
**ERROR**
2020-07-24 08:35:03.823 ERR [django.request] Internal Server Error: /complete/saml/
Traceback (most recent call last):
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/social_django/utils.py", line 49, in wrapper
return func(request, backend, *args, **kwargs)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/social_django/views.py", line 33, in complete
*args, **kwargs)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/social_core/actions.py", line 45, in do_complete
user = backend.complete(user=user, *args, **kwargs)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/social_core/backends/base.py", line 40, in complete
return self.auth_complete(*args, **kwargs)
File "./zproject/backends.py", line 1912, in auth_complete
idp_name = self.get_issuing_idp(SAMLResponse)
File "./zproject/backends.py", line 1793, in get_issuing_idp
resp = OneLogin_Saml2_Response(settings={}, response=SAMLResponse)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/onelogin/saml2/response.py", line 49, in __init__
self.decrypted_document = self.__decrypt_assertion(decrypted_document)
File "/srv/zulip-venv-cache/99ce7a22e72bf7ac1fd2eb4428af7f7f69b4ac9a/zulip-py3-venv/lib/python3.7/site-packages/onelogin/saml2/response.py", line 823, in __decrypt_assertion
key = self.__settings.get_sp_key()
AttributeError: 'dict' object has no attribute 'get_sp_key'
| Hello @zulip/server-authentication members, this issue was labeled with the "area: authentication" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2020-07-25T13:08:48 |
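The root cause in the traceback can be reproduced without onelogin at all: the response object was constructed with `settings={}`, and a plain dict has none of the settings methods called later during assertion decryption (`get_sp_key` in the traceback):

```python
# Minimal reproduction of the AttributeError from the traceback above.
settings = {}  # what get_issuing_idp used to pass to OneLogin_Saml2_Response

try:
    settings.get_sp_key()
except AttributeError as e:
    print(e)  # 'dict' object has no attribute 'get_sp_key'
```

The patch avoids this by building a real `OneLogin_Saml2_Settings` object (with `sp_validation_only=True`) before constructing the response.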
|
zulip/zulip | 15,971 | zulip__zulip-15971 | [
"15970"
] | ceb909dbc57559b973eada376d3e7691416bf4d0 | diff --git a/tools/setup/emoji/emoji_names.py b/tools/setup/emoji/emoji_names.py
--- a/tools/setup/emoji/emoji_names.py
+++ b/tools/setup/emoji/emoji_names.py
@@ -34,11 +34,9 @@
'1f60b': {'canonical_name': 'yum', 'aliases': []},
# crazy from https://beebom.com/emoji-meanings/, seems like best emoji for
# joking
+ '1f61b': {'canonical_name': 'stuck_out_tongue', 'aliases': ['mischievous']},
'1f61c': {'canonical_name': 'stuck_out_tongue_wink', 'aliases': ['joking', 'crazy']},
- '1f61d': {'canonical_name': 'stuck_out_tongue', 'aliases': []},
- # don't really need two stuck_out_tongues (see People/23), so chose
- # something else that could fit
- '1f61b': {'canonical_name': 'mischievous', 'aliases': []},
+ '1f61d': {'canonical_name': 'stuck_out_tongue_closed_eyes', 'aliases': []},
# kaching suggested by user
'1f911': {'canonical_name': 'money_face', 'aliases': ['kaching']},
# arms_open seems like a natural addition
diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -44,4 +44,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '93.0'
+PROVISION_VERSION = '94.0'
| Use 😛 as :P emoticon translation
#15590 added the (optional) translation of `:P` to `:stuck_out_tongue:`. However, Zulip renders `:stuck_out_tongue:` as [`😝 U+1F61D`](https://emojipedia.org/squinting-face-with-tongue/), while I would expect `:P`/`:p` to render as [`😛 U+1F61B`](https://emojipedia.org/face-with-tongue/) (called `:mischievous:` in Zulip). `U+1F61B` is what gets rendered for `:P`/`:p` on any other platform I am on (e.g. Slack, FB, Hangouts).
I guess https://github.com/zulip/zulip/commit/32d234ef8ae7421e1c0f1ef7e9b36bf1026967f0 would be a way to fix this, but actually maybe the emoji names for these ones should be changed. I find it strange that `😛 U+1F61B` would be called `:mischievous:`, and I think a more conventional name for it would be `:stuck_out_tongue:`.
| Hello @zulip/server-emoji members, this issue was labeled with the "area: emoji" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2020-07-29T10:21:42 |
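For reference, the two codepoints under discussion can be checked directly with Python's `unicodedata` (these are their official Unicode names):

```python
import unicodedata

face_with_tongue = "\U0001F61B"  # 😛 what :P maps to on most platforms
squinting_face = "\U0001F61D"    # 😝 what Zulip's :stuck_out_tongue: rendered as

print(unicodedata.name(face_with_tongue))
# FACE WITH STUCK-OUT TONGUE
print(unicodedata.name(squinting_face))
# FACE WITH STUCK-OUT TONGUE AND TIGHTLY-CLOSED EYES
```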
|
zulip/zulip | 16,034 | zulip__zulip-16034 | [
"15836"
] | 69685c2864428e40d0f1ddab49058f70e86bfb8c | diff --git a/zerver/lib/email_notifications.py b/zerver/lib/email_notifications.py
--- a/zerver/lib/email_notifications.py
+++ b/zerver/lib/email_notifications.py
@@ -170,6 +170,19 @@ def fix_spoilers_in_text(content: str, language: str) -> str:
return "\n".join(output)
+def add_quote_prefix_in_text(content: str) -> str:
+ """
+ We add quote prefix ">" to each line of the message in plain text
+ format, such that email clients render the message as quote.
+ """
+ lines = content.split("\n")
+ output = []
+ for line in lines:
+ quoted_line = f"> {line}"
+ output.append(quoted_line)
+ return "\n".join(output)
+
+
def build_message_list(
user: UserProfile,
messages: List[Message],
@@ -197,7 +210,7 @@ def fix_plaintext_image_urls(content: str) -> str:
def prepend_sender_to_message(
message_plain: str, message_html: str, sender: str
) -> Tuple[str, str]:
- message_plain = f"{sender}: {message_plain}"
+ message_plain = f"{sender}:\n{message_plain}"
message_soup = BeautifulSoup(message_html, "html.parser")
sender_name_soup = BeautifulSoup(f"<b>{sender}</b>: ", "html.parser")
first_tag = message_soup.find()
@@ -219,6 +232,7 @@ def build_message_payload(message: Message, sender: Optional[str] = None) -> Dic
# plain text.
plain = re.sub(r"/user_uploads/(\S*)", user.realm.uri + r"/user_uploads/\1", plain)
plain = fix_spoilers_in_text(plain, user.default_language)
+ plain = add_quote_prefix_in_text(plain)
assert message.rendered_content is not None
fragment = lxml.html.fragment_fromstring(message.rendered_content, create_parent=True)
| diff --git a/zerver/tests/test_email_notifications.py b/zerver/tests/test_email_notifications.py
--- a/zerver/tests/test_email_notifications.py
+++ b/zerver/tests/test_email_notifications.py
@@ -517,7 +517,7 @@ def _extra_context_in_missed_stream_messages_mention(
if show_message_content:
verify_body_include = [
- "Othello, the Moor of Venice: 1 2 3 4 5 6 7 8 9 10 @**King Hamlet** -- ",
+ "Othello, the Moor of Venice: > 1 > 2 > 3 > 4 > 5 > 6 > 7 > 8 > 9 > 10 > @**King Hamlet** -- ",
"You are receiving this because you were mentioned in Zulip Dev.",
]
email_subject = "#Denmark > test"
@@ -560,7 +560,7 @@ def _extra_context_in_missed_stream_messages_wildcard_mention(
if show_message_content:
verify_body_include = [
- "Othello, the Moor of Venice: 1 2 3 4 5 @**all** -- ",
+ "Othello, the Moor of Venice: > 1 > 2 > 3 > 4 > 5 > @**all** -- ",
"You are receiving this because you were mentioned in Zulip Dev.",
]
email_subject = "#Denmark > test"
@@ -598,7 +598,7 @@ def _extra_context_in_missed_stream_messages_email_notify(self, send_as_user: bo
self.send_stream_message(self.example_user("othello"), "Denmark", "11", topic_name="test2")
msg_id = self.send_stream_message(self.example_user("othello"), "denmark", "12")
verify_body_include = [
- "Othello, the Moor of Venice: 1 2 3 4 5 6 7 8 9 10 12 -- ",
+ "Othello, the Moor of Venice: > 1 > 2 > 3 > 4 > 5 > 6 > 7 > 8 > 9 > 10 > 12 -- ",
"You are receiving this because you have email notifications enabled for this stream.",
]
email_subject = "#Denmark > test"
@@ -618,7 +618,7 @@ def _extra_context_in_missed_stream_messages_mention_two_senders(
self.example_user("othello"), "Denmark", "@**King Hamlet**"
)
verify_body_include = [
- "Cordelia, Lear's daughter: 0 1 2 Othello, the Moor of Venice: @**King Hamlet** -- ",
+ "Cordelia, Lear's daughter: > 0 > 1 > 2 Othello, the Moor of Venice: > @**King Hamlet** -- ",
"You are receiving this because you were mentioned in Zulip Dev.",
]
email_subject = "#Denmark > test"
@@ -626,7 +626,7 @@ def _extra_context_in_missed_stream_messages_mention_two_senders(
msg_id, verify_body_include, email_subject, send_as_user, trigger="mentioned"
)
- def _extra_context_in_personal_missed_stream_messages(
+ def _extra_context_in_missed_personal_messages(
self,
send_as_user: bool,
show_message_content: bool = True,
@@ -640,7 +640,7 @@ def _extra_context_in_personal_missed_stream_messages(
)
if show_message_content:
- verify_body_include = ["Extremely personal message!"]
+ verify_body_include = ["> Extremely personal message!"]
email_subject = "PMs with Othello, the Moor of Venice"
verify_body_does_not_include: List[str] = []
else:
@@ -675,7 +675,7 @@ def _extra_context_in_personal_missed_stream_messages(
verify_body_does_not_include=verify_body_does_not_include,
)
- def _reply_to_email_in_personal_missed_stream_messages(self, send_as_user: bool) -> None:
+ def _reply_to_email_in_missed_personal_messages(self, send_as_user: bool) -> None:
msg_id = self.send_personal_message(
self.example_user("othello"),
self.example_user("hamlet"),
@@ -685,7 +685,7 @@ def _reply_to_email_in_personal_missed_stream_messages(self, send_as_user: bool)
email_subject = "PMs with Othello, the Moor of Venice"
self._test_cases(msg_id, verify_body_include, email_subject, send_as_user)
- def _reply_warning_in_personal_missed_stream_messages(self, send_as_user: bool) -> None:
+ def _reply_warning_in_missed_personal_messages(self, send_as_user: bool) -> None:
msg_id = self.send_personal_message(
self.example_user("othello"),
self.example_user("hamlet"),
@@ -695,7 +695,7 @@ def _reply_warning_in_personal_missed_stream_messages(self, send_as_user: bool)
email_subject = "PMs with Othello, the Moor of Venice"
self._test_cases(msg_id, verify_body_include, email_subject, send_as_user)
- def _extra_context_in_huddle_missed_stream_messages_two_others(
+ def _extra_context_in_missed_huddle_messages_two_others(
self, send_as_user: bool, show_message_content: bool = True
) -> None:
msg_id = self.send_huddle_message(
@@ -708,7 +708,9 @@ def _extra_context_in_huddle_missed_stream_messages_two_others(
)
if show_message_content:
- verify_body_include = ["Othello, the Moor of Venice: Group personal message! -- Reply"]
+ verify_body_include = [
+ "Othello, the Moor of Venice: > Group personal message! -- Reply"
+ ]
email_subject = "Group PMs with Iago and Othello, the Moor of Venice"
verify_body_does_not_include: List[str] = []
else:
@@ -735,9 +737,7 @@ def _extra_context_in_huddle_missed_stream_messages_two_others(
verify_body_does_not_include=verify_body_does_not_include,
)
- def _extra_context_in_huddle_missed_stream_messages_three_others(
- self, send_as_user: bool
- ) -> None:
+ def _extra_context_in_missed_huddle_messages_three_others(self, send_as_user: bool) -> None:
msg_id = self.send_huddle_message(
self.example_user("othello"),
[
@@ -748,15 +748,13 @@ def _extra_context_in_huddle_missed_stream_messages_three_others(
"Group personal message!",
)
- verify_body_include = ["Othello, the Moor of Venice: Group personal message! -- Reply"]
+ verify_body_include = ["Othello, the Moor of Venice: > Group personal message! -- Reply"]
email_subject = (
"Group PMs with Cordelia, Lear's daughter, Iago, and Othello, the Moor of Venice"
)
self._test_cases(msg_id, verify_body_include, email_subject, send_as_user)
- def _extra_context_in_huddle_missed_stream_messages_many_others(
- self, send_as_user: bool
- ) -> None:
+ def _extra_context_in_missed_huddle_messages_many_others(self, send_as_user: bool) -> None:
msg_id = self.send_huddle_message(
self.example_user("othello"),
[
@@ -768,7 +766,7 @@ def _extra_context_in_huddle_missed_stream_messages_many_others(
"Group personal message!",
)
- verify_body_include = ["Othello, the Moor of Venice: Group personal message! -- Reply"]
+ verify_body_include = ["Othello, the Moor of Venice: > Group personal message! -- Reply"]
email_subject = "Group PMs with Cordelia, Lear's daughter, Iago, and 2 others"
self._test_cases(msg_id, verify_body_include, email_subject, send_as_user)
@@ -784,7 +782,7 @@ def _deleted_message_in_missed_stream_messages(self, send_as_user: bool) -> None
handle_missedmessage_emails(hamlet.id, [{"message_id": msg_id}])
self.assert_length(mail.outbox, 0)
- def _deleted_message_in_personal_missed_stream_messages(self, send_as_user: bool) -> None:
+ def _deleted_message_in_missed_personal_messages(self, send_as_user: bool) -> None:
msg_id = self.send_personal_message(
self.example_user("othello"),
self.example_user("hamlet"),
@@ -798,7 +796,7 @@ def _deleted_message_in_personal_missed_stream_messages(self, send_as_user: bool
handle_missedmessage_emails(hamlet.id, [{"message_id": msg_id}])
self.assert_length(mail.outbox, 0)
- def _deleted_message_in_huddle_missed_stream_messages(self, send_as_user: bool) -> None:
+ def _deleted_message_in_missed_huddle_messages(self, send_as_user: bool) -> None:
msg_id = self.send_huddle_message(
self.example_user("othello"),
[
@@ -850,7 +848,7 @@ def test_smaller_user_group_mention_priority(self) -> None:
)
expected_email_include = [
- "Othello, the Moor of Venice: @*hamlet_only* @*hamlet_and_cordelia* -- ",
+ "Othello, the Moor of Venice: > @*hamlet_only* > @*hamlet_and_cordelia* -- ",
"You are receiving this because @hamlet_only was mentioned in Zulip Dev.",
]
@@ -890,7 +888,7 @@ def test_personal_over_user_group_mention_priority(self) -> None:
)
expected_email_include = [
- "Othello, the Moor of Venice: @*hamlet_and_cordelia* @**King Hamlet** -- ",
+ "Othello, the Moor of Venice: > @*hamlet_and_cordelia* > @**King Hamlet** -- ",
"You are receiving this because you were mentioned in Zulip Dev.",
]
@@ -924,13 +922,11 @@ def test_message_content_disabled_in_missed_message_notifications(self) -> None:
False, show_message_content=False
)
mail.outbox = []
- self._extra_context_in_personal_missed_stream_messages(
+ self._extra_context_in_missed_personal_messages(
False, show_message_content=False, message_content_disabled_by_user=True
)
mail.outbox = []
- self._extra_context_in_huddle_missed_stream_messages_two_others(
- False, show_message_content=False
- )
+ self._extra_context_in_missed_huddle_messages_two_others(False, show_message_content=False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
def test_extra_context_in_missed_stream_messages_as_user(self) -> None:
@@ -953,8 +949,8 @@ def test_extra_context_in_missed_stream_messages_as_user_two_senders(self) -> No
def test_extra_context_in_missed_stream_messages_two_senders(self) -> None:
self._extra_context_in_missed_stream_messages_mention_two_senders(False)
- def test_reply_to_email_in_personal_missed_stream_messages(self) -> None:
- self._reply_to_email_in_personal_missed_stream_messages(False)
+ def test_reply_to_email_in_missed_personal_messages(self) -> None:
+ self._reply_to_email_in_missed_personal_messages(False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
def test_extra_context_in_missed_stream_messages_email_notify_as_user(self) -> None:
@@ -964,36 +960,36 @@ def test_extra_context_in_missed_stream_messages_email_notify(self) -> None:
self._extra_context_in_missed_stream_messages_email_notify(False)
@override_settings(EMAIL_GATEWAY_PATTERN="")
- def test_reply_warning_in_personal_missed_stream_messages(self) -> None:
- self._reply_warning_in_personal_missed_stream_messages(False)
+ def test_reply_warning_in_missed_personal_messages(self) -> None:
+ self._reply_warning_in_missed_personal_messages(False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
- def test_extra_context_in_personal_missed_stream_messages_as_user(self) -> None:
- self._extra_context_in_personal_missed_stream_messages(True)
+ def test_extra_context_in_missed_personal_messages_as_user(self) -> None:
+ self._extra_context_in_missed_personal_messages(True)
- def test_extra_context_in_personal_missed_stream_messages(self) -> None:
- self._extra_context_in_personal_missed_stream_messages(False)
+ def test_extra_context_in_missed_personal_messages(self) -> None:
+ self._extra_context_in_missed_personal_messages(False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
- def test_extra_context_in_huddle_missed_stream_messages_two_others_as_user(self) -> None:
- self._extra_context_in_huddle_missed_stream_messages_two_others(True)
+ def test_extra_context_in_missed_huddle_messages_two_others_as_user(self) -> None:
+ self._extra_context_in_missed_huddle_messages_two_others(True)
- def test_extra_context_in_huddle_missed_stream_messages_two_others(self) -> None:
- self._extra_context_in_huddle_missed_stream_messages_two_others(False)
+ def test_extra_context_in_missed_huddle_messages_two_others(self) -> None:
+ self._extra_context_in_missed_huddle_messages_two_others(False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
- def test_extra_context_in_huddle_missed_stream_messages_three_others_as_user(self) -> None:
- self._extra_context_in_huddle_missed_stream_messages_three_others(True)
+ def test_extra_context_in_missed_huddle_messages_three_others_as_user(self) -> None:
+ self._extra_context_in_missed_huddle_messages_three_others(True)
- def test_extra_context_in_huddle_missed_stream_messages_three_others(self) -> None:
- self._extra_context_in_huddle_missed_stream_messages_three_others(False)
+ def test_extra_context_in_missed_huddle_messages_three_others(self) -> None:
+ self._extra_context_in_missed_huddle_messages_three_others(False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
- def test_extra_context_in_huddle_missed_stream_messages_many_others_as_user(self) -> None:
- self._extra_context_in_huddle_missed_stream_messages_many_others(True)
+ def test_extra_context_in_missed_huddle_messages_many_others_as_user(self) -> None:
+ self._extra_context_in_missed_huddle_messages_many_others(True)
- def test_extra_context_in_huddle_missed_stream_messages_many_others(self) -> None:
- self._extra_context_in_huddle_missed_stream_messages_many_others(False)
+ def test_extra_context_in_missed_huddle_messages_many_others(self) -> None:
+ self._extra_context_in_missed_huddle_messages_many_others(False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
def test_deleted_message_in_missed_stream_messages_as_user(self) -> None:
@@ -1003,18 +999,18 @@ def test_deleted_message_in_missed_stream_messages(self) -> None:
self._deleted_message_in_missed_stream_messages(False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
- def test_deleted_message_in_personal_missed_stream_messages_as_user(self) -> None:
- self._deleted_message_in_personal_missed_stream_messages(True)
+ def test_deleted_message_in_missed_personal_messages_as_user(self) -> None:
+ self._deleted_message_in_missed_personal_messages(True)
- def test_deleted_message_in_personal_missed_stream_messages(self) -> None:
- self._deleted_message_in_personal_missed_stream_messages(False)
+ def test_deleted_message_in_missed_personal_messages(self) -> None:
+ self._deleted_message_in_missed_personal_messages(False)
@override_settings(SEND_MISSED_MESSAGE_EMAILS_AS_USER=True)
- def test_deleted_message_in_huddle_missed_stream_messages_as_user(self) -> None:
- self._deleted_message_in_huddle_missed_stream_messages(True)
+ def test_deleted_message_in_missed_huddle_messages_as_user(self) -> None:
+ self._deleted_message_in_missed_huddle_messages(True)
- def test_deleted_message_in_huddle_missed_stream_messages(self) -> None:
- self._deleted_message_in_huddle_missed_stream_messages(False)
+ def test_deleted_message_in_missed_huddle_messages(self) -> None:
+ self._deleted_message_in_missed_huddle_messages(False)
def test_realm_message_content_allowed_in_email_notifications(self) -> None:
user = self.example_user("hamlet")
@@ -1029,14 +1025,14 @@ def test_realm_message_content_allowed_in_email_notifications(self) -> None:
user, "message_content_in_email_notifications", True, acting_user=None
)
mail.outbox = []
- self._extra_context_in_personal_missed_stream_messages(False, show_message_content=True)
+ self._extra_context_in_missed_personal_messages(False, show_message_content=True)
# Emails don't have missed message content when message content is disabled by the user
do_change_user_setting(
user, "message_content_in_email_notifications", False, acting_user=None
)
mail.outbox = []
- self._extra_context_in_personal_missed_stream_messages(
+ self._extra_context_in_missed_personal_messages(
False, show_message_content=False, message_content_disabled_by_user=True
)
@@ -1050,7 +1046,7 @@ def test_realm_message_content_allowed_in_email_notifications(self) -> None:
user, "message_content_in_email_notifications", True, acting_user=None
)
mail.outbox = []
- self._extra_context_in_personal_missed_stream_messages(
+ self._extra_context_in_missed_personal_messages(
False, show_message_content=False, message_content_disabled_by_realm=True
)
@@ -1058,7 +1054,7 @@ def test_realm_message_content_allowed_in_email_notifications(self) -> None:
user, "message_content_in_email_notifications", False, acting_user=None
)
mail.outbox = []
- self._extra_context_in_personal_missed_stream_messages(
+ self._extra_context_in_missed_personal_messages(
False,
show_message_content=False,
message_content_disabled_by_user=True,
@@ -1137,7 +1133,7 @@ def test_sender_name_in_missed_message(self) -> None:
assert isinstance(mail.outbox[0], EmailMultiAlternatives)
assert isinstance(mail.outbox[0].alternatives[0][0], str)
- self.assertIn("Iago: @**King Hamlet**\n\n--\nYou are", mail.outbox[0].body)
+ self.assertIn("Iago:\n> @**King Hamlet**\n\n--\nYou are", mail.outbox[0].body)
# If message content starts with <p> tag the sender name is appended inside the <p> tag.
self.assertIn(
'<p><b>Iago</b>: <span class="user-mention"', mail.outbox[0].alternatives[0][0]
@@ -1145,7 +1141,7 @@ def test_sender_name_in_missed_message(self) -> None:
assert isinstance(mail.outbox[1], EmailMultiAlternatives)
assert isinstance(mail.outbox[1].alternatives[0][0], str)
- self.assertIn("Iago: * 1\n *2\n\n--\nYou are receiving", mail.outbox[1].body)
+ self.assertIn("Iago:\n> * 1\n> *2\n\n--\nYou are receiving", mail.outbox[1].body)
# If message content does not starts with <p> tag sender name is appended before the <p> tag
self.assertIn(
" <b>Iago</b>: <div><ul>\n<li>1<br/>\n *2</li>\n</ul></div>\n",
@@ -1154,7 +1150,7 @@ def test_sender_name_in_missed_message(self) -> None:
assert isinstance(mail.outbox[2], EmailMultiAlternatives)
assert isinstance(mail.outbox[2].alternatives[0][0], str)
- self.assertEqual("Hello\n\n--\n\nReply", mail.outbox[2].body[:16])
+ self.assertEqual("> Hello\n\n--\n\nReply", mail.outbox[2].body[:18])
# Sender name is not appended to message for PM missed messages
self.assertIn(
">\n \n <div><p>Hello</p></div>\n",
| Unclear message authors in notification emails
Background: a self-hosted version updated sometime in 2020. Notification emails on missed messages enabled.
The problem: text/plain alternative doesn't adequately mark authors. Example based on a real email with 3 messages, adjusted with random text and names to maintain privacy:
```
Person1: It looks like this is your first time opening an issue in this project! Be sure to review the contributing guidelines and code of conduct.
Person2: @_**Person1** https://github.com/zulip/zulip/issues/new:
quote
Reporting a security vulnerability?
Check out the project's security policy.
@_**Person3** Remember, contributions to this repository should follow its contributing guidelines and code of conduct.
Person4: https://github.com/zulip/zulip/pulls
```
It's easy to miss that there were 3 people in the discussion (I thought it was 2 until I started editing the text), because the author names (`Person1:`, `Person2:`, `Person4:`) are all plain.
Suggested solution: send each message as a separate MIME part - preferably a separate email, because I'm not sure how well email clients deal with multiple inline text parts in a single email. The reason is that messages can carry arbitrary text, so there's almost always a way to craft a message that appears as multiple ones in the notification email.
Suggested workaround: To solve a lesser version of this (messages accidentally illegible, rather than purposely spoofed), there should be some delineation between separate messages. An additional space should do the trick, or a couple of dashes as a visual separator. Perhaps the author's name can be combined with an indentation, a symbol, or a timestamp like IRC does.
| An elegant solution would be prefixing every line in each message with `>`, which email clients automatically render as quotes, e.g.
```
Person 1:
> Hello world
Person 2:
> Something
> Something more
```
It wouldn't, however, prevent spoofing, because (1) emails can be spoofed entirely and (2) Zulip allows multiple users to have the same name (see #15688).
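The prefixing idea sketched above is only a few lines of string handling. The following is a standalone illustration of the scheme, not Zulip's actual notification code:

```python
def quote_message(sender_name: str, body: str) -> str:
    """Render one message for a plain-text notification email,
    prefixing every body line with "> " so mail clients show it
    as a quote and the sender line stays visually distinct."""
    quoted_lines = ["> " + line for line in body.splitlines()]
    return sender_name + ":\n" + "\n".join(quoted_lines)
```

With this scheme a crafted message body can never escape the quote, since every line it contains receives the `> ` prefix.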
Hello @zulip/server-development members, this issue was labeled with the "area: emails" label, so you may want to check it out!
<!-- areaLabelAddition -->
When I mentioned spoofing, I didn't mean crafting an email, but crafting a Zulip message to appear as something other than what it is.
Prefixing with ">" would make the contents unspoofable: there's no way to create a Zulip message that would end up without ">" when inside an email.
">" itself, on the other hand, would make the UX just a little bit worse by turning the notifications into a quote delivery mechanism. Emails are meant to contain the content, not just quotes (except mailing list digests, which those notifications don't seem to be). Related: the email sender is always the same, even when the email only contains one message. This makes the emails ever so slightly less useful.
Also related, but kinda rant: placing multiple messages into a single email is like trying to push a square peg through a round hole. The result loses something in relation to a forwarded message (message separation, search by From), but it's not quite a mailing list digest either (notifications coming within minutes of each other, often with just one message inside).
@zulipbot claim. | 2020-08-04T20:45:11 |
zulip/zulip | 16,067 | zulip__zulip-16067 | [
"16066"
] | e789a8bb2034629d90330d592bfb313359b338fd | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -669,8 +669,9 @@ def do_set_realm_property(realm: Realm, name: str, value: Any,
RealmAuditLog.objects.create(
realm=realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED, event_time=event_time,
acting_user=acting_user, extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': name, 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': name, 'value': value}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: value,
+ 'property': name,
}))
if name == "email_address_visibility":
@@ -703,8 +704,9 @@ def do_set_realm_authentication_methods(realm: Realm,
RealmAuditLog.objects.create(
realm=realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED, event_time=timezone_now(),
acting_user=acting_user, extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': 'authentication_methods', 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': 'authentication_methods', 'value': updated_value}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: updated_value,
+ 'property': 'authentication_methods',
}))
event = dict(
type="realm",
@@ -738,8 +740,9 @@ def do_set_realm_message_editing(realm: Realm,
RealmAuditLog.objects.create(
realm=realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED, event_time=event_time,
acting_user=acting_user, extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': updated_property, 'value': old_values[updated_property]},
- RealmAuditLog.NEW_VALUE: {'property': updated_property, 'value': updated_value}
+ RealmAuditLog.OLD_VALUE: old_values[updated_property],
+ RealmAuditLog.NEW_VALUE: updated_value,
+ 'property': updated_property,
}))
realm.save(update_fields=list(updated_properties.keys()))
@@ -761,8 +764,9 @@ def do_set_realm_notifications_stream(realm: Realm, stream: Optional[Stream], st
RealmAuditLog.objects.create(
realm=realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED, event_time=event_time,
acting_user=acting_user, extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': 'notifications_stream', 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': 'notifications_stream', 'value': stream_id}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: stream_id,
+ 'property': 'notifications_stream',
}))
event = dict(
@@ -783,8 +787,9 @@ def do_set_realm_signup_notifications_stream(realm: Realm, stream: Optional[Stre
RealmAuditLog.objects.create(
realm=realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED, event_time=event_time,
acting_user=acting_user, extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': 'signup_notifications_stream', 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': 'signup_notifications_stream', 'value': stream_id}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: stream_id,
+ 'property': 'signup_notifications_stream',
}))
event = dict(
type="realm",
@@ -3139,8 +3144,9 @@ def do_change_subscription_property(user_profile: UserProfile, sub: Subscription
realm=user_profile.realm, event_type=RealmAuditLog.SUBSCRIPTION_PROPERTY_CHANGED,
event_time=event_time, modified_user=user_profile, acting_user=acting_user,
modified_stream=stream, extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': database_property_name, 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': database_property_name, 'value': database_value}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: database_value,
+ 'property': database_property_name,
}))
event = dict(type="subscription",
@@ -3761,8 +3767,9 @@ def do_change_notification_settings(user_profile: UserProfile, name: str,
RealmAuditLog.objects.create(
realm=user_profile.realm, event_type=RealmAuditLog.USER_NOTIFICATION_SETTINGS_CHANGED, event_time=event_time,
acting_user=acting_user, modified_user=user_profile, extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': name, 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': name, 'value': value}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: value,
+ 'property': name,
}))
send_event(user_profile.realm, event, [user_profile.id])
diff --git a/zerver/migrations/0298_fix_realmauditlog_format.py b/zerver/migrations/0298_fix_realmauditlog_format.py
new file mode 100644
--- /dev/null
+++ b/zerver/migrations/0298_fix_realmauditlog_format.py
@@ -0,0 +1,117 @@
+# Generated by Django 2.2.14 on 2020-08-07 19:13
+
+import json
+
+from django.db import migrations
+from django.db.backends.postgresql.schema import DatabaseSchemaEditor
+from django.db.migrations.state import StateApps
+
+
+def update_realmauditlog_values(apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
+ """
+ This migration fixes two issues with the RealmAuditLog format for certain event types:
+ * The notifications_stream and signup_notifications_stream fields had the
+ Stream objects passed into `ujson.dumps()` and thus marshalled as a giant
+ JSON object, when the intent was to store the stream ID.
+    * The default_sending_stream would also have been marshalled wrong, but it is
+      part of a feature that nobody should be using, so we simply assert that's the case.
+ * Changes the structure of the extra_data JSON dictionaries for those
+ RealmAuditLog entries with a sub-property field from:
+ {
+ OLD_VALUE: {"property": property, "value": old_value},
+ NEW_VALUE: {"property": property, "value": new_value},
+ }
+
+ to the more natural:
+
+ {
+ OLD_VALUE: old_value,
+ NEW_VALUE: new_value,
+ "property": property,
+ }
+ """
+ RealmAuditLog = apps.get_model('zerver', 'RealmAuditLog')
+ # Constants from models.py
+ USER_DEFAULT_SENDING_STREAM_CHANGED = 129
+ USER_DEFAULT_REGISTER_STREAM_CHANGED = 130
+ USER_DEFAULT_ALL_PUBLIC_STREAMS_CHANGED = 131
+ USER_NOTIFICATION_SETTINGS_CHANGED = 132
+ REALM_PROPERTY_CHANGED = 207
+ SUBSCRIPTION_PROPERTY_CHANGED = 304
+ OLD_VALUE = '1'
+ NEW_VALUE = '2'
+
+ unlikely_event_types = [
+ USER_DEFAULT_SENDING_STREAM_CHANGED,
+ USER_DEFAULT_REGISTER_STREAM_CHANGED,
+ USER_DEFAULT_ALL_PUBLIC_STREAMS_CHANGED,
+ ]
+ # These 3 event types are the ones that used a format with
+ # OLD_VALUE containing a dictionary with a `property` key.
+ affected_event_types = [
+ REALM_PROPERTY_CHANGED,
+ USER_NOTIFICATION_SETTINGS_CHANGED,
+ SUBSCRIPTION_PROPERTY_CHANGED,
+ ]
+ improperly_marshalled_properties = [
+ 'notifications_stream',
+ 'signup_notifications_stream',
+ ]
+
+ # These are also corrupted but are part of a feature nobody uses,
+ # so it's not worth writing code to fix them.
+ assert not RealmAuditLog.objects.filter(event_type__in=unlikely_event_types).exists()
+
+ for ra in RealmAuditLog.objects.filter(event_type__in=affected_event_types):
+ extra_data = json.loads(ra.extra_data)
+ old_key = extra_data[OLD_VALUE]
+ new_key = extra_data[NEW_VALUE]
+
+ # Skip any already-migrated values in case we're running this
+ # migration a second time.
+ if not isinstance(old_key, dict) and not isinstance(new_key, dict):
+ continue
+ if 'value' not in old_key or 'value' not in new_key:
+ continue
+
+ old_value = old_key["value"]
+ new_value = new_key["value"]
+ prop = old_key["property"]
+
+ # The `authentication_methods` key is the only event whose
+ # action value type is expected to be a dictionary. That
+ # property is marshalled properly but still wants the second
+ # migration below.
+ if prop != 'authentication_methods':
+ # For the other properties, we have `stream` rather than `stream['id']`
+ # in the original extra_data object; the fix is simply to extract
+ # the intended ID field via `value = value['id']`.
+ if isinstance(old_value, dict):
+ assert prop in improperly_marshalled_properties
+ old_value = old_value['id']
+ if isinstance(new_value, dict):
+ assert prop in improperly_marshalled_properties
+ new_value = new_value['id']
+
+ # Sanity check that the original event has exactly the keys we expect.
+ assert set(extra_data.keys()) <= set([OLD_VALUE, NEW_VALUE])
+
+ ra.extra_data = json.dumps({
+ OLD_VALUE: old_value,
+ NEW_VALUE: new_value,
+ "property": prop,
+ })
+ ra.save(update_fields=["extra_data"])
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('zerver', '0297_draft'),
+ ]
+
+ operations = [
+ migrations.RunPython(update_realmauditlog_values,
+ reverse_code=migrations.RunPython.noop,
+ elidable=True),
+ ]
| diff --git a/zerver/tests/test_audit_log.py b/zerver/tests/test_audit_log.py
--- a/zerver/tests/test_audit_log.py
+++ b/zerver/tests/test_audit_log.py
@@ -261,7 +261,7 @@ def test_set_realm_authentication_methods(self) -> None:
now = timezone_now()
realm = get_realm('zulip')
user = self.example_user('hamlet')
- expected_old_value = {'property': 'authentication_methods', 'value': realm.authentication_methods_dict()}
+ expected_old_value = realm.authentication_methods_dict()
auth_method_dict = {'Google': False, 'Email': False, 'GitHub': False, 'Apple': False, 'Dev': True, 'SAML': True, 'GitLab': False}
do_set_realm_authentication_methods(realm, auth_method_dict, acting_user=user)
@@ -269,7 +269,7 @@ def test_set_realm_authentication_methods(self) -> None:
event_time__gte=now, acting_user=user)
self.assertEqual(realm_audit_logs.count(), 1)
extra_data = ujson.loads(realm_audit_logs[0].extra_data)
- expected_new_value = {'property': 'authentication_methods', 'value': auth_method_dict}
+ expected_new_value = auth_method_dict
self.assertEqual(extra_data[RealmAuditLog.OLD_VALUE], expected_old_value)
self.assertEqual(extra_data[RealmAuditLog.NEW_VALUE], expected_new_value)
@@ -288,26 +288,25 @@ def test_set_realm_message_editing(self) -> None:
now = timezone_now()
realm = get_realm('zulip')
user = self.example_user('hamlet')
- old_values_expected = [{'property': 'message_content_edit_limit_seconds', 'value': realm.message_content_edit_limit_seconds},
- {'property': 'allow_community_topic_editing', 'value': realm.allow_community_topic_editing}]
+ values_expected = [
+ {
+ 'property': 'message_content_edit_limit_seconds',
+ RealmAuditLog.OLD_VALUE: realm.message_content_edit_limit_seconds,
+ RealmAuditLog.NEW_VALUE: 1000,
+ },
+ {
+ 'property': 'allow_community_topic_editing',
+ RealmAuditLog.OLD_VALUE: True,
+ RealmAuditLog.NEW_VALUE: False,
+ },
+ ]
do_set_realm_message_editing(realm, True, 1000, False, acting_user=user)
realm_audit_logs = RealmAuditLog.objects.filter(realm=realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED,
event_time__gte=now, acting_user=user).order_by("id")
self.assertEqual(realm_audit_logs.count(), 2)
-
- # allow_message_editing was already True.
- new_values_expected = [{'property': 'message_content_edit_limit_seconds', 'value': 1000},
- {'property': 'allow_community_topic_editing', 'value': False}]
- new_values_seen = []
- old_values_seen = []
- for realm_audit_log in realm_audit_logs:
- extra_data = ujson.loads(realm_audit_log.extra_data)
- new_values_seen.append(extra_data[RealmAuditLog.NEW_VALUE])
- old_values_seen.append(extra_data[RealmAuditLog.OLD_VALUE])
-
- self.assertEqual(new_values_seen, new_values_expected)
- self.assertEqual(old_values_seen, old_values_expected)
+ self.assertEqual([ujson.loads(entry.extra_data) for entry in realm_audit_logs],
+ values_expected)
def test_set_realm_notifications_stream(self) -> None:
now = timezone_now()
@@ -322,8 +321,9 @@ def test_set_realm_notifications_stream(self) -> None:
realm=realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED,
event_time__gte=now, acting_user=user,
extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': 'notifications_stream', 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': 'notifications_stream', 'value': stream.id}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: stream.id,
+ 'property': 'notifications_stream',
})).count(), 1)
def test_set_realm_signup_notifications_stream(self) -> None:
@@ -339,8 +339,9 @@ def test_set_realm_signup_notifications_stream(self) -> None:
realm=realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED,
event_time__gte=now, acting_user=user,
extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': 'signup_notifications_stream', 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': 'signup_notifications_stream', 'value': stream.id}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: stream.id,
+ 'property': 'signup_notifications_stream',
})).count(), 1)
def test_change_icon_source(self) -> None:
@@ -380,8 +381,11 @@ def test_change_subscription_property(self) -> None:
old_value = getattr(sub, property)
self.assertNotEqual(old_value, value)
do_change_subscription_property(user, sub, stream, property, value, acting_user=user)
- expected_extra_data = {RealmAuditLog.OLD_VALUE: {'property': property, 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': property, 'value': value}}
+ expected_extra_data = {
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: value,
+ 'property': property,
+ }
self.assertEqual(RealmAuditLog.objects.filter(
realm=user.realm, event_type=RealmAuditLog.SUBSCRIPTION_PROPERTY_CHANGED,
event_time__gte=now, acting_user=user, modified_user=user,
@@ -455,8 +459,11 @@ def test_change_notification_settings(self) -> None:
old_value = getattr(user, setting)
do_change_notification_settings(user, setting, value, acting_user=user)
- expected_extra_data = {RealmAuditLog.OLD_VALUE: {'property': setting, 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': setting, 'value': value}}
+ expected_extra_data = {
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: value,
+ 'property': setting,
+ }
self.assertEqual(RealmAuditLog.objects.filter(
realm=user.realm, event_type=RealmAuditLog.USER_NOTIFICATION_SETTINGS_CHANGED,
event_time__gte=now, acting_user=user, modified_user=user,
diff --git a/zerver/tests/test_events.py b/zerver/tests/test_events.py
--- a/zerver/tests/test_events.py
+++ b/zerver/tests/test_events.py
@@ -2019,8 +2019,9 @@ def do_set_realm_property_test(self, name: str) -> None:
realm=self.user_profile.realm, event_type=RealmAuditLog.REALM_PROPERTY_CHANGED,
event_time__gte=now, acting_user=self.user_profile,
extra_data=ujson.dumps({
- RealmAuditLog.OLD_VALUE: {'property': name, 'value': old_value},
- RealmAuditLog.NEW_VALUE: {'property': name, 'value': val}
+ RealmAuditLog.OLD_VALUE: old_value,
+ RealmAuditLog.NEW_VALUE: val,
+ 'property': name,
})).count(), 1)
check_realm_update('events[0]', events[0], name)
| Add migration to fix RealmAuditLog entries from bad JSON serialization
Apparently, the RealmAuditLog work in #15601 resulted in corrupted data being stored in the new RealmAuditLog entries (see f8bcf39014e4aa44564998119aea7bdf3f13ab08 for details). We should write a migration to clean up this format. I think there are two changes I'd like to make:
* Move the `property` key to the top-level of `extra_data`
* Find these corrupted keys and fix them.
I'm going to do a bit of investigation to see how many corrupted objects we have.
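As a minimal sketch of the two changes, the per-row rewrite amounts to flattening the nested dictionaries. Constants are inlined here; the real migration in this PR additionally repairs the mis-marshalled stream values and guards the affected event types:

```python
import json

OLD_VALUE, NEW_VALUE = "1", "2"  # RealmAuditLog constants

def flatten_extra_data(raw: str) -> str:
    """Rewrite {'1': {'property': p, 'value': a}, '2': {'property': p,
    'value': b}} into the flat {'1': a, '2': b, 'property': p} format."""
    extra_data = json.loads(raw)
    old, new = extra_data[OLD_VALUE], extra_data[NEW_VALUE]
    if not (isinstance(old, dict) and "value" in old):
        return raw  # already in the new format; leave untouched
    return json.dumps({
        OLD_VALUE: old["value"],
        NEW_VALUE: new["value"],
        "property": old["property"],
    })
```

Re-running on already-migrated rows is a no-op, which is the same idempotency property the real migration checks for.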
@arpit551 @mateuszmandera FYI
| The `authentication_methods` variant of this bug was present in the 3.x release series. | 2020-08-07T20:04:36 |
zulip/zulip | 16,183 | zulip__zulip-16183 | [
"16173"
] | 7b62d31c3270c967c8e334fae91783698b49a681 | diff --git a/zerver/webhooks/gitlab/view.py b/zerver/webhooks/gitlab/view.py
--- a/zerver/webhooks/gitlab/view.py
+++ b/zerver/webhooks/gitlab/view.py
@@ -1,7 +1,7 @@
import re
from functools import partial
from inspect import signature
-from typing import Any, Dict, Optional
+from typing import Any, Dict, List, Optional
from django.http import HttpRequest, HttpResponse
@@ -91,8 +91,7 @@ def get_issue_created_event_body(payload: Dict[str, Any],
get_object_url(payload),
payload['object_attributes'].get('iid'),
description,
- get_objects_assignee(payload),
- payload.get('assignees'),
+ assignees=replace_assignees_username_with_name(get_assignees(payload)),
title=payload['object_attributes'].get('title') if include_title else None,
)
@@ -142,22 +141,31 @@ def get_merge_request_open_or_updated_body(payload: Dict[str, Any], action: str,
pull_request.get('source_branch'),
pull_request.get('target_branch'),
pull_request.get('description'),
- get_objects_assignee(payload),
+ assignees=replace_assignees_username_with_name(get_assignees(payload)),
type='MR',
title=payload['object_attributes'].get('title') if include_title else None,
)
-def get_objects_assignee(payload: Dict[str, Any]) -> Optional[str]:
- assignee_object = payload.get('assignee')
- if assignee_object:
- return assignee_object.get('name')
- else:
- assignee_object = payload.get('assignees')
- if assignee_object:
- for assignee in assignee_object:
- return assignee['name']
-
- return None
+def get_assignees(payload: Dict[str, Any]) -> List[Dict[str, str]]:
+ assignee_details = payload.get('assignees')
+ if assignee_details is None:
+ single_assignee_details = payload.get('assignee')
+ if single_assignee_details is None:
+ assignee_details = []
+ else:
+ assignee_details = [single_assignee_details]
+ return assignee_details
+
+def replace_assignees_username_with_name(assignees: List[Dict[str, str]]) -> List[Dict[str, str]]:
+ """Replace the username of each assignee with their (full) name.
+
+ This is a hack-like adaptor so that when assignees are passed to
+ `get_pull_request_event_message` we can use the assignee's name
+ and not their username (for more consistency).
+ """
+ for assignee in assignees:
+ assignee["username"] = assignee["name"]
+ return assignees
def get_commented_commit_event_body(payload: Dict[str, Any]) -> str:
comment = payload['object_attributes']
| diff --git a/zerver/webhooks/gitlab/tests.py b/zerver/webhooks/gitlab/tests.py
--- a/zerver/webhooks/gitlab/tests.py
+++ b/zerver/webhooks/gitlab/tests.py
@@ -123,7 +123,7 @@ def test_create_issue_with_assignee_event_message(self) -> None:
def test_create_issue_with_two_assignees_event_message(self) -> None:
expected_subject = "Zulip GitLab Test / Issue #2 Zulip Test Issue 2"
- expected_message = "Adam Birds created [Issue #2](https://gitlab.com/adambirds/zulip-gitlab-test/issues/2) (assigned to adambirds and eeshangarg):\n\n~~~ quote\nZulip Test Issue 2\n~~~"
+ expected_message = "Adam Birds created [Issue #2](https://gitlab.com/adambirds/zulip-gitlab-test/issues/2) (assigned to Adam Birds and Eeshan Garg):\n\n~~~ quote\nZulip Test Issue 2\n~~~"
self.check_webhook(
"issue_hook__issue_created_with_two_assignees", expected_subject, expected_message
@@ -131,7 +131,7 @@ def test_create_issue_with_two_assignees_event_message(self) -> None:
def test_create_issue_with_three_assignees_event_message(self) -> None:
expected_subject = "Zulip GitLab Test / Issue #2 Zulip Test Issue 2"
- expected_message = "Adam Birds created [Issue #2](https://gitlab.com/adambirds/zulip-gitlab-test/issues/2) (assigned to adambirds, eeshangarg and timabbott):\n\n~~~ quote\nZulip Test Issue 2\n~~~"
+ expected_message = "Adam Birds created [Issue #2](https://gitlab.com/adambirds/zulip-gitlab-test/issues/2) (assigned to Adam Birds, Eeshan Garg and Tim Abbott):\n\n~~~ quote\nZulip Test Issue 2\n~~~"
self.check_webhook(
"issue_hook__issue_created_with_three_assignees", expected_subject, expected_message
@@ -139,7 +139,7 @@ def test_create_issue_with_three_assignees_event_message(self) -> None:
def test_create_confidential_issue_with_assignee_event_message(self) -> None:
expected_subject = "testing / Issue #2 Testing"
- expected_message = "Joe Bloggs created [Issue #2](https://gitlab.example.co.uk/joe.bloggs/testing/issues/2) (assigned to joe.bloggs):\n\n~~~ quote\nTesting\n~~~"
+ expected_message = "Joe Bloggs created [Issue #2](https://gitlab.example.co.uk/joe.bloggs/testing/issues/2) (assigned to Joe Bloggs):\n\n~~~ quote\nTesting\n~~~"
self.check_webhook(
"issue_hook__confidential_issue_created_with_assignee",
@@ -306,11 +306,25 @@ def test_merge_request_created_with_assignee_event_message(self) -> None:
expected_topic = "my-awesome-project / MR #3 New Merge Request"
expected_message = "Tomasz Kolek created [MR #3](https://gitlab.com/tomaszkolek0/my-awesome-project/merge_requests/3) (assigned to Tomasz Kolek) from `tomek` to `master`:\n\n~~~ quote\ndescription of merge request\n~~~"
self.check_webhook(
- "merge_request_hook__merge_request_created_with_assignee",
+ 'merge_request_hook__merge_request_created_with_assignee',
expected_topic,
expected_message,
)
+ def test_merge_request_created_with_multiple_assignees_event_message(self) -> None:
+ expected_topic = "Demo Project / MR #1 Make a trivial change to the README."
+ expected_message = """
+Hemanth V. Alluri created [MR #1](https://gitlab.com/Hypro999/demo-project/-/merge_requests/1) (assigned to Hemanth V. Alluri and Hemanth V. Alluri) from `devel` to `master`:
+
+~~~ quote
+A trivial change that should probably be ignored.
+~~~
+ """.strip()
+ self.check_webhook(
+ 'merge_request_hook__merge_request_created_with_multiple_assignees',
+ expected_topic,
+ expected_message)
+
def test_merge_request_closed_event_message(self) -> None:
expected_topic = "my-awesome-project / MR #2 NEW MR"
expected_message = "Tomasz Kolek closed [MR #2](https://gitlab.com/tomaszkolek0/my-awesome-project/merge_requests/2)."
@@ -356,8 +370,9 @@ def test_merge_request_updated_event_message(self) -> None:
expected_topic = "my-awesome-project / MR #3 New Merge Request"
expected_message = "Tomasz Kolek updated [MR #3](https://gitlab.com/tomaszkolek0/my-awesome-project/merge_requests/3) (assigned to Tomasz Kolek) from `tomek` to `master`:\n\n~~~ quote\nupdated desc\n~~~"
self.check_webhook(
- "merge_request_hook__merge_request_updated", expected_topic, expected_message
- )
+ 'merge_request_hook__merge_request_updated',
+ expected_topic,
+ expected_message)
def test_merge_request_added_commit_event_message(self) -> None:
expected_topic = "my-awesome-project / MR #3 New Merge Request"
@@ -529,8 +544,9 @@ def test_system_merge_request_created_with_assignee_event_message(self) -> None:
expected_topic = "my-awesome-project / MR #3 New Merge Request"
expected_message = "Tomasz Kolek created [MR #3](https://gitlab.com/tomaszkolek0/my-awesome-project/merge_requests/3) (assigned to Tomasz Kolek) from `tomek` to `master`:\n\n~~~ quote\ndescription of merge request\n~~~"
self.check_webhook(
- "system_hook__merge_request_created_with_assignee", expected_topic, expected_message
- )
+ 'system_hook__merge_request_created_with_assignee',
+ expected_topic,
+ expected_message)
def test_system_merge_request_closed_event_message(self) -> None:
expected_topic = "my-awesome-project / MR #2 NEW MR"
| GitLab webhook reports single assignee for MR when there are several
It looks like the GitLab webhook reports assignees for merge requests, but it only gives one assignee, even if there are several. This gets particularly confusing when the only change is in the assignees (say, from A and B to A and C), but the bot reports no change (only A). I think this may be a problem [here](https://github.com/zulip/zulip/blob/ad8943a64aab3f21ba692fe3842232291e222838/zerver/webhooks/gitlab/view.py#L150):
```python
def get_objects_assignee(payload: Dict[str, Any]) -> Optional[str]:
assignee_object = payload.get('assignee')
if assignee_object:
return assignee_object.get('name')
else:
assignee_object = payload.get('assignees')
if assignee_object:
for assignee in assignee_object:
return assignee['name']
```
At least in my MRs, the GitLab response contains both `assignee` and `assignees`, but Zulip takes only the former. It should probably be the other way around, and it should return all assignees, not just the name of the first one!
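A hedged sketch of that direction - prefer the plural field and return every name - which mirrors (in simplified form) the `get_assignees` helper the eventual patch added:

```python
from typing import Any, Dict, List

def get_assignee_names(payload: Dict[str, Any]) -> List[str]:
    """Collect all assignee names, preferring GitLab's plural
    'assignees' field and falling back to the legacy 'assignee'."""
    assignees = payload.get("assignees")
    if assignees is None:
        single = payload.get("assignee")
        assignees = [] if single is None else [single]
    return [assignee["name"] for assignee in assignees]
```

The caller can then join the list ("assigned to A, B and C") instead of silently dropping everyone after the first assignee.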
Link to chat: https://chat.zulip.org/#narrow/stream/127-integrations/topic/GitLab.20webhook.3A.20MR.20assignees
| Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
@zulipbot claim. | 2020-08-21T16:17:36 |
zulip/zulip | 16,242 | zulip__zulip-16242 | [
"16224"
] | 2d30af113e40a7a43369d89cca3866ece6019feb | diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py
--- a/zerver/lib/hotspots.py
+++ b/zerver/lib/hotspots.py
@@ -3,7 +3,7 @@
from typing import Dict, List
from django.conf import settings
-from django.utils.translation import ugettext as _
+from django.utils.translation import ugettext_lazy as _
from zerver.models import UserHotspot, UserProfile
@@ -44,8 +44,8 @@ def get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:
if settings.ALWAYS_SEND_ALL_HOTSPOTS:
return [{
'name': hotspot,
- 'title': ALL_HOTSPOTS[hotspot]['title'],
- 'description': ALL_HOTSPOTS[hotspot]['description'],
+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),
+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),
'delay': 0,
} for hotspot in ALL_HOTSPOTS]
@@ -57,8 +57,8 @@ def get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:
if hotspot not in seen_hotspots:
return [{
'name': hotspot,
- 'title': ALL_HOTSPOTS[hotspot]['title'],
- 'description': ALL_HOTSPOTS[hotspot]['description'],
+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),
+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),
'delay': 0.5,
}]
| Enable translations for hotspots subsystem
There are unused translations at the hotspots subsystem, which could be enabled due to finished and available translations. At the moment there is a mix of English and the configured user language.
Affected file: zerver/lib/hotspots.py
Example (mixed English/German):

| Hello @zulip/server-i18n members, this issue was labeled with the "area: i18n" label, so you may want to check it out!
<!-- areaLabelAddition -->
Hmm, not sure what the bug is here; @vinitS101 @hackerkid can you investigate?
The "Hab's verstanden" (English: "Got it!") is translated at locale/[language]/translations.json, the rest of the text is translated at locale/[language]/LC_MESSAGES/django.po. Maybe that's the problem - sorry, I cannot investigate this any further :-) | 2020-08-31T15:21:41 |
|
zulip/zulip | 16,319 | zulip__zulip-16319 | [
"16100"
] | 08fbde4e7c0e7650104ed0a8b2f2b96204f2bcdb | diff --git a/zerver/lib/markdown/__init__.py b/zerver/lib/markdown/__init__.py
--- a/zerver/lib/markdown/__init__.py
+++ b/zerver/lib/markdown/__init__.py
@@ -1234,13 +1234,19 @@ def run(self, root: Element) -> None:
if youtube is not None:
title = self.youtube_title(extracted_data)
if title is not None:
- found_url.family.child.text = title
+ if url == text:
+ found_url.family.child.text = title
+ else:
+ found_url.family.child.text = text
continue
self.add_embed(root, url, extracted_data)
if self.vimeo_id(url):
title = self.vimeo_title(extracted_data)
if title:
- found_url.family.child.text = title
+ if url == text:
+ found_url.family.child.text = title
+ else:
+ found_url.family.child.text = text
class Timestamp(markdown.inlinepatterns.Pattern):
def handleMatch(self, match: Match[str]) -> Optional[Element]:
| diff --git a/zerver/tests/test_link_embed.py b/zerver/tests/test_link_embed.py
--- a/zerver/tests/test_link_embed.py
+++ b/zerver/tests/test_link_embed.py
@@ -741,3 +741,34 @@ def test_youtube_url_title_replaces_url(self) -> None:
msg.refresh_from_db()
expected_content = '<p><a href="https://www.youtube.com/watch?v=eSJTXC7Ixgg">YouTube - Clearer Code at Scale - Static Types at Zulip and Dropbox</a></p>\n<div class="youtube-video message_inline_image"><a data-id="eSJTXC7Ixgg" href="https://www.youtube.com/watch?v=eSJTXC7Ixgg"><img src="https://i.ytimg.com/vi/eSJTXC7Ixgg/default.jpg"></a></div>'
self.assertEqual(expected_content, msg.rendered_content)
+
+ @override_settings(INLINE_URL_EMBED_PREVIEW=True)
+ def test_custom_title_replaces_youtube_url_title(self) -> None:
+ url = '[Youtube link](https://www.youtube.com/watch?v=eSJTXC7Ixgg)'
+ with mock_queue_publish('zerver.lib.actions.queue_json_publish'):
+ msg_id = self.send_personal_message(
+ self.example_user('hamlet'),
+ self.example_user('cordelia'),
+ content=url,
+ )
+ msg = Message.objects.select_related("sender").get(id=msg_id)
+ event = {
+ 'message_id': msg_id,
+ 'urls': [url],
+ 'message_realm_id': msg.sender.realm_id,
+ 'message_content': url}
+
+ mocked_data = {'title': 'Clearer Code at Scale - Static Types at Zulip and Dropbox'}
+ mocked_response = mock.Mock(side_effect=self.create_mock_response(url))
+ with self.settings(TEST_SUITE=False, CACHES=TEST_CACHES):
+ with mock.patch('requests.get', mocked_response), self.assertLogs(level='INFO') as info_logs:
+ with mock.patch('zerver.lib.markdown.link_preview.link_embed_data_from_cache',
+ lambda *args, **kwargs: mocked_data):
+ FetchLinksEmbedData().consume(event)
+ self.assertTrue(
+ 'INFO:root:Time spent on get_link_embed_data for [Youtube link](https://www.youtube.com/watch?v=eSJTXC7Ixgg):' in info_logs.output[0]
+ )
+
+ msg.refresh_from_db()
+ expected_content = '<p><a href="https://www.youtube.com/watch?v=eSJTXC7Ixgg">Youtube link</a></p>\n<div class="youtube-video message_inline_image"><a data-id="eSJTXC7Ixgg" href="https://www.youtube.com/watch?v=eSJTXC7Ixgg"><img src="https://i.ytimg.com/vi/eSJTXC7Ixgg/default.jpg"></a></div>'
+ self.assertEqual(expected_content, msg.rendered_content)
| Youtube preview overrides markdown link titles

Doesn't seem to happen for previews of sites.
| Hello @zulip/server-markdown members, this issue was labeled with the "area: markdown" label, so you may want to check it out!
<!-- areaLabelAddition -->
I'd like to work on this. Can someone guide me as to where I can get started?
Check out the [Zulip developer documentation](https://zulip.readthedocs.io/en/latest/development/index.html).
The relevant code seems to be:
https://github.com/zulip/zulip/blob/0d6047840b28efbd9d51397958bc410e1f4399c7/zerver/lib/markdown/__init__.py#L1231-L1233
@aero31aero can likely help investigate.
As @Gittenburg said, the bug is in L1233 above. The easiest fix here would be something like:
```py
# Replace the link text with the page title only if the text is just the URL itself.
def maybe_replace_link_title(found_url, title):
(url, text) = found_url.result
if text == url:
found_url.family.child.text = title
```
This same bug should be present with vimeo links too.
For easier manual testing, just comment this check:
```py
if not self.md.url_embed_preview_enabled:
continue
```
Slightly long term, do we really want to modify the original link text and not add the title to the preview like we do for our Twitter previews?
Yeah I think having the title above the preview would be better.
@zulipbot claim
Welcome to Zulip, @akshatdalton! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
I'd like to work on this. Could someone guide me to get started?
@zulipbot claim
@zulipbot claim
Hello @dynamoh, it looks like someone has already claimed this issue! Since we believe multiple assignments to the same issue may cause some confusion, we encourage you to search for other unclaimed issues to work on. However, you can always reclaim this issue if no one is working on it.
We look forward to your valuable contributions!
I see that this is already being worked on. Is there a slack channel I can join?
No, we use Zulip: https://zulip.readthedocs.io/en/latest/contributing/chat-zulip-org.html | 2020-09-09T05:58:34 |
zulip/zulip | 16,433 | zulip__zulip-16433 | [
"16109",
"10484"
] | 87c809c0e31246b0cb4f737d80c1629a7aeba495 | diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -44,4 +44,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '110.0'
+PROVISION_VERSION = '111.0'
diff --git a/zerver/openapi/openapi.py b/zerver/openapi/openapi.py
--- a/zerver/openapi/openapi.py
+++ b/zerver/openapi/openapi.py
@@ -30,6 +30,44 @@
("/settings/notifications", "patch"),
}
+# Most of our code expects allOf to be preprocessed away because that is what
+# yamole did. Its algorithm for doing so is not standards compliant, but we
+# replicate it here.
+def naively_merge(a: Dict[str, object], b: Dict[str, object]) -> Dict[str, object]:
+ ret: Dict[str, object] = a.copy()
+ for key, b_value in b.items():
+ if key == "example" or key not in ret:
+ ret[key] = b_value
+ continue
+ a_value = ret[key]
+ if isinstance(b_value, list):
+ assert isinstance(a_value, list)
+ ret[key] = a_value + b_value
+ elif isinstance(b_value, dict):
+ assert isinstance(a_value, dict)
+ ret[key] = naively_merge(a_value, b_value)
+ return ret
+
+def naively_merge_allOf(obj: object) -> object:
+ if isinstance(obj, dict):
+ return naively_merge_allOf_dict(obj)
+ elif isinstance(obj, list):
+ return list(map(naively_merge_allOf, obj))
+ else:
+ return obj
+
+def naively_merge_allOf_dict(obj: Dict[str, object]) -> Dict[str, object]:
+ if "allOf" in obj:
+ ret = obj.copy()
+ subschemas = ret.pop("allOf")
+ ret = naively_merge_allOf_dict(ret)
+ assert isinstance(subschemas, list)
+ for subschema in subschemas:
+ assert isinstance(subschema, dict)
+ ret = naively_merge(ret, naively_merge_allOf_dict(subschema))
+ return ret
+ return {key: naively_merge_allOf(value) for key, value in obj.items()}
+
class OpenAPISpec():
def __init__(self, openapi_path: str) -> None:
self.openapi_path = openapi_path
@@ -39,29 +77,31 @@ def __init__(self, openapi_path: str) -> None:
self._request_validator: Optional[RequestValidator] = None
def check_reload(self) -> None:
- # Because importing yamole (and in turn, yaml) takes
- # significant time, and we only use python-yaml for our API
- # docs, importing it lazily here is a significant optimization
- # to `manage.py` startup.
+ # Because importing yaml takes significant time, and we only
+ # use python-yaml for our API docs, importing it lazily here
+ # is a significant optimization to `manage.py` startup.
#
# There is a bit of a race here...we may have two processes
# accessing this module level object and both trying to
# populate self.data at the same time. Hopefully this will
# only cause some extra processing at startup and not data
# corruption.
- from yamole import YamoleParser
- mtime = os.path.getmtime(self.openapi_path)
- # Using == rather than >= to cover the corner case of users placing an
- # earlier version than the current one
- if self.mtime == mtime:
- return
+ import yaml
+ from jsonref import JsonRef
with open(self.openapi_path) as f:
- yamole_parser = YamoleParser(f)
- self._openapi = yamole_parser.data
- spec = create_spec(self._openapi)
+ mtime = os.fstat(f.fileno()).st_mtime
+ # Using == rather than >= to cover the corner case of users placing an
+ # earlier version than the current one
+ if self.mtime == mtime:
+ return
+
+ openapi = yaml.load(f, Loader=yaml.CSafeLoader)
+
+ spec = create_spec(openapi)
self._request_validator = RequestValidator(spec)
+ self._openapi = naively_merge_allOf_dict(JsonRef.replace_refs(openapi))
self.create_endpoints_dict()
self.mtime = mtime
| OpenAPI schema and validation is built on a fundamental misunderstanding of allOf
The `yamole` library preprocesses our OpenAPI schema by expanding `$ref` (incorrectly: #16106) and then naïvely merging all the objects in an `allOf` array together. This fails to capture the meaning of [`allOf`](https://swagger.io/docs/specification/data-models/oneof-anyof-allof-not/) according to the specification.
For example, consider
```yaml
components:
# …
schemas:
# …
JsonResponse:
type: object
additionalProperties: false
properties:
result:
type: string
JsonSuccess:
allOf:
- $ref: "#/components/schemas/JsonResponse"
- required:
- result
- msg
- properties:
result:
enum:
- success
msg:
type: string
- example: {"msg": "", "result": "success"}
```
In order for `{"msg": "", "result": "success"}` to validate against `JsonSuccess`, it must validate against `JsonSuccess.allOf[0]` and validate against `JsonSuccess.allOf[1]` and validate against `JsonSuccess.allOf[2]` and validate against `JsonSuccess.allOf[3]`. But it does not validate against `JsonSuccess.allOf[0] == JsonResponse`, because `JsonResponse` specifies that the object may have a `result` property and no other properties (`additionalProperties: false`), while the example also has a `msg` property.
`yamole`’s merging before validation obscures this failure in a way that’s not supported by the specification: “`allOf` takes an array of object definitions that are used for **independent** validation… To be valid against `allOf`, the data provided by the client must be valid against all of the given subschemas.”
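The failure is easy to see with a minimal validator covering just the keywords involved (a sketch of the independent-validation rule, not the real openapi-core machinery; `json_response`/`json_success` mirror the YAML above):

```python
def validates(instance, schema):
    # Minimal checker for the keywords used above: allOf, required,
    # and properties together with additionalProperties: false.
    if "allOf" in schema:
        # Per the spec, each subschema is checked *independently*.
        return all(validates(instance, sub) for sub in schema["allOf"])
    props = schema.get("properties", {})
    if schema.get("additionalProperties") is False:
        if any(key not in props for key in instance):
            return False
    return all(key in instance for key in schema.get("required", []))

json_response = {
    "type": "object",
    "additionalProperties": False,
    "properties": {"result": {"type": "string"}},
}
json_success = {
    "allOf": [
        json_response,
        {"required": ["result", "msg"]},
        {"properties": {"result": {"enum": ["success"]}, "msg": {"type": "string"}}},
    ]
}

example = {"msg": "", "result": "success"}
print(validates(example, json_response))  # False: "msg" is an additional property
print(validates(example, json_success))   # False: allOf[0] already fails
```

Merging the subschemas first, as `yamole` does, erases exactly that `allOf[0]` failure.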
Cc @orientor @YagoGG
Optimize yamole
This is probably an issue for @YagoGG.
It appears that yamole is quite slow in how it parses our 1700+ line zulip.yaml file. Reading the yaml only takes 300ms for me, but then having yamole re-walk the tree with all its deep copies increases the time to 3.5 seconds, so almost a 12x difference.
This isn't a huge priority, since we cache the results internally, but it happens at startup.
It's not clear to me why we need so much `deepcopy` going on in the code.
| Hello @zulip/server-api members, this issue was labeled with the "area: documentation (api and integrations)" label, so you may want to check it out!
<!-- areaLabelAddition -->
Very interesting! This seems likely to be the reason why openapi-generator produces (at least for Go) bindings [where most every method returns just a `JsonSuccess`](https://chat.zulip.org/#narrow/stream/127-integrations/topic/go-zulip-api/near/991574), instead of objects with the appropriate data.
Here's upstream's documentation of a pattern that resembles what we're attempting to express here:
https://swagger.io/docs/specification/data-models/inheritance-and-polymorphism/
```yaml
components:
schemas:
BasicErrorModel:
type: object
required:
- message
- code
properties:
message:
type: string
code:
type: integer
minimum: 100
maximum: 600
ExtendedErrorModel:
allOf: # Combines the BasicErrorModel and the inline model
- $ref: '#/components/schemas/BasicErrorModel'
- type: object
required:
- rootCause
properties:
rootCause:
type: string
```
Happily this looks like it may not be very hard to adapt our incorrect schemas to. The differences that stand out to me are:
* `JsonResponse` shouldn't have `additionalProperties: false`. After all, the point is to have additional properties on it. Then `{"msg": "", "result": "success"}` would validate against `JsonSuccess.allOf[0] == JsonResponse`, as it should.
* Also `JsonResponse` should have its own `required: [result]`, though that isn't the critical point here.
* In `JsonSuccess`, the `required` and `properties` and `example` keys should all go with `type: object` on a single item of the `allOf` list, a sibling of the `$ref`. It doesn't make any sense for them to each be a separate item under `allOf` -- as if they were each their own schema.
@orientor do you have time to work on this restructuring?
@zulipbot claim
Removing additionalProperties will result in losing the guarantee that all properties have been documented and that there are no additional properties which haven't been documented. On the other hand, we can save the object received from yamole into the zulip.yaml file, since that would be the most schematically correct spec (without any allOfs), but this will make the file really long. So which method should I proceed with?
We can use `additionalProperties: false`, but only in “leaf” schemas from which other schemas don’t “inherit”. For example, `JsonResponse` and `JsonSuccess` cannot have `additionalProperties: false`, but this can:
```diff
/get_stream_id:
get:
[…]
responses:
"200":
description: Success.
content:
application/json:
schema:
allOf:
- $ref: "#/components/schemas/JsonSuccess"
- properties:
+ result: {}
+ msg: {}
stream_id:
type: integer
description: |
The ID of the given stream.
- - example: {"msg": "", "result": "success", "stream_id": 15}
+ additionalProperties: false
+ example: {"msg": "", "result": "success", "stream_id": 15}
```
It looks like the upcoming [OpenAPI 3.1.0](https://github.com/OAI/OpenAPI-Specification/blob/3.1.0-rc0/versions/3.1.0.md) will, by way of incorporating [JSON Schema 2019-09](https://json-schema.org/draft/2019-09/release-notes.html), support an [`unevaluatedProperties`](https://json-schema.org/draft/2019-09/json-schema-core.html#rfc.section.9.3.2.4) keyword that sees through `allOf` and `$ref` to address this use case. It may be some time before our libraries support this, though.
The [json-schema-merge-allof](https://github.com/mokkabonna/json-schema-merge-allof) library claims to do `allOf` merging correctly. Comparing its output to `yamole`’s might help highlight where we’re relying on `yamole`’s incorrect behavior.
Hello @orientor, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
You can reclaim this issue or claim any other issue by commenting `@zulipbot claim` on that issue.
Thanks for your contributions, and hope to see you again soon!
Hello @zulip/server-api members, this issue was labeled with the "area: documentation (api and integrations)" label, so you may want to check it out!
<!-- areaLabelAddition -->
Hmm, that's really slow. @YagoGG can you investigate this? Running it under `cProfile` might be helpful.
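For reference, a generic way to run a callable under `cProfile` and dump the hot spots (nothing Zulip-specific assumed; the commented-out usage line is illustrative):

```python
import cProfile
import io
import pstats

def profile_call(fn, *args, **kwargs):
    # Run fn under cProfile and print the five biggest cumulative-time entries.
    profiler = cProfile.Profile()
    profiler.enable()
    result = fn(*args, **kwargs)
    profiler.disable()
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
    print(out.getvalue())
    return result

# e.g. profile_call(parse_spec, "zulip.yaml")
# where parse_spec is whatever function loads and post-processes the file.
```

A `deepcopy`-heavy parser should show `copy.deepcopy` near the top of the cumulative column.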
I've been doing some profiling and the results are what one could expect: the final `return deepcopy(obj)` is taking a huge proportion of parsing time.
I won't be able to take care of this in the short term, so if anyone feels like working on this feel free to do so. At least I can try to provide some feedback and help along the process.
So, here's a proposal: What if we just emit a `.json` file as a cache after processing this, and only read that `.json` file if the `.yaml` file isn't newer? That would turn this into a 1-time cost that can happen as part of the production build process, rather than something that is slow for every process startup. JSON parsing is way faster than YAML parsing.
We'd still want to fix the yamole issue, but it'd move the perf problem somewhere a lot less important.
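A minimal sketch of that mtime-based cache idea (the parser is passed in as a callable, so nothing here depends on yamole or PyYAML; names are illustrative):

```python
import json
import os

def load_with_cache(source_path, cache_path, parse_source):
    # Reuse the JSON cache unless the source file is newer than it.
    if (os.path.exists(cache_path)
            and os.path.getmtime(cache_path) >= os.path.getmtime(source_path)):
        with open(cache_path) as f:
            return json.load(f)
    data = parse_source(source_path)
    with open(cache_path, "w") as f:
        json.dump(data, f)
    return data
```

With something like this, the expensive YAML parse would run once per change to the spec rather than once per process startup.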
Retagging this as "documentation (api)" -- "documentation (developer)" is for folks working on Zulip core. | 2020-09-29T01:16:21 |
|
zulip/zulip | 16,451 | zulip__zulip-16451 | [
"16373"
] | 26a81ab3aa5bdb81dbdcbe0672d975a03454d743 | diff --git a/tools/lib/provision_inner.py b/tools/lib/provision_inner.py
--- a/tools/lib/provision_inner.py
+++ b/tools/lib/provision_inner.py
@@ -10,6 +10,7 @@
sys.path.append(ZULIP_PATH)
from pygments import __version__ as pygments_version
+from pytz import VERSION as timezones_version
from scripts.lib.zulip_tools import (
ENDC,
@@ -47,6 +48,12 @@ def build_pygments_data_paths() -> List[str]:
]
return paths
+def build_timezones_data_paths() -> List[str]:
+ paths = [
+ "tools/setup/build_timezone_values",
+ ]
+ return paths
+
def compilemessages_paths() -> List[str]:
paths = ['zerver/management/commands/compilemessages.py']
paths += glob.glob('locale/*/LC_MESSAGES/*.po')
@@ -135,6 +142,16 @@ def need_to_run_build_pygments_data() -> bool:
[pygments_version],
)
+def need_to_run_build_timezone_data() -> bool:
+ if not os.path.exists("static/generated/timezones.json"):
+ return True
+
+ return is_digest_obsolete(
+ "build_timezones_data_hash",
+ build_timezones_data_paths(),
+ [timezones_version],
+ )
+
def need_to_run_compilemessages() -> bool:
if not os.path.exists('locale/language_name_map.json'):
# User may have cleaned their git checkout.
@@ -213,6 +230,16 @@ def main(options: argparse.Namespace) -> int:
else:
print("No need to run `tools/setup/build_pygments_data`.")
+ if options.is_force or need_to_run_build_timezone_data():
+ run(["tools/setup/build_timezone_values"])
+ write_new_digest(
+ "build_timezones_data_hash",
+ build_timezones_data_paths(),
+ [timezones_version],
+ )
+ else:
+ print("No need to run `tools/setup/build_timezone_values`.")
+
if options.is_force or need_to_run_inline_email_css():
run(["scripts/setup/inline_email_css.py"])
write_new_digest(
diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -43,4 +43,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '124.2'
+PROVISION_VERSION = '125.0'
| diff --git a/frontend_tests/node_tests/people.js b/frontend_tests/node_tests/people.js
--- a/frontend_tests/node_tests/people.js
+++ b/frontend_tests/node_tests/people.js
@@ -2,21 +2,18 @@
const {strict: assert} = require("assert");
+const {parseISO} = require("date-fns");
const _ = require("lodash");
-const moment = require("moment-timezone");
-const rewiremock = require("rewiremock/node");
+const MockDate = require("mockdate");
const {set_global, zrequire} = require("../zjsunit/namespace");
const {run_test} = require("../zjsunit/test");
-const people = rewiremock.proxy(() => zrequire("people"), {
- "moment-timezone": () => moment("20130208T080910"),
-});
-
set_global("message_store", {});
set_global("page_params", {});
set_global("settings_data", {});
+const people = zrequire("people");
const settings_config = zrequire("settings_config");
const visibility = settings_config.email_address_visibility_values;
const admins_only = visibility.admins_only.code;
@@ -28,6 +25,8 @@ function set_email_visibility(code) {
set_email_visibility(admins_only);
+MockDate.set(parseISO("20130208T080910").getTime());
+
const welcome_bot = {
email: "[email protected]",
user_id: 4,
@@ -399,7 +398,7 @@ run_test("user_timezone", () => {
page_params.twenty_four_hour_time = true;
assert.deepEqual(people.get_user_time_preferences(me.user_id), expected_pref);
- expected_pref.format = "h:mm A";
+ expected_pref.format = "h:mm a";
page_params.twenty_four_hour_time = false;
assert.deepEqual(people.get_user_time_preferences(me.user_id), expected_pref);
@@ -1111,3 +1110,6 @@ run_test("get_active_message_people", () => {
active_message_people = people.get_active_message_people();
assert.deepEqual(active_message_people, [steven, maria]);
});
+
+// reset to native Date()
+MockDate.reset();
diff --git a/frontend_tests/node_tests/timerender.js b/frontend_tests/node_tests/timerender.js
--- a/frontend_tests/node_tests/timerender.js
+++ b/frontend_tests/node_tests/timerender.js
@@ -2,7 +2,7 @@
const {strict: assert} = require("assert");
-const moment = require("moment");
+const {getTime} = require("date-fns");
const XDate = require("xdate");
const {set_global, zrequire} = require("../zjsunit/namespace");
@@ -150,10 +150,10 @@ run_test("get_timestamp_for_flatpickr", () => {
Date.now = () => new Date("2020-07-07T10:00:00Z").getTime();
// Invalid timestamps should show current time.
- assert.equal(func("random str").valueOf(), moment().valueOf());
+ assert.equal(func("random str").valueOf(), getTime(new Date()));
// Valid ISO timestamps should return Date objects.
- assert.equal(func(iso_timestamp).valueOf(), moment(unix_timestamp).valueOf());
+ assert.equal(func(iso_timestamp).valueOf(), getTime(new Date(unix_timestamp)));
// Restore the Date object.
Date.now = date_now;
| Replace moment.js with a smaller library with frontend timezone support
We don't use `moment.js` very much, and half the callpoints are hacks to suppress warnings. `moment` is also very large, and that's not going to change (https://momentjs.com/docs/#/-project-status/) as the project is in long-term maintenance mode. So we should look at replacing it with a more modern/smaller library.
```
tabbott@coset:~/zulip$ git grep moment[.]
static/js/markdown.js: moment.suppressDeprecationWarnings = true;
static/js/popovers.js: const dateFormat = moment.localeData().longDateFormat("LL");
static/js/portico/signup.js: $("#timezone").val(moment.tz.guess());
static/js/reminder.js: new_request.tz_guess = moment.tz.guess();
static/js/rendered_markdown.js: moment.suppressDeprecationWarnings = true;
static/js/settings.js: timezones: moment.tz.names(),
static/js/timerender.js: moment.suppressDeprecationWarnings = true;
static/js/timerender.js: moment.suppressDeprecationWarnings = false;
templates/analytics/realm_summary_table.html: $('#utctime')[0].innerHTML = moment.utc(now).format('YYYY-MM-DD HH:mm') + 'Z';
```
Tagging as a priority because this is one of our larger JavaScript dependencies.
| Hello @zulip/server-dependencies members, this issue was labeled with the "area: dependencies" label, so you may want to check it out!
<!-- areaLabelAddition -->
https://github.com/you-dont-need/You-Dont-Need-Momentjs#readme may be of use.
@zulipbot claim
Hey, I am stuck on an error while resolving it and eventually created a topic: https://chat.zulip.org/#narrow/stream/49-development-help/topic/Not.20a.20function.20error
Would love to know my error :)
https://zulip.readthedocs.io/en/latest/subsystems/dependencies.html may be helpful.
We probably also want to replace XDate, which also looks unmaintained, and will almost certainly be redundant with whatever replaces Moment.js.
Oh cool, will also look into it. As of now, I am trying to replace <code><a href = 'https://momentjs.com/'>moment.js</a></code> with <code><a href = 'https://date-fns.org/'>date-fns</a></code>. (date-fns has a parsed size of 11.5 kB, while that of moment.js is 217 kB.)
zulip/zulip | 16,512 | zulip__zulip-16512 | [
"12144"
] | 2d71ca1fb83c83a07f1014895c757ac479c8cb54 | diff --git a/zerver/management/commands/create_user.py b/zerver/management/commands/create_user.py
--- a/zerver/management/commands/create_user.py
+++ b/zerver/management/commands/create_user.py
@@ -71,7 +71,7 @@ def handle(self, *args: Any, **options: Any) -> None:
try:
if options['password_file'] is not None:
with open(options['password_file']) as f:
- pw = f.read()
+ pw = f.read().strip()
elif options['password'] is not None:
pw = options['password']
else:
| New line character issue when using create_user management command
The create_user management command reads the password from a text file created by the server admin. To run this command I tried creating this text file using Vim, nano, and echo (`echo pass > password.txt`, without the `-n` flag). Each time a newline character was automatically added to the end of the file. So if I set the content of the file to `helloworld` and try to log in to the server by entering `helloworld`, it would not let me log in, since the stored password ends in `\n` while the entered one does not. It was not obvious to me that the extra `\n` added by the editors was the reason the server rejected the credentials.
Should we remove the trailing `\n` character while reading the password from file?
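The behaviour is easy to reproduce in a few lines (the file name and password are just for illustration):

```python
import os
import tempfile

# Simulate `echo helloworld > password.txt` (no -n): a trailing newline is written.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("helloworld\n")

with open(path) as f:
    raw = f.read()
os.unlink(path)

assert raw == "helloworld\n"        # stored "password" keeps the extra newline
assert raw.strip() == "helloworld"  # what the admin actually typed
```

Stripping the value when the file is read, as the patch above does, makes the stored password match what the admin intended.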
| @zulipbot label "area: production"
@timabbott Do you think we should have a label for management commands? I couldn't really find a good label for labelling this issue.
Hello @zulip/server-production members, this issue was labeled with the "area: production" label, so you may want to check it out!
<!-- areaLabelAddition -->
I've been using the `production` area label for these if it's production-related, or `tooling` if it's development-only. I don't think there's enough issue volume for them to be worth their own label.
I agree, we should definitely `.strip()` the passwords in that command. | 2020-10-09T18:59:27 |
|
zulip/zulip | 16,569 | zulip__zulip-16569 | [
"15205"
] | 54dd612f5ca548612ee6c3e0f3814543f0b278f3 | diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -44,4 +44,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '111.3'
+PROVISION_VERSION = '112.1'
diff --git a/zerver/forms.py b/zerver/forms.py
--- a/zerver/forms.py
+++ b/zerver/forms.py
@@ -50,7 +50,7 @@
"Please contact your organization administrator to reactivate it."
PASSWORD_TOO_WEAK_ERROR = "The password is too weak."
AUTHENTICATION_RATE_LIMITED_ERROR = "You're making too many attempts to sign in. " + \
- "Try again in %s seconds or contact your organization administrator " + \
+ "Try again in {} seconds or contact your organization administrator " + \
"for help."
def email_is_not_mit_mailing_list(email: str) -> None:
@@ -351,7 +351,7 @@ def clean(self) -> Dict[str, Any]:
realm=realm, return_data=return_data)
except RateLimited as e:
secs_to_freedom = int(float(str(e)))
- raise ValidationError(AUTHENTICATION_RATE_LIMITED_ERROR % (secs_to_freedom,))
+ raise ValidationError(AUTHENTICATION_RATE_LIMITED_ERROR.format(secs_to_freedom))
if return_data.get("inactive_realm"):
raise AssertionError("Programming error: inactive realm in authentication form")
diff --git a/zerver/lib/markdown/fenced_code.py b/zerver/lib/markdown/fenced_code.py
--- a/zerver/lib/markdown/fenced_code.py
+++ b/zerver/lib/markdown/fenced_code.py
@@ -391,7 +391,7 @@ def format_code(self, lang: str, text: str) -> str:
lang=(lang or None),
noclasses=self.codehilite_conf['noclasses'][0])
- code = highliter.hilite()
+ code = highliter.hilite().rstrip('\n')
else:
code = CODE_WRAP.format(langclass, self._escape(text))
| diff --git a/frontend_tests/node_tests/markdown.js b/frontend_tests/node_tests/markdown.js
--- a/frontend_tests/node_tests/markdown.js
+++ b/frontend_tests/node_tests/markdown.js
@@ -184,7 +184,7 @@ stream_data.add_sub(amp_stream);
run_test("fenced_block_defaults", () => {
const input = "\n```\nfenced code\n```\n\nand then after\n";
const expected =
- '\n\n<div class="codehilite"><pre><span></span><code>fenced code\n</code></pre></div>\n\n\n\nand then after\n\n';
+ '\n\n<div class="codehilite"><pre><span></span><code>fenced code\n</code></pre></div>\n\n\nand then after\n\n';
const output = fenced_code.process_fenced_code(input);
assert.equal(output, expected);
});
@@ -294,13 +294,13 @@ run_test("marked", () => {
{
input: "\n```\nfenced code\n```\n\nand then after\n",
expected:
- '<div class="codehilite"><pre><span></span><code>fenced code\n</code></pre></div>\n\n\n<p>and then after</p>',
+ '<div class="codehilite"><pre><span></span><code>fenced code\n</code></pre></div>\n<p>and then after</p>',
},
{
input:
"\n```\n fenced code trailing whitespace \n```\n\nand then after\n",
expected:
- '<div class="codehilite"><pre><span></span><code> fenced code trailing whitespace\n</code></pre></div>\n\n\n<p>and then after</p>',
+ '<div class="codehilite"><pre><span></span><code> fenced code trailing whitespace\n</code></pre></div>\n<p>and then after</p>',
},
{
input: "* a\n* list \n* here",
@@ -309,12 +309,12 @@ run_test("marked", () => {
{
input: "\n```c#\nfenced code special\n```\n\nand then after\n",
expected:
- '<div class="codehilite" data-code-language="C#"><pre><span></span><code>fenced code special\n</code></pre></div>\n\n\n<p>and then after</p>',
+ '<div class="codehilite" data-code-language="C#"><pre><span></span><code>fenced code special\n</code></pre></div>\n<p>and then after</p>',
},
{
input: "\n```vb.net\nfenced code dot\n```\n\nand then after\n",
expected:
- '<div class="codehilite" data-code-language="VB.net"><pre><span></span><code>fenced code dot\n</code></pre></div>\n\n\n<p>and then after</p>',
+ '<div class="codehilite" data-code-language="VB.net"><pre><span></span><code>fenced code dot\n</code></pre></div>\n<p>and then after</p>',
},
{
input: "Some text first\n* a\n* list \n* here\n\nand then after",
diff --git a/zerver/tests/fixtures/markdown_test_cases.json b/zerver/tests/fixtures/markdown_test_cases.json
--- a/zerver/tests/fixtures/markdown_test_cases.json
+++ b/zerver/tests/fixtures/markdown_test_cases.json
@@ -23,7 +23,7 @@
{
"name": "ampampamp",
"input": "& & &amp;\n~~~~\n& & &amp;\n~~~~\n & & &amp;",
- "expected_output": "<p>& & &amp;</p>\n<div class=\"codehilite\"><pre><span></span><code>& &amp; &amp;amp;\n</code></pre></div>\n\n\n<div class=\"codehilite\"><pre><span></span><code>& &amp; &amp;amp;\n</code></pre></div>"
+ "expected_output": "<p>& & &amp;</p>\n<div class=\"codehilite\"><pre><span></span><code>& &amp; &amp;amp;\n</code></pre></div>\n<div class=\"codehilite\"><pre><span></span><code>& &amp; &amp;amp;\n</code></pre></div>"
},
{
"name": "basic_paragraph",
@@ -34,8 +34,8 @@
{
"name": "codeblock_multiline",
"input": "Hamlet once said\n~~~~\ndef func():\n x = 1\n\n y = 2\n\n z = 3\n~~~~\nAnd all was good.",
- "expected_output": "<p>Hamlet once said</p>\n<div class=\"codehilite\"><pre><span></span><code>def func():\n x = 1\n\n y = 2\n\n z = 3\n</code></pre></div>\n\n\n<p>And all was good.</p>",
- "text_content": "Hamlet once said\ndef func():\n x = 1\n\n y = 2\n\n z = 3\n\n\n\nAnd all was good."
+ "expected_output": "<p>Hamlet once said</p>\n<div class=\"codehilite\"><pre><span></span><code>def func():\n x = 1\n\n y = 2\n\n z = 3\n</code></pre></div>\n<p>And all was good.</p>",
+ "text_content": "Hamlet once said\ndef func():\n x = 1\n\n y = 2\n\n z = 3\n\nAnd all was good."
},
{
"name": "test",
@@ -45,8 +45,8 @@
{
"name": "codeblock_trailing_whitespace",
"input": "Hamlet once said\n~~~~\ndef func():\n x = 1\n\n y = 2\t\t\n\n z = 3 \n~~~~\nAnd all was good.",
- "expected_output": "<p>Hamlet once said</p>\n<div class=\"codehilite\"><pre><span></span><code>def func():\n x = 1\n\n y = 2\n\n z = 3\n</code></pre></div>\n\n\n<p>And all was good.</p>",
- "text_content": "Hamlet once said\ndef func():\n x = 1\n\n y = 2\n\n z = 3\n\n\n\nAnd all was good."
+ "expected_output": "<p>Hamlet once said</p>\n<div class=\"codehilite\"><pre><span></span><code>def func():\n x = 1\n\n y = 2\n\n z = 3\n</code></pre></div>\n<p>And all was good.</p>",
+ "text_content": "Hamlet once said\ndef func():\n x = 1\n\n y = 2\n\n z = 3\n\nAnd all was good."
},
{
"name": "inline_code_spaces",
@@ -63,14 +63,14 @@
{
"name": "codeblock_backticks",
"input": "\n```\nfenced code\n```\n\n```inline code```\n",
- "expected_output": "<div class=\"codehilite\"><pre><span></span><code>fenced code\n</code></pre></div>\n\n\n<p><code>inline code</code></p>",
- "text_content": "fenced code\n\n\n\ninline code"
+ "expected_output": "<div class=\"codehilite\"><pre><span></span><code>fenced code\n</code></pre></div>\n<p><code>inline code</code></p>",
+ "text_content": "fenced code\n\ninline code"
},
{
"name": "hanging_multi_codeblock",
"input": "Hamlet said:\n~~~~\ndef speak(self):\n x = 1\n# Comment to make this code block longer to test Trac #1162\n~~~~\n\nThen he mentioned ````y = 4 + x**2```` and\n~~~~\ndef foobar(self):\n return self.baz()",
- "expected_output": "<p>Hamlet said:</p>\n<div class=\"codehilite\"><pre><span></span><code>def speak(self):\n x = 1\n# Comment to make this code block longer to test Trac #1162\n</code></pre></div>\n\n\n<p>Then he mentioned <code>y = 4 + x**2</code> and</p>\n<div class=\"codehilite\"><pre><span></span><code>def foobar(self):\n return self.baz()\n</code></pre></div>",
- "text_content": "Hamlet said:\ndef speak(self):\n x = 1\n# Comment to make this code block longer to test Trac #1162\n\n\n\nThen he mentioned y = 4 + x**2 and\ndef foobar(self):\n return self.baz()\n"
+ "expected_output": "<p>Hamlet said:</p>\n<div class=\"codehilite\"><pre><span></span><code>def speak(self):\n x = 1\n# Comment to make this code block longer to test Trac #1162\n</code></pre></div>\n<p>Then he mentioned <code>y = 4 + x**2</code> and</p>\n<div class=\"codehilite\"><pre><span></span><code>def foobar(self):\n return self.baz()\n</code></pre></div>",
+ "text_content": "Hamlet said:\ndef speak(self):\n x = 1\n# Comment to make this code block longer to test Trac #1162\n\nThen he mentioned y = 4 + x**2 and\ndef foobar(self):\n return self.baz()\n"
},
{
"name": "fenced_quote",
@@ -87,7 +87,7 @@
{
"name": "complexly_nested_quote",
"input": "I heard about this second hand...\n~~~ quote\n\nHe said:\n~~~ quote\nThe customer is complaining.\n\nThey looked at this code:\n``` \ndef hello(): print 'hello\n```\nThey would prefer:\n~~~\ndef hello()\n puts 'hello'\nend\n~~~\n\nPlease advise.\n~~~\n\nShe said:\n~~~ quote\nJust send them this:\n```\necho \"hello\n\"\n```\n~~~",
- "expected_output": "<p>I heard about this second hand...</p>\n<blockquote>\n<p>He said:</p>\n<blockquote>\n<p>The customer is complaining.</p>\n<p>They looked at this code:</p>\n<div class=\"codehilite\"><pre><span></span><code>def hello(): print 'hello\n</code></pre></div>\n\n\n<p>They would prefer:</p>\n</blockquote>\n<p>def hello()<br>\n puts 'hello'<br>\nend</p>\n</blockquote>\n<p>Please advise.</p>\n<div class=\"codehilite\"><pre><span></span><code>She said:\n~~~ quote\nJust send them this:\n```\necho "hello\n"\n```\n</code></pre></div>",
+ "expected_output": "<p>I heard about this second hand...</p>\n<blockquote>\n<p>He said:</p>\n<blockquote>\n<p>The customer is complaining.</p>\n<p>They looked at this code:</p>\n<div class=\"codehilite\"><pre><span></span><code>def hello(): print 'hello\n</code></pre></div>\n<p>They would prefer:</p>\n</blockquote>\n<p>def hello()<br>\n puts 'hello'<br>\nend</p>\n</blockquote>\n<p>Please advise.</p>\n<div class=\"codehilite\"><pre><span></span><code>She said:\n~~~ quote\nJust send them this:\n```\necho "hello\n"\n```\n</code></pre></div>",
"text_content": "I heard about this second hand...\n> He said:\n> > The customer is complaining.\n> > They looked at this code:\n> > def hello(): print 'hello\n> > They would prefer:\n> def hello()\n> puts 'hello'\n> end\n\nPlease advise.\nShe said:\n~~~ quote\nJust send them this:\n```\necho \"hello\n\"\n```\n"
},
{
@@ -860,39 +860,39 @@
{
"name": "spoilers_fenced_spoiler",
"input": "```spoiler header\ncontent\n```\noutside spoiler\n",
- "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p>header</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p>content</p>\n</div></div>\n\n<p>outside spoiler</p>",
+ "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n<p>header</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p>content</p>\n</div></div>\n<p>outside spoiler</p>",
"text_content": "header (…)\noutside spoiler"
},
{
"name": "spoilers_empty_header",
"input": "```spoiler\ncontent\n```\noutside spoiler\n",
- "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p>content</p>\n</div></div>\n\n<p>outside spoiler</p>",
+ "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p>content</p>\n</div></div>\n<p>outside spoiler</p>",
"text_content": "(…)\noutside spoiler"
},
{
"name": "spoilers_script_tags",
"input": "```spoiler <script>alert(1)</script>\n<script>alert(1)</script>\n```",
- "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p><script>alert(1)</script></p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p><script>alert(1)</script></p>\n</div></div>",
- "marked_expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p><script>alert(1)</script>\n\n</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p><script>alert(1)</script>\n\n</p>\n</div></div>",
+ "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n<p><script>alert(1)</script></p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p><script>alert(1)</script></p>\n</div></div>",
+ "marked_expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n<p><script>alert(1)</script>\n\n</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p><script>alert(1)</script>\n\n</p>\n</div></div>",
"text_content": "<script>alert(1)</script> (…)\n"
},
{
"name": "spoilers_block_quote",
"input": "~~~quote\n```spoiler header\ncontent\n```\noutside spoiler\n~~~\noutside quote",
- "expected_output": "<blockquote>\n<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p>header</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p>content</p>\n</div></div>\n\n<p>outside spoiler</p>\n</blockquote>\n<p>outside quote</p>",
+ "expected_output": "<blockquote>\n<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n<p>header</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p>content</p>\n</div></div>\n<p>outside spoiler</p>\n</blockquote>\n<p>outside quote</p>",
"text_content": "> header (…)\n> outside spoiler\n\noutside quote"
},
{
"name": "spoilers_with_header_markdown",
"input": "```spoiler [Header](https://example.com) :smile:\ncontent\n```",
- "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p><a href=\"https://example.com\">Header</a> <span aria-label=\"smile\" class=\"emoji emoji-1f642\" role=\"img\" title=\"smile\">:smile:</span></p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p>content</p>\n</div></div>",
+ "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n<p><a href=\"https://example.com\">Header</a> <span aria-label=\"smile\" class=\"emoji emoji-1f642\" role=\"img\" title=\"smile\">:smile:</span></p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p>content</p>\n</div></div>",
"text_content": "Header 🙂 (…)\n"
},
{
"name": "spoiler_with_inline_image",
"input": "```spoiler header\nContent http://example.com/image.png\n```",
- "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p>header</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p>Content <a href=\"http://example.com/image.png\">http://example.com/image.png</a></p>\n<div class=\"message_inline_image\"><a href=\"http://example.com/image.png\"><img data-src-fullsize=\"/thumbnail?url=http%3A%2F%2Fexample.com%2Fimage.png&size=full\" src=\"/thumbnail?url=http%3A%2F%2Fexample.com%2Fimage.png&size=thumbnail\"></a></div></div></div>",
- "marked_expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p>header</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p>Content <a href=\"http://example.com/image.png\">http://example.com/image.png</a></p>\n</div></div>",
+ "expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n<p>header</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p>Content <a href=\"http://example.com/image.png\">http://example.com/image.png</a></p>\n<div class=\"message_inline_image\"><a href=\"http://example.com/image.png\"><img data-src-fullsize=\"/thumbnail?url=http%3A%2F%2Fexample.com%2Fimage.png&size=full\" src=\"/thumbnail?url=http%3A%2F%2Fexample.com%2Fimage.png&size=thumbnail\"></a></div></div></div>",
+ "marked_expected_output": "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n<p>header</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p>Content <a href=\"http://example.com/image.png\">http://example.com/image.png</a></p>\n</div></div>",
"text_content": "header (…)\n"
}
],
diff --git a/zerver/tests/test_import_export.py b/zerver/tests/test_import_export.py
--- a/zerver/tests/test_import_export.py
+++ b/zerver/tests/test_import_export.py
@@ -990,7 +990,7 @@ def get_userpresence_timestamp(r: Realm) -> Set[Any]:
original_msg = Message.objects.get(content=special_characters_message, sender__realm=original_realm)
self.assertEqual(
original_msg.rendered_content,
- '<div class="codehilite"><pre><span></span><code>'\n</code></pre></div>\n\n\n'
+ '<div class="codehilite"><pre><span></span><code>'\n</code></pre></div>\n'
f'<p><span class="user-mention" data-user-id="{orig_polonius_user.id}">@Polonius</span></p>',
)
imported_polonius_user = UserProfile.objects.get(delivery_email=self.example_email("polonius"),
diff --git a/zerver/tests/test_markdown.py b/zerver/tests/test_markdown.py
--- a/zerver/tests/test_markdown.py
+++ b/zerver/tests/test_markdown.py
@@ -725,7 +725,7 @@ def test_inline_youtube_preview(self) -> None:
msg = """\n```spoiler Check out this Pycon Video\nhttps://www.youtube.com/watch?v=0c46YHS3RY8\n```"""
converted = markdown_convert_wrapper(msg)
- self.assertEqual(converted, '<div class="spoiler-block"><div class="spoiler-header">\n\n<p>Check out this Pycon Video</p>\n</div><div class="spoiler-content" aria-hidden="true">\n\n<p><a href="https://www.youtube.com/watch?v=0c46YHS3RY8">https://www.youtube.com/watch?v=0c46YHS3RY8</a></p>\n<div class="youtube-video message_inline_image"><a data-id="0c46YHS3RY8" href="https://www.youtube.com/watch?v=0c46YHS3RY8"><img src="https://i.ytimg.com/vi/0c46YHS3RY8/default.jpg"></a></div></div></div>')
+ self.assertEqual(converted, '<div class="spoiler-block"><div class="spoiler-header">\n<p>Check out this Pycon Video</p>\n</div><div class="spoiler-content" aria-hidden="true">\n<p><a href="https://www.youtube.com/watch?v=0c46YHS3RY8">https://www.youtube.com/watch?v=0c46YHS3RY8</a></p>\n<div class="youtube-video message_inline_image"><a data-id="0c46YHS3RY8" href="https://www.youtube.com/watch?v=0c46YHS3RY8"><img src="https://i.ytimg.com/vi/0c46YHS3RY8/default.jpg"></a></div></div></div>')
# Test youtube urls in normal messages.
msg = '[Youtube link](https://www.youtube.com/watch?v=0c46YHS3RY8)'
@@ -896,7 +896,7 @@ def make_inline_twitter_preview(url: str, tweet_html: str, image_html: str='') -
msg = '```spoiler secret tweet\nTweet: http://twitter.com/wdaher/status/287977969287315456\n```'
converted = markdown_convert_wrapper(msg)
- rendered_spoiler = "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n\n<p>secret tweet</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n\n<p>Tweet: {}</p>\n{}</div></div>"
+ rendered_spoiler = "<div class=\"spoiler-block\"><div class=\"spoiler-header\">\n<p>secret tweet</p>\n</div><div class=\"spoiler-content\" aria-hidden=\"true\">\n<p>Tweet: {}</p>\n{}</div></div>"
self.assertEqual(converted, rendered_spoiler.format(
make_link('http://twitter.com/wdaher/status/287977969287315456'),
make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315456', normal_tweet_html)))
@@ -2134,8 +2134,6 @@ def test_html_entity_conversion(self) -> None:
<div class="codehilite"><pre><span></span><code>&copy;
&copy;
</code></pre></div>
-
-
<p>Test quote:</p>
<blockquote>
<p>©</p>
| Python-Markdown extensions registered with deprecated interface
Running tests with Python warnings enabled (e.g. `python3 -Wd tools/test-backend zerver.tests.test_auth_backends.GitHubAuthBackendTest.test_config_error_development`) shows these deprecation warnings:
```
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.fenced_code.FencedCodeExtension.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
/srv/zulip/zerver/lib/bugdown/api_arguments_table_generator.py:27: DeprecationWarning: Using the add method to register a processor or pattern is deprecated. Use the `register` method instead.
'generate_api_arguments', APIArgumentsTablePreprocessor(md, self.getConfigs()), '_begin'
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.api_arguments_table_generator.MarkdownArgumentsTableGenerator.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
/srv/zulip/zerver/lib/bugdown/api_return_values_table_generator.py:18: DeprecationWarning: Using the add method to register a processor or pattern is deprecated. Use the `register` method instead.
'generate_return_values', APIReturnValuesTablePreprocessor(md, self.getConfigs()), '_begin'
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.api_return_values_table_generator.MarkdownReturnValuesTableGenerator.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
/srv/zulip/zerver/lib/bugdown/nested_code_blocks.py:13: DeprecationWarning: Using the add method to register a processor or pattern is deprecated. Use the `register` method instead.
'_end'
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.nested_code_blocks.NestedCodeBlocksRenderer.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
/srv/zulip/zerver/lib/bugdown/tabbed_sections.py:83: DeprecationWarning: Using the add method to register a processor or pattern is deprecated. Use the `register` method instead.
'tabbed_sections', TabbedSectionsPreprocessor(md, self.getConfigs()), '_end')
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.tabbed_sections.TabbedSectionsGenerator.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
/srv/zulip/zerver/lib/bugdown/help_settings_links.py:68: DeprecationWarning: Using the add method to register a processor or pattern is deprecated. Use the `register` method instead.
md.preprocessors.add('setting', Setting(), '_begin')
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.help_settings_links.SettingHelpExtension.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
/srv/zulip/zerver/lib/bugdown/help_relative_links.py:72: DeprecationWarning: Using the add method to register a processor or pattern is deprecated. Use the `register` method instead.
md.preprocessors.add('help_relative_links', RelativeLinks(), '_begin')
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.help_relative_links.RelativeLinksHelpExtension.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
/srv/zulip/zerver/lib/bugdown/help_emoticon_translations_table.py:41: DeprecationWarning: Using the add method to register a processor or pattern is deprecated. Use the `register` method instead.
md.preprocessors.add('emoticon_translations', EmoticonTranslation(), '_end')
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.help_emoticon_translations_table.EmoticonTranslationsHelpExtension.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
/srv/zulip/zerver/lib/bugdown/include.py:18: DeprecationWarning: Using the add method to register a processor or pattern is deprecated. Use the `register` method instead.
'_begin'
/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py:124: DeprecationWarning: The 'md_globals' parameter of 'zerver.lib.bugdown.include.MarkdownIncludeCustom.extendMarkdown' is deprecated.
ext._extendMarkdown(self)
```
See
https://python-markdown.github.io/change_log/release-3.0/#homegrown-ordereddict-has-been-replaced-with-a-purpose-built-registry
https://python-markdown.github.io/change_log/release-3.0/#md_globals-keyword-deprecated-from-extension-api
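For context, Python-Markdown 3.x replaced the positional `add(name, item, '_begin'/'_end')` calls with `register(item, name, priority)` on a priority-based `Registry`, and `extendMarkdown` lost its `md_globals` parameter. A minimal stdlib sketch of the priority-registry idea (an illustration only, not the real `markdown.util.Registry`):

```python
class Registry:
    """Toy priority registry, modeled loosely on markdown.util.Registry."""

    def __init__(self):
        self._items = {}  # name -> (priority, item)

    def register(self, item, name, priority):
        # Replaces the deprecated add(name, item, '_begin'/'_end'):
        # ordering is now explicit via a numeric priority.
        self._items[name] = (priority, item)

    def deregister(self, name):
        del self._items[name]

    def __iter__(self):
        # Items with the highest priority run first.
        for _name, (_prio, item) in sorted(
            self._items.items(), key=lambda kv: kv[1][0], reverse=True
        ):
            yield item


registry = Registry()
registry.register("normalize_whitespace", "normalize", 30)
registry.register("fenced_code", "fenced", 25)
registry.register("my_custom_preprocessor", "custom", 27)
print(list(registry))  # highest priority first
```

Choosing a priority between two existing entries (here 27, between 30 and 25) is how an extension slots itself into the pipeline where `'_begin'`/`'_end'` used to be passed.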
| Hello @zulip/server-dependencies, @zulip/server-markdown members, this issue was labeled with the "area: dependencies", "area: markdown" labels, so you may want to check it out!
<!-- areaLabelAddition -->
@aero31aero can you look into these?
@zulipbot claim
Hello @aero31aero, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
You can reclaim this issue or claim any other issue by commenting `@zulipbot claim` on that issue.
Thanks for your contributions, and hope to see you again soon! | 2020-10-19T05:17:02 |
zulip/zulip | 16,580 | zulip__zulip-16580 | [
"16498",
"20509",
"20804"
] | 9d460a513e4a76e4942cb81ceb9060868012a926 | diff --git a/zerver/views/video_calls.py b/zerver/views/video_calls.py
--- a/zerver/views/video_calls.py
+++ b/zerver/views/video_calls.py
@@ -10,6 +10,7 @@
import requests
from defusedxml import ElementTree
from django.conf import settings
+from django.core.signing import Signer
from django.http import HttpRequest, HttpResponse
from django.middleware import csrf
from django.shortcuts import redirect, render
@@ -168,35 +169,27 @@ def deauthorize_zoom_user(request: HttpRequest) -> HttpResponse:
return json_success(request)
-def get_bigbluebutton_url(request: HttpRequest, user_profile: UserProfile) -> HttpResponse:
+@has_request_variables
+def get_bigbluebutton_url(
+ request: HttpRequest, user_profile: UserProfile, meeting_name: str = REQ()
+) -> HttpResponse:
# https://docs.bigbluebutton.org/dev/api.html#create for reference on the API calls
# https://docs.bigbluebutton.org/dev/api.html#usage for reference for checksum
id = "zulip-" + str(random.randint(100000000000, 999999999999))
- password = b32encode(secrets.token_bytes(7))[:10].decode()
- checksum = hashlib.sha256(
- (
- "create"
- + "meetingID="
- + id
- + "&moderatorPW="
- + password
- + "&attendeePW="
- + password
- + "a"
- + settings.BIG_BLUE_BUTTON_SECRET
- ).encode()
- ).hexdigest()
- url = append_url_query_string(
- "/calls/bigbluebutton/join",
- urlencode(
- {
- "meeting_id": id,
- "password": password,
- "checksum": checksum,
- }
- ),
+ password = b32encode(secrets.token_bytes(7))[:20].decode()
+
+ # We sign our data here to ensure a Zulip user can not tamper with
+ # the join link to gain access to other meetings that are on the
+ # same bigbluebutton server.
+ signed = Signer().sign_object(
+ {
+ "meeting_id": id,
+ "name": meeting_name,
+ "password": password,
+ }
)
- return json_success(request, data={"url": url})
+ url = append_url_query_string("/calls/bigbluebutton/join", "bigbluebutton=" + signed)
+ return json_success(request, {"url": url})
# We use zulip_login_required here mainly to get access to the user's
@@ -207,55 +200,78 @@ def get_bigbluebutton_url(request: HttpRequest, user_profile: UserProfile) -> Ht
@zulip_login_required
@never_cache
@has_request_variables
-def join_bigbluebutton(
- request: HttpRequest,
- meeting_id: str = REQ(),
- password: str = REQ(),
- checksum: str = REQ(),
-) -> HttpResponse:
+def join_bigbluebutton(request: HttpRequest, bigbluebutton: str = REQ()) -> HttpResponse:
assert request.user.is_authenticated
if settings.BIG_BLUE_BUTTON_URL is None or settings.BIG_BLUE_BUTTON_SECRET is None:
raise JsonableError(_("BigBlueButton is not configured."))
- else:
- try:
- response = VideoCallSession().get(
- append_url_query_string(
- settings.BIG_BLUE_BUTTON_URL + "api/create",
- urlencode(
- {
- "meetingID": meeting_id,
- "moderatorPW": password,
- "attendeePW": password + "a",
- "checksum": checksum,
- }
- ),
- )
- )
- response.raise_for_status()
- except requests.RequestException:
- raise JsonableError(_("Error connecting to the BigBlueButton server."))
-
- payload = ElementTree.fromstring(response.text)
- if payload.find("messageKey").text == "checksumError":
- raise JsonableError(_("Error authenticating to the BigBlueButton server."))
-
- if payload.find("returncode").text != "SUCCESS":
- raise JsonableError(_("BigBlueButton server returned an unexpected error."))
-
- join_params = urlencode(
- {
- "meetingID": meeting_id,
- "password": password,
- "fullName": request.user.full_name,
- },
- quote_via=quote,
- )
- checksum = hashlib.sha256(
- ("join" + join_params + settings.BIG_BLUE_BUTTON_SECRET).encode()
- ).hexdigest()
- redirect_url_base = append_url_query_string(
- settings.BIG_BLUE_BUTTON_URL + "api/join", join_params
+ try:
+ bigbluebutton_data = Signer().unsign_object(bigbluebutton)
+ except Exception:
+ raise JsonableError(_("Invalid signature."))
+
+ create_params = urlencode(
+ {
+ "meetingID": bigbluebutton_data["meeting_id"],
+ "name": bigbluebutton_data["name"],
+ "moderatorPW": bigbluebutton_data["password"],
+ # We generate the attendee password from moderatorPW,
+ # because the BigBlueButton API requires a separate
+ # password. This integration is designed to have all users
+ # join as moderators, so we generate attendeePW by
+ # truncating the moderatorPW while keeping it long enough
+ # to not be vulnerable to brute force attacks.
+ "attendeePW": bigbluebutton_data["password"][:16],
+ },
+ quote_via=quote,
+ )
+
+ checksum = hashlib.sha256(
+ ("create" + create_params + settings.BIG_BLUE_BUTTON_SECRET).encode()
+ ).hexdigest()
+
+ try:
+ response = VideoCallSession().get(
+ append_url_query_string(settings.BIG_BLUE_BUTTON_URL + "api/create", create_params)
+ + "&checksum="
+ + checksum
)
- return redirect(append_url_query_string(redirect_url_base, "checksum=" + checksum))
+ response.raise_for_status()
+ except requests.RequestException:
+ raise JsonableError(_("Error connecting to the BigBlueButton server."))
+
+ payload = ElementTree.fromstring(response.text)
+ if payload.find("messageKey").text == "checksumError":
+ raise JsonableError(_("Error authenticating to the BigBlueButton server."))
+
+ if payload.find("returncode").text != "SUCCESS":
+ raise JsonableError(_("BigBlueButton server returned an unexpected error."))
+
+ join_params = urlencode(
+ {
+ "meetingID": bigbluebutton_data["meeting_id"],
+ # We use the moderator password here to grant ever user
+ # full moderator permissions to the bigbluebutton session.
+ "password": bigbluebutton_data["password"],
+ "fullName": request.user.full_name,
+ # https://docs.bigbluebutton.org/dev/api.html#create
+ # The createTime option is used to have the user redirected to a link
+ # that is only valid for this meeting.
+ #
+ # Even if the same link in Zulip is used again, a new
+ # createTime parameter will be created, as the meeting on
+ # the BigBlueButton server has to be recreated. (after a
+ # few minutes)
+ "createTime": payload.find("createTime").text,
+ },
+ quote_via=quote,
+ )
+
+ checksum = hashlib.sha256(
+ ("join" + join_params + settings.BIG_BLUE_BUTTON_SECRET).encode()
+ ).hexdigest()
+ redirect_url_base = append_url_query_string(
+ settings.BIG_BLUE_BUTTON_URL + "api/join", join_params
+ )
+ return redirect(append_url_query_string(redirect_url_base, "checksum=" + checksum))
| diff --git a/frontend_tests/node_tests/compose_video.js b/frontend_tests/node_tests/compose_video.js
--- a/frontend_tests/node_tests/compose_video.js
+++ b/frontend_tests/node_tests/compose_video.js
@@ -27,6 +27,7 @@ set_global("ResizeObserver", function () {
const server_events_dispatch = zrequire("server_events_dispatch");
const compose_ui = zrequire("compose_ui");
+const compose_closed = zrequire("compose_closed_ui");
const compose = zrequire("compose");
function stub_out_video_calls() {
const $elem = $("#below-compose-content .video_link");
@@ -210,8 +211,11 @@ test("videos", ({override, override_rewire}) => {
page_params.realm_video_chat_provider =
realm_available_video_chat_providers.big_blue_button.id;
+ compose_closed.get_recipient_label = () => "a";
+
channel.get = (options) => {
assert.equal(options.url, "/json/calls/bigbluebutton/create");
+ assert.equal(options.data.meeting_name, "a meeting");
options.success({
url: "/calls/bigbluebutton/join?meeting_id=%22zulip-1%22&password=%22AAAAAAAAAA%22&checksum=%2232702220bff2a22a44aee72e96cfdb4c4091752e%22",
});
diff --git a/zerver/tests/test_create_video_call.py b/zerver/tests/test_create_video_call.py
--- a/zerver/tests/test_create_video_call.py
+++ b/zerver/tests/test_create_video_call.py
@@ -1,9 +1,11 @@
from unittest import mock
import responses
+from django.core.signing import Signer
from django.http import HttpResponseRedirect
from zerver.lib.test_classes import ZulipTestCase
+from zerver.lib.url_encoding import append_url_query_string
class TestVideoCall(ZulipTestCase):
@@ -11,6 +13,15 @@ def setUp(self) -> None:
super().setUp()
self.user = self.example_user("hamlet")
self.login_user(self.user)
+ # Signing for bbb
+ self.signer = Signer()
+ self.signed_bbb_a_object = self.signer.sign_object(
+ {
+ "meeting_id": "a",
+ "name": "a",
+ "password": "a",
+ }
+ )
def test_register_video_request_no_settings(self) -> None:
with self.settings(VIDEO_ZOOM_CLIENT_ID=None):
@@ -168,46 +179,70 @@ def test_deauthorize_zoom_user(self) -> None:
def test_create_bigbluebutton_link(self) -> None:
with mock.patch("zerver.views.video_calls.random.randint", return_value="1"), mock.patch(
- "secrets.token_bytes", return_value=b"\x00" * 7
+ "secrets.token_bytes", return_value=b"\x00" * 12
):
- response = self.client_get("/json/calls/bigbluebutton/create")
+ response = self.client_get(
+ "/json/calls/bigbluebutton/create?meeting_name=general > meeting"
+ )
self.assert_json_success(response)
self.assertEqual(
response.json()["url"],
- "/calls/bigbluebutton/join?meeting_id=zulip-1&password=AAAAAAAAAA"
- "&checksum=d5eb2098bcd0e69a33caf2b18490991b843c8fa6be779316b4303c7990aca687",
+ append_url_query_string(
+ "/calls/bigbluebutton/join",
+ "bigbluebutton="
+ + self.signer.sign_object(
+ {
+ "meeting_id": "zulip-1",
+ "name": "general > meeting",
+ "password": "AAAAAAAAAAAAAAAAAAAA",
+ }
+ ),
+ ),
)
@responses.activate
def test_join_bigbluebutton_redirect(self) -> None:
responses.add(
responses.GET,
- "https://bbb.example.com/bigbluebutton/api/create?meetingID=zulip-1&moderatorPW=a&attendeePW=aa&checksum=check",
- "<response><returncode>SUCCESS</returncode><messageKey/></response>",
+ "https://bbb.example.com/bigbluebutton/api/create?meetingID=a&name=a"
+ "&moderatorPW=a&attendeePW=a&checksum=131bdec35f62fc63d5436e6f791d6d7aed7cf79ef256c03597e51d320d042823",
+ "<response><returncode>SUCCESS</returncode><messageKey/><createTime>0</createTime></response>",
)
response = self.client_get(
- "/calls/bigbluebutton/join",
- {"meeting_id": "zulip-1", "password": "a", "checksum": "check"},
+ "/calls/bigbluebutton/join", {"bigbluebutton": self.signed_bbb_a_object}
)
self.assertEqual(response.status_code, 302)
self.assertEqual(isinstance(response, HttpResponseRedirect), True)
self.assertEqual(
response.url,
- "https://bbb.example.com/bigbluebutton/api/join?meetingID=zulip-1&password=a"
- "&fullName=King%20Hamlet&checksum=ca78d6d3c3e04918bfab9d7d6cbc6e50602ab2bdfe1365314570943346a71a00",
+ "https://bbb.example.com/bigbluebutton/api/join?meetingID=a&"
+ "password=a&fullName=King%20Hamlet&createTime=0&checksum=47ca959b4ff5c8047a5a56d6e99c07e17eac43dbf792afc0a2a9f6491ec0048b",
+ )
+
+ @responses.activate
+ def test_join_bigbluebutton_invalid_signature(self) -> None:
+ responses.add(
+ responses.GET,
+ "https://bbb.example.com/bigbluebutton/api/create?meetingID=a&name=a"
+ "&moderatorPW=a&attendeePW=a&checksum=131bdec35f62fc63d5436e6f791d6d7aed7cf79ef256c03597e51d320d042823",
+ "<response><returncode>SUCCESS</returncode><messageKey/><createTime>0</createTime></response>",
+ )
+ response = self.client_get(
+ "/calls/bigbluebutton/join", {"bigbluebutton": self.signed_bbb_a_object + "zoo"}
)
+ self.assert_json_error(response, "Invalid signature.")
@responses.activate
- def test_join_bigbluebutton_redirect_wrong_check(self) -> None:
+ def test_join_bigbluebutton_redirect_wrong_big_blue_button_checksum(self) -> None:
responses.add(
responses.GET,
- "https://bbb.example.com/bigbluebutton/api/create?meetingID=zulip-1&moderatorPW=a&attendeePW=aa&checksum=check",
+ "https://bbb.example.com/bigbluebutton/api/create?meetingID=a&name=a&moderatorPW=a&attendeePW=a&checksum=131bdec35f62fc63d5436e6f791d6d7aed7cf79ef256c03597e51d320d042823",
"<response><returncode>FAILED</returncode><messageKey>checksumError</messageKey>"
"<message>You did not pass the checksum security check</message></response>",
)
response = self.client_get(
"/calls/bigbluebutton/join",
- {"meeting_id": "zulip-1", "password": "a", "checksum": "check"},
+ {"bigbluebutton": self.signed_bbb_a_object},
)
self.assert_json_error(response, "Error authenticating to the BigBlueButton server.")
@@ -216,13 +251,13 @@ def test_join_bigbluebutton_redirect_server_error(self) -> None:
# Simulate bbb server error
responses.add(
responses.GET,
- "https://bbb.example.com/bigbluebutton/api/create?meetingID=zulip-1&moderatorPW=a&attendeePW=aa&checksum=check",
+ "https://bbb.example.com/bigbluebutton/api/create?meetingID=a&name=a&moderatorPW=a&attendeePW=a&checksum=131bdec35f62fc63d5436e6f791d6d7aed7cf79ef256c03597e51d320d042823",
"",
status=500,
)
response = self.client_get(
"/calls/bigbluebutton/join",
- {"meeting_id": "zulip-1", "password": "a", "checksum": "check"},
+ {"bigbluebutton": self.signed_bbb_a_object},
)
self.assert_json_error(response, "Error connecting to the BigBlueButton server.")
@@ -231,12 +266,12 @@ def test_join_bigbluebutton_redirect_error_by_server(self) -> None:
# Simulate bbb server error
responses.add(
responses.GET,
- "https://bbb.example.com/bigbluebutton/api/create?meetingID=zulip-1&moderatorPW=a&attendeePW=aa&checksum=check",
+ "https://bbb.example.com/bigbluebutton/api/create?meetingID=a&name=a&moderatorPW=a&attendeePW=a&checksum=131bdec35f62fc63d5436e6f791d6d7aed7cf79ef256c03597e51d320d042823",
"<response><returncode>FAILURE</returncode><messageKey>otherFailure</messageKey></response>",
)
response = self.client_get(
"/calls/bigbluebutton/join",
- {"meeting_id": "zulip-1", "password": "a", "checksum": "check"},
+ {"bigbluebutton": self.signed_bbb_a_object},
)
self.assert_json_error(response, "BigBlueButton server returned an unexpected error.")
@@ -244,6 +279,6 @@ def test_join_bigbluebutton_redirect_not_configured(self) -> None:
with self.settings(BIG_BLUE_BUTTON_SECRET=None, BIG_BLUE_BUTTON_URL=None):
response = self.client_get(
"/calls/bigbluebutton/join",
- {"meeting_id": "zulip-1", "password": "a", "checksum": "check"},
+ {"bigbluebutton": self.signed_bbb_a_object},
)
self.assert_json_error(response, "BigBlueButton is not configured.")
| Adjusting ID and Title of Big Blue Button Meeting
We currently generate the ID of a Big Blue Button Meeting for Video Calls randomly.
As suggested by @GhaziTriki, we should generate them the following way:
```
- meeting Id:
* private chat: zulip-deployment-id-owner-id-user
* stream topic: zulip-deployment-id-stream-id-topic-id
- meeting name:
* private chat: User Name Room
* stream topic: stream - topic
```
The problem I see with this is that using those formats for the ID would mean there cannot be different meetings in one stream/PM chat. I think a time component should be included (like appending the Unix timestamp).
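The timestamp idea can be sketched like this (the function and parameter names here are hypothetical, not Zulip's actual implementation):

```python
import time


def bbb_meeting_id(deployment_id: str, stream_id: int, topic_id: int) -> str:
    # Suffixing a Unix timestamp keeps IDs for the same stream/topic
    # distinct across separate meetings over time.
    return f"zulip-{deployment_id}-{stream_id}-{topic_id}-{int(time.time())}"


print(bbb_meeting_id("dep1", 42, 7))
```

Two meetings started in the same topic at different times then get different IDs, while the deterministic prefix still encodes where the meeting came from.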
[Bug] Big Blue Button server returned an unexpected error.
I just installed a self-hosted Zulip instance. I encountered a bug when I tried to use the BigBlueButton server of our organization as the video call provider. I followed all the steps in https://zulip.com/integrations/doc/big-blue-button.
**Environment:**
Newest Zulip release (4.8) on a newly installed Ubuntu 20.04
I do not know the version of the BigBlueButton Server (not managed by myself) but it uses BBB's API version 2.0.
**Steps to reproduce:**
Follow steps in https://zulip.com/integrations/doc/big-blue-button.
**Expected result:**
Zulip client should open a browser window and join the BBB meeting.
**Actual Result:**
Zulip opens a browser window but shows an error.
**Solution:**
I did not find anything in the logs, but I was able to trace this down to lines 239-240 in zerver/views/video_calls.py.
I then analyzed the messageKey and message of the payload from the BBB server's response, which told me I had to specify a name.
I then added a meeting name by changing the get request in the join_bigbluebutton function starting on line 208 in zerver/views/video_calls.py to something that is in line with BBB's API (https://docs.bigbluebutton.org/dev/api.html#create):
```
response = requests.get(
add_query_to_redirect_url(
settings.BIG_BLUE_BUTTON_URL + "api/create",
urlencode(
{
"name": "Zulip-Meeting",
"meetingID": meeting_id,
"moderatorPW": password,
"attendeePW": password + "a",
"checksum": checksum,
}
),
)
)
```
i.e. I added a name to the meeting. Of course I also had to add this to the function generating the checksum in the get_bigbluebutton_url function starting on line 169 same file:
```
checksum = hashlib.sha1(
(
"create"
+ "name=Zulip-Meeting&"
+ "meetingID="
+ id
+ "&moderatorPW="
+ password
+ "&attendeePW="
+ password
+ "a"
+ settings.BIG_BLUE_BUTTON_SECRET
).encode()
).hexdigest()
```
After adding the "name" parameter to the API call everything works as expected.
Looks like you have to specify a name for the meeting even though the API of BBB does not mention that "name" is required. At least, this is the case for BBB's API version 2.0 but the API documentation does not mention any changes to said parameter in other versions.
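For context, a BigBlueButton API checksum is just a hash of the call name, the raw query string, and the shared secret, which is why the `name` parameter has to appear identically in both the checksum input and the request URL. A minimal sketch of the scheme (SHA-1 as in the snippet above, while newer Zulip uses SHA-256; the server URL, secret, and parameter values below are placeholders):

```python
import hashlib
from urllib.parse import urlencode


def bbb_checksum(call_name: str, query: str, secret: str) -> str:
    # checksum = hash(callName + queryString + sharedSecret)
    return hashlib.sha1((call_name + query + secret).encode()).hexdigest()


params = urlencode({
    "name": "Zulip-Meeting",   # the parameter the API turned out to require
    "meetingID": "zulip-1",
    "moderatorPW": "modpw",
    "attendeePW": "modpwa",
})
checksum = bbb_checksum("create", params, "SHARED_SECRET")
url = f"https://bbb.example.com/bigbluebutton/api/create?{params}&checksum={checksum}"
print(url)
```

If the query string used for the checksum differs from the one actually sent (e.g. `name` is missing from one of them), the server rejects the call with `checksumError`.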
BigBlueButton integration not working since 2.4
The meeting name has now become mandatory!
| @zulipbot claim
I started implementing it.
The problem I have is that, in the request to generate the link, we do not know which stream, private message, or group message the meeting to be created will be in.
I am not sure if the name and id should be generated client side.
I feel like we could generate the IDs still randomly and just configure the name based on the current compose box location at the time the request is made to create them?
Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
<!-- areaLabelAddition -->
@strifel we can simply pass some JSON to the API call and write
```
data = json.loads(request.body.decode("utf-8"))
meeting_title = data['meeting_title']
```
in `zerver/views/video_calls.py:164:def get_bigbluebutton_url`
| 2020-10-20T20:35:34 |
zulip/zulip | 16,582 | zulip__zulip-16582 | [
"2304"
] | 621bef59587d9c1ae524ca4d724bd9591b53a2da | diff --git a/tools/lib/template_parser.py b/tools/lib/template_parser.py
--- a/tools/lib/template_parser.py
+++ b/tools/lib/template_parser.py
@@ -303,6 +303,7 @@ def is_special_html_tag(s: str, tag: str) -> bool:
'input',
'path',
'polygon',
+ 'stop',
]
def is_self_closing_html_tag(s: Text, tag: Text) -> bool:
diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -44,4 +44,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '112.1'
+PROVISION_VERSION = '112.2'
| diff --git a/frontend_tests/node_tests/notifications.js b/frontend_tests/node_tests/notifications.js
--- a/frontend_tests/node_tests/notifications.js
+++ b/frontend_tests/node_tests/notifications.js
@@ -1,5 +1,7 @@
"use strict";
+const rewiremock = require("rewiremock/node");
+
// Dependencies
set_global(
"$",
@@ -32,7 +34,9 @@ zrequire("ui");
zrequire("spoilers");
spoilers.hide_spoilers_in_notification = () => {};
-zrequire("notifications");
+rewiremock.proxy(() => zrequire("notifications"), {
+ "../../static/js/favicon": {},
+});
// Not muted streams
const general = {
diff --git a/frontend_tests/node_tests/ui_init.js b/frontend_tests/node_tests/ui_init.js
--- a/frontend_tests/node_tests/ui_init.js
+++ b/frontend_tests/node_tests/ui_init.js
@@ -104,7 +104,9 @@ zrequire("narrow");
zrequire("search_suggestion");
zrequire("search");
zrequire("tutorial");
-zrequire("notifications");
+rewiremock.proxy(() => zrequire("notifications"), {
+ "../../static/js/favicon": {},
+});
zrequire("pm_conversations");
zrequire("pm_list");
zrequire("list_cursor");
| Convert favicons to be <canvas>.
In `./static/images/favicon` we have 100 different icons that were generated a while ago, one for each number of notifications a user may have. Instead of loading a particular PNG every time the favicon is updated to reflect a new notification count, we should just generate the favicon with the <canvas> element.
This could be done with a combination of the green Zulip "z" icon and a text rendering with a shadow in the corner of the png. Then, we can kill all 100 favicons and not require any favicons to be loaded on update!
This could also be paired with upgrading favicons to be retina-supported as well.
| Hello @zulip/server-refactoring members, this issue was labeled with the "area: refactoring" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2020-10-21T05:35:13 |
zulip/zulip | 16,602 | zulip__zulip-16602 | [
"16600"
] | 99e6ec4190a10225a68c7daafe6f50fff19ef6ff | diff --git a/zerver/lib/i18n.py b/zerver/lib/i18n.py
--- a/zerver/lib/i18n.py
+++ b/zerver/lib/i18n.py
@@ -73,13 +73,8 @@ def get_available_language_codes() -> List[str]:
def get_language_translation_data(language: str) -> Dict[str, str]:
if language == 'en':
return {}
- elif language == 'zh-hans':
- language = 'zh_Hans'
- elif language == 'zh-hant':
- language = 'zh_Hant'
- elif language == 'id-id':
- language = 'id_ID'
- path = os.path.join(settings.DEPLOY_ROOT, 'locale', language, 'translations.json')
+ locale = translation.to_locale(language)
+ path = os.path.join(settings.DEPLOY_ROOT, 'locale', locale, 'translations.json')
try:
with open(path, "rb") as reader:
return orjson.loads(reader.read())
| Translation for zh_TW gets ignored in some places
In the webapp, if I try switching to the translation for "Chinese (Taiwan)", a lot of the text on the screen is still untranslated:

That's even though many (at least) of those strings [do have translations in Transifex](https://www.transifex.com/zulip/zulip/translate/#zh_TW/$/67194598?q=text%3Asettings). Those translations have been there for months and do indeed seem to be in the repo, so it's not an issue of not having synced them.
I have a suspicion that the issue is with this code in `zerver/lib/i18n.py`:
```py3
def get_language_translation_data(language: str) -> Dict[str, str]:
if language == 'en':
return {}
elif language == 'zh-hans':
language = 'zh_Hans'
elif language == 'zh-hant':
language = 'zh_Hant'
elif language == 'id-id':
language = 'id_ID'
path = os.path.join(settings.DEPLOY_ROOT, 'locale', language, 'translations.json')
# …
```
That has a handful of special cases to try to translate between two different conventions for locale names. It sure looks like it'd need another one to support `zh_TW` aka `zh-tw`, and that without that this function will fail to do its job on zh_TW.
Better still, of course, would be to make this function stop being made of special cases. Here's a Django utility function that should do the job of all those cases: https://docs.djangoproject.com/en/2.2/ref/utils/#django.utils.translation.to_locale
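As a rough illustration of what that helper does, here is a minimal reimplementation of the conversion for the cases above (real code should just call `django.utils.translation.to_locale`; this sketch only mirrors its documented behavior):

```python
def to_locale(language: str) -> str:
    """Convert a language tag like "zh-hant" into a locale name like "zh_Hant".

    A two-letter region subtag is uppercased (zh-tw -> zh_TW), while a
    longer script subtag is title-cased (zh-hans -> zh_Hans), which covers
    every special case hardcoded in get_language_translation_data.
    """
    lang, _, rest = language.lower().partition("-")
    if not rest:
        return lang
    subtag, _, tail = rest.partition("-")
    subtag = subtag.title() if len(subtag) > 2 else subtag.upper()
    if tail:
        subtag += "-" + tail
    return lang + "_" + subtag
```

With this, `zh-tw` maps to `zh_TW` without any per-language branch, which is exactly the case the hardcoded list was missing.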
---
~~Likely related, but possibly a separate issue: in the webapp language picker itself, the translation shouldn't be called "Chinese (Taiwan)" but rather something like "中文(台湾)" -- its name is written in English, whereas all the other languages have their names written in themselves.~~ (This other symptom is caused at least in part by #14565.)
(Both issues originally reported [in chat](https://chat.zulip.org/#narrow/stream/58-translation/topic/zh-Hant.20.2F.20zh_TW/near/1045033).)
| Hello @zulip/server-i18n members, this issue was labeled with the "area: i18n" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2020-10-22T22:44:02 |
|
zulip/zulip | 16,612 | zulip__zulip-16612 | [
"17762"
] | 15f78abd68a499afacb0c357198e91b12b4f1865 | diff --git a/zerver/lib/upload.py b/zerver/lib/upload.py
--- a/zerver/lib/upload.py
+++ b/zerver/lib/upload.py
@@ -380,15 +380,37 @@ def get_signed_upload_url(path: str) -> str:
class S3UploadBackend(ZulipUploadBackend):
def __init__(self) -> None:
self.session = boto3.Session(settings.S3_KEY, settings.S3_SECRET_KEY)
-
self.avatar_bucket = get_bucket(settings.S3_AVATAR_BUCKET, self.session)
- network_location = urllib.parse.urlparse(
- self.avatar_bucket.meta.client.meta.endpoint_url
- ).netloc
- self.avatar_bucket_url = f"https://{self.avatar_bucket.name}.{network_location}"
-
self.uploads_bucket = get_bucket(settings.S3_AUTH_UPLOADS_BUCKET, self.session)
+ def get_public_upload_url(
+ self,
+ key: str,
+ ) -> str:
+ # Return the public URL for a key in the S3 Avatar bucket.
+ # For Amazon S3 itself, this will return the following:
+ # f"https://{self.avatar_bucket.name}.{network_location}/{key}"
+ #
+ # However, we need this function to properly handle S3 style
+ # file upload backends that Zulip supports, which can have a
+ # different URL format. Configuring no signature and providing
+ # no access key makes `generate_presigned_url` just return the
+ # normal public URL for a key.
+ config = Config(signature_version=botocore.UNSIGNED)
+ return self.session.client(
+ "s3",
+ region_name=settings.S3_REGION,
+ endpoint_url=settings.S3_ENDPOINT_URL,
+ config=config,
+ ).generate_presigned_url(
+ ClientMethod="get_object",
+ Params={
+ "Bucket": self.avatar_bucket.name,
+ "Key": key,
+ },
+ ExpiresIn=0,
+ )
+
def delete_file_from_s3(self, path_id: str, bucket: ServiceResource) -> bool:
key = bucket.Object(path_id)
@@ -512,12 +534,14 @@ def copy_avatar(self, source_profile: UserProfile, target_profile: UserProfile)
def get_avatar_url(self, hash_key: str, medium: bool = False) -> str:
medium_suffix = "-medium.png" if medium else ""
+ public_url = self.get_public_upload_url(f"{hash_key}{medium_suffix}")
+
# ?x=x allows templates to append additional parameters with &s
- return f"{self.avatar_bucket_url}/{hash_key}{medium_suffix}?x=x"
+ return public_url + "?x=x"
def get_export_tarball_url(self, realm: Realm, export_path: str) -> str:
# export_path has a leading /
- return f"{self.avatar_bucket_url}{export_path}"
+ return self.get_public_upload_url(export_path[1:])
def realm_avatar_and_logo_path(self, realm: Realm) -> str:
return os.path.join(str(realm.id), "realm")
@@ -547,8 +571,8 @@ def upload_realm_icon_image(self, icon_file: File, user_profile: UserProfile) ->
# that users use gravatar.)
def get_realm_icon_url(self, realm_id: int, version: int) -> str:
- # ?x=x allows templates to append additional parameters with &s
- return f"{self.avatar_bucket_url}/{realm_id}/realm/icon.png?version={version}"
+ public_url = self.get_public_upload_url(f"{realm_id}/realm/icon.png")
+ return public_url + f"?version={version}"
def upload_realm_logo_image(
self, logo_file: File, user_profile: UserProfile, night: bool
@@ -581,12 +605,12 @@ def upload_realm_logo_image(
# that users use gravatar.)
def get_realm_logo_url(self, realm_id: int, version: int, night: bool) -> str:
- # ?x=x allows templates to append additional parameters with &s
if not night:
file_name = "logo.png"
else:
file_name = "night_logo.png"
- return f"{self.avatar_bucket_url}/{realm_id}/realm/{file_name}?version={version}"
+ public_url = self.get_public_upload_url(f"{realm_id}/realm/{file_name}")
+ return public_url + f"?version={version}"
def ensure_avatar_image(self, user_profile: UserProfile, is_medium: bool = False) -> None:
# BUG: The else case should be user_avatar_path(user_profile) + ".png".
@@ -640,7 +664,7 @@ def get_emoji_url(self, emoji_file_name: str, realm_id: int) -> str:
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=realm_id, emoji_file_name=emoji_file_name
)
- return f"{self.avatar_bucket_url}/{emoji_path}"
+ return self.get_public_upload_url(emoji_path)
def upload_export_tarball(
self,
@@ -655,22 +679,7 @@ def upload_export_tarball(
key.upload_file(tarball_path, Callback=percent_callback)
- session = botocore.session.get_session()
- config = Config(signature_version=botocore.UNSIGNED)
-
- public_url = session.create_client(
- "s3",
- region_name=settings.S3_REGION,
- endpoint_url=settings.S3_ENDPOINT_URL,
- config=config,
- ).generate_presigned_url(
- "get_object",
- Params={
- "Bucket": self.avatar_bucket.name,
- "Key": key.key,
- },
- ExpiresIn=0,
- )
+ public_url = self.get_public_upload_url(key.key)
return public_url
def delete_export_tarball(self, export_path: str) -> Optional[str]:
| Org Icon Upload Path Incorrect
I've upgraded to the `master` branch on my install and am testing S3 compatibility. Uploading an image in a post works, and the image is rendered as expected. But, when uploading an image as an organization icon, the path is generated incorrectly. See below for the format difference:
Works as expected (in post):
`https://[redacted].compat.objectstorage.us-phoenix-1.oraclecloud.com/zulip-uploads/2/Wa1MeN0_2b9xVK8_3HhFBEMJ/Todd_Head_Square_Small.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=5e4bcb1552096f1544e07925e7b72b8ed501f5d5%2F20210323%2Fus-phoenix-1%2Fs3%2Faws4_request&X-Amz-Date=20210323T191934Z&X-Amz-Expires=60&X-Amz-SignedHeaders=host&X-Amz-Signature=56ca1cec54a2f442801fb5d2dcce5d181303830e5b8f834b26ae0c447a94fb71`
Does not work:
`https://zulip-uploads.[redacted].compat.objectstorage.us-phoenix-1.oraclecloud.com/2/realm/icon.png?version=5`
Notice the position of the bucket (`zulip-uploads`) in each.
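For illustration, the two URL shapes can be sketched as follows; the endpoint host here is made up. The broken org-icon link was built in the virtual-hosted style, while this S3-compatible store serves objects at the path-style address (the merged fix sidesteps hand-building URLs entirely by asking boto3's `generate_presigned_url`, with an unsigned config, for the public URL, so the client picks the right style for the configured endpoint):

```python
from urllib.parse import urlparse

def virtual_hosted_url(endpoint: str, bucket: str, key: str) -> str:
    # https://<bucket>.<endpoint-host>/<key> -- the shape the broken
    # org-icon link was built with.
    host = urlparse(endpoint).netloc
    return f"https://{bucket}.{host}/{key}"

def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    # https://<endpoint-host>/<bucket>/<key> -- the shape this
    # S3-compatible store actually serves objects at.
    host = urlparse(endpoint).netloc
    return f"https://{host}/{bucket}/{key}"

endpoint = "https://objectstorage.example.com"  # illustrative endpoint
broken = virtual_hosted_url(endpoint, "zulip-uploads", "2/realm/icon.png")
working = path_style_url(endpoint, "zulip-uploads", "2/realm/icon.png")
```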
| 2020-10-23T10:33:37 |
||
zulip/zulip | 16,617 | zulip__zulip-16617 | [
"16554"
] | 2aec78e954e68f199de8d928642ff2325612c947 | diff --git a/zerver/lib/integrations.py b/zerver/lib/integrations.py
--- a/zerver/lib/integrations.py
+++ b/zerver/lib/integrations.py
@@ -397,6 +397,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
WebhookIntegration("insping", ["monitoring"], display_name="Insping"),
WebhookIntegration("intercom", ["customer-support"], display_name="Intercom"),
WebhookIntegration("jira", ["project-management"], display_name="JIRA"),
+ WebhookIntegration("jotform", ["misc"], display_name="Jotform"),
WebhookIntegration("librato", ["monitoring"]),
WebhookIntegration("mention", ["marketing"], display_name="Mention"),
WebhookIntegration("netlify", ["continuous-integration", "deployment"], display_name="Netlify"),
@@ -689,6 +690,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
"insping": [ScreenshotConfig("website_state_available.json")],
"intercom": [ScreenshotConfig("conversation_admin_replied.json")],
"jira": [ScreenshotConfig("created_v1.json")],
+ "jotform": [ScreenshotConfig("response.json")],
"librato": [ScreenshotConfig("three_conditions_alert.json", payload_as_query_param=True)],
"mention": [ScreenshotConfig("webfeeds.json")],
"netlify": [ScreenshotConfig("deploy_building.json")],
diff --git a/zerver/webhooks/jotform/__init__.py b/zerver/webhooks/jotform/__init__.py
new file mode 100644
diff --git a/zerver/webhooks/jotform/view.py b/zerver/webhooks/jotform/view.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/jotform/view.py
@@ -0,0 +1,31 @@
+# Webhooks for external integrations.
+from typing import Any, Dict
+
+from django.http import HttpRequest, HttpResponse
+
+from zerver.decorator import webhook_view
+from zerver.lib.request import REQ, has_request_variables
+from zerver.lib.response import json_success
+from zerver.lib.webhooks.common import check_send_webhook_message
+from zerver.models import UserProfile
+
+
+@webhook_view("Jotform")
+@has_request_variables
+def api_jotform_webhook(
+ request: HttpRequest,
+ user_profile: UserProfile,
+ payload: Dict[str, Any] = REQ(argument_type="body"),
+) -> HttpResponse:
+ topic = payload["formTitle"]
+ submission_id = payload["submissionID"]
+ fields_dict = list(payload["pretty"].split(", "))
+
+ form_response = f"A new submission (ID {submission_id}) was received:\n"
+ for field in fields_dict:
+ form_response += f"* {field}\n"
+
+ message = form_response.strip()
+
+ check_send_webhook_message(request, user_profile, topic, message)
+ return json_success()
| diff --git a/zerver/webhooks/jotform/tests.py b/zerver/webhooks/jotform/tests.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/jotform/tests.py
@@ -0,0 +1,23 @@
+from zerver.lib.test_classes import WebhookTestCase
+
+
+class JotFormHookTests(WebhookTestCase):
+ STREAM_NAME = "test"
+ URL_TEMPLATE = "/api/v1/external/jotform?stream={stream}&api_key={api_key}"
+ FIXTURE_DIR_NAME = "jotform"
+
+ def test_response(self) -> None:
+ expected_title = "Form"
+ expected_message = """
+A new submission (ID 4791133489169827307) was received:
+* Name:Gaurav Pandey
+* Address:Lampgarden-street wolfsquare Bengaluru Karnataka 165578
+* Signature:uploads/gauravguitarrocks/202944822449057/4791133489169827307/4791133489169827307_signature_4.png
+""".strip()
+
+ self.check_webhook(
+ "response",
+ expected_title,
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
| Jotform integration
It might be valuable to implement an integration with [Jotform](https://www.jotform.com/), as we have received feedback from users who would be willing to switch to Zulip if we had that feature available.
It seems like a standard outgoing webhook, with https://www.jotform.com/help/245-how-to-setup-a-webhook-with-jotform providing some introductory information on setting this up.
| Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
<!-- areaLabelAddition -->
Tagged it as a good first issue; this seems like a great project for someone new to working on Zulip. https://zulip.com/api/incoming-webhooks-overview documents how to write one.
@timabbott Can I please work on this issue in parallel with my PR #16524, which is under review?
Sure! | 2020-10-23T17:57:26 |
zulip/zulip | 16,809 | zulip__zulip-16809 | [
"16058"
] | 7b2f16bc5c75db7a11d12e7d12051ab4e1d53937 | diff --git a/tools/lib/provision.py b/tools/lib/provision.py
--- a/tools/lib/provision.py
+++ b/tools/lib/provision.py
@@ -177,7 +177,7 @@
SYSTEM_DEPENDENCIES = [
*UBUNTU_COMMON_APT_DEPENDENCIES,
f"postgresql-{POSTGRESQL_VERSION}",
- f"postgresql-{POSTGRESQL_VERSION}-pgroonga",
+ f"postgresql-{POSTGRESQL_VERSION}-pgdg-pgroonga",
*VENV_DEPENDENCIES,
]
elif "rhel" in os_families():
@@ -186,7 +186,7 @@
f"postgresql{POSTGRESQL_VERSION}-server",
f"postgresql{POSTGRESQL_VERSION}",
f"postgresql{POSTGRESQL_VERSION}-devel",
- f"postgresql{POSTGRESQL_VERSION}-pgroonga",
+ f"postgresql{POSTGRESQL_VERSION}-pgdg-pgroonga",
*VENV_DEPENDENCIES,
]
elif "fedora" in os_families():
| upgrade-postgres script fails on Ubuntu 18.04 because of unavailable postgresql-12-pgroonga
The upgrade-postgres script fails on Ubuntu 18.04, because the bionic dist of the Groonga repo has no package `postgresql-12-pgroonga`, only `postgresql-10-pgroonga`.
It fails with:
```
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install postgresql-12-pgroonga' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package postgresql-12-pgroonga
Error: /Stage[main]/Zulip::Postgres_appdb_base/Package[postgresql-12-pgroonga]/ensure: change from 'purged' to 'present' failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install postgresql-12-pgroonga' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package postgresql-12-pgroonga
Notice: /Stage[main]/Zulip::Postgres_appdb_base/File[/usr/share/postgresql/12/pgroonga_setup.sql]: Dependency Package[postgresql-12-pgroonga] has failures: true
Warning: /Stage[main]/Zulip::Postgres_appdb_base/File[/usr/share/postgresql/12/pgroonga_setup.sql]: Skipping because of failed dependencies
Notice: /Stage[main]/Zulip::Postgres_appdb_base/Exec[create_pgroonga_extension]: Dependency Package[postgresql-12-pgroonga] has failures: true
Warning: /Stage[main]/Zulip::Postgres_appdb_base/Exec[create_pgroonga_extension]: Skipping because of failed dependencies
Notice: Applied catalog in 10.23 seconds
```
I fixed this by manually downloading the package from the pool and installing it.
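The eventual fix renames the provisioning dependency to the `pgdg` variant, which is built against the postgresql.org (PostgreSQL Global Development Group) packages rather than the distribution's own PostgreSQL. As a sketch of the naming convention (mirroring the `tools/lib/provision.py` change; the helper itself is illustrative and only covers the two families the diff touches):

```python
POSTGRESQL_VERSION = "12"

def pgroonga_package(os_family: str) -> str:
    # Debian-family packages hyphenate the PostgreSQL version;
    # RHEL-family packages do not.
    if os_family == "debian":
        return f"postgresql-{POSTGRESQL_VERSION}-pgdg-pgroonga"
    elif os_family == "rhel":
        return f"postgresql{POSTGRESQL_VERSION}-pgdg-pgroonga"
    raise ValueError(f"unhandled OS family: {os_family}")
```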
| @imolein thanks for the report; just to check, are you using pgroonga in your installation?
@kou would it be easy for the Groonga project to provide Pgroonga packages built for the postgres.org package repository (https://www.postgresql.org/download/linux/debian/)? We've switched to using that in order to support using modern postgres on somewhat older OS releases (and also allow upgrading postgres independently of upgrading the base OS).
Or do you have another recommendation for how we should get Pgroonga for these cases?
@timabbott Thanks for the fast reply :) Yes, we use pgroonga in our installation.
I think that we can provide them easily. I'll take a look at it.
We need PGroonga packages for Debian 10, Ubuntu 18.04 and Ubuntu 20.04, right?
@kou `postgresql-12-pgroonga` is available for 20.04 via the Groonga repository.
Ah, it may be difficult to provide PGroonga packages for Ubuntu.
Because PGroonga uses https://launchpad.net/~groonga/+archive/ubuntu/ppa . PPA may require that dependencies exist on PPA but the PostgreSQL packages provided by https://www.postgresql.org/download/linux/ubuntu/ don't exist on PPA.
The current `postgresql-12-pgroonga` package on Ubuntu 20.04 is built with the PostgreSQL package provided by Ubuntu not PostgreSQL.
There may be a problem with using the current `postgresql-12-pgroonga` package together with the PostgreSQL package provided by PostgreSQL.
Oh, I already created `postgresql-12-pgroonga` for the PostgreSQL package provided by PostgreSQL on Debian 10: https://pgroonga.github.io/install/debian.html#install-on-buster
I've uploaded `postgresql-12-pgdg-pgroonga` packages for Debian GNU/Linux buster, Ubuntu 18.04 and Ubuntu 20.04. They are for the PostgreSQL packages provided by the PostgreSQL Global Development Group.
Note that we need enable both https://launchpad.net/~groonga/+archive/ubuntu/ppa and packages.groonga.org to use `postgresql-12-pgdg-pgroonga` for Ubuntu.
See also:
* Install document for Debian GNU/Linux: https://pgroonga.github.io/install/debian.html#install-on-buster
* Install document for Ubuntu: https://pgroonga.github.io/install/ubuntu.html#install-for-official-postgresql
> The current `postgresql-12-pgroonga` package on Ubuntu 20.04 is built with the PostgreSQL package provided by Ubuntu not PostgreSQL.
Hm, maybe that explains why I don't get any results from the search anymore?
Is it safe to uninstall the `postgresql-12-pgroonga` package from the PPA and install `postgresql-12-pgdg-pgroonga`? Do I have to run any migration after the installation?
You can just install `postgresql-12-pgdg-pgroonga`. `postgresql-12-pgdg-pgroonga` replaces existing `postgresql-12-pgroonga` automatically.
You just need to disconnect before installing `postgresql-12-pgdg-pgroonga` and connect again after installing `postgresql-12-pgdg-pgroonga`. You don't need to migrate again.
I installed your package, thank you. Unfortunately the search still is kinda broken. For some words, I got 1-2 results, even if there are many more in the least recent messages. But in 99% of the time I got nothing back from the search.
I look into the logfiles, but didn't find any error. Can you or @timabbott give me a hint where I have to look? Thanks :)
Could you try writing a "Hello World" message and searching for "Hello"?
Does it find the latest "Hello World" message?
@imolein are you actually using PGroonga, or the default search backend? With the latter, you need to run `manage.py audit_fts_indexes` after the upgrade -- that step is included in our upgrade documentation indirectly via a flag to `upgrade-zulip-stage-2`, but it seems possible because of the error you didn't run that as expected.
@kou yeah, it seems like new messages are found correctly
@timabbott I enabled pgroonga right after the installation of my Zulip server a few months ago. I followed the documentation and edited `zulip.conf` and `settings.py` accordingly and the settings are still there. So I guess it is used as the search backend.
This problem hit me again today on the Zulip 3.2 update. It tries to install `postgresql-12-pgroonga` but can't, because I installed `postgresql-12-pgdg-pgroonga`. I guess the Puppet manifest needs some kind of check for which package is installed, if possible. (I don't know Puppet.)
To finish my update, is it safe to edit the puppet manifest in the `deployment/next` folder and run the update again?
Yeah, that should be safe. Just be sure to run `zulip-puppet-apply` and any other tools from the `/home/zulip/deployments/next` path. | 2020-11-30T08:17:02 |
|
zulip/zulip | 16,819 | zulip__zulip-16819 | [
"16783"
] | cbb7fb8ac0a460d7893e92fb3643419185ebf5dc | diff --git a/zerver/webhooks/sentry/view.py b/zerver/webhooks/sentry/view.py
--- a/zerver/webhooks/sentry/view.py
+++ b/zerver/webhooks/sentry/view.py
@@ -236,9 +236,10 @@ def transform_webhook_payload(payload: Dict[str, Any]) -> Optional[Dict[str, Any
if not event_id:
return None
- event_path = f"event/{event_id}/"
+ event_path = f"events/{event_id}/"
event['web_url'] = urljoin(payload['url'], event_path)
- event['datetime'] = datetime.fromtimestamp(event['timestamp']).isoformat()
+ timestamp = event.get('timestamp', event['received'])
+ event['datetime'] = datetime.fromtimestamp(timestamp).isoformat()
return payload
| diff --git a/zerver/webhooks/sentry/tests.py b/zerver/webhooks/sentry/tests.py
--- a/zerver/webhooks/sentry/tests.py
+++ b/zerver/webhooks/sentry/tests.py
@@ -85,7 +85,7 @@ def test_event_for_exception_python(self) -> None:
def test_webhook_event_for_exception_python(self) -> None:
expected_topic = "ValueError: new sentry error."
expected_message = """
-**New exception:** [ValueError: new sentry error.](https://sentry.io/organizations/bar-foundation/issues/1972208801/event/c916dccfd58e41dcabaebef0091f0736/)
+**New exception:** [ValueError: new sentry error.](https://sentry.io/organizations/bar-foundation/issues/1972208801/events/c916dccfd58e41dcabaebef0091f0736/)
```quote
**level:** error
**timestamp:** 2020-10-21 23:25:11
@@ -105,6 +105,17 @@ def test_webhook_event_for_exception_python(self) -> None:
```"""
self.check_webhook("webhook_event_for_exception_python", expected_topic, expected_message)
+ def test_webhook_event_for_exception_javascript(self) -> None:
+ expected_topic = 'TypeError: can\'t access property "bar", x.foo is undefined'
+ expected_message = """
+**New exception:** [TypeError: can't access property "bar", x.foo is undefined](https://sentry.io/organizations/foo-bar-org/issues/1982047746/events/f3bf5fc4e354451db9e885a69b2a2b51/)
+```quote
+**level:** error
+**timestamp:** 2020-10-26 16:39:54
+**filename:** None
+```"""
+ self.check_webhook("webhook_event_for_exception_javascript", expected_topic, expected_message)
+
def test_event_for_message_golang(self) -> None:
expected_topic = "A test message event from golang."
expected_message = """
| Sentry integration has incorrect url when configured as a webhook
We have the Sentry integration for Zulip configured as a webhook, and the issue URL in the notification is broken -- it has `event` in the path where it should have `events`. I think the bug is here: https://github.com/zulip/zulip/blob/master/zerver/webhooks/sentry/view.py#L239
We're loving the new integration overall. Thank you!
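The slip is easy to reproduce with `urljoin`, which the handler uses to compose the event link from the issue URL; the IDs below are taken from the test fixtures:

```python
from urllib.parse import urljoin

issue_url = "https://sentry.io/organizations/foo-org/issues/1972208801/"
event_id = "c916dccfd58e41dcabaebef0091f0736"

# Because the base URL ends with "/", urljoin simply appends the
# relative segment -- so the only difference is the path word itself.
broken = urljoin(issue_url, f"event/{event_id}/")   # what the handler built
fixed = urljoin(issue_url, f"events/{event_id}/")   # what Sentry actually serves
```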
| 2020-12-02T05:28:41 |
|
zulip/zulip | 16,851 | zulip__zulip-16851 | [
"16843"
] | c4d805a82c94dad0ae2209ea6a23489f879c13bd | diff --git a/zerver/lib/url_preview/parsers/base.py b/zerver/lib/url_preview/parsers/base.py
--- a/zerver/lib/url_preview/parsers/base.py
+++ b/zerver/lib/url_preview/parsers/base.py
@@ -1,13 +1,17 @@
-from typing import Any
+import cgi
+from typing import Any, Optional
class BaseParser:
- def __init__(self, html_source: str) -> None:
+ def __init__(self, html_source: bytes, content_type: Optional[str]) -> None:
# We import BeautifulSoup here, because it's not used by most
# processes in production, and bs4 is big enough that
# importing it adds 10s of milliseconds to manage.py startup.
from bs4 import BeautifulSoup
- self._soup = BeautifulSoup(html_source, "lxml")
+ charset = None
+ if content_type is not None:
+ charset = cgi.parse_header(content_type)[1].get("charset")
+ self._soup = BeautifulSoup(html_source, "lxml", from_encoding=charset)
def extract_data(self) -> Any:
raise NotImplementedError()
diff --git a/zerver/lib/url_preview/preview.py b/zerver/lib/url_preview/preview.py
--- a/zerver/lib/url_preview/preview.py
+++ b/zerver/lib/url_preview/preview.py
@@ -91,12 +91,16 @@ def get_link_embed_data(url: str,
response = requests.get(mark_sanitized(url), stream=True, headers=HEADERS, timeout=TIMEOUT)
if response.ok:
- og_data = OpenGraphParser(response.text).extract_data()
+ og_data = OpenGraphParser(
+ response.content, response.headers.get("Content-Type")
+ ).extract_data()
for key in ['title', 'description', 'image']:
if not data.get(key) and og_data.get(key):
data[key] = og_data[key]
- generic_data = GenericParser(response.text).extract_data() or {}
+ generic_data = GenericParser(
+ response.content, response.headers.get("Content-Type")
+ ).extract_data() or {}
for key in ['title', 'description', 'image']:
if not data.get(key) and generic_data.get(key):
data[key] = generic_data[key]
| diff --git a/zerver/lib/test_helpers.py b/zerver/lib/test_helpers.py
--- a/zerver/lib/test_helpers.py
+++ b/zerver/lib/test_helpers.py
@@ -314,6 +314,7 @@ def get_host(self) -> str:
class MockPythonResponse:
def __init__(self, text: str, status_code: int, headers: Optional[Dict[str, str]]=None) -> None:
+ self.content = text.encode()
self.text = text
self.status_code = status_code
if headers is None:
diff --git a/zerver/tests/test_link_embed.py b/zerver/tests/test_link_embed.py
--- a/zerver/tests/test_link_embed.py
+++ b/zerver/tests/test_link_embed.py
@@ -136,7 +136,7 @@ def test_autodiscovered_oembed_xml_format_html(self) -> None:
class OpenGraphParserTestCase(ZulipTestCase):
def test_page_with_og(self) -> None:
- html = """<html>
+ html = b"""<html>
<head>
<meta property="og:title" content="The Rock" />
<meta property="og:type" content="video.movie" />
@@ -146,14 +146,14 @@ def test_page_with_og(self) -> None:
</head>
</html>"""
- parser = OpenGraphParser(html)
+ parser = OpenGraphParser(html, "text/html; charset=UTF-8")
result = parser.extract_data()
self.assertIn('title', result)
self.assertEqual(result['title'], 'The Rock')
self.assertEqual(result.get('description'), 'The Rock film')
def test_page_with_evil_og_tags(self) -> None:
- html = """<html>
+ html = b"""<html>
<head>
<meta property="og:title" content="The Rock" />
<meta property="og:type" content="video.movie" />
@@ -165,7 +165,7 @@ def test_page_with_evil_og_tags(self) -> None:
</head>
</html>"""
- parser = OpenGraphParser(html)
+ parser = OpenGraphParser(html, "text/html; charset=UTF-8")
result = parser.extract_data()
self.assertIn('title', result)
self.assertEqual(result['title'], 'The Rock')
@@ -173,9 +173,30 @@ def test_page_with_evil_og_tags(self) -> None:
self.assertEqual(result.get('oembed'), None)
self.assertEqual(result.get('html'), None)
+ def test_charset_in_header(self) -> None:
+ html = """<html>
+ <head>
+ <meta property="og:title" content="中文" />
+ </head>
+ </html>""".encode("big5")
+ parser = OpenGraphParser(html, "text/html; charset=Big5")
+ result = parser.extract_data()
+ self.assertEqual(result["title"], "中文")
+
+ def test_charset_in_meta(self) -> None:
+ html = """<html>
+ <head>
+ <meta content-type="text/html; charset=Big5" />
+ <meta property="og:title" content="中文" />
+ </head>
+ </html>""".encode("big5")
+ parser = OpenGraphParser(html, "text/html")
+ result = parser.extract_data()
+ self.assertEqual(result["title"], "中文")
+
class GenericParserTestCase(ZulipTestCase):
def test_parser(self) -> None:
- html = """
+ html = b"""
<html>
<head><title>Test title</title></head>
<body>
@@ -184,13 +205,13 @@ def test_parser(self) -> None:
</body>
</html>
"""
- parser = GenericParser(html)
+ parser = GenericParser(html, "text/html; charset=UTF-8")
result = parser.extract_data()
self.assertEqual(result.get('title'), 'Test title')
self.assertEqual(result.get('description'), 'Description text')
def test_extract_image(self) -> None:
- html = """
+ html = b"""
<html>
<body>
<h1>Main header</h1>
@@ -202,14 +223,14 @@ def test_extract_image(self) -> None:
</body>
</html>
"""
- parser = GenericParser(html)
+ parser = GenericParser(html, "text/html; charset=UTF-8")
result = parser.extract_data()
self.assertEqual(result.get('title'), 'Main header')
self.assertEqual(result.get('description'), 'Description text')
self.assertEqual(result.get('image'), 'http://test.com/test.jpg')
def test_extract_description(self) -> None:
- html = """
+ html = b"""
<html>
<body>
<div>
@@ -220,22 +241,22 @@ def test_extract_description(self) -> None:
</body>
</html>
"""
- parser = GenericParser(html)
+ parser = GenericParser(html, "text/html; charset=UTF-8")
result = parser.extract_data()
self.assertEqual(result.get('description'), 'Description text')
- html = """
+ html = b"""
<html>
<head><meta name="description" content="description 123"</head>
<body></body>
</html>
"""
- parser = GenericParser(html)
+ parser = GenericParser(html, "text/html; charset=UTF-8")
result = parser.extract_data()
self.assertEqual(result.get('description'), 'description 123')
- html = "<html><body></body></html>"
- parser = GenericParser(html)
+ html = b"<html><body></body></html>"
+ parser = GenericParser(html, "text/html; charset=UTF-8")
result = parser.extract_data()
self.assertIsNone(result.get('description'))
| URL preview garbled Chinese
Version: Self hosted 3.3
## Scenario
Server with `URL Preview` enabled
posting this link: https://newtalk.tw/news/view/2020-12-03/503169
Site uses HTML lang tag `lang="zh-Hant-TW"`
## Expected Behavior
A URL leading to a Chinese website should produce a Chinese preview.
## Actual Behavior
The preview contains garbled characters like [mojibake](https://revealingerrors.com/mojibake).
> Or here's something equivalent to "interpreting it as if it were ISO-8859-1" which might turn out to be a better description of what's really happening in this bug: taking the bytes of the UTF-8 encoding, and interpreting each single byte as a Unicode codepoint.

```
# Actual content of header meta title
蘋果AirDrop技術爆大漏洞!駭客可遠端操控iPhone | 科技 | 新頭殼 Newtalk
# Actual Content of header meta description
Google Project Zero的網路安全研究人員Ian Beer日前揭露了iPhone過去的重大漏洞,破解蘋果應用在AirDrop的相關技術,使得有心人士得以透過特殊裝置遠段操控附近的iPhone,所幸蘋果已經在今年5月修補了這個iOS漏洞。Beer所發現的系統漏洞存在於蘋果2014年引進的
# Garbled preview version in zulip
èæAirDropæè¡ç大æ¼æ´ï¼é§å®¢å¯é 端ææ§iPhone | ç§æ | æ°é 殼 Newtalk
Google Project Zeroç網路å®å
¨ç 究人å¡Ian Beeræ¥åæé²äºiPhoneéå»çé大æ¼æ´ï¼ç ´è§£èææç¨å¨AirDropçç¸éæè¡ï¼ä½¿å¾æå¿äººå£«å¾ä»¥ééç¹æ®è£ç½®é 段ææ§éè¿çiPhoneï¼æ幸èæå·²ç¶å¨ä»å¹´5æä¿®è£äºéåiOSæ¼æ´ãBeeræç¼ç¾ç系統æ¼æ´åå¨æ¼èæ2014å¹´å¼é²ç
```
## Possible Issue
Spot-checking just the first couple of characters here: these mojibake are consistent with getting something that's actually UTF-8 and interpreting it as if it were ISO-8859-1. Specifically:
> 蘋
This first character is e8 98 8b in UTF-8:
```
$ unicode -s 蘋 --long
U+860B CJK UNIFIED IDEOGRAPH-860B
UTF-8: e8 98 8b UTF-16BE: 860b Decimal: 蘋 Octal: \0103013
蘋
…
```
and interpreting the byte e8 in ISO-8859-1, you get è:
```
$ unicode u+00e8
è U+00E8 LATIN SMALL LETTER E WITH GRAVE
```
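The byte-level analysis above can be reproduced in a few lines of Python (a standalone illustration, not Zulip code):

```python
# "蘋" is three bytes in UTF-8; decoding those bytes as ISO-8859-1 turns
# each byte into a separate codepoint, producing the mojibake shown above.
utf8_bytes = "蘋".encode("utf-8")
assert utf8_bytes == b"\xe8\x98\x8b"

mojibake = utf8_bytes.decode("iso-8859-1")
assert len(mojibake) == 3       # one bogus character per UTF-8 byte
assert mojibake[0] == "\u00e8"  # è, LATIN SMALL LETTER E WITH GRAVE

# Decoding with the correct codec recovers the original text.
assert utf8_bytes.decode("utf-8") == "蘋"
```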
## Related Code
https://github.com/zulip/zulip/blob/0d6c771baf9ea0ebd8174c927e50709d59da0f64/zerver/lib/url_preview/preview.py
## Related issue chat
https://chat.zulip.org/#narrow/stream/9-issues/topic/URL.20preview.20Chinese.20Jumbled
| @zulipbot add "bug" "area: general UI"
https://chat.zulip.org/#narrow/stream/9-issues/topic/URL.20preview.20Chinese.20Jumbled/near/1076820 has the bug pointed out by Greg. See it before working on this issue.
Copying here a bit of partial debugging I mentioned in the linked chat thread:
This looks like [mojibake](https://revealingerrors.com/mojibake), and specifically the kind of mojibake you get when taking some data that's actually encoded in UTF-8 and interpreting it as if it were encoded in ISO-8859-1.
Or equivalently: it looks like exactly what we'd get if the data is actually encoded in UTF-8 and we interpret each single byte of it as if it were a Unicode codepoint.
I took a quick look at the code involved and don't see an obvious spot where the bug might be. (I'm not familiar with this area of our code.) The next step is probably for someone to take the given reproducer and do some debugging to find out exactly where in the code the bad data (mis-decoded data) comes from.
I wonder if this has anything to do with beautiful soup (web crawler) used to extract data from the page
Specifically Beautiful soup seems to "guess" the encoding of the source document.
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#encodings
No, the problem is the opposite: we prevent Beautiful Soup from detecting the encoding by passing `str` instead of `bytes`. The pages you’ve referenced don’t send a charset in their `Content-Type`, so we need to let Beautiful Soup detect the encoding from the `<meta>` tags in the page. | 2020-12-08T03:33:45 |
zulip/zulip | 16,860 | zulip__zulip-16860 | [
"16482"
] | 417faa046582890c3d15603e90b8e0bb18fbaa5e | diff --git a/zerver/migrations/0330_linkifier_pattern_validator.py b/zerver/migrations/0330_linkifier_pattern_validator.py
new file mode 100644
--- /dev/null
+++ b/zerver/migrations/0330_linkifier_pattern_validator.py
@@ -0,0 +1,18 @@
+# Generated by Django 3.1.8 on 2021-04-18 15:36
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ("zerver", "0329_remove_realm_allow_community_topic_editing"),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name="realmfilter",
+ name="pattern",
+ field=models.TextField(),
+ ),
+ ]
diff --git a/zerver/models.py b/zerver/models.py
--- a/zerver/models.py
+++ b/zerver/models.py
@@ -11,6 +11,7 @@
Dict,
List,
Optional,
+ Pattern,
Sequence,
Set,
Tuple,
@@ -918,7 +919,7 @@ def flush_realm_emoji(sender: Any, **kwargs: Any) -> None:
post_delete.connect(flush_realm_emoji, sender=RealmEmoji)
-def filter_pattern_validator(value: str) -> None:
+def filter_pattern_validator(value: str) -> Pattern[str]:
regex = re.compile(r"^(?:(?:[\w\-#_= /:]*|[+]|[!])(\(\?P<\w+>.+\)))+$")
error_msg = _("Invalid linkifier pattern. Valid characters are {}.").format(
"[ a-zA-Z_#=/:+!-]",
@@ -928,11 +929,13 @@ def filter_pattern_validator(value: str) -> None:
raise ValidationError(error_msg)
try:
- re.compile(value)
+ pattern = re.compile(value)
except re.error:
# Regex is invalid
raise ValidationError(error_msg)
+ return pattern
+
def filter_format_validator(value: str) -> None:
regex = re.compile(r"^([\.\/:a-zA-Z0-9#_?=&;~-]+%\(([a-zA-Z0-9_-]+)\)s)+[/a-zA-Z0-9#_?=&;~-]*$")
@@ -948,12 +951,54 @@ class RealmFilter(models.Model):
id: int = models.AutoField(auto_created=True, primary_key=True, verbose_name="ID")
realm: Realm = models.ForeignKey(Realm, on_delete=CASCADE)
- pattern: str = models.TextField(validators=[filter_pattern_validator])
+ pattern: str = models.TextField()
url_format_string: str = models.TextField(validators=[URLValidator(), filter_format_validator])
class Meta:
unique_together = ("realm", "pattern")
+ def clean(self) -> None:
+ """Validate whether the set of parameters in the URL Format string
+ match the set of parameters in the regular expression.
+
+ Django's `full_clean` calls `clean_fields` followed by `clean` method
+ and stores all ValidationErrors from all stages to return as JSON.
+ """
+
+ # Extract variables present in the pattern
+ pattern = filter_pattern_validator(self.pattern)
+ group_set = set(pattern.groupindex.keys())
+
+ # Extract variables used in the URL format string. Note that
+ # this regex will incorrectly reject patterns that attempt to
+ # escape % using %%.
+ found_group_set: Set[str] = set()
+ group_match_regex = r"%\((?P<group_name>[^()]+)\)s"
+ for m in re.finditer(group_match_regex, self.url_format_string):
+ group_name = m.group("group_name")
+ found_group_set.add(group_name)
+
+ # Report patterns missing in linkifier pattern.
+ missing_in_pattern_set = found_group_set - group_set
+ if len(missing_in_pattern_set) > 0:
+ name = list(missing_in_pattern_set)[0]
+ raise ValidationError(
+ _("Group %(name)r in URL format string is not present in linkifier pattern."),
+ params={"name": name},
+ )
+
+ missing_in_url_set = group_set - found_group_set
+ # Report patterns missing in URL format string.
+ if len(missing_in_url_set) > 0:
+ # We just report the first missing pattern here. Users can
+ # incrementally resolve errors if there are multiple
+ # missing patterns.
+ name = list(missing_in_url_set)[0]
+ raise ValidationError(
+ _("Group %(name)r in linkifier pattern is not present in URL format string."),
+ params={"name": name},
+ )
+
def __str__(self) -> str:
return f"<RealmFilter({self.realm.string_id}): {self.pattern} {self.url_format_string}>"
| diff --git a/frontend_tests/puppeteer_tests/realm-linkifier.ts b/frontend_tests/puppeteer_tests/realm-linkifier.ts
--- a/frontend_tests/puppeteer_tests/realm-linkifier.ts
+++ b/frontend_tests/puppeteer_tests/realm-linkifier.ts
@@ -47,9 +47,9 @@ async function test_add_invalid_linkifier_pattern(page: Page): Promise<void> {
});
await page.click("form.admin-linkifier-form button.button");
- await page.waitForSelector("div#admin-linkifier-pattern-status", {visible: true});
+ await page.waitForSelector("div#admin-linkifier-status", {visible: true});
assert.strictEqual(
- await common.get_text_from_selector(page, "div#admin-linkifier-pattern-status"),
+ await common.get_text_from_selector(page, "div#admin-linkifier-status"),
"Failed: Invalid linkifier pattern. Valid characters are [ a-zA-Z_#=/:+!-].",
);
}
@@ -88,7 +88,7 @@ async function test_edit_invalid_linkifier(page: Page): Promise<void> {
});
await page.click(".submit-linkifier-info-change");
- const edit_linkifier_pattern_status_selector = "div#edit-linkifier-pattern-status";
+ const edit_linkifier_pattern_status_selector = "div#edit-linkifier-status";
await page.waitForSelector(edit_linkifier_pattern_status_selector, {visible: true});
const edit_linkifier_pattern_status = await common.get_text_from_selector(
page,
diff --git a/zerver/tests/test_realm_linkifiers.py b/zerver/tests/test_realm_linkifiers.py
--- a/zerver/tests/test_realm_linkifiers.py
+++ b/zerver/tests/test_realm_linkifiers.py
@@ -94,6 +94,41 @@ def test_create(self) -> None:
self.assert_json_success(result)
self.assertIsNotNone(re.match(data["pattern"], "!123"))
+ # This block of tests is for mismatches between field sets
+ data["pattern"] = r"ZUL-(?P<id>\d+)"
+ data["url_format_string"] = r"https://realm.com/my_realm_filter/%(hello)s"
+ result = self.client_post("/json/realm/filters", info=data)
+ self.assert_json_error(
+ result, "Group 'hello' in URL format string is not present in linkifier pattern."
+ )
+
+ data["pattern"] = r"ZUL-(?P<id>\d+)-(?P<hello>\d+)"
+ data["url_format_string"] = r"https://realm.com/my_realm_filter/%(hello)s"
+ result = self.client_post("/json/realm/filters", info=data)
+ self.assert_json_error(
+ result, "Group 'id' in linkifier pattern is not present in URL format string."
+ )
+
+ data["pattern"] = r"ZULZ-(?P<hello>\d+)-(?P<world>\d+)"
+ data["url_format_string"] = r"https://realm.com/my_realm_filter/%(hello)s/%(world)s"
+ result = self.client_post("/json/realm/filters", info=data)
+ self.assert_json_success(result)
+
+ data["pattern"] = r"ZUL-(?P<id>\d+)-(?P<hello>\d+)-(?P<world>\d+)"
+ data["url_format_string"] = r"https://realm.com/my_realm_filter/%(hello)s"
+ result = self.client_post("/json/realm/filters", info=data)
+ self.assert_json_error(
+ result, "Group 'id' in linkifier pattern is not present in URL format string."
+ )
+
+ # BUG: In theory, this should be valid, since %% should be a
+ # valid escaping method. It's unlikely someone actually wants
+ # to do this, though.
+ data["pattern"] = r"ZUL-(?P<id>\d+)"
+ data["url_format_string"] = r"https://realm.com/my_realm_filter/%%(ignored)s/%(id)s"
+ result = self.client_post("/json/realm/filters", info=data)
+ self.assert_json_error(result, "Invalid URL format string.")
+
data["pattern"] = r"(?P<org>[a-zA-Z0-9_-]+)/(?P<repo>[a-zA-Z0-9_-]+)#(?P<id>[0-9]+)"
data["url_format_string"] = "https://github.com/%(org)s/%(repo)s/issue/%(id)s"
result = self.client_post("/json/realm/filters", info=data)
| Linkifier validation -- non-existent group name breaks rendering
With the following pattern and replacement:
`#(?P<id>[0-9]+)` / `https://github.com/zulip/zulip/issues/%(bogus)s`
...sending messages with `#123` results in:
```
2020-10-06 19:14:23.691 ERR [] Exception in Markdown parser; input (sanitized) was: '#xxx'
(message unknown)
Traceback (most recent call last):
File "/srv/zulip/zerver/lib/markdown/__init__.py", line 2344, in do_convert
rendered_content = timeout(5, _md_engine.convert, content)
File "/srv/zulip/zerver/lib/timeout.py", line 90, in timeout
raise thread.exc_info[1].with_traceback(thread.exc_info[2])
File "/srv/zulip/zerver/lib/timeout.py", line 50, in run
self.result = func(*args, **kwargs)
File "/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/core.py", line 267, in convert
newRoot = treeprocessor.run(root)
File "/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/treeprocessors.py", line 370, in run
self.__handleInline(text), child
File "/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/treeprocessors.py", line 131, in __handleInline
self.inlinePatterns[patternIndex], data, patternIndex, startIndex
File "/srv/zulip-py3-venv/lib/python3.6/site-packages/markdown/treeprocessors.py", line 285, in __applyPattern
node = pattern.handleMatch(match)
File "/srv/zulip/zerver/lib/markdown/__init__.py", line 1634, in handleMatch
self.format_string % m.groupdict(),
KeyError: 'bogus'
```
We should at the very least not fail to send the message -- especially since such linkifiers already exist. But we should also prevent the creation of the invalid linkifier in the first place, so people aren't confused about why their patterns silently aren't linking.
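The validation the patch above adds can be sketched as a standalone check (a simplified version of the group-set comparison in `RealmFilter.clean`; like the patch, it does not handle `%%` escapes):

```python
import re

def check_linkifier(pattern: str, url_format_string: str) -> None:
    """Raise ValueError when the regex's named groups and the URL
    template's %(name)s placeholders don't match exactly."""
    regex_groups = set(re.compile(pattern).groupindex)
    url_groups = {m.group(1) for m in re.finditer(r"%\(([^()]+)\)s", url_format_string)}
    for name in url_groups - regex_groups:
        raise ValueError(f"Group {name!r} in URL format string is not present in pattern")
    for name in regex_groups - url_groups:
        raise ValueError(f"Group {name!r} in pattern is not present in URL format string")
```

Run against the example from this report, the pattern/format pair `#(?P<id>[0-9]+)` / `.../%(bogus)s` is rejected at creation time instead of raising `KeyError` at render time.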
| @aero31aero can you help investigate this?
I think #13398 might be what we need to validate this more correctly. I think most correctly we should just block creating such an invalid linkifier, and purge any that have been already created.
@timabbott yes, the first commit of #13398 fixes this. I'll test and amend the commit message to close this issue and rebase the PR while I'm at it.
[This is also possible in topic names](https://sentry.io/share/issue/655e20a2bd4f467cafa4f9c0dd17ad00/). The effects of this are worse if it's in a topic name, as it breaks for more than just a single message.
Let's see if we can get #13398 integrated.
@timabbott can I work on this issue ? Or is this dependant on the PR #13398.
I think ideally we should make a PR that extracts the first commit of #13398 as a separate PR addressing the comments related to that commit; that would allow us to close out this, since #13398 as a whole has a decent number of unresolved comments. | 2020-12-09T08:03:19 |
zulip/zulip | 16,916 | zulip__zulip-16916 | [
"11895"
] | abd959cf6a77096cd52610e9e66465a86a3c6eb1 | diff --git a/zerver/lib/markdown/__init__.py b/zerver/lib/markdown/__init__.py
--- a/zerver/lib/markdown/__init__.py
+++ b/zerver/lib/markdown/__init__.py
@@ -1625,9 +1625,65 @@ def __init__(self, compiled_re: Pattern[str], md: markdown.Markdown) -> None:
class AutoLink(CompiledPattern):
+ """AutoLink takes care of linkifying link-format strings directly
+ present in the message (i.e. without markdown link syntax). In
+ some hardcoded cases, it will rewrite the label to what the user
+ probably wanted if they'd taken the time to do so.
+ """
+
+ # Ideally, we'd use a dynamic commit prefix length based on the
+ # size of the repository, like Git itself does, but we're
+ # shortening Git commit IDs without looking at the corresponding
+ # repository. It's not essential that the shortenings are
+ # globally unique as they're just shorthand, but 12 characters is
+ # the minimum to be unique for projects with about 1M commits.
+ COMMIT_ID_PREFIX_LENGTH = 12
+
+ def shorten_links(self, href: str) -> Optional[str]:
+ parts = urllib.parse.urlparse(href)
+ scheme, netloc, path, params, query, fragment = parts
+ if scheme == "https" and netloc in ["github.com"]:
+ # Split the path to extract our 4 variables.
+
+ # To do it cleanly without multiple if branches based on which of these
+ # variables are present, we here add a list of ["", "", ""...]
+ # to the result of path.split, which at worst can be []. We also remove
+ # the first empty string we'd get from "/foo/bar".split("/").
+
+ # Example path: "/foo/bar" output: ["foo", "bar", "", "", ""]
+ # path: "" output: ["", "", "", "", ""]
+ organisation, repository, artifact, value, remaining_path = (
+ path.split("/", 5)[1:] + [""] * 5
+ )[:5]
+
+ # Decide what type of links to shorten.
+ if not organisation or not repository or not artifact or not value:
+ return None
+ repo_short_text = "{}/{}".format(organisation, repository)
+
+ if fragment or remaining_path:
+ # We only intend to shorten links for the basic issue, PR, and commit ones.
+ return None
+
+ if netloc == "github.com":
+ return self.shorten_github_links(artifact, repo_short_text, value)
+ return None
+
+ def shorten_github_links(
+ self, artifact: str, repo_short_text: str, value: str
+ ) -> Optional[str]:
+ if artifact in ["pull", "issues"]:
+ return "{}#{}".format(repo_short_text, value)
+ if artifact == "commit":
+ return "{}@{}".format(repo_short_text, value[0 : self.COMMIT_ID_PREFIX_LENGTH])
+ return None
+
def handleMatch(self, match: Match[str]) -> ElementStringNone:
url = match.group("url")
db_data = self.md.zulip_db_data
+ shortened_text = self.shorten_links(url)
+ if shortened_text is not None:
+ return url_to_a(db_data, url, shortened_text)
return url_to_a(db_data, url)
| diff --git a/zerver/tests/fixtures/markdown_test_cases.json b/zerver/tests/fixtures/markdown_test_cases.json
--- a/zerver/tests/fixtures/markdown_test_cases.json
+++ b/zerver/tests/fixtures/markdown_test_cases.json
@@ -932,6 +932,121 @@
"input": "<h1>*<h1>[<h2>Static types in Python</h2>](https://blog.zulip.com/2016/10/13/static-types-in-python-oh-mypy)</h1>*</h1>",
"expected_output": "<p><h1><em><h1><a href=\"https://blog.zulip.com/2016/10/13/static-types-in-python-oh-mypy\"><h2>Static types in Python</h2></a></h1></em></h1></p>",
"marked_expected_output": "<p><h1><em><h1><a href=\"https://blog.zulip.com/2016/10/13/static-types-in-python-oh-mypy\"><h2>Static types in Python</h2></a></h1></em></h1>\n\n</p>"
+ },
+ {
+ "name": "auto_shorten_github_repo_link",
+ "input": "https://github.com/zulip/zulip-mobile",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile\">https://github.com/zulip/zulip-mobile</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_issues_link",
+ "input": "https://github.com/zulip/zulip-mobile/issues/11895",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/issues/11895\">zulip/zulip-mobile#11895</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_pull_link",
+ "input": "https://github.com/zulip/zulip-mobile/pull/16665",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/pull/16665\">zulip/zulip-mobile#16665</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_commit_link",
+ "input": "https://github.com/zulip/zulip-mobile/commit/620e9cbf72ca729534aba41693d7ab2872caa394",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/commit/620e9cbf72ca729534aba41693d7ab2872caa394\">zulip/zulip-mobile@620e9cbf72ca</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_commits_link",
+ "input": "https://github.com/zulip/zulip-mobile/pull/16860/commits/19e96bcea616e6e9b1a3c10f25930ac1f75d4f92",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/pull/16860/commits/19e96bcea616e6e9b1a3c10f25930ac1f75d4f92\">https://github.com/zulip/zulip-mobile/pull/16860/commits/19e96bcea616e6e9b1a3c10f25930ac1f75d4f92</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_pull_initial_comment_link",
+ "input": "https://github.com/zulip/zulip-mobile/pull/16665#issue-513297835",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/pull/16665#issue-513297835\">https://github.com/zulip/zulip-mobile/pull/16665#issue-513297835</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_pull_comment_link",
+ "input": "https://github.com/zulip/zulip-mobile/pull/16665#issuecomment-719814618",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/pull/16665#issuecomment-719814618\">https://github.com/zulip/zulip-mobile/pull/16665#issuecomment-719814618</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_issues_initial_comment_link",
+ "input": "https://github.com/zulip/zulip-mobile/issues/16579#issue-725908927",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/issues/16579#issue-725908927\">https://github.com/zulip/zulip-mobile/issues/16579#issue-725908927</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_issues_comment_link",
+ "input": "https://github.com/zulip/zulip-mobile/issues/16482#issuecomment-726354516",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/issues/16482#issuecomment-726354516\">https://github.com/zulip/zulip-mobile/issues/16482#issuecomment-726354516</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_files_link",
+ "input": "https://github.com/zulip/zulip-mobile/pull/16860/files",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/pull/16860/files\">https://github.com/zulip/zulip-mobile/pull/16860/files</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_files_comment_link",
+ "input": "https://github.com/zulip/zulip-mobile/pull/16860/files#r539133612",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/pull/16860/files#r539133612\">https://github.com/zulip/zulip-mobile/pull/16860/files#r539133612</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_repo_issues_link",
+ "input": "https://github.com/zulip/zulip-mobile/issues",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/issues\">https://github.com/zulip/zulip-mobile/issues</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_repo_pull_link",
+ "input": "https://github.com/zulip/zulip-mobile/pull",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/pull\">https://github.com/zulip/zulip-mobile/pull</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_repo_commit_link",
+ "input": "https://github.com/zulip/zulip-mobile/commit",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/commit\">https://github.com/zulip/zulip-mobile/commit</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_repo_clone_link",
+ "input": "https://github.com/zulip/zulip-mobile.git",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile.git\">https://github.com/zulip/zulip-mobile.git</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_repo_labels_link",
+ "input": "https://github.com/zulip/zulip-mobile/labels",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/labels\">https://github.com/zulip/zulip-mobile/labels</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_labels_link",
+ "input": "https://github.com/zulip/zulip-mobile/labels/area%3A%20markdown",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/labels/area%3A%20markdown\">https://github.com/zulip/zulip-mobile/labels/area%3A%20markdown</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_tree_link",
+ "input": "https://github.com/zulip/zulip-mobile/tree/chat.zulip.org",
+ "expected_output": "<p><a href=\"https://github.com/zulip/zulip-mobile/tree/chat.zulip.org\">https://github.com/zulip/zulip-mobile/tree/chat.zulip.org</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_marketplace_circleci_link",
+ "input": "https://github.com/marketplace/circleci",
+ "expected_output": "<p><a href=\"https://github.com/marketplace/circleci\">https://github.com/marketplace/circleci</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_foo_bar_baz_link",
+ "input": "https://github.com/foo/bar/baz",
+ "expected_output": "<p><a href=\"https://github.com/foo/bar/baz\">https://github.com/foo/bar/baz</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_foo_bar_baz_zar_link",
+ "input": "https://github.com/foo/bar/baz/zar",
+ "expected_output": "<p><a href=\"https://github.com/foo/bar/baz/zar\">https://github.com/foo/bar/baz/zar</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_marketplace_link",
+ "input": "https://github.com/marketplace",
+ "expected_output": "<p><a href=\"https://github.com/marketplace\">https://github.com/marketplace</a></p>"
+ },
+ {
+ "name": "auto_shorten_github_link",
+ "input": "https://github.com",
+ "expected_output": "<p><a href=\"https://github.com\">https://github.com</a></p>"
}
],
"linkify_tests": [
| Automatically shorten Github URLs
Currently, using [custom linkification filters](https://zulipchat.com/help/add-a-custom-linkification-filter) it's possible to use shortened forms to link to issues or pull requests. It would be nice if GitHub URLs could be automatically transformed into this shorter form. For example, if `https://github.com/org/repo/issues/1234` is used in a message, then it could be automatically shortened into `org/repo#1234` (or into a configured linkification pattern).
Often it is faster to just copy and paste links from GitHub; however, these long URLs tend to clutter messages, so this feature would be nice to have for those cases.
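The shortening rule can be sketched as a standalone function (a simplified version of the `AutoLink.shorten_links`/`shorten_github_links` logic from the patch above, including its path-padding trick):

```python
import urllib.parse
from typing import Optional

COMMIT_ID_PREFIX_LENGTH = 12  # same fixed prefix length the patch uses

def shorten_github_link(url: str) -> Optional[str]:
    """Return a short label like "org/repo#1234" for basic GitHub
    issue/PR/commit URLs, or None when the URL should stay as-is."""
    parts = urllib.parse.urlparse(url)
    if parts.scheme != "https" or parts.netloc != "github.com" or parts.fragment:
        return None
    # Pad with empty strings so short paths unpack cleanly.
    org, repo, artifact, value, rest = (parts.path.split("/", 5)[1:] + [""] * 5)[:5]
    if not (org and repo and artifact and value) or rest:
        return None
    if artifact in ("pull", "issues"):
        return f"{org}/{repo}#{value}"
    if artifact == "commit":
        return f"{org}/{repo}@{value[:COMMIT_ID_PREFIX_LENGTH]}"
    return None
```

Links with fragments (e.g. `#issuecomment-...`) or extra path segments (e.g. `/files`) fall through and keep their full text, matching the fixture cases above.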
| Hello @zulip/server-markdown members, this issue was labeled with the "area: markdown" label, so you may want to check it out!
<!-- areaLabelAddition -->
Yeah, this seems like a nice touch that we should add to what we're doing. I think we might want to do it as a built-in feature of the markdown processor's link handler, not an optional linkification filter, since I don't see a reason anyone would want to turn this off.
@zulipbot claim
Welcome to Zulip, @scholtzan! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* Sign the [Dropbox Contributor License Agreement](https://opensource.dropbox.com/cla/), so that Zulip can use your code.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
this sounds like a very nice addition!
maybe there could be a way to enable this also for self-hosted environments?
we are running a gitlab instance on premise so it would be extremely handy to configure that in the settings in order to get those shortened URL links...
Yeah, a GitLab version of this feature would require a realm-level setting for the GitLab server. We can build the GitHub feature and then plan to extend it to support GitLab as well.
Hello @scholtzan, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
You can reclaim this issue or claim any other issue by commenting `@zulipbot claim` on that issue.
Thanks for your contributions, and hope to see you again soon!
https://github.com/zulip/zulip/pull/11924 | 2020-12-17T15:11:10 |
zulip/zulip | 16,978 | zulip__zulip-16978 | [
"16846"
] | 50121cce5ef5764eb1b516fc3e1552552a67345b | diff --git a/tools/run-dev.py b/tools/run-dev.py
--- a/tools/run-dev.py
+++ b/tools/run-dev.py
@@ -341,7 +341,7 @@ def shutdown_handler(*args: Any, **kwargs: Any) -> None:
def print_listeners() -> None:
external_host = os.getenv('EXTERNAL_HOST', 'localhost')
- print(f"\nStarting Zulip on {CYAN}http://{external_host}:{proxy_port}/{ENDC}. Internal ports:")
+ print(f"\nStarting Zulip on:\n\n\t{CYAN}http://{external_host}:{proxy_port}/{ENDC}\n\nInternal ports:")
ports = [
(proxy_port, 'Development server proxy (connect here)'),
(django_port, 'Django'),
| The terminal message for accessing the server is not very clear.
After running the server, the terminal message does not tell the user to access the server at http://localhost:9991/. This makes it very difficult for a first-time user to know how to proceed.
**Screenshot**

**Changes Proposed**
There should be an appropriate message stating "Your Zulip server is running on http://localhost:9991/".
**My OS / Browser**
macOS / Chrome
**Zulip chat regarding the issue**
https://chat.zulip.org/#narrow/stream/21-provision-help/topic/Not.20able.20to.20set.20development.20environment
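For illustration, the fixed startup banner is just an ANSI-colored f-string; the escape codes below are standard ANSI cyan/reset stand-ins for the constants `run-dev.py` actually imports, and the host/port are hardcoded here:

```python
CYAN = "\033[96m"  # illustrative; the real constant lives in Zulip's tooling
ENDC = "\033[0m"   # reset attributes
proxy_port = 9991

banner = f"\nStarting Zulip on:\n\n\t{CYAN}http://localhost:{proxy_port}/{ENDC}\n\nInternal ports:"
print(banner)
```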
| Can I work on this issue?
@Riken-Shah Can we both work together?
I would like that @Gautam-Arora24, but I don't think this issue needs collaboration (as we just need to change one file). As you opened this issue, I think you should work on it. If you are stuck anywhere, feel free to reach out to me and we'll try to solve it together.
@Riken-Shah Thank you. I will surely work on this issue. Will contact you in case of any doubt.
@zulipbot claim
@hackerkid What should be the correct location for this terminal message?
@Gautam-Arora24 Probably before "quit the server with ctrl c". It can also be in a different color so that it's easily noticeable. Maybe we can take some inspiration from projects like create-react-app or gatsby. I am assuming you have used them since your profile mentions React.
@hackerkid Ya, I will definitely try something that is more colourful and eye catching. Thanks for assigning me :)
My thinking is much of what we want to do is actually to suppress the notices that we don't want (like the webpack notice) that are provided by third-party tools.
Hello @zulip/server-tooling members, this issue was labeled with the "area: tooling" label, so you may want to check it out!
<!-- areaLabelAddition -->
Is this issue solved? Or can I work on it?
@ganpa3 If you want, you can work on this issue
@zulipbot claim
Welcome to Zulip, @ganpa3! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)! | 2020-12-29T10:26:48 |
|
zulip/zulip | 17,014 | zulip__zulip-17014 | [
"16163"
] | 82d6d925e51a35bdaf940033fc467c4141c390cd | diff --git a/zerver/openapi/python_examples.py b/zerver/openapi/python_examples.py
--- a/zerver/openapi/python_examples.py
+++ b/zerver/openapi/python_examples.py
@@ -392,6 +392,23 @@ def get_profile(client: Client) -> None:
validate_against_openapi_schema(result, "/users/me", "get", "200")
+@openapi_test_function("/users/me:delete")
+def deactivate_own_user(client: Client, owner_client: Client) -> None:
+ user_id = client.get_profile()["user_id"]
+
+ # {code_example|start}
+ # Deactivate the account of the current user/bot that requests.
+ result = client.call_endpoint(
+ url="/users/me",
+ method="DELETE",
+ )
+ # {code_example|end}
+
+ # Reactivate the account to avoid polluting other tests.
+ owner_client.reactivate_user_by_id(user_id)
+ validate_against_openapi_schema(result, "/users/me", "delete", "200")
+
+
@openapi_test_function("/get_stream_id:get")
def get_stream_id(client: Client) -> int:
@@ -1295,7 +1312,7 @@ def test_messages(client: Client, nonadmin_client: Client) -> None:
test_delete_message_edit_permission_error(client, nonadmin_client)
-def test_users(client: Client) -> None:
+def test_users(client: Client, owner_client: Client) -> None:
create_user(client)
get_members(client)
@@ -1320,6 +1337,7 @@ def test_users(client: Client) -> None:
get_alert_words(client)
add_alert_words(client)
remove_alert_words(client)
+ deactivate_own_user(client, owner_client)
def test_streams(client: Client, nonadmin_client: Client) -> None:
@@ -1371,10 +1389,10 @@ def test_errors(client: Client) -> None:
test_invalid_stream_error(client)
-def test_the_api(client: Client, nonadmin_client: Client) -> None:
+def test_the_api(client: Client, nonadmin_client: Client, owner_client: Client) -> None:
get_user_agent(client)
- test_users(client)
+ test_users(client, owner_client)
test_streams(client, nonadmin_client)
test_messages(client, nonadmin_client)
test_queues(client)
| diff --git a/zerver/openapi/test_curl_examples.py b/zerver/openapi/test_curl_examples.py
--- a/zerver/openapi/test_curl_examples.py
+++ b/zerver/openapi/test_curl_examples.py
@@ -22,7 +22,7 @@
)
-def test_generated_curl_examples_for_success(client: Client) -> None:
+def test_generated_curl_examples_for_success(client: Client, owner_client: Client) -> None:
authentication_line = f"{client.email}:{client.api_key}"
# A limited Markdown engine that just processes the code example syntax.
realm = get_realm("zulip")
@@ -49,11 +49,22 @@ def test_generated_curl_examples_for_success(client: Client) -> None:
curl_command_html = md_engine.convert(line.strip())
unescaped_html = html.unescape(curl_command_html)
curl_command_text = unescaped_html[len("<p><code>curl\n") : -len("</code></p>")]
-
curl_command_text = curl_command_text.replace(
"BOT_EMAIL_ADDRESS:BOT_API_KEY", authentication_line
)
+ # TODO: This needs_reactivation block is a hack.
+ # However, it's awkward to test the "deactivate
+ # myself" endpoint with how this system tries to use
+ # the same account for all tests without some special
+ # logic for that endpoint; and the hack is better than
+ # just not documenting the endpoint.
+ needs_reactivation = False
+ user_id = 0
+ if file_name == "templates/zerver/api/deactivate-own-user.md":
+ needs_reactivation = True
+ user_id = client.get_profile()["user_id"]
+
print("Testing {} ...".format(curl_command_text.split("\n")[0]))
# Turn the text into an arguments list.
@@ -69,6 +80,8 @@ def test_generated_curl_examples_for_success(client: Client) -> None:
)
response = json.loads(response_json)
assert response["result"] == "success"
+ if needs_reactivation:
+ owner_client.reactivate_user_by_id(user_id)
except (AssertionError, Exception):
error_template = """
Error verifying the success of the API documentation curl example.
| API: deactivate_my_account is undocumented.
This line: https://github.com/zulip/zulip/blob/33d7a2268563cc89dca662369ccacfd0d35a8a7f/zerver/openapi/zulip.yaml#L4383 is undocumented. https://chat.zulip.org/api/deactivate-my-account currently returns 404. This issue was discussed at https://chat.zulip.org/#narrow/stream/127-integrations/topic/go-zulip-api/near/991579.
| Hello @zulip/server-api members, this issue was labeled with the "area: documentation (api and integrations)" label, so you may want to check it out!
@akashaviator would you be up for taking this issue?
I believe the bug is that we just haven't written a /api/ page for this; i.e. we only completed one of the two steps involved in writing API documentation for a function.
@zulipbot claim | 2021-01-05T22:55:10 |
zulip/zulip | 17,019 | zulip__zulip-17019 | [
"16642"
] | 3ef6f6e2e20a40ce4581f457680b1d0f482b3ac2 | diff --git a/zerver/lib/email_mirror.py b/zerver/lib/email_mirror.py
--- a/zerver/lib/email_mirror.py
+++ b/zerver/lib/email_mirror.py
@@ -10,6 +10,7 @@
from django.utils.timezone import timedelta
from zerver.lib.actions import (
+ check_send_message,
internal_send_huddle_message,
internal_send_private_message,
internal_send_stream_message,
@@ -20,7 +21,7 @@
get_email_gateway_message_string_from_address,
)
from zerver.lib.email_notifications import convert_html_to_markdown
-from zerver.lib.exceptions import RateLimited
+from zerver.lib.exceptions import JsonableError, RateLimited
from zerver.lib.message import normalize_body, truncate_topic
from zerver.lib.queue import queue_json_publish
from zerver.lib.rate_limiter import RateLimitedObject
@@ -33,6 +34,7 @@
Recipient,
Stream,
UserProfile,
+ get_client,
get_display_recipient,
get_stream_by_id_in_realm,
get_system_bot,
@@ -210,6 +212,27 @@ def send_zulip(sender: UserProfile, stream: Stream, topic: str, content: str) ->
)
+def send_mm_reply_to_stream(
+ user_profile: UserProfile, stream: Stream, topic: str, body: str
+) -> None:
+ try:
+ check_send_message(
+ sender=user_profile,
+ client=get_client("Internal"),
+ message_type_name="stream",
+ message_to=[stream.id],
+ topic_name=topic,
+ message_content=body,
+ )
+ except JsonableError as error:
+ error_message = "Error sending message to stream {stream} via missed messages email reply:\n{error}".format(
+ stream=stream.name, error=error.msg
+ )
+ internal_send_private_message(
+ get_system_bot(settings.NOTIFICATION_BOT), user_profile, error_message
+ )
+
+
def get_message_part_by_type(message: EmailMessage, content_type: str) -> Optional[str]:
charsets = message.get_charsets()
@@ -429,12 +452,7 @@ def process_missed_message(to: str, message: EmailMessage) -> None:
if recipient.type == Recipient.STREAM:
stream = get_stream_by_id_in_realm(recipient.type_id, user_profile.realm)
- internal_send_stream_message(
- user_profile,
- stream,
- topic,
- body,
- )
+ send_mm_reply_to_stream(user_profile, stream, topic, body)
recipient_str = stream.name
elif recipient.type == Recipient.PERSONAL:
display_recipient = get_display_recipient(recipient)
| diff --git a/zerver/tests/test_email_mirror.py b/zerver/tests/test_email_mirror.py
--- a/zerver/tests/test_email_mirror.py
+++ b/zerver/tests/test_email_mirror.py
@@ -11,7 +11,12 @@
from django.conf import settings
from django.http import HttpResponse
-from zerver.lib.actions import do_deactivate_realm, do_deactivate_user, ensure_stream
+from zerver.lib.actions import (
+ do_change_stream_post_policy,
+ do_deactivate_realm,
+ do_deactivate_user,
+ ensure_stream,
+)
from zerver.lib.email_mirror import (
ZulipEmailForwardError,
create_missed_message_address,
@@ -37,6 +42,7 @@
from zerver.models import (
MissedMessageEmailAddress,
Recipient,
+ Stream,
UserProfile,
get_display_recipient,
get_realm,
@@ -927,6 +933,47 @@ def test_receive_missed_stream_message_email_messages(self) -> None:
self.assertEqual(message.recipient.type, Recipient.STREAM)
self.assertEqual(message.recipient.id, usermessage.message.recipient.id)
+ def test_receive_email_response_for_auth_failures(self) -> None:
+ user_profile = self.example_user("hamlet")
+ self.subscribe(user_profile, "announce")
+ self.login("hamlet")
+ result = self.client_post(
+ "/json/messages",
+ {
+ "type": "stream",
+ "topic": "test topic",
+ "content": "test_receive_email_response_for_auth_failures",
+ "client": "test suite",
+ "to": "announce",
+ },
+ )
+ self.assert_json_success(result)
+
+ stream = get_stream("announce", user_profile.realm)
+ do_change_stream_post_policy(stream, Stream.STREAM_POST_POLICY_ADMINS)
+
+ usermessage = most_recent_usermessage(user_profile)
+
+ mm_address = create_missed_message_address(user_profile, usermessage.message)
+
+ incoming_valid_message = EmailMessage()
+ incoming_valid_message.set_content("TestMissedMessageEmailMessages Body")
+
+ incoming_valid_message["Subject"] = "TestMissedMessageEmailMessages Subject"
+ incoming_valid_message["From"] = user_profile.delivery_email
+ incoming_valid_message["To"] = mm_address
+ incoming_valid_message["Reply-to"] = user_profile.delivery_email
+
+ process_message(incoming_valid_message)
+
+ message = most_recent_message(user_profile)
+
+ self.assertEqual(
+ message.content,
+ "Error sending message to stream announce via missed messages email reply:\nOnly organization administrators can send to this stream.",
+ )
+ self.assertEqual(message.sender, get_system_bot(settings.NOTIFICATION_BOT))
+
def test_missed_stream_message_email_response_tracks_topic_change(self) -> None:
self.subscribe(self.example_user("hamlet"), "Denmark")
self.subscribe(self.example_user("othello"), "Denmark")
| Failures to send message in email gateway should notify the user in some way
Replying to an email on, say, an announcements stream (which only allows admins to post) will attempt to reply on the stream, and error out in `_internal_prep_message` due to the authorization check in `check_message`.
This results in logging an exception, returning `None` from `internal_send_stream_message` in `process_missed_message`, ignoring that value, and then logging `Successfully processed email from user`. From the user's point of view, they receive no feedback that the message failed to send, or why.
If the message is from the email gateway, we should not log an exception for auth failures; rather, we should notify the user (either via a reply email, or via a PM) that their message failed to send.
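The intended catch-and-notify flow can be sketched generically. Every name below (`MessageSendError`, `send_to_stream`, `process_reply`) is an illustrative stand-in, not Zulip's actual API, and the "notification" is modeled as a returned string rather than a PM or reply email:

```python
# Generic sketch of the proposed catch-and-notify pattern.  All
# names here are illustrative stand-ins, not Zulip's actual API.

class MessageSendError(Exception):
    """Raised when an authorization check rejects a message."""


def send_to_stream(sender_is_admin: bool, stream: str, body: str) -> None:
    # Stand-in authorization check: only admins may post to this
    # hypothetical announcements stream.
    if stream == "announce" and not sender_is_admin:
        raise MessageSendError(
            "Only organization administrators can send to this stream."
        )


def process_reply(sender_is_admin: bool, stream: str, body: str) -> str:
    # Instead of letting the failure vanish into a server log,
    # surface it to the sender (modeled here as a returned string;
    # in a real system this would be a PM or a reply email).
    try:
        send_to_stream(sender_is_admin, stream, body)
    except MessageSendError as error:
        return (
            f"Error sending message to stream {stream} "
            f"via missed messages email reply:\n{error}"
        )
    return "sent"
```

The key point is that an authorization failure becomes feedback for the sender instead of only a server-side log entry.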
| Hello @zulip/server-development members, this issue was labeled with the "area: emails" label, so you may want to check it out!
@timabbott I was looking to work on this issue. Shall I go ahead?
@zulipbot claim
Hi @Abhirup-99, I had actually started on this issue but could not make decent progress as my college exams and practicals kicked in. I would like to carry on with this one after the exams if you don't mind. I hope you can find some other good issue to work on. Btw zulip-bot is down these days, so you may not be able to claim issues.
Some chat discussion here: https://chat.zulip.org/#narrow/stream/127-integrations/topic/email-gateway/near/1084269
I wanted to work on it, @ligmitz are you still working on it?
@m-e-l-u-h-a-n Working on it ! | 2021-01-06T08:51:04 |
zulip/zulip | 17,020 | zulip__zulip-17020 | [
"17015"
] | 6c888977a6f7b319f5e91ba02eab67557d6a377a | diff --git a/zerver/forms.py b/zerver/forms.py
--- a/zerver/forms.py
+++ b/zerver/forms.py
@@ -19,7 +19,7 @@
from two_factor.utils import totp_digits
from zerver.lib.actions import do_change_password, email_not_system_bot
-from zerver.lib.email_validation import email_allowed_for_realm, validate_email_not_already_in_realm
+from zerver.lib.email_validation import email_allowed_for_realm
from zerver.lib.name_restrictions import is_disposable_domain, is_reserved_subdomain
from zerver.lib.rate_limiter import RateLimited, RateLimitedObject
from zerver.lib.request import JsonableError
@@ -178,8 +178,6 @@ def clean_email(self) -> str:
except EmailContainsPlusError:
raise ValidationError(_("Email addresses containing + are not allowed in this organization."))
- validate_email_not_already_in_realm(realm, email)
-
if realm.is_zephyr_mirror_realm:
email_is_not_mit_mailing_list(email)
diff --git a/zerver/views/registration.py b/zerver/views/registration.py
--- a/zerver/views/registration.py
+++ b/zerver/views/registration.py
@@ -163,9 +163,7 @@ def accounts_register(request: HttpRequest) -> HttpResponse:
try:
validate_email_not_already_in_realm(realm, email)
except ValidationError:
- view_url = reverse('login')
- redirect_url = add_query_to_redirect_url(view_url, urlencode({"email": email}))
- return HttpResponseRedirect(redirect_url)
+ return redirect_to_email_login_url(email)
name_validated = False
full_name = None
@@ -486,7 +484,7 @@ def send_confirm_registration_email(email: str, activation_url: str, language: s
def redirect_to_email_login_url(email: str) -> HttpResponseRedirect:
login_url = reverse('login')
- redirect_url = add_query_to_redirect_url(login_url, urlencode({"already_registered": email}))
+ redirect_url = add_query_to_redirect_url(login_url, urlencode({"email": email, "already_registered": 1}))
return HttpResponseRedirect(redirect_url)
def create_realm(request: HttpRequest, creation_key: Optional[str]=None) -> HttpResponse:
@@ -555,6 +553,12 @@ def accounts_home(request: HttpRequest, multiuse_object_key: str="",
form = HomepageForm(request.POST, realm=realm, from_multiuse_invite=from_multiuse_invite)
if form.is_valid():
email = form.cleaned_data['email']
+
+ try:
+ validate_email_not_already_in_realm(realm, email)
+ except ValidationError:
+ return redirect_to_email_login_url(email)
+
activation_url = prepare_activation_url(email, request, streams=streams_to_subscribe,
invited_as=invited_as)
try:
@@ -565,11 +569,6 @@ def accounts_home(request: HttpRequest, multiuse_object_key: str="",
return HttpResponseRedirect(reverse('signup_send_confirm', kwargs={'email': email}))
- email = request.POST['email']
- try:
- validate_email_not_already_in_realm(realm, email)
- except ValidationError:
- return redirect_to_email_login_url(email)
else:
form = HomepageForm(realm=realm)
context = login_context(request)
| diff --git a/zerver/tests/test_signup.py b/zerver/tests/test_signup.py
--- a/zerver/tests/test_signup.py
+++ b/zerver/tests/test_signup.py
@@ -690,6 +690,18 @@ def test_register_deactivated(self) -> None:
with self.assertRaises(UserProfile.DoesNotExist):
self.nonreg_user('test')
+ def test_register_with_invalid_email(self) -> None:
+ """
+ If you try to register with invalid email, you get an invalid email
+ page
+ """
+ invalid_email = "foo\x00bar"
+ result = self.client_post('/accounts/home/', {'email': invalid_email},
+ subdomain="zulip")
+
+ self.assertEqual(result.status_code, 200)
+ self.assertContains(result, "Enter a valid email address")
+
def test_register_deactivated_partway_through(self) -> None:
"""
If you try to register for a deactivated realm, you get a clear error
@@ -1672,7 +1684,7 @@ def test_validate_email_not_already_in_realm(self) -> None:
response = self.client_post(url, {"key": registration_key, "from_confirmation": 1, "full_name": "alice"})
self.assertEqual(response.status_code, 302)
self.assertEqual(response.url, reverse('login') + '?' +
- urlencode({"email": email}))
+ urlencode({"email": email, "already_registered": 1}))
class InvitationsTestCase(InviteUserBase):
def test_do_get_user_invites(self) -> None:
| Validate form of emails to /register/
Most paths in `zerver/views/registration.py` validate the email by calling `validators.validate_email(email)` where necessary. However, the registration from `accounts_home` does not, which can [lead to invalid SQL queries when trying to check if the email is already in use](https://sentry.io/share/issue/6b34ad6039df4d029c85230d6b715df7/), and thus a 500.
We should validate the email using `validate_email` before checking if it is in use. This is probably just a case of:
```
diff --git zerver/views/registration.py zerver/views/registration.py
index f55d73026e..6d9a072b17 100644
--- zerver/views/registration.py
+++ zerver/views/registration.py
@@ -566,6 +566,10 @@ def accounts_home(request: HttpRequest, multiuse_object_key: str="",
return HttpResponseRedirect(reverse('signup_send_confirm', kwargs={'email': email}))
email = request.POST['email']
+ try:
+ validators.validate_email(email)
+ except ValidationError:
+ return render(request, "zerver/invalid_email.html", context={"invalid_email": True})
try:
validate_email_not_already_in_realm(realm, email)
except ValidationError:
```
But that is totally untested, and I'd love more coverage on this endpoint.
| @zulipbot claim
Hello @tushar912, it looks like you've currently claimed 1 issue in this repository. We encourage new contributors to focus their efforts on at most 1 issue at a time, so please complete your work on your other claimed issues before trying to claim this issue again.
We look forward to your valuable contributions!
I am working on this
@zulipbot claim
Welcome to Zulip, @prakhar-ai! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
@prakhar-ai I have almost done this.
@tushar912 I don't think zulipbot allows you to work on two issues simultaneously, which is why you weren't assigned. I'm sorry if this caused any misunderstanding.
@prakhar-ai No problem ....If you need help do comment here.
I'm unable to run tests due to [OSError: [Errno 26] Text file busy](https://zulip.readthedocs.io/en/latest/development/setup-vagrant.html#oserror-errno-26-text-file-busy). I have unassigned myself. @tushar912 feel free to work on this issue. | 2021-01-06T17:45:37 |
zulip/zulip | 17,031 | zulip__zulip-17031 | [
"15210"
] | 26d97ce7e3a919296582f4cd206932b1ad9e0169 | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -204,7 +204,12 @@
)
from zerver.lib.user_mutes import add_user_mute, get_muting_users, get_user_mutes
from zerver.lib.user_status import update_user_status
-from zerver.lib.user_topics import add_topic_mute, get_topic_mutes, remove_topic_mute
+from zerver.lib.user_topics import (
+ add_topic_mute,
+ get_topic_mutes,
+ get_users_muting_topic,
+ remove_topic_mute,
+)
from zerver.lib.users import (
check_bot_name_available,
check_full_name,
@@ -7086,6 +7091,47 @@ def user_info(um: UserMessage) -> Dict[str, Any]:
users_to_be_notified += list(map(subscriber_info, sorted(list(subscriber_ids))))
+ # Migrate muted topic configuration in the following circumstances:
+ #
+ # * If propagate_mode is change_all, do so unconditionally.
+ #
+ # * If propagate_mode is change_later, it's likely that we want to
+ # move these only when it appears that the intent is to move
+ # most of the topic, not just the last 1-2 messages which may
+ # have been "off topic". At present we do so unconditionally.
+ #
+ # * Never move muted topic configuration with change_one.
+ #
+ # We may want more complex behavior in cases where one appears to
+ # be merging topics (E.g. there are existing messages in the
+ # target topic).
+ #
+ # Moving a topic to another stream is complicated in that we want
+ # to avoid creating a UserTopic row for the user in a stream that
+ # they don't have access to; doing so could leak information about
+ # the existence of a private stream to some users. See the
+ # moved_all_visible_messages below for related details.
+ #
+ # So for now, we require new_stream=None for this feature.
+ if topic_name is not None and propagate_mode != "change_one" and new_stream is None:
+ assert stream_being_edited is not None
+ for muting_user in get_users_muting_topic(stream_being_edited.id, orig_topic_name):
+ # TODO: Ideally, this would be a bulk update operation,
+ # because we are doing database operations in a loop here.
+ #
+ # This loop is only acceptable in production because it is
+ # rare for more than a few users to have muted an
+ # individual topic that is being moved; as of this
+ # writing, no individual topic in Zulip Cloud had been
+ # muted by more than 100 users.
+
+ # We call remove_topic_mute rather than do_unmute_topic to
+ # avoid sending two events with new muted topics in
+ # immediate succession; this is correct only because
+ # muted_topics events always send the full set of topics.
+ remove_topic_mute(muting_user, stream_being_edited.id, orig_topic_name)
+ do_mute_topic(muting_user, stream_being_edited, topic_name)
+
send_event(user_profile.realm, event, users_to_be_notified)
if len(changed_messages) > 0 and new_stream is not None and stream_being_edited is not None:
@@ -7906,6 +7952,9 @@ def do_mute_topic(
def do_unmute_topic(user_profile: UserProfile, stream: Stream, topic: str) -> None:
+ # Note: If you add any new code to this function, the
+ # remove_topic_mute call in do_update_message will need to be
+ # updated for correctness.
try:
remove_topic_mute(user_profile, stream.id, topic)
except UserTopic.DoesNotExist:
diff --git a/zerver/lib/user_topics.py b/zerver/lib/user_topics.py
--- a/zerver/lib/user_topics.py
+++ b/zerver/lib/user_topics.py
@@ -1,6 +1,7 @@
import datetime
from typing import Any, Callable, Dict, List, Optional, Tuple
+from django.db.models.query import QuerySet
from django.utils.timezone import now as timezone_now
from sqlalchemy.sql import ClauseElement, and_, column, not_, or_
from sqlalchemy.types import Integer
@@ -152,3 +153,13 @@ def is_muted(recipient_id: int, topic: str) -> bool:
return (recipient_id, topic.lower()) in tups
return is_muted
+
+
+def get_users_muting_topic(stream_id: int, topic_name: str) -> QuerySet[UserProfile]:
+ return UserProfile.objects.select_related("realm").filter(
+ id__in=UserTopic.objects.filter(
+ stream_id=stream_id,
+ visibility_policy=UserTopic.MUTED,
+ topic_name__iexact=topic_name,
+ ).values("user_profile_id")
+ )
| diff --git a/zerver/tests/test_message_edit.py b/zerver/tests/test_message_edit.py
--- a/zerver/tests/test_message_edit.py
+++ b/zerver/tests/test_message_edit.py
@@ -9,6 +9,7 @@
from django.utils.timezone import now as timezone_now
from zerver.lib.actions import (
+ check_update_message,
do_add_reaction,
do_change_realm_plan_type,
do_change_stream_post_policy,
@@ -24,6 +25,12 @@
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import cache_tries_captured, queries_captured
from zerver.lib.topic import RESOLVED_TOPIC_PREFIX, TOPIC_NAME
+from zerver.lib.user_topics import (
+ get_topic_mutes,
+ get_users_muting_topic,
+ set_topic_mutes,
+ topic_is_muted,
+)
from zerver.models import Message, Realm, Stream, UserMessage, UserProfile, get_realm, get_stream
@@ -1236,6 +1243,108 @@ def notify(user_id: int) -> Dict[str, Any]:
users_to_be_notified = list(map(notify, [hamlet.id]))
do_update_message_topic_success(hamlet, message, "Change again", users_to_be_notified)
+ @mock.patch("zerver.lib.actions.send_event")
+ def test_edit_muted_topic(self, mock_send_event: mock.MagicMock) -> None:
+ stream_name = "Stream 123"
+ stream = self.make_stream(stream_name)
+ hamlet = self.example_user("hamlet")
+ cordelia = self.example_user("cordelia")
+ aaron = self.example_user("aaron")
+ self.subscribe(hamlet, stream_name)
+ self.login_user(hamlet)
+ message_id = self.send_stream_message(
+ hamlet, stream_name, topic_name="Topic1", content="Hello World"
+ )
+
+ self.subscribe(cordelia, stream_name)
+ self.login_user(cordelia)
+ self.subscribe(aaron, stream_name)
+ self.login_user(aaron)
+
+ muted_topics = [
+ [stream_name, "Topic1"],
+ [stream_name, "Topic2"],
+ ]
+ set_topic_mutes(hamlet, muted_topics)
+ set_topic_mutes(cordelia, muted_topics)
+
+ # Returns the users that need to be notified when a message topic is changed
+ def notify(user_id: int) -> Dict[str, Any]:
+ um = UserMessage.objects.get(message=message_id)
+ if um.user_profile_id == user_id:
+ return {
+ "id": user_id,
+ "flags": um.flags_list(),
+ }
+
+ else:
+ return {
+ "id": user_id,
+ "flags": ["read"],
+ }
+
+ users_to_be_notified = list(map(notify, [hamlet.id, cordelia.id, aaron.id]))
+ change_all_topic_name = "Topic 1 edited"
+
+ with queries_captured() as queries:
+ check_update_message(
+ user_profile=hamlet,
+ message_id=message_id,
+ stream_id=None,
+ topic_name=change_all_topic_name,
+ propagate_mode="change_all",
+ send_notification_to_old_thread=False,
+ send_notification_to_new_thread=False,
+ content=None,
+ )
+ # This code path adds 9 (1 + 4/user with muted topics) to
+ # the number of database queries for moving a topic.
+ self.assert_length(queries, 18)
+
+ for muting_user in get_users_muting_topic(stream.id, change_all_topic_name):
+ for user in users_to_be_notified:
+ if muting_user.id == user["id"]:
+ user["muted_topics"] = get_topic_mutes(muting_user)
+ break
+
+ self.assertFalse(topic_is_muted(hamlet, stream.id, "Topic1"))
+ self.assertFalse(topic_is_muted(cordelia, stream.id, "Topic1"))
+ self.assertFalse(topic_is_muted(aaron, stream.id, "Topic1"))
+ self.assertTrue(topic_is_muted(hamlet, stream.id, "Topic2"))
+ self.assertTrue(topic_is_muted(cordelia, stream.id, "Topic2"))
+ self.assertFalse(topic_is_muted(aaron, stream.id, "Topic2"))
+ self.assertTrue(topic_is_muted(hamlet, stream.id, change_all_topic_name))
+ self.assertTrue(topic_is_muted(cordelia, stream.id, change_all_topic_name))
+ self.assertFalse(topic_is_muted(aaron, stream.id, change_all_topic_name))
+
+ change_later_topic_name = "Topic 1 edited again"
+ check_update_message(
+ user_profile=hamlet,
+ message_id=message_id,
+ stream_id=None,
+ topic_name=change_later_topic_name,
+ propagate_mode="change_later",
+ send_notification_to_old_thread=False,
+ send_notification_to_new_thread=False,
+ content=None,
+ )
+ self.assertFalse(topic_is_muted(hamlet, stream.id, change_all_topic_name))
+ self.assertTrue(topic_is_muted(hamlet, stream.id, change_later_topic_name))
+
+ change_one_topic_name = "Topic 1 edited change_one"
+ check_update_message(
+ user_profile=hamlet,
+ message_id=message_id,
+ stream_id=None,
+ topic_name=change_one_topic_name,
+ propagate_mode="change_one",
+ send_notification_to_old_thread=False,
+ send_notification_to_new_thread=False,
+ content=None,
+ )
+ self.assertFalse(topic_is_muted(hamlet, stream.id, change_one_topic_name))
+ self.assertTrue(topic_is_muted(hamlet, stream.id, change_later_topic_name))
+
@mock.patch("zerver.lib.actions.send_event")
def test_wildcard_mention(self, mock_send_event: mock.MagicMock) -> None:
stream_name = "Macbeth"
| Renaming whole topic causes unmute
If you mute a topic and someone takes a message out of it by renaming its topic, the extracted messages are "unmuted". Unless I am mistaken, this is a feature that lets you know these messages aren't part of the topic you muted.
However, when the whole topic is renamed, *a priori* you still do not want to see the contents, and thus the topic should stay muted.
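The desired behavior can be modeled with a toy mute store; the class below is illustrative, not Zulip's actual `UserTopic` model:

```python
from typing import Set, Tuple

MuteKey = Tuple[str, str]  # (stream, lower-cased topic)


class MuteStore:
    """Toy stand-in for per-user muted-topic storage."""

    def __init__(self) -> None:
        self.muted: Set[MuteKey] = set()

    def mute(self, stream: str, topic: str) -> None:
        self.muted.add((stream, topic.lower()))

    def is_muted(self, stream: str, topic: str) -> bool:
        return (stream, topic.lower()) in self.muted

    def rename_topic(self, stream: str, old: str, new: str) -> None:
        # When the whole topic moves, any mute on the old name
        # should follow it to the new name (remove, then re-add).
        key = (stream, old.lower())
        if key in self.muted:
            self.muted.discard(key)
            self.muted.add((stream, new.lower()))
```

Renaming the whole topic moves the mute along with it, so the renamed thread stays hidden.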
| Hello @zulip/server-message-view members, this issue was labeled with the "area: message-editing" label, so you may want to check it out!
Yeah, I think I agree that in the case that it's a full-topic change, we should update any MutedTopic entries to point to the new topic. This might be a little complicated to do, since we'll need to notify the frontend in addition to updating the backend data model.
Should this still happen when the stream changed as well? Probably not.
@aman566 FYI; this is probably lower priority than the other message editing issues on your plate. | 2021-01-09T08:50:12 |
zulip/zulip | 17,033 | zulip__zulip-17033 | [
"16048"
] | 2e7aaba0dde5517b4a55cb0bd782f009be45e3ba | diff --git a/zerver/openapi/python_examples.py b/zerver/openapi/python_examples.py
--- a/zerver/openapi/python_examples.py
+++ b/zerver/openapi/python_examples.py
@@ -1101,6 +1101,22 @@ def update_notification_settings(client: Client) -> None:
validate_against_openapi_schema(result, "/settings/notifications", "patch", "200")
+@openapi_test_function("/settings/display:patch")
+def update_display_settings(client: Client) -> None:
+
+ # {code_example|start}
+ # Show user list on left sidebar in narrow windows.
+ # Change emoji set used for display to Google modern.
+ request = {
+ "left_side_userlist": True,
+ "emojiset": '"google"',
+ }
+ result = client.call_endpoint("settings/display", method="PATCH", request=request)
+ # {code_example|end}
+
+ validate_against_openapi_schema(result, "/settings/display", "patch", "200")
+
+
@openapi_test_function("/user_uploads:post")
def upload_file(client: Client) -> None:
path_to_file = os.path.join(ZULIP_DIR, "zerver", "tests", "images", "img.jpg")
@@ -1391,6 +1407,7 @@ def test_users(client: Client, owner_client: Client) -> None:
get_subscription_status(client)
get_profile(client)
update_notification_settings(client)
+ update_display_settings(client)
upload_file(client)
get_attachments(client)
set_typing_status(client)
| diff --git a/zerver/tests/test_openapi.py b/zerver/tests/test_openapi.py
--- a/zerver/tests/test_openapi.py
+++ b/zerver/tests/test_openapi.py
@@ -266,8 +266,6 @@ class OpenAPIArgumentsTest(ZulipTestCase):
"/settings",
"/users/me/avatar",
"/users/me/api_key/regenerate",
- # Not very useful outside the UI
- "/settings/display",
# Much more valuable would be an org admin bulk-upload feature.
"/users/me/profile_data",
#### Should be documented as part of interactive bots documentation
| Document display settings in Zulip API documentation
This should look very similar to the existing documentation for the notification settings endpoint.
When we do this, we should remember to update the `/register` documentation added in #16000 for display settings to include a link to the detailed explanations of the meaning of these settings.
| Hello @zulip/server-api members, this issue was labeled with the "area: documentation (api and integrations)" label, so you may want to check it out!
<!-- areaLabelAddition -->
I am working on it.
@timabbott I have added PR #17033 to address this; it would be great if you could review it.
zulip/zulip | 17,045 | zulip__zulip-17045 | [
"10830"
] | 99636c36a31ba524857833b47e911dd1e6972a1f | diff --git a/tools/linter_lib/custom_check.py b/tools/linter_lib/custom_check.py
--- a/tools/linter_lib/custom_check.py
+++ b/tools/linter_lib/custom_check.py
@@ -745,6 +745,10 @@
"pattern": '{{t "[^"]+ " }}',
"description": "Translatable strings should not have trailing spaces.",
},
+ {
+ "pattern": r'"{{t "',
+ "description": "Invalid quoting for HTML element with translated string.",
+ },
],
)
| diff --git a/frontend_tests/puppeteer_tests/realm-linkifier.ts b/frontend_tests/puppeteer_tests/realm-linkifier.ts
--- a/frontend_tests/puppeteer_tests/realm-linkifier.ts
+++ b/frontend_tests/puppeteer_tests/realm-linkifier.ts
@@ -35,11 +35,11 @@ async function test_add_linkifier(page: Page): Promise<void> {
}
async function test_delete_linkifier(page: Page): Promise<void> {
- await page.click(".linkifier_row button");
+ await page.click(".linkifier_row .delete");
await page.waitForSelector(".linkifier_row", {hidden: true});
}
-async function test_invalid_linkifier_pattern(page: Page): Promise<void> {
+async function test_add_invalid_linkifier_pattern(page: Page): Promise<void> {
await page.waitForSelector(".admin-linkifier-form", {visible: true});
await common.fill_form(page, "form.admin-linkifier-form", {
pattern: "a$",
@@ -54,14 +54,94 @@ async function test_invalid_linkifier_pattern(page: Page): Promise<void> {
);
}
+async function test_edit_linkifier(page: Page): Promise<void> {
+ await page.click(".linkifier_row .edit");
+ await page.waitForFunction(() => document.activeElement === $("#linkifier-edit-form-modal")[0]);
+ await common.fill_form(page, "form.linkifier-edit-form", {
+ pattern: "(?P<num>[0-9a-f]{40})",
+ url_format_string: "https://trac.example.com/commit/%(num)s",
+ });
+ await page.click(".submit-linkifier-info-change");
+
+ await page.waitForSelector("#linkifier-edit-form-modal", {hidden: true});
+ await page.waitForFunction(() => $(".edit-linkifier-status").text().trim() === "Saved");
+ await page.waitForSelector(".linkifier_row", {visible: true});
+ assert.strictEqual(
+ await common.get_text_from_selector(page, ".linkifier_row span.linkifier_pattern"),
+ "(?P<num>[0-9a-f]{40})",
+ );
+ assert.strictEqual(
+ await common.get_text_from_selector(
+ page,
+ ".linkifier_row span.linkifier_url_format_string",
+ ),
+ "https://trac.example.com/commit/%(num)s",
+ );
+}
+
+async function test_edit_invalid_linkifier(page: Page): Promise<void> {
+ await page.click(".linkifier_row .edit");
+ await page.waitForFunction(() => document.activeElement === $("#linkifier-edit-form-modal")[0]);
+ await common.fill_form(page, "form.linkifier-edit-form", {
+ pattern: "####",
+ url_format_string: "####",
+ });
+ await page.click(".submit-linkifier-info-change");
+
+ const edit_linkifier_pattern_status_selector = "div#edit-linkifier-pattern-status";
+ await page.waitForSelector(edit_linkifier_pattern_status_selector, {visible: true});
+ const edit_linkifier_pattern_status = await common.get_text_from_selector(
+ page,
+ edit_linkifier_pattern_status_selector,
+ );
+ assert.strictEqual(
+ edit_linkifier_pattern_status,
+ "Failed: Invalid linkifier pattern. Valid characters are [ a-zA-Z_#=/:+!-].",
+ );
+
+ const edit_linkifier_format_status_selector = "div#edit-linkifier-format-status";
+ await page.waitForSelector(edit_linkifier_format_status_selector, {visible: true});
+ const edit_linkifier_format_status = await common.get_text_from_selector(
+ page,
+ edit_linkifier_format_status_selector,
+ );
+ assert.strictEqual(
+ edit_linkifier_format_status,
+ "Failed: Enter a valid URL.,Invalid URL format string.",
+ );
+
+ await page.click(".cancel-linkifier-info-change");
+ await page.waitForSelector("#linkifier-edit-form-modal", {hidden: true});
+
+ await page.waitForFunction(
+ () =>
+ $(".edit-linkifier-status").text().trim() ===
+ "Save failed: Invalid linkifier pattern. Valid characters are [ a-zA-Z_#=/:+!-].",
+ );
+ await page.waitForSelector(".linkifier_row", {visible: true});
+ assert.strictEqual(
+ await common.get_text_from_selector(page, ".linkifier_row span.linkifier_pattern"),
+ "(?P<num>[0-9a-f]{40})",
+ );
+ assert.strictEqual(
+ await common.get_text_from_selector(
+ page,
+ ".linkifier_row span.linkifier_url_format_string",
+ ),
+ "https://trac.example.com/commit/%(num)s",
+ );
+}
+
async function linkifier_test(page: Page): Promise<void> {
await common.log_in(page);
await common.manage_organization(page);
await page.click("li[data-section='linkifier-settings']");
await test_add_linkifier(page);
+ await test_edit_linkifier(page);
+ await test_edit_invalid_linkifier(page);
+ await test_add_invalid_linkifier_pattern(page);
await test_delete_linkifier(page);
- await test_invalid_linkifier_pattern(page);
}
common.run_test(linkifier_test);
| Organization linkifiers should be editable in place
It would be nice if linkifiers were editable, rather than requiring deletion and re-creation. But I understand it would be harder to check for duplicate regexes that way.
| Hello @zulip/server-markdown, @zulip/server-settings members, this issue was labeled with the "area: markdown", "area: settings (admin/org)" labels, so you may want to check it out!
This seems like a reasonable UI to add, though probably not super high priority, just because folks tend to edit these only once when setting up an organization.
So, there are three functions in zerver/views/realm_filters.py: for listing, creating, and deleting. I think a function for editing a filter needs to be created. Should I do so and open a PR?
@zulipbot claim
Welcome to Zulip, @ruchit2801! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
That's correct, we need to extend the API for this.
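Extending the API with an edit endpoint would reuse the same pattern/URL-format validation as creation. Below is a minimal stdlib sketch of that check (a hypothetical helper, not Zulip's actual validator): every `%(name)s` placeholder in the URL format string must correspond to a named group in the regex, and vice versa.

```python
import re


def validate_linkifier(pattern: str, url_format: str) -> bool:
    """Hypothetical check: the regex's named groups must exactly match
    the %(name)s placeholders in the URL format string."""
    try:
        group_names = set(re.compile(pattern).groupindex)
    except re.error:
        # e.g. an unbalanced "(" makes the pattern unusable
        return False
    placeholders = set(re.findall(r"%\((\w+)\)s", url_format))
    return bool(group_names) and group_names == placeholders
```

The puppeteer test above exercises exactly such a pair: `(?P<num>[0-9a-f]{40})` with `https://trac.example.com/commit/%(num)s`.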
@zulipbot abandon
| 2021-01-12T18:24:50 |
zulip/zulip | 17,091 | zulip__zulip-17091 | [
"17071"
] | 5da304d902bbb2dc4df8948ffdac941b4416bed0 | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -4183,7 +4183,9 @@ def do_update_user_status(user_profile: UserProfile,
away: Optional[bool],
status_text: Optional[str],
client_id: int) -> None:
- if away:
+ if away is None:
+ status = None
+ elif away:
status = UserStatus.AWAY
else:
status = UserStatus.NORMAL
| diff --git a/zerver/tests/test_user_status.py b/zerver/tests/test_user_status.py
--- a/zerver/tests/test_user_status.py
+++ b/zerver/tests/test_user_status.py
@@ -230,3 +230,40 @@ def test_endpoints(self) -> None:
get_user_info_dict(realm_id=realm_id),
{},
)
+
+ # Turn on "away" status again.
+ payload = dict(away=orjson.dumps(True).decode())
+
+ event_info = EventInfo()
+ with capture_event(event_info):
+ result = self.client_post('/json/users/me/status', payload)
+ self.assert_json_success(result)
+
+ self.assertEqual(
+ event_info.payload,
+ dict(type='user_status', user_id=hamlet.id, away=True),
+ )
+
+ away_user_ids = get_away_user_ids(realm_id=realm_id)
+ self.assertEqual(away_user_ids, {hamlet.id})
+
+ # And set status text while away.
+ payload = dict(status_text=' at the beach ')
+
+ event_info = EventInfo()
+ with capture_event(event_info):
+ result = self.client_post('/json/users/me/status', payload)
+ self.assert_json_success(result)
+
+ self.assertEqual(
+ event_info.payload,
+ dict(type='user_status', user_id=hamlet.id, status_text='at the beach'),
+ )
+
+ self.assertEqual(
+ user_info(hamlet),
+ dict(status_text='at the beach', away=True),
+ )
+
+ away_user_ids = get_away_user_ids(realm_id=realm_id)
+ self.assertEqual(away_user_ids, {hamlet.id})
| Setting status text revokes "away"
Setting a status text while being "unavailable" revokes the away status.
https://chat.zulip.org/#narrow/stream/9-issues/topic/Setting.20status.20text.20revokes.20away/near/1100185
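The patch above fixes this by treating `away=None` as "no change". A tiny stand-in showing the tri-state mapping (constants are illustrative, not Zulip's actual code); before the fix, a bare `if away:` collapsed `None` into the `False` branch, so a text-only update reset the user to NORMAL:

```python
from typing import Optional

AWAY, NORMAL = 1, 0  # illustrative stand-ins for UserStatus.AWAY / .NORMAL


def status_from_away(away: Optional[bool]) -> Optional[int]:
    if away is None:
        return None  # parameter omitted: leave the stored away state untouched
    return AWAY if away else NORMAL
```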
| @abhijeetbodas2001 You might still want to look at this, as I believe the solution from @ligmitz might not get to the heart of the bug here. | 2021-01-20T18:34:01 |
zulip/zulip | 17,104 | zulip__zulip-17104 | [
"16970"
] | eefa687832949d86a7b954efbcd5d20d87900120 | diff --git a/zerver/lib/url_preview/preview.py b/zerver/lib/url_preview/preview.py
--- a/zerver/lib/url_preview/preview.py
+++ b/zerver/lib/url_preview/preview.py
@@ -27,9 +27,8 @@
# Use Chrome User-Agent, since some sites refuse to work on old browsers
ZULIP_URL_PREVIEW_USER_AGENT = (
- 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ZulipURLPreview/{version}; '
- '+{external_host}) Chrome/81.0.4044.113 Safari/537.36'
-).format(version=ZULIP_VERSION, external_host=settings.EXTERNAL_HOST)
+ 'Mozilla/5.0 (compatible; ZulipURLPreview/{version}; +{external_host})'
+).format(version=ZULIP_VERSION, external_host=settings.ROOT_DOMAIN_URI)
# FIXME: This header and timeout are not used by pyoembed, when trying to autodiscover!
HEADERS = {'User-Agent': ZULIP_URL_PREVIEW_USER_AGENT}
| Incorrect rendering of YouTube Titles
I tried sending some YouTube videos with the "Show previews of linked websites" setting checked, but the **extracted** YouTube title is not as expected for most of the videos: it is `YouTube - YouTube` while it should be `YouTube - [Title of the YouTube video]`. Also, the **extracted** description is `Share your videos with friends, family and the world` for every video sent.
https://chat.zulip.org/#narrow/stream/3-backend/topic/Markdown.20Youtube.20Title
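The patch above addresses this by switching to a plain "compatible" crawler User-Agent; the Chrome-like UA made YouTube serve a generic page instead of per-video Open Graph metadata. For illustration, the new format can be reproduced with a small helper (the version and domain values here are made up):

```python
def preview_user_agent(version: str, root_domain: str) -> str:
    # Mirrors the patched ZULIP_URL_PREVIEW_USER_AGENT template.
    return f"Mozilla/5.0 (compatible; ZulipURLPreview/{version}; +{root_domain})"
```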
| Hello @zulip/server-markdown members, this issue was labeled with the "area: markdown" label, so you may want to check it out!
Hi! @akshatdalton, can I try this as my first issue?
@imajit
Yeah you can try to give this a shot but I am still not sure about the behaviour of the headers.
More context in [CZO](https://chat.zulip.org/#narrow/stream/3-backend/topic/Markdown.20Youtube.20Title).
Hey @imajit, are you still working on this issue? If not, I would like to work on this | 2021-01-22T21:10:27 |
|
zulip/zulip | 17,112 | zulip__zulip-17112 | [
"17111",
"17111"
] | abff97df39cc051fbfdb1ee3829443922dc7b716 | diff --git a/zerver/lib/users.py b/zerver/lib/users.py
--- a/zerver/lib/users.py
+++ b/zerver/lib/users.py
@@ -222,18 +222,28 @@ def access_bot_by_id(user_profile: UserProfile, user_id: int) -> UserProfile:
raise JsonableError(_("Insufficient permission"))
return target
-def access_user_by_id(user_profile: UserProfile, user_id: int,
- allow_deactivated: bool=False, allow_bots: bool=False,
- read_only: bool=False) -> UserProfile:
+def access_user_by_id(
+ user_profile: UserProfile,
+ target_user_id: int,
+ *,
+ allow_deactivated: bool=False,
+ allow_bots: bool=False,
+ for_admin: bool,
+) -> UserProfile:
+ """Master function for accessing another user by ID in API code;
+ verifies the user ID is in the same realm, and if requested checks
+ for administrative privileges, with flags for various special
+ cases.
+ """
try:
- target = get_user_profile_by_id_in_realm(user_id, user_profile.realm)
+ target = get_user_profile_by_id_in_realm(target_user_id, user_profile.realm)
except UserProfile.DoesNotExist:
raise JsonableError(_("No such user"))
if target.is_bot and not allow_bots:
raise JsonableError(_("No such user"))
if not target.is_active and not allow_deactivated:
raise JsonableError(_("User is deactivated"))
- if read_only:
+ if not for_admin:
# Administrative access is not required just to read a user.
return target
if not user_profile.can_admin_user(target):
diff --git a/zerver/views/users.py b/zerver/views/users.py
--- a/zerver/views/users.py
+++ b/zerver/views/users.py
@@ -89,7 +89,7 @@ def check_last_owner(user_profile: UserProfile) -> bool:
def deactivate_user_backend(request: HttpRequest, user_profile: UserProfile,
user_id: int) -> HttpResponse:
- target = access_user_by_id(user_profile, user_id)
+ target = access_user_by_id(user_profile, user_id, for_admin=True)
if target.is_realm_owner and not user_profile.is_realm_owner:
raise OrganizationOwnerRequired()
if check_last_owner(target):
@@ -117,7 +117,8 @@ def _deactivate_user_profile_backend(request: HttpRequest, user_profile: UserPro
def reactivate_user_backend(request: HttpRequest, user_profile: UserProfile,
user_id: int) -> HttpResponse:
- target = access_user_by_id(user_profile, user_id, allow_deactivated=True, allow_bots=True)
+ target = access_user_by_id(user_profile, user_id,
+ allow_deactivated=True, allow_bots=True, for_admin=True)
if target.is_bot:
assert target.bot_type is not None
check_bot_creation_policy(user_profile, target.bot_type)
@@ -144,13 +145,19 @@ def update_user_backend(
default=None, validator=check_profile_data,
),
) -> HttpResponse:
- target = access_user_by_id(user_profile, user_id, allow_deactivated=True, allow_bots=True)
+ target = access_user_by_id(user_profile, user_id,
+ allow_deactivated=True, allow_bots=True, for_admin=True)
if role is not None and target.role != role:
- if target.role == UserProfile.ROLE_REALM_OWNER and check_last_owner(user_profile):
- return json_error(_('The owner permission cannot be removed from the only organization owner.'))
+ # Require that the current user has permissions to
+ # grant/remove the role in question. access_user_by_id has
+ # already verified we're an administrator; here we enforce
+ # that only owners can toggle the is_realm_owner flag.
if UserProfile.ROLE_REALM_OWNER in [role, target.role] and not user_profile.is_realm_owner:
raise OrganizationOwnerRequired()
+
+ if target.role == UserProfile.ROLE_REALM_OWNER and check_last_owner(user_profile):
+ return json_error(_('The owner permission cannot be removed from the only organization owner.'))
do_change_user_role(target, role, acting_user=user_profile)
if (full_name is not None and target.full_name != full_name and
@@ -470,7 +477,7 @@ def get_members_backend(request: HttpRequest, user_profile: UserProfile, user_id
target_user = None
if user_id is not None:
target_user = access_user_by_id(user_profile, user_id, allow_deactivated=True,
- allow_bots=True, read_only=True)
+ allow_bots=True, for_admin=False)
members = get_raw_user_data(realm, user_profile, target_user=target_user,
client_gravatar=client_gravatar,
@@ -548,7 +555,7 @@ def get_subscription_backend(request: HttpRequest, user_profile: UserProfile,
user_id: int=REQ(validator=check_int, path_only=True),
stream_id: int=REQ(validator=check_int, path_only=True),
) -> HttpResponse:
- target_user = access_user_by_id(user_profile, user_id, read_only=True)
+ target_user = access_user_by_id(user_profile, user_id, for_admin=False)
(stream, sub) = access_stream_by_id(user_profile, stream_id)
subscription_status = {'is_subscribed': subscribed_to_stream(target_user, stream_id)}
| diff --git a/zerver/tests/test_users.py b/zerver/tests/test_users.py
--- a/zerver/tests/test_users.py
+++ b/zerver/tests/test_users.py
@@ -382,29 +382,30 @@ def test_access_user_by_id(self) -> None:
# Must be a valid user ID in the realm
with self.assertRaises(JsonableError):
- access_user_by_id(iago, 1234)
+ access_user_by_id(iago, 1234, for_admin=False)
with self.assertRaises(JsonableError):
- access_user_by_id(iago, self.mit_user("sipbtest").id)
+ access_user_by_id(iago, self.mit_user("sipbtest").id, for_admin=False)
- # Can only access bot users if allow_deactivated is passed
+ # Can only access bot users if allow_bots is passed
bot = self.example_user("default_bot")
- access_user_by_id(iago, bot.id, allow_bots=True)
+ access_user_by_id(iago, bot.id, allow_bots=True, for_admin=True)
with self.assertRaises(JsonableError):
- access_user_by_id(iago, bot.id)
+ access_user_by_id(iago, bot.id, for_admin=True)
# Can only access deactivated users if allow_deactivated is passed
hamlet = self.example_user("hamlet")
do_deactivate_user(hamlet)
with self.assertRaises(JsonableError):
- access_user_by_id(iago, hamlet.id)
- access_user_by_id(iago, hamlet.id, allow_deactivated=True)
+ access_user_by_id(iago, hamlet.id, for_admin=False)
+ with self.assertRaises(JsonableError):
+ access_user_by_id(iago, hamlet.id, for_admin=True)
+ access_user_by_id(iago, hamlet.id, allow_deactivated=True, for_admin=True)
# Non-admin user can't admin another user
with self.assertRaises(JsonableError):
- access_user_by_id(self.example_user("cordelia"), self.example_user("aaron").id)
+ access_user_by_id(self.example_user("cordelia"), self.example_user("aaron").id, for_admin=True)
# But does have read-only access to it.
- access_user_by_id(self.example_user("cordelia"), self.example_user("aaron").id,
- read_only=True)
+ access_user_by_id(self.example_user("cordelia"), self.example_user("aaron").id, for_admin=False)
def test_change_regular_member_to_guest(self) -> None:
iago = self.example_user("iago")
| Clarify readability issues over access_user_by_id
zerver/lib/users.py has a function named access_user_by_id, which is
used in /users views to fetch a user by its ID. Along with fetching
the user, this function also performs important validations, checking
the required permissions for fetching the target user.
Although the current name works fine, it gives no hint to anyone
reading the code about the validation part. This can lead to
unnecessary confusion regarding the permissions for accessing users in
views.
An example of this was highlighted and discussed here: [CZO conversation link.](https://chat.zulip.org/#narrow/stream/49-development-help/topic/Admin.20Bot.20.20.2312424/near/1104966)
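The patch above resolves this by renaming `read_only` to a keyword-only `for_admin` flag with no default, so every call site must state whether it is performing an administrative operation. A toy sketch of that calling convention (plain dicts stand in for UserProfile rows; this is not Zulip's code):

```python
from typing import Dict


class AccessError(Exception):
    pass


def access_user(users: Dict[int, dict], acting: dict, target_id: int, *,
                allow_deactivated: bool = False, allow_bots: bool = False,
                for_admin: bool) -> dict:
    # The bare "*" makes everything after it keyword-only; omitting
    # for_admin at a call site is a TypeError, not a silent default.
    target = users.get(target_id)
    if target is None or (target["is_bot"] and not allow_bots):
        raise AccessError("No such user")
    if not target["is_active"] and not allow_deactivated:
        raise AccessError("User is deactivated")
    if for_admin and not acting["is_admin"]:
        raise AccessError("Insufficient permission")
    return target
```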
| 2021-01-23T05:53:46 |
|
zulip/zulip | 17,166 | zulip__zulip-17166 | [
"17156"
] | 830f1fa8c58150b85b0b485dd410502667b2d3af | diff --git a/zerver/lib/widget.py b/zerver/lib/widget.py
--- a/zerver/lib/widget.py
+++ b/zerver/lib/widget.py
@@ -3,7 +3,7 @@
from typing import Any, Optional, Tuple
from zerver.lib.message import SendMessageRequest
-from zerver.models import SubMessage
+from zerver.models import Message, SubMessage
def get_widget_data(content: str) -> Tuple[Optional[str], Optional[str]]:
@@ -77,3 +77,8 @@ def do_widget_post_save_actions(send_request: SendMessageRequest) -> None:
)
submessage.save()
send_request.submessages = SubMessage.get_raw_db_rows([message_id])
+
+
+def is_widget_message(message: Message) -> bool:
+ # Right now all messages that are widgetized use submessage, and vice versa.
+ return message.submessage_set.exists()
diff --git a/zerver/views/message_edit.py b/zerver/views/message_edit.py
--- a/zerver/views/message_edit.py
+++ b/zerver/views/message_edit.py
@@ -24,6 +24,7 @@
from zerver.lib.timestamp import datetime_to_timestamp
from zerver.lib.topic import LEGACY_PREV_TOPIC, REQ_topic
from zerver.lib.validator import check_bool, check_string_in, to_non_negative_int
+from zerver.lib.widget import is_widget_message
from zerver.models import Message, Realm, UserProfile
@@ -139,6 +140,10 @@ def update_message_backend(
else:
raise JsonableError(_("You don't have permission to edit this message"))
+ # Right now, we prevent users from editing widgets.
+ if content is not None and is_widget_message(message):
+ return json_error(_("Widgets cannot be edited."))
+
# If there is a change to the content, check that it hasn't been too long
# Allow an extra 20 seconds since we potentially allow editing 15 seconds
# past the limit, and in case there are network issues, etc. The 15 comes
| diff --git a/frontend_tests/node_tests/message_edit.js b/frontend_tests/node_tests/message_edit.js
--- a/frontend_tests/node_tests/message_edit.js
+++ b/frontend_tests/node_tests/message_edit.js
@@ -68,6 +68,10 @@ run_test("get_editability", () => {
// is true, we can edit the topic if there is one.
message.type = "stream";
assert.equal(get_editability(message, 45), editability_types.TOPIC_ONLY);
+ // Right now, we prevent users from editing widgets.
+ message.submessages = ["/poll"];
+ assert.equal(get_editability(message, 45), editability_types.TOPIC_ONLY);
+ delete message.submessages;
message.type = "private";
assert.equal(get_editability(message, 45), editability_types.NO_LONGER);
// If we don't pass a second argument, treat it as 0
diff --git a/zerver/tests/test_message_edit.py b/zerver/tests/test_message_edit.py
--- a/zerver/tests/test_message_edit.py
+++ b/zerver/tests/test_message_edit.py
@@ -187,6 +187,24 @@ def test_fetch_raw_message_private_stream(self) -> None:
result = self.client_get("/json/messages/" + str(msg_id))
self.assert_json_error(result, "Invalid message(s)")
+ # Right now, we prevent users from editing widgets.
+ def test_edit_submessage(self) -> None:
+ self.login("hamlet")
+ msg_id = self.send_stream_message(
+ self.example_user("hamlet"),
+ "Scotland",
+ topic_name="editing",
+ content="/poll Games?\nYES\nNO",
+ )
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ "content": "/poll Games?\nYES\nNO\nMaybe",
+ },
+ )
+ self.assert_json_error(result, "Widgets cannot be edited.")
+
def test_edit_message_no_permission(self) -> None:
self.login("hamlet")
msg_id = self.send_stream_message(
| widgets: Prevent edits to /poll and /todo messages.
See #14229 for a more ambitious plan here, but in the short term, we want to prevent users from editing messages that are widgetized, such as "/poll" and "/todo" messages. You can detect widgets by the presence of `submessage` rows attached to the `message`, although there may be something more explicit.
We want to either just remove all the UI ways to perform an edit, or better, have a little popup when users try to edit that says "Sorry, you cannot edit this message because it's a widget." We can work on the wording.
We should continue to allow admins to fully delete a widgetized message.
| @zulipbot claim
Topic editing should still be allowed as normal. These messages should behave like messages sent by someone else, where you’re allowed to View source / Edit topic.
Ah, good point on topic edits. So I think the only difference from the "View Source" experience should be that we change the messaging when you hover over the `?` icon next to "Topic editing only".
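Putting the two comments together, the intended behavior is: widget content is frozen, topic edits stay available. A toy editability check keyed off attached submessages, in the spirit of `is_widget_message` from the patch (simplified, not the real frontend logic):

```python
TOPIC_ONLY, FULL = "topic-only", "full"


def editability(message: dict, realm_allows_edits: bool = True) -> str:
    # Widgets (/poll, /todo) are detected by the presence of submessages.
    if message.get("submessages"):
        return TOPIC_ONLY
    return FULL if realm_allows_edits else TOPIC_ONLY
```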
Hello @zulip/server-integrations, @zulip/server-message-view members, this issue was labeled with the "area: integrations", "area: message-editing" labels, so you may want to check it out!
| 2021-01-31T17:28:42 |
zulip/zulip | 17,275 | zulip__zulip-17275 | [
"17238"
] | 57f2b8760a38a59704aa0133a6ea6a14cbbd215d | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -464,8 +464,6 @@ def process_new_human_user(
mit_beta_user = realm.is_zephyr_mirror_realm
if prereg_user is not None:
- prereg_user.status = confirmation_settings.STATUS_ACTIVE
- prereg_user.save(update_fields=["status"])
streams = prereg_user.streams.all()
acting_user: Optional[UserProfile] = prereg_user.referred_by
else:
@@ -501,20 +499,10 @@ def process_new_human_user(
user=f"{user_profile.full_name} <`{user_profile.email}`>"
),
)
- # Mark any other PreregistrationUsers that are STATUS_ACTIVE as
- # inactive so we can keep track of the PreregistrationUser we
- # actually used for analytics
- if prereg_user is not None:
- PreregistrationUser.objects.filter(email__iexact=user_profile.delivery_email).exclude(
- id=prereg_user.id
- ).update(status=confirmation_settings.STATUS_REVOKED)
- if prereg_user.referred_by is not None:
- notify_invites_changed(user_profile)
- else:
- PreregistrationUser.objects.filter(email__iexact=user_profile.delivery_email).update(
- status=confirmation_settings.STATUS_REVOKED
- )
+ revoke_preregistration_users(user_profile, prereg_user, realm_creation)
+ if not realm_creation and prereg_user is not None and prereg_user.referred_by is not None:
+ notify_invites_changed(user_profile)
notify_new_user(user_profile)
# Clear any scheduled invitation emails to prevent them
@@ -530,6 +518,39 @@ def process_new_human_user(
send_initial_pms(user_profile)
+def revoke_preregistration_users(
+ created_user_profile: UserProfile,
+ used_preregistration_user: Optional[PreregistrationUser],
+ realm_creation: bool,
+) -> None:
+ if used_preregistration_user is None:
+ assert not realm_creation, "realm_creation should only happen with a PreregistrationUser"
+
+ if used_preregistration_user is not None:
+ used_preregistration_user.status = confirmation_settings.STATUS_ACTIVE
+ used_preregistration_user.save(update_fields=["status"])
+
+ # In the special case of realm creation, there can be no additional PreregistrationUser
+ # for us to want to modify - because other realm_creation PreregistrationUsers should be
+ # left usable for creating different realms.
+ if realm_creation:
+ return
+
+ # Mark any other PreregistrationUsers in the realm that are STATUS_ACTIVE as
+ # inactive so we can keep track of the PreregistrationUser we
+ # actually used for analytics.
+ if used_preregistration_user is not None:
+ PreregistrationUser.objects.filter(
+ email__iexact=created_user_profile.delivery_email, realm=created_user_profile.realm
+ ).exclude(id=used_preregistration_user.id).update(
+ status=confirmation_settings.STATUS_REVOKED
+ )
+ else:
+ PreregistrationUser.objects.filter(
+ email__iexact=created_user_profile.delivery_email, realm=created_user_profile.realm
+ ).update(status=confirmation_settings.STATUS_REVOKED)
+
+
def notify_created_user(user_profile: UserProfile) -> None:
user_row = user_profile_to_user_row(user_profile)
person = format_user_row(
diff --git a/zerver/worker/queue_processors.py b/zerver/worker/queue_processors.py
--- a/zerver/worker/queue_processors.py
+++ b/zerver/worker/queue_processors.py
@@ -410,19 +410,12 @@ def consume(self, event: Dict[str, Any]) -> None:
@assign_queue("invites")
class ConfirmationEmailWorker(QueueProcessingWorker):
def consume(self, data: Mapping[str, Any]) -> None:
- if "email" in data:
- # When upgrading from a version up through 1.7.1, there may be
- # existing items in the queue with `email` instead of `prereg_id`.
- invitee = filter_to_valid_prereg_users(
- PreregistrationUser.objects.filter(email__iexact=data["email"].strip())
- ).latest("invited_at")
- else:
- invitee = filter_to_valid_prereg_users(
- PreregistrationUser.objects.filter(id=data["prereg_id"])
- ).first()
- if invitee is None:
- # The invitation could have been revoked
- return
+ invitee = filter_to_valid_prereg_users(
+ PreregistrationUser.objects.filter(id=data["prereg_id"])
+ ).first()
+ if invitee is None:
+ # The invitation could have been revoked
+ return
referrer = get_user_profile_by_id(data["referrer_id"])
logger.info(
| diff --git a/zerver/tests/test_queue_worker.py b/zerver/tests/test_queue_worker.py
--- a/zerver/tests/test_queue_worker.py
+++ b/zerver/tests/test_queue_worker.py
@@ -445,8 +445,6 @@ def test_invites_worker(self) -> None:
dict(prereg_id=prereg_alice.id, referrer_id=inviter.id, email_body=None),
# Nonexistent prereg_id, as if the invitation was deleted
dict(prereg_id=-1, referrer_id=inviter.id, email_body=None),
- # Form with `email` is from versions up to Zulip 1.7.1
- dict(email=self.nonreg_email("bob"), referrer_id=inviter.id, email_body=None),
]
for element in data:
fake_client.enqueue("invites", element)
@@ -458,7 +456,7 @@ def test_invites_worker(self) -> None:
"zerver.worker.queue_processors.send_future_email"
) as send_mock:
worker.start()
- self.assertEqual(send_mock.call_count, 2)
+ self.assertEqual(send_mock.call_count, 1)
def test_error_handling(self) -> None:
processed = []
diff --git a/zerver/tests/test_signup.py b/zerver/tests/test_signup.py
--- a/zerver/tests/test_signup.py
+++ b/zerver/tests/test_signup.py
@@ -1772,8 +1772,13 @@ def test_send_more_than_one_invite_to_same_user(self) -> None:
do_invite_users(self.user_profile, ["[email protected]"], streams, False)
do_invite_users(self.user_profile, ["[email protected]"], streams, False)
+ # Also send an invite from a different realm.
+ lear = get_realm("lear")
+ lear_user = self.lear_user("cordelia")
+ do_invite_users(lear_user, ["[email protected]"], [], False)
+
invites = PreregistrationUser.objects.filter(email__iexact="[email protected]")
- self.assertEqual(len(invites), 3)
+ self.assertEqual(len(invites), 4)
do_create_user(
"[email protected]",
@@ -1794,9 +1799,13 @@ def test_send_more_than_one_invite_to_same_user(self) -> None:
self.assertEqual(len(accepted_invite), 1)
self.assertEqual(accepted_invite[0].id, prereg_user.id)
- expected_revoked_invites = set(invites.exclude(id=prereg_user.id))
+ expected_revoked_invites = set(invites.exclude(id=prereg_user.id).exclude(realm=lear))
self.assertEqual(set(revoked_invites), expected_revoked_invites)
+ self.assertEqual(
+ PreregistrationUser.objects.get(email__iexact="[email protected]", realm=lear).status, 0
+ )
+
def test_confirmation_obj_not_exist_error(self) -> None:
"""Since the key is a param input by the user to the registration endpoint,
if it inserts an invalid value, the confirmation object won't be found. This
@@ -2745,6 +2754,83 @@ def test_create_realm_during_free_trial(self) -> None:
self.assertEqual(realm.name, realm_name)
self.assertEqual(realm.subdomain, string_id)
+ @override_settings(OPEN_REALM_CREATION=True)
+ def test_create_two_realms(self) -> None:
+ """
+ Verify correct behavior and PreregistrationUser handling when using
+ two pre-generated realm creation links to create two different realms.
+ """
+ password = "test"
+ first_string_id = "zuliptest"
+ second_string_id = "zuliptest2"
+ email = "[email protected]"
+ first_realm_name = "Test"
+ second_realm_name = "Test"
+
+ # Make sure the realms do not exist
+ with self.assertRaises(Realm.DoesNotExist):
+ get_realm(first_string_id)
+ with self.assertRaises(Realm.DoesNotExist):
+ get_realm(second_string_id)
+
+ # Now we pre-generate two realm creation links
+ result = self.client_post("/new/", {"email": email})
+ self.assertEqual(result.status_code, 302)
+ self.assertTrue(result["Location"].endswith(f"/accounts/new/send_confirm/{email}"))
+ result = self.client_get(result["Location"])
+ self.assert_in_response("Check your email so we can get started.", result)
+ first_confirmation_url = self.get_confirmation_url_from_outbox(email)
+ self.assertEqual(PreregistrationUser.objects.filter(email=email, status=0).count(), 1)
+
+ # Get a second realm creation link.
+ result = self.client_post("/new/", {"email": email})
+ self.assertEqual(result.status_code, 302)
+ self.assertTrue(result["Location"].endswith(f"/accounts/new/send_confirm/{email}"))
+ result = self.client_get(result["Location"])
+ self.assert_in_response("Check your email so we can get started.", result)
+ second_confirmation_url = self.get_confirmation_url_from_outbox(email)
+
+ self.assertNotEqual(first_confirmation_url, second_confirmation_url)
+ self.assertEqual(PreregistrationUser.objects.filter(email=email, status=0).count(), 2)
+
+ # Create and verify the first realm
+ result = self.client_get(first_confirmation_url)
+ self.assertEqual(result.status_code, 200)
+ result = self.submit_reg_form_for_user(
+ email,
+ password,
+ realm_subdomain=first_string_id,
+ realm_name=first_realm_name,
+ key=first_confirmation_url.split("/")[-1],
+ )
+ self.assertEqual(result.status_code, 302)
+ # Make sure the realm is created
+ realm = get_realm(first_string_id)
+ self.assertEqual(realm.string_id, first_string_id)
+ self.assertEqual(realm.name, first_realm_name)
+
+ # One of the PreregistrationUsers should have been used up:
+ self.assertEqual(PreregistrationUser.objects.filter(email=email, status=0).count(), 1)
+
+ # Create and verify the second realm
+ result = self.client_get(second_confirmation_url)
+ self.assertEqual(result.status_code, 200)
+ result = self.submit_reg_form_for_user(
+ email,
+ password,
+ realm_subdomain=second_string_id,
+ realm_name=second_realm_name,
+ key=second_confirmation_url.split("/")[-1],
+ )
+ self.assertEqual(result.status_code, 302)
+ # Make sure the realm is created
+ realm = get_realm(second_string_id)
+ self.assertEqual(realm.string_id, second_string_id)
+ self.assertEqual(realm.name, second_realm_name)
+
+ # The remaining PreregistrationUser should have been used up:
+ self.assertEqual(PreregistrationUser.objects.filter(email=email, status=0).count(), 0)
+
@override_settings(OPEN_REALM_CREATION=True)
def test_mailinator_signup(self) -> None:
result = self.client_post("/new/", {"email": "[email protected]"})
| wrongly cross-expired invites on zulipchat
Here's the reproducible bug:
Step 1: create 2 zulips on *.zulipchat.com:
I created invitebug1.zulipchat.com and invitebug2.zulipchat.com for this demo.
Step 2: invite the same email address on both.
Step 3: Follow one link (I followed the most recent first), confirm and join the zulip invitebug1.
The link was: https://invitebug1.zulipchat.com/accounts/do_confirm/p3ppoywp6sh4fzoshgilmayl
Step 4: Follow the other link, and you'll get:
> The registration link has expired or is not valid.
The link was: https://invitebug2.zulipchat.com/accounts/do_confirm/jmxogh5ppnilhkzylubsbi22
I included the links in case they help track down the problem.
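The patch above fixes this by scoping the revocation query to the accepted invite's realm as well as its email, so an invite for the same address in a different organization stays usable. A stdlib-only sketch of that idea (toy dicts, not the Django ORM):

```python
STATUS_ACTIVE, STATUS_REVOKED = 1, 2  # illustrative status codes


def revoke_unused_invites(invites: list, used: dict) -> None:
    used["status"] = STATUS_ACTIVE
    for inv in invites:
        if (inv is not used
                and inv["email"].lower() == used["email"].lower()
                and inv["realm"] == used["realm"]):  # the realm check is the fix
            inv["status"] = STATUS_REVOKED
```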
| 2021-02-11T17:04:55 |
|
zulip/zulip | 17,409 | zulip__zulip-17409 | [
"16258"
] | 2f5eae5c68038b4d43fb3099bd64298906ba2578 | diff --git a/zerver/webhooks/github/view.py b/zerver/webhooks/github/view.py
--- a/zerver/webhooks/github/view.py
+++ b/zerver/webhooks/github/view.py
@@ -381,6 +381,37 @@ def get_status_body(helper: Helper) -> str:
)
+def get_locked_or_unlocked_pull_request_body(helper: Helper) -> str:
+ payload = helper.payload
+
+ action = payload["action"]
+
+ message = "{sender} has locked [PR #{pr_number}]({pr_url}) as {reason} and limited conversation to collaborators."
+ if action == "unlocked":
+ message = "{sender} has unlocked [PR #{pr_number}]({pr_url})."
+ return message.format(
+ sender=get_sender_name(payload),
+ pr_number=payload["pull_request"]["number"],
+ pr_url=payload["pull_request"]["html_url"],
+ reason=payload["pull_request"]["active_lock_reason"],
+ )
+
+
+def get_pull_request_auto_merge_body(helper: Helper) -> str:
+ payload = helper.payload
+
+ action = payload["action"]
+
+ message = "{sender} has enabled auto merge for [PR #{pr_number}]({pr_url})."
+ if action == "auto_merge_disabled":
+ message = "{sender} has disabled auto merge for [PR #{pr_number}]({pr_url})."
+ return message.format(
+ sender=get_sender_name(payload),
+ pr_number=payload["pull_request"]["number"],
+ pr_url=payload["pull_request"]["html_url"],
+ )
+
+
def get_pull_request_ready_for_review_body(helper: Helper) -> str:
payload = helper.payload
@@ -603,6 +634,8 @@ def get_subject_based_on_type(payload: Dict[str, Any], event: str) -> str:
"pull_request_review": get_pull_request_review_body,
"pull_request_review_comment": get_pull_request_review_comment_body,
"pull_request_review_requested": get_pull_request_review_requested_body,
+ "pull_request_auto_merge": get_pull_request_auto_merge_body,
+ "locked_or_unlocked_pull_request": get_locked_or_unlocked_pull_request_body,
"push_commits": get_push_commits_body,
"push_tags": get_push_tags_body,
"release": get_release_body,
@@ -707,6 +740,10 @@ def get_zulip_event_name(
return "pull_request_review_requested"
if action == "ready_for_review":
return "pull_request_ready_for_review"
+ if action in ("locked", "unlocked"):
+ return "locked_or_unlocked_pull_request"
+ if action in ("auto_merge_enabled", "auto_merge_disabled"):
+ return "pull_request_auto_merge"
if action in IGNORED_PULL_REQUEST_ACTIONS:
return None
elif header_event == "push":
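The action-to-event dispatch added in the view.py hunk above can be sketched as a standalone function (a simplified, hypothetical version — the real logic lives inside `get_zulip_event_name` and has a different fallback path):

```python
# Simplified sketch of the pull_request action dispatch from the patch above.
# Names mirror the real code; the final fallback is an assumption for brevity.
IGNORED_PULL_REQUEST_ACTIONS = [
    "labeled",
    "unlabeled",
    "review_request_removed",
]


def dispatch_pull_request_action(action):
    # Map a GitHub "pull_request" webhook action to Zulip's internal event name.
    if action in ("locked", "unlocked"):
        return "locked_or_unlocked_pull_request"
    if action in ("auto_merge_enabled", "auto_merge_disabled"):
        return "pull_request_auto_merge"
    if action in IGNORED_PULL_REQUEST_ACTIONS:
        return None
    # Fallback: treat anything else as a generic pull_request event.
    return "pull_request"
```

Each returned event name keys into the `EVENT_FUNCTION_MAPPER`-style dictionary shown in the patch, so adding a new action means adding one branch here and one body-formatting function.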
| diff --git a/zerver/webhooks/github/tests.py b/zerver/webhooks/github/tests.py
--- a/zerver/webhooks/github/tests.py
+++ b/zerver/webhooks/github/tests.py
@@ -250,6 +250,24 @@ def test_pull_request_review_comment_with_custom_topic_in_url(self) -> None:
expected_message = "baxterthehacker created [PR Review Comment on #1 Update the README with new information](https://github.com/baxterthehacker/public-repo/pull/1#discussion_r29724692):\n\n~~~ quote\nMaybe you should use more emojji on this line.\n~~~"
self.check_webhook("pull_request_review_comment", expected_topic, expected_message)
+ def test_pull_request_locked(self) -> None:
+ expected_message = "tushar912 has locked [PR #1](https://github.com/tushar912/public-repo/pull/1) as off-topic and limited conversation to collaborators."
+ self.check_webhook("pull_request__locked", TOPIC_PR, expected_message)
+
+ def test_pull_request_unlocked(self) -> None:
+ expected_message = (
+ "tushar912 has unlocked [PR #1](https://github.com/tushar912/public-repo/pull/1)."
+ )
+ self.check_webhook("pull_request__unlocked", TOPIC_PR, expected_message)
+
+ def test_pull_request_auto_merge_enabled(self) -> None:
+ expected_message = "tushar912 has enabled auto merge for [PR #1](https://github.com/tushar912/public-repo/pull/1)."
+ self.check_webhook("pull_request__auto_merge_enabled", TOPIC_PR, expected_message)
+
+ def test_pull_request_auto_merge_disabled(self) -> None:
+ expected_message = "tushar912 has disabled auto merge for [PR #1](https://github.com/tushar912/public-repo/pull/1)."
+ self.check_webhook("pull_request__auto_merge_disabled", TOPIC_PR, expected_message)
+
def test_push_tag_msg(self) -> None:
expected_message = "baxterthehacker pushed tag abc."
self.check_webhook("push__tag", TOPIC_REPO, expected_message)
| github webhook: Re-visit ignored events and actions
Please only try this issue if you have a little bit of knowledge of the github webhook, as it will require making some judgment calls and getting consensus on new features.
We have the following:
~~~ py
522 IGNORED_EVENTS = [
523 "check_suite",
524 "label",
525 "meta",
526 "milestone",
527 "organization",
528 "project_card",
529 "repository_vulnerability_alert",
530 ]
531
532 IGNORED_PULL_REQUEST_ACTIONS = [
533 "approved",
534 "converted_to_draft",
535 "labeled",
536 "review_request_removed",
537 "unlabeled",
538 ]
~~~
(The latter list hasn't been merged at the time of this writing.)
Sometimes when we ignore events (or actions) it's very deliberate, because the events/actions would be too spammy for users. Other times we might just have been lazy.
If you look at the actions for pull request, consider "approved" and "converted_to_draft" as examples. For the "approved" event, that seems like something that folks may want to see, so we might consider getting a good test fixture for it and supporting it in the code. Then the "converted_to_draft" action seems likely to be spammy, and we should probably just continue to ignore it.
| Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
Also look for:
~~~ py
+IGNORED_TEAM_ACTIONS = [
+ # These are actions that are well documented by github
+ # (https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads)
+ # but we ignore them for now, possibly just due to laziness.
+ # One curious example here is team/added_to_repository, which is
+ # possibly the same as team_add.
+ "added_to_repository",
+ "created",
+ "deleted",
+ "removed_from_repository",
+]
~~~
I think with the GitHub webhook, it's reasonable to support essentially all actions, since GitHub supports filtering events on the integration configuration side.
For convenience, here are the docs: https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads
@showell There is a `submitted` action with state `approved` but I can't seem to find any docs related to `approved` pull request action.
@shanukun I would try to help you, but I don't really know their docs too well. I assume the link I provided above is their most current reference, but it's worth poking around on GitHub to see if you can find out what's up here.
@zulipbot claim | 2021-02-25T13:36:37 |
zulip/zulip | 17,550 | zulip__zulip-17550 | [
"13395"
] | 3a6d44b691b13ca1ea1791a161253dcfc1a81a59 | diff --git a/zerver/lib/integrations.py b/zerver/lib/integrations.py
--- a/zerver/lib/integrations.py
+++ b/zerver/lib/integrations.py
@@ -444,6 +444,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
),
WebhookIntegration("slack", ["communication"]),
WebhookIntegration("solano", ["continuous-integration"], display_name="Solano Labs"),
+ WebhookIntegration("sonarqube", ["continuous-integration"], display_name="SonarQube"),
WebhookIntegration("sonarr", ["entertainment"], display_name="Sonarr"),
WebhookIntegration("splunk", ["monitoring"], display_name="Splunk"),
WebhookIntegration("statuspage", ["customer-support"], display_name="Statuspage"),
@@ -780,6 +781,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
],
"slack": [ScreenshotConfig("message_info.txt")],
"solano": [ScreenshotConfig("build_001.json")],
+ "sonarqube": [ScreenshotConfig("error.json")],
"sonarr": [ScreenshotConfig("sonarr_episode_grabbed.json")],
"splunk": [ScreenshotConfig("search_one_result.json")],
"statuspage": [ScreenshotConfig("incident_created.json")],
diff --git a/zerver/webhooks/sonarqube/__init__.py b/zerver/webhooks/sonarqube/__init__.py
new file mode 100644
diff --git a/zerver/webhooks/sonarqube/view.py b/zerver/webhooks/sonarqube/view.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/sonarqube/view.py
@@ -0,0 +1,132 @@
+# Webhooks for external integrations.
+
+from typing import Any, Dict, List, Mapping
+
+from django.http import HttpRequest, HttpResponse
+
+from zerver.decorator import webhook_view
+from zerver.lib.request import REQ, has_request_variables
+from zerver.lib.response import json_success
+from zerver.lib.webhooks.common import check_send_webhook_message
+from zerver.models import UserProfile
+
+TOPIC_WITH_BRANCH = "{} / {}"
+
+MESSAGE_WITH_BRANCH_AND_CONDITIONS = "Project [{}]({}) analysis of branch {} resulted in {}:\n"
+MESSAGE_WITH_BRANCH_AND_WITHOUT_CONDITIONS = (
+ "Project [{}]({}) analysis of branch {} resulted in {}."
+)
+MESSAGE_WITHOUT_BRANCH_AND_WITH_CONDITIONS = "Project [{}]({}) analysis resulted in {}:\n"
+MESSAGE_WITHOUT_BRANCH_AND_CONDITIONS = "Project [{}]({}) analysis resulted in {}."
+
+INVERSE_OPERATORS = {
+ "WORSE_THAN": "should be better or equal to",
+ "GREATER_THAN": "should be less than or equal to",
+ "LESS_THAN": "should be greater than or equal to",
+}
+
+TEMPLATES = {
+ "default": "* {}: **{}** {} {} {}.",
+ "no_value": "* {}: **{}**.",
+}
+
+
+def parse_metric_name(metric_name: str) -> str:
+ return " ".join(metric_name.split("_"))
+
+
+def parse_condition(condition: Mapping[str, Any]) -> str:
+ metric = condition["metric"]
+
+ metric_name = parse_metric_name(metric)
+ operator = condition["operator"]
+ operator = INVERSE_OPERATORS.get(operator, operator)
+ value = condition.get("value", "no value")
+ status = condition["status"].lower()
+ threshold = condition["errorThreshold"]
+
+ if value == "no value":
+ return TEMPLATES["no_value"].format(metric_name, status)
+
+ template = TEMPLATES["default"]
+
+ return template.format(metric_name, status, value, operator, threshold)
+
+
+def parse_conditions(conditions: List[Mapping[str, Any]]) -> str:
+ return "\n".join(
+ [
+ parse_condition(condition)
+ for condition in conditions
+ if condition["status"].lower() != "ok" and condition["status"].lower() != "no_value"
+ ]
+ )
+
+
+def render_body_with_branch(payload: Mapping[str, Any]) -> str:
+ project_name = payload["project"]["name"]
+ project_url = payload["project"]["url"]
+ quality_gate_status = payload["qualityGate"]["status"].lower()
+ if quality_gate_status == "ok":
+ quality_gate_status = "success"
+ else:
+ quality_gate_status = "error"
+ branch = payload["branch"]["name"]
+
+ conditions = payload["qualityGate"]["conditions"]
+ conditions = parse_conditions(conditions)
+
+ if not conditions:
+ return MESSAGE_WITH_BRANCH_AND_WITHOUT_CONDITIONS.format(
+ project_name, project_url, branch, quality_gate_status
+ )
+ msg = MESSAGE_WITH_BRANCH_AND_CONDITIONS.format(
+ project_name, project_url, branch, quality_gate_status
+ )
+ msg += conditions
+
+ return msg
+
+
+def render_body_without_branch(payload: Mapping[str, Any]) -> str:
+ project_name = payload["project"]["name"]
+ project_url = payload["project"]["url"]
+ quality_gate_status = payload["qualityGate"]["status"].lower()
+ if quality_gate_status == "ok":
+ quality_gate_status = "success"
+ else:
+ quality_gate_status = "error"
+ conditions = payload["qualityGate"]["conditions"]
+ conditions = parse_conditions(conditions)
+
+ if not conditions:
+ return MESSAGE_WITHOUT_BRANCH_AND_CONDITIONS.format(
+ project_name, project_url, quality_gate_status
+ )
+ msg = MESSAGE_WITHOUT_BRANCH_AND_WITH_CONDITIONS.format(
+ project_name, project_url, quality_gate_status
+ )
+ msg += conditions
+
+ return msg
+
+
+@webhook_view("Sonarqube")
+@has_request_variables
+def api_sonarqube_webhook(
+ request: HttpRequest,
+ user_profile: UserProfile,
+ payload: Dict[str, Any] = REQ(argument_type="body"),
+) -> HttpResponse:
+ project = payload["project"]["name"]
+ branch = None
+ if "branch" in payload.keys():
+ branch = payload["branch"].get("name", None)
+ if branch:
+ topic = TOPIC_WITH_BRANCH.format(project, branch)
+ message = render_body_with_branch(payload)
+ else:
+ topic = project
+ message = render_body_without_branch(payload)
+ check_send_webhook_message(request, user_profile, topic, message)
+ return json_success()
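The quality-gate condition formatting above can be exercised outside Django; here is a standalone re-implementation of `parse_condition` (a sketch — the real helper lives in `zerver/webhooks/sonarqube/view.py`), fed with a condition shaped like the ones in the test fixtures:

```python
# Standalone sketch of SonarQube condition formatting from the patch above.
INVERSE_OPERATORS = {
    "WORSE_THAN": "should be better or equal to",
    "GREATER_THAN": "should be less than or equal to",
    "LESS_THAN": "should be greater than or equal to",
}


def parse_condition(condition):
    # Render one failed quality-gate condition as a Markdown bullet.
    metric_name = " ".join(condition["metric"].split("_"))
    operator = INVERSE_OPERATORS.get(condition["operator"], condition["operator"])
    status = condition["status"].lower()
    value = condition.get("value", "no value")
    if value == "no value":
        return "* {}: **{}**.".format(metric_name, status)
    return "* {}: **{}** {} {} {}.".format(
        metric_name, status, value, operator, condition["errorThreshold"]
    )


line = parse_condition(
    {
        "metric": "coverage",
        "operator": "LESS_THAN",
        "status": "ERROR",
        "value": "0.0",
        "errorThreshold": "80",
    }
)
```

This reproduces the `* coverage: **error** 0.0 should be greater than or equal to 80.` bullet asserted in the test_patch below.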
| diff --git a/zerver/webhooks/sonarqube/tests.py b/zerver/webhooks/sonarqube/tests.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/sonarqube/tests.py
@@ -0,0 +1,84 @@
+from zerver.lib.test_classes import WebhookTestCase
+
+
+class SonarqubeHookTests(WebhookTestCase):
+ STREAM_NAME = "SonarQube"
+ URL_TEMPLATE = "/api/v1/external/sonarqube?api_key={api_key}&stream={stream}"
+ FIXTURE_DIR_NAME = "sonarqube"
+ WEBHOOK_DIR_NAME = "sonarqube"
+
+ def test_analysis_success(self) -> None:
+ expected_topic = "test-sonar / master"
+
+ expected_message = """
+Project [test-sonar](http://localhost:9000/dashboard?id=test-sonar) analysis of branch master resulted in success.
+ """.strip()
+
+ self.check_webhook(
+ "success",
+ expected_topic,
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_analysis_error(self) -> None:
+ expected_topic = "test-sonar / master"
+
+ expected_message = """
+Project [test-sonar](http://localhost:9000/dashboard?id=test-sonar) analysis of branch master resulted in error:
+* coverage: **error** 0.0 should be greater than or equal to 80.
+* duplicated lines density: **error** 89.39828080229226 should be less than or equal to 3.
+ """.strip()
+
+ self.check_webhook(
+ "error",
+ expected_topic,
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_analysis_error_no_value(self) -> None:
+ expected_topic = "test-sonar / master"
+
+ expected_message = """
+Project [test-sonar](http://localhost:9000/dashboard?id=test-sonar) analysis of branch master resulted in error:
+* coverage: **error** 0.0 should be greater than or equal to 80.
+* duplicated lines density: **error**.
+ """.strip()
+
+ self.check_webhook(
+ "error_no_value",
+ expected_topic,
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_analysis_success_no_branch(self) -> None:
+ expected_topic = "test-sonar"
+
+ expected_message = """
+Project [test-sonar](http://localhost:9000/dashboard?id=test-sonar) analysis resulted in success.
+ """.strip()
+
+ self.check_webhook(
+ "success_no_branch",
+ expected_topic,
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_analysis_error_no_branch(self) -> None:
+ expected_topic = "test-sonar"
+
+ expected_message = """
+Project [test-sonar](http://localhost:9000/dashboard?id=test-sonar) analysis resulted in error:
+* coverage: **error** 0.0 should be greater than or equal to 80.
+* duplicated lines density: **error** 89.39828080229226 should be less than or equal to 3.
+ """.strip()
+
+ self.check_webhook(
+ "error_no_branch",
+ expected_topic,
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
| integrations: Add Sonar Qube.
Suggest adding a SonarQube integration for real-time notifications of code coverage, number of bugs, etc. I can also work on this since I have made a PR adding a Grafana integration.
Sonar Qube webhook
https://docs.sonarqube.org/latest/project-administration/webhooks/
instructions for adding a new webhook integration
https://zulipchat.com/api/incoming-webhooks-overview
| Feel free to work on adding an integration for it, thanks @Zhenye-Na!
Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
Thank you for reaching out, @timabbott !
Definitely would love to make contributions to this project. Could the core developing team or reviewers review my last PR concerning grafana integrations?
Just wanna make sure the last PR is on the right way so that I can follow the same procedure on this issue.
Thanks!
Yeah, can you post a link?
of course, why not!
here is the link: https://github.com/zulip/zulip/pull/13384
Hi @Zhenye-Na are you working on this or planning to? If not can I take this issue?
Go ahead, check out my working branch and make the changes. Thanks!
Thank you, @zulipbot claim
Hello @kostekIV, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
You can reclaim this issue or claim any other issue by commenting `@zulipbot claim` on that issue.
Thanks for your contributions, and hope to see you again soon!
@kostekIV do you wish to continue your PR, or can I take it up? | 2021-03-10T16:24:42 |
zulip/zulip | 17,551 | zulip__zulip-17551 | [
"13086"
] | 9ce900f2b4ff687c567e637443374a54f348f5c7 | diff --git a/zerver/lib/integrations.py b/zerver/lib/integrations.py
--- a/zerver/lib/integrations.py
+++ b/zerver/lib/integrations.py
@@ -463,6 +463,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
function="zerver.webhooks.yo.view.api_yo_app_webhook",
display_name="Yo",
),
+ WebhookIntegration("wekan", ["productivity"], display_name="Wekan"),
WebhookIntegration("wordpress", ["marketing"], display_name="WordPress"),
WebhookIntegration("zapier", ["meta-integration"]),
WebhookIntegration("zendesk", ["customer-support"]),
@@ -805,6 +806,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
"trello": [ScreenshotConfig("adding_comment_to_card.json")],
"updown": [ScreenshotConfig("check_multiple_events.json")],
"uptimerobot": [ScreenshotConfig("uptimerobot_monitor_up.json")],
+ "wekan": [ScreenshotConfig("add_comment.json")],
"wordpress": [ScreenshotConfig("publish_post.txt", "wordpress_post_created.png")],
"yo": [
ScreenshotConfig(
diff --git a/zerver/webhooks/wekan/__init__.py b/zerver/webhooks/wekan/__init__.py
new file mode 100644
diff --git a/zerver/webhooks/wekan/view.py b/zerver/webhooks/wekan/view.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/wekan/view.py
@@ -0,0 +1,50 @@
+from typing import Any, Dict
+
+from django.http import HttpRequest, HttpResponse
+
+from zerver.decorator import webhook_view
+from zerver.lib.request import REQ, has_request_variables
+from zerver.lib.response import json_success
+from zerver.lib.webhooks.common import check_send_webhook_message
+from zerver.models import UserProfile
+
+LINK_TEMPLATE = "[See in Wekan]({url})"
+MESSAGE_TEMPLATE = "{body}\n\n{footer}"
+
+
+def get_url(text: str) -> str:
+ return text.split("\n")[-1]
+
+
+def get_hyperlinked_url(text: str) -> str:
+ url = get_url(text)
+ return LINK_TEMPLATE.format(url=url)
+
+
+def clean_payload_text(text: str) -> str:
+ url = get_url(text)
+ return text.replace(url, "").replace("\n", "")
+
+
+def get_message_body(payload: Dict[str, Any], action: str) -> str:
+ footer = get_hyperlinked_url(payload["text"])
+ body = process_message_data(payload, action)
+ return MESSAGE_TEMPLATE.format(body=body, footer=footer)
+
+
+def process_message_data(payload: Dict[str, Any], action: str) -> str:
+ payload["text"] = clean_payload_text(payload["text"])
+ return "{text}.".format(**payload)
+
+
+@webhook_view("Wekan")
+@has_request_variables
+def api_wekan_webhook(
+ request: HttpRequest,
+ user_profile: UserProfile,
+ payload: Dict[str, Any] = REQ(argument_type="body"),
+) -> HttpResponse:
+ topic = "Wekan Notification"
+ body = get_message_body(payload, payload["description"])
+ check_send_webhook_message(request, user_profile, topic, body)
+ return json_success(request)
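The URL-extraction helpers above can be demonstrated standalone. This sketch re-implements `get_url` and `clean_payload_text` and assembles a message from a payload `text` shaped like the Wekan fixtures (the sample card text and URL come from the test_patch below):

```python
# Standalone sketch of the Wekan message assembly from the patch above.
LINK_TEMPLATE = "[See in Wekan]({url})"


def get_url(text):
    # Wekan puts the card/board URL on the last line of the payload text.
    return text.split("\n")[-1]


def clean_payload_text(text):
    # Strip the trailing URL and newlines, leaving just the event sentence.
    url = get_url(text)
    return text.replace(url, "").replace("\n", "")


text = (
    'JohnFish created swimlane "Jasper" to board "Bucket List"\n'
    "http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list"
)
body = clean_payload_text(text) + "."
message = "{}\n\n{}".format(body, LINK_TEMPLATE.format(url=get_url(text)))
```

The resulting `message` matches the `create_swimlane` expectation in the tests below: the event sentence, a blank line, and a "See in Wekan" link.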
| diff --git a/zerver/webhooks/wekan/tests.py b/zerver/webhooks/wekan/tests.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/wekan/tests.py
@@ -0,0 +1,217 @@
+from zerver.lib.test_classes import WebhookTestCase
+
+
+class WekanHookTests(WebhookTestCase):
+ STREAM_NAME = "wekan"
+ URL_TEMPLATE = "/api/v1/external/wekan?stream={stream}&api_key={api_key}"
+ FIXTURE_DIR_NAME = "wekan"
+
+ def test_add_attachment_message(self) -> None:
+ expected_message = 'JohnFish added attachment "hGfm5ksud8k" to card "Markdown and emoji\'s" at list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "add_attachment",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_add_checklist_item_message(self) -> None:
+ expected_message = 'JohnFish added checklist item "merge commit 9dfe" to checklist "To do" at card "Markdown and emoji\'s" at list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "add_checklist_item",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_add_checklist_message(self) -> None:
+ expected_message = 'JohnFish added checklist "To do" to card "Markdown and emoji\'s" at list "Design" at swimlane "Default" at board "bucked-list".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "add_checklist",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_add_label_message(self) -> None:
+ expected_message = 'JohnFish Added label Language to card "Markdown & emojis" at list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/TMmjFnQGuZPsbjXzS)'
+ self.check_webhook(
+ "add_label",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_archived_swimlane_message(self) -> None:
+ expected_message = 'JohnFish Swimlane "Default" at board "Bucket List" moved to Archive.\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list)'
+ self.check_webhook(
+ "archived_swimlane",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_archived_card_message(self) -> None:
+ expected_message = 'JohnFish Card "Markdown and emoji\'s" at list "Design" at swimlane "Default" at board "Bucket List" moved to Archive.\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "archived_card",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_archived_list_message(self) -> None:
+ expected_message = 'JohnFish List "Design" at swimlane "Default" at board "Bucket List" moved to Archive.\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list)'
+ self.check_webhook(
+ "archived_list",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_checked_item_message(self) -> None:
+ expected_message = 'JohnFish checked To do of checklist "To do" at card "Markdown and emoji\'s" at list "Design" at swimlane "Default" at board "bucket-list".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "checked_item",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_add_comment_message(self) -> None:
+ expected_message = 'JohnFish commented on card "Markdown and emoji\'s": "This feature is important" at list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "add_comment",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_create_card_message(self) -> None:
+ expected_message = 'JohnFish created card "Markdown and emoji\'s" to list "Development & Implementation" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "create_card",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_create_custom_field_message(self) -> None:
+ expected_message = 'JohnFish created custom field Language at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list)'
+ self.check_webhook(
+ "create_custom_field",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_create_list_message(self) -> None:
+ expected_message = 'JohnFish added list "Testing & Maintenance" to board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list)'
+ self.check_webhook(
+ "create_list",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_create_swimlane_message(self) -> None:
+ expected_message = 'JohnFish created swimlane "Jasper" to board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list)'
+ self.check_webhook(
+ "create_swimlane",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_delete_attachment_message(self) -> None:
+ expected_message = 'JohnFish deleted attachment "hGfm5ksud8k.jpg" at card "Markdown and emoji\'s" at list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "delete_attachment",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_join_member_message(self) -> None:
+ expected_message = 'JohnFish added member kokoboss to card "Markdown & emojis" at list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/TMmjFnQGuZPsbjXzS)'
+ self.check_webhook(
+ "join_member",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_move_card_message(self) -> None:
+ expected_message = 'JohnFish moved card "Markdown and emoji\'s" at board "Bucket List" from list "Development & Implementation" at swimlane "Default" to list "Design" at swimlane "Default".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "move_card",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_remove_list_message(self) -> None:
+ expected_message = "JohnFish act-removeList.\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list)"
+ self.check_webhook(
+ "remove_list",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_remove_swimlane_message(self) -> None:
+ expected_message = "JohnFish act-removeSwimlane.\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list)"
+ self.check_webhook(
+ "remove_swimlane",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_removed_checklist_item_message(self) -> None:
+ expected_message = "JohnFish act-removedChecklistItem.\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)"
+ self.check_webhook(
+ "removed_checklist_item",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_removed_checklist_message(self) -> None:
+ expected_message = 'JohnFish removed checklist "To do" from card "Markdown and emoji\'s" at list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "removed_checklist",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_restored_card_message(self) -> None:
+ expected_message = 'JohnFish restored card "Markdown and emoji\'s" to list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "restored_card",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_set_custom_field_message(self) -> None:
+ expected_message = "JohnFish act-setCustomField.\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)"
+ self.check_webhook(
+ "set_custom_field",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_uncomplete_checklist_message(self) -> None:
+ expected_message = 'JohnFish uncompleted checklist To do at card "Markdown and emoji\'s" at list "Design" at swimlane "Default" at board "Bucket List".\n\n[See in Wekan](http://127.0.0.1/b/Jinj4Xj7qnHLRmrTY/bucket-list/pMtu7kPZvMuhhC4hL)'
+ self.check_webhook(
+ "uncomplete_checklist",
+ "Wekan Notification",
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def get_body(self, fixture_name: str) -> str:
+ return self.webhook_fixture_data("wekan", fixture_name, file_type="json")
| Integration of open source project management tool Wekan
We are using Wekan at work as an open source alternative to Trello, and I'm looking for a way to integrate notifications into Zulip - just like the Trello integration.
- Does anyone have experience doing this and can help?
If this is not the right place for this question, please tell me where to ask.
| Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
@kratsching I'm not aware of an existing integration. It looks like Wekan does support webhooks:
https://github.com/wekan/wekan/wiki/Webhook-data
So I expect it should be possible to write a Zulip integration following this guide: https://zulipchat.com/api/incoming-webhooks-overview
If you can do the fixture collection part, possibly someone from the community would be up for doing the formatting pieces.
@timabbott thank you for your answer - I will check that out in october. If I find a solution I will come back here.
| 2021-03-10T16:28:31 |
zulip/zulip | 17,553 | zulip__zulip-17553 | [
"17535"
] | 23088b5d78099b989104c366be84bfca2a3ee3b4 | diff --git a/zerver/lib/markdown/__init__.py b/zerver/lib/markdown/__init__.py
--- a/zerver/lib/markdown/__init__.py
+++ b/zerver/lib/markdown/__init__.py
@@ -147,7 +147,15 @@ def normal_compile(pattern: str) -> Pattern[str]:
@one_time
def get_compiled_stream_link_regex() -> Pattern[str]:
- return verbose_compile(STREAM_LINK_REGEX)
+ # Not using verbose_compile as it adds ^(.*?) and
+ # (.*?)$ which cause extra overhead of matching
+ # pattern which is not required.
+ # With new InlineProcessor these extra patterns
+ # are not required.
+ return re.compile(
+ STREAM_LINK_REGEX,
+ re.DOTALL | re.UNICODE | re.VERBOSE,
+ )
STREAM_TOPIC_LINK_REGEX = r"""
@@ -162,7 +170,15 @@ def get_compiled_stream_link_regex() -> Pattern[str]:
@one_time
def get_compiled_stream_topic_link_regex() -> Pattern[str]:
- return verbose_compile(STREAM_TOPIC_LINK_REGEX)
+ # Not using verbose_compile as it adds ^(.*?) and
+ # (.*?)$ which cause extra overhead of matching
+ # pattern which is not required.
+ # With new InlineProcessor these extra patterns
+ # are not required.
+ return re.compile(
+ STREAM_TOPIC_LINK_REGEX,
+ re.DOTALL | re.UNICODE | re.VERBOSE,
+ )
LINK_REGEX: Optional[Pattern[str]] = None
@@ -1796,8 +1812,10 @@ def handleMatch(self, m: Match[str]) -> Union[Element, str]:
)
-class UserMentionPattern(markdown.inlinepatterns.Pattern):
- def handleMatch(self, m: Match[str]) -> Optional[Element]:
+class UserMentionPattern(markdown.inlinepatterns.InlineProcessor):
+ def handleMatch( # type: ignore[override] # supertype incompatible with supersupertype
+ self, m: Match[str], data: str
+ ) -> Union[Tuple[None, None, None], Tuple[Element, int, int]]:
match = m.group("match")
silent = m.group("silent") == "_"
@@ -1806,7 +1824,7 @@ def handleMatch(self, m: Match[str]) -> Optional[Element]:
if match.startswith("**") and match.endswith("**"):
name = match[2:-2]
else:
- return None
+ return None, None, None
wildcard = mention.user_mention_matches_wildcard(name)
@@ -1827,7 +1845,7 @@ def handleMatch(self, m: Match[str]) -> Optional[Element]:
user_id = str(user["id"])
else:
# Don't highlight @mentions that don't refer to a valid user
- return None
+ return None, None, None
el = Element("span")
el.set("data-user-id", user_id)
@@ -1838,15 +1856,17 @@ def handleMatch(self, m: Match[str]) -> Optional[Element]:
el.set("class", "user-mention")
text = f"@{text}"
el.text = markdown.util.AtomicString(text)
- return el
- return None
-
+ return el, m.start(), m.end()
+ return None, None, None
-class UserGroupMentionPattern(markdown.inlinepatterns.Pattern):
- def handleMatch(self, m: Match[str]) -> Optional[Element]:
- match = m.group(2)
+class UserGroupMentionPattern(markdown.inlinepatterns.InlineProcessor):
+ def handleMatch( # type: ignore[override] # supertype incompatible with supersupertype
+ self, m: Match[str], data: str
+ ) -> Union[Tuple[None, None, None], Tuple[Element, int, int]]:
+ match = m.group(1)
db_data = self.md.zulip_db_data
+
if self.md.zulip_message and db_data is not None:
name = extract_user_group(match)
user_group = db_data["mention_data"].get_user_group(name)
@@ -1857,18 +1877,25 @@ def handleMatch(self, m: Match[str]) -> Optional[Element]:
else:
# Don't highlight @-mentions that don't refer to a valid user
# group.
- return None
+ return None, None, None
el = Element("span")
el.set("class", "user-group-mention")
el.set("data-user-group-id", user_group_id)
text = f"@{name}"
el.text = markdown.util.AtomicString(text)
- return el
- return None
+ return el, m.start(), m.end()
+ return None, None, None
-class StreamPattern(CompiledPattern):
+class StreamPattern(markdown.inlinepatterns.InlineProcessor):
+ def __init__(self, compiled_re: Pattern[str], md: markdown.Markdown) -> None:
+ # This is similar to the superclass's small __init__ function,
+ # but we skip the compilation step and let the caller give us
+ # a compiled regex.
+ self.compiled_re = compiled_re
+ self.md = md
+
def find_stream_by_name(self, name: str) -> Optional[Dict[str, Any]]:
db_data = self.md.zulip_db_data
if db_data is None:
@@ -1876,13 +1903,15 @@ def find_stream_by_name(self, name: str) -> Optional[Dict[str, Any]]:
stream = db_data["stream_names"].get(name)
return stream
- def handleMatch(self, m: Match[str]) -> Optional[Element]:
+ def handleMatch( # type: ignore[override] # supertype incompatible with supersupertype
+ self, m: Match[str], data: str
+ ) -> Union[Tuple[None, None, None], Tuple[Element, int, int]]:
name = m.group("stream_name")
if self.md.zulip_message:
stream = self.find_stream_by_name(name)
if stream is None:
- return None
+ return None, None, None
el = Element("a")
el.set("class", "stream")
el.set("data-stream-id", str(stream["id"]))
@@ -1895,11 +1924,18 @@ def handleMatch(self, m: Match[str]) -> Optional[Element]:
el.set("href", f"/#narrow/stream/{stream_url}")
text = f"#{name}"
el.text = markdown.util.AtomicString(text)
- return el
- return None
+ return el, m.start(), m.end()
+ return None, None, None
-class StreamTopicPattern(CompiledPattern):
+class StreamTopicPattern(markdown.inlinepatterns.InlineProcessor):
+ def __init__(self, compiled_re: Pattern[str], md: markdown.Markdown) -> None:
+ # This is similar to the superclass's small __init__ function,
+ # but we skip the compilation step and let the caller give us
+ # a compiled regex.
+ self.compiled_re = compiled_re
+ self.md = md
+
def find_stream_by_name(self, name: str) -> Optional[Dict[str, Any]]:
db_data = self.md.zulip_db_data
if db_data is None:
@@ -1907,14 +1943,16 @@ def find_stream_by_name(self, name: str) -> Optional[Dict[str, Any]]:
stream = db_data["stream_names"].get(name)
return stream
- def handleMatch(self, m: Match[str]) -> Optional[Element]:
+ def handleMatch( # type: ignore[override] # supertype incompatible with supersupertype
+ self, m: Match[str], data: str
+ ) -> Union[Tuple[None, None, None], Tuple[Element, int, int]]:
stream_name = m.group("stream_name")
topic_name = m.group("topic_name")
if self.md.zulip_message:
stream = self.find_stream_by_name(stream_name)
if stream is None or topic_name is None:
- return None
+ return None, None, None
el = Element("a")
el.set("class", "stream-topic")
el.set("data-stream-id", str(stream["id"]))
@@ -1924,8 +1962,8 @@ def handleMatch(self, m: Match[str]) -> Optional[Element]:
el.set("href", link)
text = f"#{stream_name} > {topic_name}"
el.text = markdown.util.AtomicString(text)
- return el
- return None
+ return el, m.start(), m.end()
+ return None, None, None
def possible_linked_stream_names(content: str) -> Set[str]:
| diff --git a/zerver/tests/test_markdown.py b/zerver/tests/test_markdown.py
--- a/zerver/tests/test_markdown.py
+++ b/zerver/tests/test_markdown.py
@@ -1845,6 +1845,36 @@ def test_mention_silent(self) -> None:
)
self.assertEqual(msg.mentions_user_ids, set())
+ def test_mention_invalid_followed_by_valid(self) -> None:
+ sender_user_profile = self.example_user("othello")
+ user_profile = self.example_user("hamlet")
+ msg = Message(sender=sender_user_profile, sending_client=get_client("test"))
+ user_id = user_profile.id
+
+ content = "@**Invalid user** and @**King Hamlet**"
+ self.assertEqual(
+ render_markdown(msg, content),
+ '<p>@<strong>Invalid user</strong> and <span class="user-mention" '
+ f'data-user-id="{user_id}">'
+ "@King Hamlet</span></p>",
+ )
+ self.assertEqual(msg.mentions_user_ids, {user_profile.id})
+
+ def test_silent_mention_invalid_followed_by_valid(self) -> None:
+ sender_user_profile = self.example_user("othello")
+ user_profile = self.example_user("hamlet")
+ msg = Message(sender=sender_user_profile, sending_client=get_client("test"))
+ user_id = user_profile.id
+
+ content = "@_**Invalid user** and @_**King Hamlet**"
+ self.assertEqual(
+ render_markdown(msg, content),
+ '<p>@_<strong>Invalid user</strong> and <span class="user-mention silent" '
+ f'data-user-id="{user_id}">'
+ "King Hamlet</span></p>",
+ )
+ self.assertEqual(msg.mentions_user_ids, set())
+
def test_possible_mentions(self) -> None:
def assert_mentions(content: str, names: Set[str], has_wildcards: bool = False) -> None:
self.assertEqual(possible_mentions(content), (names, has_wildcards))
@@ -2026,6 +2056,27 @@ def test_user_group_mention_single(self) -> None:
self.assertEqual(msg.mentions_user_ids, {user_profile.id})
self.assertEqual(msg.mentions_user_group_ids, {user_group.id})
+ def test_invalid_user_group_followed_by_valid_mention_single(self) -> None:
+ sender_user_profile = self.example_user("othello")
+ user_profile = self.example_user("hamlet")
+ msg = Message(sender=sender_user_profile, sending_client=get_client("test"))
+ user_id = user_profile.id
+ user_group = self.create_user_group_for_test("support")
+
+ content = "@**King Hamlet** @*Invalid user group* @*support*"
+ self.assertEqual(
+ render_markdown(msg, content),
+ '<p><span class="user-mention" '
+ f'data-user-id="{user_id}">'
+ "@King Hamlet</span> "
+ "@<em>Invalid user group</em> "
+ '<span class="user-group-mention" '
+ f'data-user-group-id="{user_group.id}">'
+ "@support</span></p>",
+ )
+ self.assertEqual(msg.mentions_user_ids, {user_profile.id})
+ self.assertEqual(msg.mentions_user_group_ids, {user_group.id})
+
def test_user_group_mention_atomic_string(self) -> None:
sender_user_profile = self.example_user("othello")
realm = get_realm("zulip")
@@ -2155,6 +2206,18 @@ def test_stream_single(self) -> None:
),
)
+ def test_invalid_stream_followed_by_valid_mention(self) -> None:
+ denmark = get_stream("Denmark", get_realm("zulip"))
+ sender_user_profile = self.example_user("othello")
+ msg = Message(sender=sender_user_profile, sending_client=get_client("test"))
+ content = "#**Invalid** and #**Denmark**"
+ self.assertEqual(
+ render_markdown(msg, content),
+ '<p>#<strong>Invalid</strong> and <a class="stream" data-stream-id="{d.id}" href="/#narrow/stream/{d.id}-Denmark">#{d.name}</a></p>'.format(
+ d=denmark,
+ ),
+ )
+
def test_stream_multiple(self) -> None:
sender_user_profile = self.example_user("othello")
msg = Message(sender=sender_user_profile, sending_client=get_client("test"))
| Inline mentions for users, groups, and streams fail if they come after an invalid mention
This bug was reported here:
[https://chat.zulip.org/#narrow/stream/9-issues/topic/markdown.20mention.20bug](https://chat.zulip.org/#narrow/stream/9-issues/topic/markdown.20mention.20bug)
If an invalid user name is written inside the mention syntax, then all subsequent valid mentions are also treated as invalid. The same is true for user group and stream mentions.
**An example**
for source:
```
@**invalid** @**iago** @*invalid user group* @*hamletcharacters* #**invalid streams>little URLS isn't loading quickly** #**sales>little URLS isn't loading quickly**
```
output:

| @zulipbot claim | 2021-03-10T20:10:22 |
zulip/zulip | 17,560 | zulip__zulip-17560 | [
"15375"
] | 55de66f944533085993618a190123cebd3c2200d | diff --git a/tools/linter_lib/custom_check.py b/tools/linter_lib/custom_check.py
--- a/tools/linter_lib/custom_check.py
+++ b/tools/linter_lib/custom_check.py
@@ -630,12 +630,6 @@
},
{
"pattern": r'title="[^{\:]',
- "exclude_line": {
- (
- "templates/zerver/app/markdown_help.html",
- '<td class="rendered_markdown"><img alt=":heart:" class="emoji" src="/static/generated/emoji/images/emoji/heart.png" title=":heart:" /></td>',
- ),
- },
"exclude": {
"templates/zerver/emails",
"templates/analytics/realm_details.html",
@@ -649,7 +643,6 @@
"exclude": {
"static/templates/settings/display_settings.hbs",
"templates/zerver/app/keyboard_shortcuts.html",
- "templates/zerver/app/markdown_help.html",
},
"good_lines": ['<img src="{{source_url}}" alt="{{ _(name) }}" />', '<img alg="" />'],
"bad_lines": ['<img alt="Foo Image" />'],
@@ -679,8 +672,6 @@
+ "'"
+ "](display: ?none|background: {{|color: {{|background-color: {{).*",
"exclude": {
- # KaTeX output uses style attribute
- "templates/zerver/app/markdown_help.html",
# 5xx page doesn't have external CSS
"static/html/5xx.html",
# exclude_pattern above handles color, but have other issues:
| Move markdown help to a frontend template without hardcoded rendered markdown
Our markdown help file, `templates/zerver/app/markdown_help.html`, has a bunch of copied HTML in it, which isn't a robust/maintainable strategy. We have a frontend markdown processor, so we should be able to just have a much simpler HTML file containing the raw markdown, and then JavaScript that extracts the code from each `preserve_spaces` element and installs the rendered markdown based on it in the right column.
Tagging as a priority since this seems like a pretty high-value code cleanup.
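One way to keep such a page maintainable (an illustrative sketch only — the real page would call the frontend processor in `static/js/markdown.js`, and `render` below is a hypothetical stand-in): keep a single list of raw markdown snippets and derive both the source column and the rendered column from it, so the two can never drift apart.

```python
# Hypothetical stand-in for the real markdown renderer.
def render(md: str) -> str:
    if md.startswith("**") and md.endswith("**"):
        return "<strong>" + md[2:-2] + "</strong>"
    return md

# Single source of truth: the raw markdown only.
HELP_ENTRIES = ["**bold**", "plain text"]

# Both table columns are derived, never hand-copied.
rows = [(src, render(src)) for src in HELP_ENTRIES]
print(rows[0])  # ('**bold**', '<strong>bold</strong>')
```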
| Hello @zulip/server-markdown, @zulip/server-refactoring members, this issue was labeled with the "area: markdown", "area: refactoring" labels, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim
Hello @Krishna-Sivakumar, it looks like you've currently claimed 1 issue in this repository. We encourage new contributors to focus their efforts on at most 1 issue at a time, so please complete your work on your other claimed issues before trying to claim this issue again.
We look forward to your valuable contributions!
@zulipbot claim
Welcome to Zulip, @lokesh576! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
hello! I am contributing for the first time to open source. Can someone please tell me where I can start on this project and what all it is about?
@khushboogupta13 You can go through the [Contributing guidelines](https://github.com/zulip/zulip/blob/master/CONTRIBUTING.md), and to get help or chat with the Zulip community, check out the [zulip community](https://chat.zulip.org).
@timabbott where would I find the markdown processor? From what I understand, you want to use the markdown processor to render the raw HTML, instead of specifying it manually.
`static/js/markdown.js`; see also https://zulip.readthedocs.io/en/latest/subsystems/markdown.html. We can use the frontend processor for this.
Hello @shubham00jain, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
You can reclaim this issue or claim any other issue by commenting `@zulipbot claim` on that issue.
Thanks for your contributions, and hope to see you again soon!
@zulipbot claim
Welcome to Zulip, @AT1452! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
@zulipbot abandon
@timabbott I would like to work on this issue, if it's not taken. Please show me where to begin. Thanks!
@timabbott Is this issue already fixed? Would like to collaborate.
This has not been fixed, and the issue description as well as my other comments here should explain what you need to know.
@zulipbot claim
@zulipbot claim
Welcome to Zulip, @kuroriplus! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
I am working on this one. | 2021-03-11T02:47:32 |
|
zulip/zulip | 17,677 | zulip__zulip-17677 | [
"8005"
] | daabc52a786d969e036651b934f4282a7f186b5d | diff --git a/zerver/context_processors.py b/zerver/context_processors.py
--- a/zerver/context_processors.py
+++ b/zerver/context_processors.py
@@ -49,6 +49,13 @@ def common_context(user: UserProfile) -> Dict[str, Any]:
}
+def get_zulip_version_name(zulip_version: str) -> str:
+ if zulip_version.endswith("+git"):
+ return "Zulip " + zulip_version[:-4]
+
+ return "Zulip " + zulip_version
+
+
def get_realm_from_request(request: HttpRequest) -> Optional[Realm]:
if hasattr(request, "user") and hasattr(request.user, "realm"):
return request.user.realm
@@ -135,6 +142,8 @@ def zulip_default_context(request: HttpRequest) -> Dict[str, Any]:
"request_language": get_language(),
}
+ ZULIP_VERSION_NAME = get_zulip_version_name(ZULIP_VERSION)
+
context = {
"root_domain_landing_page": settings.ROOT_DOMAIN_LANDING_PAGE,
"custom_logo_url": settings.CUSTOM_LOGO_URL,
@@ -160,6 +169,7 @@ def zulip_default_context(request: HttpRequest) -> Dict[str, Any]:
"password_min_length": settings.PASSWORD_MIN_LENGTH,
"password_min_guesses": settings.PASSWORD_MIN_GUESSES,
"zulip_version": ZULIP_VERSION,
+ "zulip_version_name": ZULIP_VERSION_NAME,
"user_is_authenticated": user_is_authenticated,
"settings_path": settings_path,
"secrets_path": secrets_path,
| diff --git a/zerver/tests/test_context_processors.py b/zerver/tests/test_context_processors.py
new file mode 100644
--- /dev/null
+++ b/zerver/tests/test_context_processors.py
@@ -0,0 +1,8 @@
+from zerver.context_processors import get_zulip_version_name
+from zerver.lib.test_classes import ZulipTestCase
+
+
+class TestContextProcessors(ZulipTestCase):
+ def test_get_zulip_version_name(self) -> None:
+ self.assertEqual(get_zulip_version_name("4.0-dev+git"), "Zulip 4.0-dev")
+ self.assertEqual(get_zulip_version_name("4.0"), "Zulip 4.0")
| No way to discover Zulip version number from within UI.
As I was adding a comment to issue #6977 just now, I wanted to include the version number of the Zulip instance we're running. But I couldn't find that information anywhere in the UI. Traditionally, the version number is either in the page footer or in an "About" page somewhere. But it's not in the footers and I couldn't find an About page, nor any other page that has the instance's version number. I poked around in the documentation pages and settings pages, but couldn't find it anywhere in there either.
I ended up shelling into the server and looking at the `deployments/2017-10-12-17-22-56/version` file.
| @kfogel you can check the version number using the api endpoint - `yourserver/api/v1/server_settings`
Example - https://chat.zulip.org/api/v1/server_settings
@zulipbot claim
It's at https://chat.zulip.org/history/, which is parked there until we turn it into a proper /about page and link to it from the footer. @skunkmb, I suspect the solution here is just to finish that project, since that's where people will expect it.
Hello @skunkmb, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
You can reclaim this issue or claim any other issue by commenting `@zulipbot claim` on that issue.
Thanks for your contributions, and hope to see you again soon!
We [discussed this just now](https://chat.zulip.org/#narrow/stream/general/subject/zulipchat.2Ecom/near/494918) in Zulip developer chat.
> places like https://chat.zulip.org/history/ and https://chat.zulip.org/api/v1/server_settings say "1.7.1+git" which is not as precise as I'd like.
@timabbott said, specifically about exposing the current commit ID for zulipchat.com organizations,
> It's somewhat useless to do a commit ID (because there's usually a few operational-related or debugging or revert or whatever commits on zulipchat.com that aren't in the main repo), but we could do a count of commits pretty easily. Probably the ideal thing would be to have the merge-base of `upstream/master` with the current commit, but that gets a bit annoying to construct (but still probably doable).
@zulipbot claim
Hello @rightauth!
Thanks for your interest in Zulip! You have attempted to claim an issue without the labels "help wanted", "good first issue". Since you're a new contributor, you can only claim and submit pull requests for issues with the [help wanted](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+no%3Aassignee+label%3A%22help+wanted%22) or [good first issue](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+no%3Aassignee+label%3A%22good+first+issue%22) labels.
If this is your first time here, we recommend reading our [guide for new contributors](https://zulip.readthedocs.io/en/latest/overview/contributing.html) before getting started.
@zulipbot claim | 2021-03-17T23:43:14 |
zulip/zulip | 17,703 | zulip__zulip-17703 | [
"17607"
] | f18d7fa7ed7bd97a368f00f05ddb89c1eecbbb32 | diff --git a/zerver/models.py b/zerver/models.py
--- a/zerver/models.py
+++ b/zerver/models.py
@@ -1821,6 +1821,11 @@ def __str__(self) -> str:
get_client_cache: Dict[str, Client] = {}
+def clear_client_cache() -> None: # nocoverage
+ global get_client_cache
+ get_client_cache = {}
+
+
def get_client(name: str) -> Client:
# Accessing KEY_PREFIX through the module is necessary
# because we need the updated value of the variable.
diff --git a/zerver/views/development/cache.py b/zerver/views/development/cache.py
new file mode 100644
--- /dev/null
+++ b/zerver/views/development/cache.py
@@ -0,0 +1,21 @@
+import os
+
+from django.http import HttpRequest, HttpResponse
+from django.views.decorators.csrf import csrf_exempt
+
+from zerver.decorator import require_post
+from zerver.lib.cache import get_cache_backend
+from zerver.lib.response import json_success
+from zerver.models import clear_client_cache, flush_per_request_caches
+
+ZULIP_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../../")
+
+# This is used only by the Puppeteer Tests to clear all the cache after each run.
+@csrf_exempt
+@require_post
+def remove_caches(request: HttpRequest) -> HttpResponse:
+ cache = get_cache_backend(None)
+ cache.clear()
+ clear_client_cache()
+ flush_per_request_caches()
+ return json_success()
diff --git a/zproject/dev_urls.py b/zproject/dev_urls.py
--- a/zproject/dev_urls.py
+++ b/zproject/dev_urls.py
@@ -10,6 +10,7 @@
from django.views.static import serve
from zerver.views.auth import config_error, login_page
+from zerver.views.development.cache import remove_caches
from zerver.views.development.email_log import clear_emails, email_page, generate_all_emails
from zerver.views.development.integrations import (
check_send_webhook_fixture_message,
@@ -73,6 +74,8 @@
path("devtools/integrations/<integration_name>/fixtures", get_fixtures),
path("config-error/<error_category_name>", config_error, name="config_error"),
path("config-error/remoteuser/<error_category_name>", config_error),
+ # Special endpoint to remove all the server-side caches.
+ path("flush_caches", remove_caches),
]
# Serve static assets via the Django server
| diff --git a/zerver/lib/test_fixtures.py b/zerver/lib/test_fixtures.py
--- a/zerver/lib/test_fixtures.py
+++ b/zerver/lib/test_fixtures.py
@@ -386,3 +386,49 @@ def remove_test_run_directories(expiry_time: int = 60 * 60) -> int:
except FileNotFoundError:
pass
return removed
+
+
+def reset_zulip_test_database() -> None:
+ """
+ This function is used to reset the zulip_test database fastest way possible,
+ i.e. First, it deletes the database and then clones it from zulip_test_template.
+ This function is used with puppeteer tests, so it can quickly reset the test
+ database after each run.
+ """
+ from zerver.lib.test_runner import destroy_test_databases
+
+ # Make sure default database is 'zulip_test'.
+ assert connections["default"].settings_dict["NAME"] == "zulip_test"
+
+ # Clearing all the active PSQL sessions with 'zulip_test'.
+ run(
+ [
+ "env",
+ "PGHOST=localhost",
+ "PGUSER=zulip_test",
+ "scripts/setup/terminate-psql-sessions",
+ "zulip_test",
+ ]
+ )
+
+ destroy_test_databases()
+ # Pointing default database to test database template, so we can instantly clone it.
+ settings.DATABASES["default"]["NAME"] = settings.BACKEND_DATABASE_TEMPLATE
+ connection = connections["default"]
+ clone_database_suffix = "clone"
+ connection.creation.clone_test_db(
+ suffix=clone_database_suffix,
+ )
+ settings_dict = connection.creation.get_test_db_clone_settings(clone_database_suffix)
+ # We manually rename the clone database to 'zulip_test' because when cloning it,
+ # its name is set to original database name + some suffix.
+ # Also, we need it to be 'zulip_test' so that our running server can recognize it.
+ with connection.cursor() as cursor:
+ cursor.execute("ALTER DATABASE zulip_test_template_clone RENAME TO zulip_test;")
+ settings_dict["NAME"] = "zulip_test"
+ # connection.settings_dict must be updated in place for changes to be
+ # reflected in django.db.connections. If the following line assigned
+ # connection.settings_dict = settings_dict, new threads would connect
+ # to the default database instead of the appropriate clone.
+ connection.settings_dict.update(settings_dict)
+ connection.close()
diff --git a/zerver/lib/test_helpers.py b/zerver/lib/test_helpers.py
--- a/zerver/lib/test_helpers.py
+++ b/zerver/lib/test_helpers.py
@@ -495,6 +495,7 @@ def find_pattern(pattern: Any, prefixes: List[str]) -> None:
"docs/(?P<path>.+)",
"casper/(?P<path>.+)",
"static/(?P<path>.+)",
+ "flush_caches",
*(webhook.url for webhook in WEBHOOK_INTEGRATIONS if not include_webhooks),
}
| Some Puppeteer tests fail when repeated during the same run
The `tools/test-js-with-puppeteer --interactive` mechanism for debugging failing tests depends on the ability to repeatedly run the same test without restarting the test server. A shell command for trying every test repeated twice is:
```bash
tools/test-js-with-puppeteer {00..16}{,}
```
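The `{00..16}{,}` brace expansion simply names every two-digit test suite twice in a row; the same doubled list can be generated portably, e.g.:

```python
# Each two-digit suite number, repeated twice, like the bash
# brace expansion `{00..16}{,}` above.
tests = [f"{n:02d}" for n in range(17) for _ in range(2)]
print(" ".join(tests[:6]))  # 00 00 01 01 02 02
```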
This uncovers several failures, such as:
```console
$ tools/test-js-with-puppeteer 02 02
…
AssertionError [ERR_ASSERTION]: Expected values to be strictly deep-equal:
+ actual - expected ... Lines skipped
[
[
'Verona > test',
[
+ 'verona test d',
'verona test a',
...
]
]
]
at CommonUtils.check_messages_sent (/srv/zulip/frontend_tests/puppeteer_lib/common.ts:453:16)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at expect_verona_stream (/srv/zulip/frontend_tests/puppeteer_tests/02-message-basics.ts:27:5)
at test_navigations_from_home (/srv/zulip/frontend_tests/puppeteer_tests/02-message-basics.ts:96:5)
at message_basic_tests (/srv/zulip/frontend_tests/puppeteer_tests/02-message-basics.ts:450:5)
at CommonUtils.run_test (/srv/zulip/frontend_tests/puppeteer_lib/common.ts:510:13) {
generatedMessage: true,
code: 'ERR_ASSERTION',
actual: [Array],
expected: [Array],
operator: 'deepStrictEqual'
}
Waiting for children to stop...
The Puppeteer frontend tests failed!
For help debugging, read:
https://zulip.readthedocs.io/en/latest/testing/testing-with-puppeteer.html
or report and ask for help in chat.zulip.org
It's also worthy to see screenshots generated on failure stored under var/puppeteer/*.png
```
```console
$ tools/test-js-with-puppeteer 04 04
…
AssertionError [ERR_ASSERTION]: Expected values to be strictly equal:
+ actual - expected
+ 'Verona'
- 'Puppeteer'
at create_stream (/srv/zulip/frontend_tests/puppeteer_tests/04-subscriptions.ts:174:12)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at test_stream_creation (/srv/zulip/frontend_tests/puppeteer_tests/04-subscriptions.ts:217:5)
at subscriptions_tests (/srv/zulip/frontend_tests/puppeteer_tests/04-subscriptions.ts:256:5)
at CommonUtils.run_test (/srv/zulip/frontend_tests/puppeteer_lib/common.ts:510:13) {
generatedMessage: true,
code: 'ERR_ASSERTION',
actual: 'Verona',
expected: 'Puppeteer',
operator: 'strictEqual'
}
Waiting for children to stop...
The Puppeteer frontend tests failed!
For help debugging, read:
https://zulip.readthedocs.io/en/latest/testing/testing-with-puppeteer.html
or report and ask for help in chat.zulip.org
It's also worthy to see screenshots generated on failure stored under var/puppeteer/*.png
```
| Hello @zulip/server-testing members, this issue was labeled with the "area: testing-infrastructure" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim
I think the best path forward for this might be to reset the frontend testing database after each individual test file is completed, rather than only during startup. | 2021-03-19T14:26:25 |
zulip/zulip | 17,870 | zulip__zulip-17870 | [
"17795"
] | 37b4d0793472afca86c75d6aefbf20e54f6452af | diff --git a/zerver/openapi/curl_param_value_generators.py b/zerver/openapi/curl_param_value_generators.py
--- a/zerver/openapi/curl_param_value_generators.py
+++ b/zerver/openapi/curl_param_value_generators.py
@@ -20,11 +20,15 @@
from zerver.lib.events import do_events_register
from zerver.lib.initial_password import initial_password
from zerver.lib.test_classes import ZulipTestCase
-from zerver.models import Client, Message, UserGroup, UserPresence, get_realm
+from zerver.lib.users import get_api_key
+from zerver.models import Client, Message, UserGroup, UserPresence, get_realm, get_user
GENERATOR_FUNCTIONS: Dict[str, Callable[[], Dict[str, object]]] = {}
REGISTERED_GENERATOR_FUNCTIONS: Set[str] = set()
CALLED_GENERATOR_FUNCTIONS: Set[str] = set()
+# This is a List rather than just a string in order to make it easier
+# to write to it from another module.
+AUTHENTICATION_LINE: List[str] = [""]
helpers = ZulipTestCase()
@@ -310,3 +314,22 @@ def deactivate_user() -> Dict[str, object]:
acting_user=None,
)
return {"user_id": user_profile.id}
+
+
+@openapi_param_value_generator(["/users/me:delete"])
+def deactivate_own_user() -> Dict[str, object]:
+ test_user_email = "[email protected]"
+ deactivate_test_user = do_create_user(
+ test_user_email,
+ "secret",
+ get_realm("zulip"),
+ "Mr. Delete",
+ role=200,
+ acting_user=None,
+ )
+ realm = get_realm("zulip")
+ test_user = get_user(test_user_email, realm)
+ test_user_api_key = get_api_key(test_user)
+ # change authentication line to allow test_client to delete itself.
+ AUTHENTICATION_LINE[0] = f"{deactivate_test_user.email}:{test_user_api_key}"
+ return {}
| diff --git a/zerver/openapi/test_curl_examples.py b/zerver/openapi/test_curl_examples.py
--- a/zerver/openapi/test_curl_examples.py
+++ b/zerver/openapi/test_curl_examples.py
@@ -16,11 +16,14 @@
from zerver.models import get_realm
from zerver.openapi import markdown_extension
-from zerver.openapi.curl_param_value_generators import assert_all_helper_functions_called
+from zerver.openapi.curl_param_value_generators import (
+ AUTHENTICATION_LINE,
+ assert_all_helper_functions_called,
+)
-def test_generated_curl_examples_for_success(client: Client, owner_client: Client) -> None:
- authentication_line = f"{client.email}:{client.api_key}"
+def test_generated_curl_examples_for_success(client: Client) -> None:
+ default_authentication_line = f"{client.email}:{client.api_key}"
# A limited Markdown engine that just processes the code example syntax.
realm = get_realm("zulip")
md_engine = markdown.Markdown(
@@ -35,6 +38,10 @@ def test_generated_curl_examples_for_success(client: Client, owner_client: Clien
for file_name in sorted(glob.glob("templates/zerver/api/*.md")):
with open(file_name) as f:
for line in f:
+ # Set AUTHENTICATION_LINE to default_authentication_line.
+ # Set this every iteration, because deactivate_own_user
+ # will override this for its test.
+ AUTHENTICATION_LINE[0] = default_authentication_line
# A typical example from the Markdown source looks like this:
# {generate_code_example(curl, ...}
if not line.startswith("{generate_code_example(curl"):
@@ -47,21 +54,9 @@ def test_generated_curl_examples_for_success(client: Client, owner_client: Clien
unescaped_html = html.unescape(curl_command_html)
curl_command_text = unescaped_html[len("<p><code>curl\n") : -len("</code></p>")]
curl_command_text = curl_command_text.replace(
- "BOT_EMAIL_ADDRESS:BOT_API_KEY", authentication_line
+ "BOT_EMAIL_ADDRESS:BOT_API_KEY", AUTHENTICATION_LINE[0]
)
- # TODO: This needs_reactivation block is a hack.
- # However, it's awkward to test the "deactivate
- # myself" endpoint with how this system tries to use
- # the same account for all tests without some special
- # logic for that endpoint; and the hack is better than
- # just not documenting the endpoint.
- needs_reactivation = False
- user_id = 0
- if file_name == "templates/zerver/api/deactivate-own-user.md":
- needs_reactivation = True
- user_id = client.get_profile()["user_id"]
-
print("Testing {} ...".format(curl_command_text.split("\n")[0]))
# Turn the text into an arguments list.
@@ -77,8 +72,6 @@ def test_generated_curl_examples_for_success(client: Client, owner_client: Clien
)
response = json.loads(response_json)
assert response["result"] == "success"
- if needs_reactivation:
- owner_client.reactivate_user_by_id(user_id)
except (AssertionError, Exception):
error_template = """
Error verifying the success of the API documentation curl example.
| Clean up code for hacky OpenAPI curl test
After testing the `deactivate_own_account` endpoint, we need to reactivate the client so that other tests are not affected by the deactivated client. In `test_curl_examples`, this has been implemented hackily and should be replaced with cleaner code. More details at https://github.com/zulip/zulip/pull/17014#discussion_r601173277
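As the patch shows, `AUTHENTICATION_LINE` is kept as a one-element list precisely so another module can overwrite the value in place and every reader sees the update; a minimal stdlib illustration of that pattern (names here are illustrative):

```python
# A one-element list acts as a mutable cell that can be shared across
# modules; rebinding a plain string imported elsewhere would not
# propagate, but an in-place item assignment does.
AUTHENTICATION_LINE = [""]

def current_auth() -> str:
    return AUTHENTICATION_LINE[0]

AUTHENTICATION_LINE[0] = "[email protected]:api-key"  # in-place update
print(current_auth())  # [email protected]:api-key
```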
| Hello @zulip/server-testing members, this issue was labeled with the "area: testing-infrastructure" label, so you may want to check it out!
<!-- areaLabelAddition -->
@MSurfer20 Hey, are you working on this issue?
@100RABHpy I had planned to take this up after completing work on some other PRs. Feel free to claim this; I'd be happy to review your work :)
Thanks @MSurfer20,
Can you explain a bit (like what error you get when you tried to delete any other user):
>it's awkward to test the "deactivate myself" endpoint with how this system tries to use the same account for all tests without some special logic for that endpoint
>
I was thinking of creating a new administrator user and deleting it in the `deactivate_own_user` function.
@zulipbot claim
If you deactivate a user, then that user can't call any other endpoints (since it's deactivated), which leads to all remaining tests failing.
>I was thinking to create a new administrator user and delete this in deactivate_own_user function.
This seems to be good!
Oh, okay, that can be handled.
Also, if I'm getting it right, the only reason for adding owner_client is reactivation. Right?
Yes, it was just added for reactivating the client after the deactivation. | 2021-03-29T17:13:45 |
zulip/zulip | 17,872 | zulip__zulip-17872 | [
"17763"
] | f73d101854c3792b1fc947fe16997b7dab1bcf61 | diff --git a/zerver/decorator.py b/zerver/decorator.py
--- a/zerver/decorator.py
+++ b/zerver/decorator.py
@@ -32,7 +32,9 @@
OrganizationAdministratorRequired,
OrganizationMemberRequired,
OrganizationOwnerRequired,
+ RealmDeactivatedError,
UnsupportedWebhookEventType,
+ UserDeactivatedError,
)
from zerver.lib.queue import queue_json_publish
from zerver.lib.rate_limiter import RateLimitedUser
@@ -268,9 +270,9 @@ def validate_api_key(
def validate_account_and_subdomain(request: HttpRequest, user_profile: UserProfile) -> None:
if user_profile.realm.deactivated:
- raise JsonableError(_("This organization has been deactivated"))
+ raise RealmDeactivatedError()
if not user_profile.is_active:
- raise JsonableError(_("Account is deactivated"))
+ raise UserDeactivatedError()
# Either the subdomain matches, or we're accessing Tornado from
# and to localhost (aka spoofing a request as the user).
diff --git a/zerver/lib/exceptions.py b/zerver/lib/exceptions.py
--- a/zerver/lib/exceptions.py
+++ b/zerver/lib/exceptions.py
@@ -50,6 +50,8 @@ class ErrorCode(AbstractEnum):
UNAUTHENTICATED_USER = ()
NONEXISTENT_SUBDOMAIN = ()
RATE_LIMIT_HIT = ()
+ USER_DEACTIVATED = ()
+ REALM_DEACTIVATED = ()
class JsonableError(Exception):
@@ -271,6 +273,30 @@ def msg_format() -> str:
return _("Must be an organization or stream administrator")
+class UserDeactivatedError(JsonableError):
+ code: ErrorCode = ErrorCode.USER_DEACTIVATED
+ http_status_code = 403
+
+ def __init__(self) -> None:
+ pass
+
+ @staticmethod
+ def msg_format() -> str:
+ return _("Account is deactivated")
+
+
+class RealmDeactivatedError(JsonableError):
+ code: ErrorCode = ErrorCode.REALM_DEACTIVATED
+ http_status_code = 403
+
+ def __init__(self) -> None:
+ pass
+
+ @staticmethod
+ def msg_format() -> str:
+ return _("This organization has been deactivated")
+
+
class MarkdownRenderingException(Exception):
pass
diff --git a/zerver/openapi/python_examples.py b/zerver/openapi/python_examples.py
--- a/zerver/openapi/python_examples.py
+++ b/zerver/openapi/python_examples.py
@@ -1224,6 +1224,24 @@ def test_missing_request_argument(client: Client) -> None:
validate_against_openapi_schema(result, "/rest-error-handling", "post", "400_1")
+def test_user_account_deactivated(client: Client) -> None:
+ request = {
+ "content": "**foo**",
+ }
+ result = client.render_message(request)
+
+ validate_against_openapi_schema(result, "/rest-error-handling", "post", "403_0")
+
+
+def test_realm_deactivated(client: Client) -> None:
+ request = {
+ "content": "**foo**",
+ }
+ result = client.render_message(request)
+
+ validate_against_openapi_schema(result, "/rest-error-handling", "post", "403_1")
+
+
def test_invalid_stream_error(client: Client) -> None:
result = client.get_stream_id("nonexistent")
| diff --git a/zerver/tests/test_decorators.py b/zerver/tests/test_decorators.py
--- a/zerver/tests/test_decorators.py
+++ b/zerver/tests/test_decorators.py
@@ -1066,7 +1066,9 @@ def test_send_deactivated_realm(self) -> None:
"to": self.example_email("othello"),
},
)
- self.assert_json_error_contains(result, "has been deactivated", status_code=400)
+ self.assert_json_error_contains(
+ result, "This organization has been deactivated", status_code=403
+ )
result = self.api_post(
self.example_user("hamlet"),
@@ -1078,7 +1080,9 @@ def test_send_deactivated_realm(self) -> None:
"to": self.example_email("othello"),
},
)
- self.assert_json_error_contains(result, "has been deactivated", status_code=401)
+ self.assert_json_error_contains(
+ result, "This organization has been deactivated", status_code=401
+ )
def test_fetch_api_key_deactivated_realm(self) -> None:
"""
@@ -1094,7 +1098,9 @@ def test_fetch_api_key_deactivated_realm(self) -> None:
realm.deactivated = True
realm.save()
result = self.client_post("/json/fetch_api_key", {"password": test_password})
- self.assert_json_error_contains(result, "has been deactivated", status_code=400)
+ self.assert_json_error_contains(
+ result, "This organization has been deactivated", status_code=403
+ )
def test_webhook_deactivated_realm(self) -> None:
"""
@@ -1107,7 +1113,9 @@ def test_webhook_deactivated_realm(self) -> None:
url = f"/api/v1/external/jira?api_key={api_key}&stream=jira_custom"
data = self.webhook_fixture_data("jira", "created_v2")
result = self.client_post(url, data, content_type="application/json")
- self.assert_json_error_contains(result, "has been deactivated", status_code=400)
+ self.assert_json_error_contains(
+ result, "This organization has been deactivated", status_code=403
+ )
class LoginRequiredTest(ZulipTestCase):
@@ -1209,7 +1217,7 @@ def test_send_deactivated_user(self) -> None:
"to": self.example_email("othello"),
},
)
- self.assert_json_error_contains(result, "Account is deactivated", status_code=400)
+ self.assert_json_error_contains(result, "Account is deactivated", status_code=403)
result = self.api_post(
self.example_user("hamlet"),
@@ -1238,7 +1246,7 @@ def test_fetch_api_key_deactivated_user(self) -> None:
change_user_is_active(user_profile, False)
result = self.client_post("/json/fetch_api_key", {"password": test_password})
- self.assert_json_error_contains(result, "Account is deactivated", status_code=400)
+ self.assert_json_error_contains(result, "Account is deactivated", status_code=403)
def test_login_deactivated_user(self) -> None:
"""
@@ -1304,7 +1312,7 @@ def test_webhook_deactivated_user(self) -> None:
url = f"/api/v1/external/jira?api_key={api_key}&stream=jira_custom"
data = self.webhook_fixture_data("jira", "created_v2")
result = self.client_post(url, data, content_type="application/json")
- self.assert_json_error_contains(result, "Account is deactivated", status_code=400)
+ self.assert_json_error_contains(result, "Account is deactivated", status_code=403)
class TestIncomingWebhookBot(ZulipTestCase):
@@ -1647,7 +1655,9 @@ def test_authenticated_json_post_view_if_user_is_not_active(self) -> None:
self.login_user(user_profile)
# we deactivate user manually because do_deactivate_user removes user session
change_user_is_active(user_profile, False)
- self.assert_json_error_contains(self._do_test(user_profile), "Account is deactivated")
+ self.assert_json_error_contains(
+ self._do_test(user_profile), "Account is deactivated", status_code=403
+ )
do_reactivate_user(user_profile, acting_user=None)
def test_authenticated_json_post_view_if_user_realm_is_deactivated(self) -> None:
@@ -1657,7 +1667,9 @@ def test_authenticated_json_post_view_if_user_realm_is_deactivated(self) -> None
user_profile.realm.deactivated = True
user_profile.realm.save()
self.assert_json_error_contains(
- self._do_test(user_profile), "This organization has been deactivated"
+ self._do_test(user_profile),
+ "This organization has been deactivated",
+ status_code=403,
)
do_reactivate_realm(user_profile.realm)
| Add error codes in `validate_account_and_subdomain`
The error cases in `validate_account_and_subdomain` are ones that clients are likely to want to be able to parse, e.g. the mobile/terminal apps will want to know if an authentication error is because of a deactivated realm for error handling. REALM_DEACTIVATED and USER_DEACTIVATED are probably reasonable error code names.
We should open zulip-mobile and zulip-terminal issues to support the new error codes once we resolve this.
See `ErrorCode` in `zerver/lib/exceptions.py` for how to register these.
@gnprice FYI.
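With these codes registered (the patch above returns them with HTTP 403), API clients can branch on the machine-readable `code` field rather than parsing the human-readable `msg`. A hedged sketch of client-side handling; the response shape follows Zulip's standard `{"result": "error", ...}` JSON errors, but the function name and return categories here are hypothetical:

```python
import json


def classify_error(body: str) -> str:
    """Return a coarse category for a Zulip JSON error body (sketch)."""
    data = json.loads(body)
    if data.get("result") != "error":
        return "ok"
    code = data.get("code", "BAD_REQUEST")
    if code in ("USER_DEACTIVATED", "REALM_DEACTIVATED"):
        # These credentials will never work again; stop retrying
        # and surface a log-out / "organization deactivated" state.
        return "deactivated"
    return "other_error"
```

A client event loop could call this on any non-2xx response and tear down its connection only for the `"deactivated"` case, instead of string-matching on the translated error message.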
| Hello @zulip/server-api members, this issue was labeled with the "area: api" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim | 2021-03-29T17:42:47 |
zulip/zulip | 17,899 | zulip__zulip-17899 | [
"17662"
] | bac96cae8004293608dda674df3ec667264ec064 | diff --git a/scripts/lib/zulip_tools.py b/scripts/lib/zulip_tools.py
--- a/scripts/lib/zulip_tools.py
+++ b/scripts/lib/zulip_tools.py
@@ -581,8 +581,10 @@ def get_deploy_options(config_file: configparser.RawConfigParser) -> List[str]:
def run_psql_as_postgres(
+ config_file: configparser.RawConfigParser,
sql_query: str,
) -> None:
+ dbname = get_config(config_file, "postgresql", "database_name", "zulip")
subcmd = " ".join(
map(
shlex.quote,
@@ -590,8 +592,8 @@ def run_psql_as_postgres(
"psql",
"-v",
"ON_ERROR_STOP=1",
- # TODO: Stop hardcoding the database name.
- "zulip",
+ "-d",
+ dbname,
"-c",
sql_query,
],
diff --git a/scripts/setup/generate_secrets.py b/scripts/setup/generate_secrets.py
--- a/scripts/setup/generate_secrets.py
+++ b/scripts/setup/generate_secrets.py
@@ -7,6 +7,7 @@
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.append(BASE_DIR)
from scripts.lib.setup_path import setup_path
+from scripts.lib.zulip_tools import get_config, get_config_file
setup_path()
@@ -101,6 +102,13 @@ def add_secret(name: str, value: str) -> None:
if development and need_secret("local_database_password"):
add_secret("local_database_password", random_token())
+ # We only need a secret if the database username does not match
+ # the OS username, as identd auth works in that case.
+ if get_config(
+ get_config_file(), "postgresql", "database_user", "zulip"
+ ) != "zulip" and need_secret("postgres_password"):
+ add_secret("postgres_password", random_token())
+
# The core Django SECRET_KEY setting, used by Django internally to
# secure sessions. If this gets changed, all users will be logged out.
if need_secret("secret_key"):
diff --git a/zproject/computed_settings.py b/zproject/computed_settings.py
--- a/zproject/computed_settings.py
+++ b/zproject/computed_settings.py
@@ -281,8 +281,8 @@ def get_dirs(self) -> List[PosixPath]:
DATABASES: Dict[str, Dict[str, Any]] = {
"default": {
"ENGINE": "django.db.backends.postgresql",
- "NAME": "zulip",
- "USER": "zulip",
+ "NAME": get_config("postgresql", "database_name", "zulip"),
+ "USER": get_config("postgresql", "database_user", "zulip"),
# Password = '' => peer/certificate authentication (no password)
"PASSWORD": "",
# Host = '' => connect to localhost by default
@@ -314,7 +314,12 @@ def get_dirs(self) -> List[PosixPath]:
DATABASES["default"]["OPTIONS"]["sslmode"] = REMOTE_POSTGRES_SSLMODE
else:
DATABASES["default"]["OPTIONS"]["sslmode"] = "verify-full"
-
+elif get_config("postgresql", "database_user") != "zulip":
+ if get_secret("postgres_password") is not None:
+ DATABASES["default"].update(
+ PASSWORD=get_secret("postgres_password"),
+ HOST="localhost",
+ )
POSTGRESQL_MISSING_DICTIONARIES = bool(get_config("postgresql", "missing_dictionaries", None))
DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
| Add Support for Custom Database Names & Users
As per - https://chat.zulip.org/#narrow/stream/2-general/topic/Change.20DB.20user.20and.20name & https://github.com/zulip/docker-zulip/issues/289
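A sketch of what the resulting configuration might look like, assuming the keys read by the patch above (`database_name` and `database_user` under `[postgresql]` in `/etc/zulip/zulip.conf`); the example values are hypothetical:

```ini
# /etc/zulip/zulip.conf
[postgresql]
database_name = zulip_custom
database_user = zulip_custom_user
```

When `database_user` differs from `zulip`, the patch also expects a `postgres_password` secret (generated into the secrets file by `generate_secrets.py`), since identd peer authentication no longer applies in that case.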
@zulipbot claim
| @zulipbot add "area: production installer"
Hello @zulip/server-production members, this issue was labeled with the "area: production installer" label, so you may want to check it out!
<!-- areaLabelAddition -->
Hello @adambirds, you claimed this issue to work on it, but this issue and any referenced pull requests haven't been updated for 10 days. Are you still working on this issue?
If so, please update this issue by leaving a comment on this issue to let me know that you're still working on it. Otherwise, I'll automatically remove you from this issue in 4 days.
If you've decided to work on something else, simply comment `@zulipbot abandon` so that someone else can claim it and continue from where you left off.
Thank you for your valuable contributions to Zulip!
<!-- inactiveWarning --> | 2021-03-31T01:33:56 |
|
zulip/zulip | 17,906 | zulip__zulip-17906 | [
"16971"
] | 88262a484c9d3ec87022d25fce5366586367aae6 | diff --git a/zerver/openapi/python_examples.py b/zerver/openapi/python_examples.py
--- a/zerver/openapi/python_examples.py
+++ b/zerver/openapi/python_examples.py
@@ -424,19 +424,19 @@ def get_stream_id(client: Client) -> int:
@openapi_test_function("/streams/{stream_id}:delete")
-def delete_stream(client: Client, stream_id: int) -> None:
+def archive_stream(client: Client, stream_id: int) -> None:
result = client.add_subscriptions(
streams=[
{
- "name": "stream to be deleted",
+ "name": "stream to be archived",
"description": "New stream for testing",
},
],
)
# {code_example|start}
- # Delete the stream named 'stream to be deleted'
- stream_id = client.get_stream_id("stream to be deleted")["stream_id"]
+ # Archive the stream named 'stream to be archived'
+ stream_id = client.get_stream_id("stream to be archived")["stream_id"]
result = client.delete_stream(stream_id)
# {code_example|end}
validate_against_openapi_schema(result, "/streams/{stream_id}", "delete", "200")
@@ -1368,7 +1368,7 @@ def test_streams(client: Client, nonadmin_client: Client) -> None:
update_subscription_settings(client)
update_notification_settings(client)
get_stream_topics(client, 1)
- delete_stream(client, stream_id)
+ archive_stream(client, stream_id)
test_user_not_authorized_error(nonadmin_client)
test_authorization_errors_fatal(client, nonadmin_client)
diff --git a/zproject/urls.py b/zproject/urls.py
--- a/zproject/urls.py
+++ b/zproject/urls.py
@@ -764,6 +764,11 @@
template_name="zerver/documentation_main.html", path_template="/zerver/api/%s.md"
)
urls += [
+ # Redirects due to us having moved the docs:
+ path(
+ "help/delete-a-stream", RedirectView.as_view(url="/help/archive-a-stream", permanent=True)
+ ),
+ path("api/delete-stream", RedirectView.as_view(url="/api/archive-stream", permanent=True)),
path("help/", help_documentation_view),
path("help/<path:article>", help_documentation_view),
path("api/", api_documentation_view),
| diff --git a/frontend_tests/node_tests/stream_data.js b/frontend_tests/node_tests/stream_data.js
--- a/frontend_tests/node_tests/stream_data.js
+++ b/frontend_tests/node_tests/stream_data.js
@@ -418,7 +418,7 @@ test("delete_sub", () => {
assert(!stream_data.get_sub("Canada"));
assert(!stream_data.get_sub_by_id(canada.stream_id));
- blueslip.expect("warn", "Failed to delete stream 99999");
+ blueslip.expect("warn", "Failed to archive stream 99999");
stream_data.delete_sub(99999);
});
diff --git a/zerver/lib/test_helpers.py b/zerver/lib/test_helpers.py
--- a/zerver/lib/test_helpers.py
+++ b/zerver/lib/test_helpers.py
@@ -496,6 +496,8 @@ def find_pattern(pattern: Any, prefixes: List[str]) -> None:
"confirmation_key/",
"node-coverage/(?P<path>.+)",
"docs/(?P<path>.+)",
+ "help/delete-a-stream",
+ "api/delete-stream",
"casper/(?P<path>.+)",
"static/(?P<path>.+)",
"flush_caches",
diff --git a/zerver/tests/test_subs.py b/zerver/tests/test_subs.py
--- a/zerver/tests/test_subs.py
+++ b/zerver/tests/test_subs.py
@@ -1305,17 +1305,17 @@ def test_stream_message_retention_days_on_stream_creation(self) -> None:
self.assertEqual(result[1][2].name, "new_stream3")
self.assertEqual(result[1][2].message_retention_days, None)
- def set_up_stream_for_deletion(
+ def set_up_stream_for_archiving(
self, stream_name: str, invite_only: bool = False, subscribed: bool = True
) -> Stream:
"""
- Create a stream for deletion by an administrator.
+ Create a stream for archiving by an administrator.
"""
user_profile = self.example_user("hamlet")
self.login_user(user_profile)
stream = self.make_stream(stream_name, invite_only=invite_only)
- # For testing deleting streams you aren't on.
+ # For testing archiving streams you aren't on.
if subscribed:
self.subscribe(user_profile, stream_name)
@@ -1323,9 +1323,9 @@ def set_up_stream_for_deletion(
return stream
- def delete_stream(self, stream: Stream) -> None:
+ def archive_stream(self, stream: Stream) -> None:
"""
- Delete the stream and assess the result.
+ Archive the stream and assess the result.
"""
active_name = stream.name
realm = stream.realm
@@ -1402,28 +1402,28 @@ def test_delete_public_stream(self) -> None:
When an administrator deletes a public stream, that stream is not
visible to users at all anymore.
"""
- stream = self.set_up_stream_for_deletion("newstream")
- self.delete_stream(stream)
+ stream = self.set_up_stream_for_archiving("newstream")
+ self.archive_stream(stream)
def test_delete_private_stream(self) -> None:
"""
Administrators can delete private streams they are on.
"""
- stream = self.set_up_stream_for_deletion("newstream", invite_only=True)
- self.delete_stream(stream)
+ stream = self.set_up_stream_for_archiving("newstream", invite_only=True)
+ self.archive_stream(stream)
- def test_delete_streams_youre_not_on(self) -> None:
+ def test_archive_streams_youre_not_on(self) -> None:
"""
Administrators can delete public streams they aren't on, including
private streams in their realm.
"""
- pub_stream = self.set_up_stream_for_deletion("pubstream", subscribed=False)
- self.delete_stream(pub_stream)
+ pub_stream = self.set_up_stream_for_archiving("pubstream", subscribed=False)
+ self.archive_stream(pub_stream)
- priv_stream = self.set_up_stream_for_deletion(
+ priv_stream = self.set_up_stream_for_archiving(
"privstream", subscribed=False, invite_only=True
)
- self.delete_stream(priv_stream)
+ self.archive_stream(priv_stream)
def attempt_unsubscribe_of_principal(
self,
@@ -4369,7 +4369,7 @@ def test_gather_subscriptions_excludes_deactivated_streams(self) -> None:
self.subscribe(non_admin_user, stream_name)
self.subscribe(self.example_user("othello"), stream_name)
- def delete_stream(stream_name: str) -> None:
+ def archive_stream(stream_name: str) -> None:
stream_id = get_stream(stream_name, realm).id
result = self.client_delete(f"/json/streams/{stream_id}")
self.assert_json_success(result)
@@ -4379,7 +4379,7 @@ def delete_stream(stream_name: str) -> None:
non_admin_before_delete = gather_subscriptions_helper(non_admin_user)
# Delete our stream
- delete_stream("stream1")
+ archive_stream("stream1")
# Get subs after delete
admin_after_delete = gather_subscriptions_helper(admin_user)
| Starred messages view shows messages in inaccessible streams
If you star a message in a stream and either:
1. Unsub from the stream.
2. Deactivate the stream.
The message will stay visible in the Starred messages view even though it shouldn't. E.g. in the screenshot below you can see messages I starred - first one in a deactivated stream, second one in a private stream I unsubscribed from.

| @sahil839 Would you be interested in looking into this?
Sure, I will fix this.
@zulipbot claim.
@sahil839 , Are you still working on this? I would like to contribute too.
Yes I am working on this.
I am busy with college work, so I am abandoning this for now. Will get back to this when I get time (if it has not been solved by then). Anyone who is taking this can have a look at the discussion [here](https://chat.zulip.org/#narrow/stream/9-issues/topic/.2316971.20starred.20messages.20view). @zulipbot abandon
@Pranav2612000 you can take this if you want.
I have gone through the discussions in the Zulip community; I think I can fix this.
@zulipbot claim.
Hello @deto-5420!
Thanks for your interest in Zulip! You have attempted to claim an issue without the labels "help wanted", "good first issue". Since you're a new contributor, you can only claim and submit pull requests for issues with the [help wanted](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+no%3Aassignee+label%3A%22help+wanted%22) or [good first issue](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+no%3Aassignee+label%3A%22good+first+issue%22) labels.
If this is your first time here, we recommend reading our [guide for new contributors](https://zulip.readthedocs.io/en/latest/overview/contributing.html) before getting started.
@deto-5420 are you working on this issue? Can I start working on it, if you haven't?
@zulipbot claim
Looks like there is some miscommunication @m-e-l-u-h-a-n. I am working on it, which I mentioned in Czo. Thanks in advance.
Will let you know if I dismiss it.
Actually, I just looked into the thread mentioned by sahil above but couldn't find your message there; sorry, if you have started on it (please confirm), I will unclaim it.
Ohh, I think you clarified that you are still working on it in the above message itself. Sorry, I will unclaim it.
Hi @aryanshridhar, I hope everything is going well. I suggest claiming the issue and assigning it to you so it would be clear someone is working on it.
@aryanshridhar are you still working on this? Can I take this up if you are busy?
zulip/zulip | 18,016 | zulip__zulip-18016 | [
"17898"
] | 3e5e89991dd1fd307bf60a9011353f7a503dc9e1 | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -1928,6 +1928,15 @@ def do_send_messages(
about changing the next line.
"""
user_ids = send_request.active_user_ids | set(user_flags.keys())
+ sender_id = send_request.message.sender_id
+
+ # We make sure the sender is listed first in the `users` list;
+ # this results in the sender receiving the message first if
+ # there are thousands of recipients, decreasing perceived latency.
+ if sender_id in user_ids:
+ user_list = [sender_id] + list(user_ids - {sender_id})
+ else:
+ user_list = list(user_ids)
users = [
dict(
@@ -1938,7 +1947,7 @@ def do_send_messages(
stream_email_notify=(user_id in send_request.stream_email_user_ids),
wildcard_mention_notify=(user_id in send_request.wildcard_mention_user_ids),
)
- for user_id in user_ids
+ for user_id in user_list
]
if send_request.message.is_stream_message():
| Improve perceived latency for sender when sending/editing a message in very large streams
When sending messages to a stream with 15k+ subscribers, it can take 300ms between when the first user receives the message and the last does. It would be desirable for the sender to be "first in line" for this purpose, since the UX for the sender is not ideal while waiting for the message to send (especially with edits).
The logic for this system involves several `Set` data structures, starting with this part of `do_send_messages`:
```
user_ids = send_request.active_user_ids | set(user_flags.keys())
users = [
dict(
id=user_id,
flags=user_flags.get(user_id, []),
always_push_notify=(user_id in send_request.push_notify_user_ids),
stream_push_notify=(user_id in send_request.stream_push_user_ids),
stream_email_notify=(user_id in send_request.stream_email_user_ids),
wildcard_mention_notify=(user_id in send_request.wildcard_mention_user_ids),
)
for user_id in user_ids
]
send_event(send_request.realm, event, users)
```
That then flows into `zerver/tornado/event_queue.py`, and in particular `get_client_info_for_message_event` (and `process_message_event`, which actually does the work). I think if we do a bit of work to make sure the sender is first in the list sent from `do_send_messages`, and then modify the Tornado code to use an OrderedDict or something similar (actually, I think all modern Python dicts preserve insertion order) to avoid changing the iteration order, we can arrange it so that the order in the event sent from `do_send_messages` is also the processing order.
A fix should be testable by just adding `print(user_id)` type statements in `process_message_event` (with a bit of extra work to make sure `send_to_clients` doesn't break the ordering).
https://zulip.readthedocs.io/en/latest/subsystems/sending-messages.html has relevant background.
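The reordering idea can be sketched in isolation; this mirrors the change made in `do_send_messages` in the patch above (the helper name is hypothetical):

```python
def order_sender_first(user_ids, sender_id):
    """Return the recipient ids as a list with the sender first, if present.

    Downstream consumers that iterate in insertion order (plain Python
    dicts and lists preserve it) will then process the sender's event
    queues before the other thousands of recipients.
    """
    if sender_id in user_ids:
        return [sender_id] + [uid for uid in user_ids if uid != sender_id]
    return list(user_ids)
```

The same set of users is delivered to either way; only the perceived latency for the sender changes.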
| Hello @zulip/server-compose, @zulip/server-message-view members, this issue was labeled with the "area: compose", "area: message-editing" labels, so you may want to check it out!
<!-- areaLabelAddition -->
I have less experience with the backend, but this one will be interesting to try since it is related to compose.
@zulipbot claim
I think this should be doable for you -- I've mentioned the code paths involved and I think it just takes some print-debugging and care with sort order. | 2021-04-06T21:17:27 |
|
zulip/zulip | 18,021 | zulip__zulip-18021 | [
"18021"
] | 60c8b0123f986adb346ece7e2cba0831f47064af | diff --git a/zerver/lib/markdown/api_arguments_table_generator.py b/zerver/lib/markdown/api_arguments_table_generator.py
--- a/zerver/lib/markdown/api_arguments_table_generator.py
+++ b/zerver/lib/markdown/api_arguments_table_generator.py
@@ -124,7 +124,14 @@ def render_table(self, arguments: Sequence[Mapping[str, Any]]) -> List[str]:
# (path, querystring, form data...). We should document this detail.
example = ""
if "example" in argument:
- example = argument["example"]
+ # We use this style without explicit JSON encoding for
+ # integers, strings, and booleans.
+ # * For booleans, JSON encoding correctly corrects for Python's
+ # str(True)="True" not matching the encoding of "true".
+ # * For strings, doing so nicely results in strings being quoted
+ # in the documentation, improving readability.
+ # * For integers, it is a noop, since json.dumps(3) == str(3) == "3".
+ example = json.dumps(argument["example"])
else:
example = json.dumps(argument["content"]["application/json"]["example"])
diff --git a/zerver/openapi/markdown_extension.py b/zerver/openapi/markdown_extension.py
--- a/zerver/openapi/markdown_extension.py
+++ b/zerver/openapi/markdown_extension.py
@@ -208,6 +208,7 @@ def get_openapi_param_example_value_as_string(
# union type. But for this logic's purpose, it's good enough
# to just check the first parameter.
param_type = param["schema"]["oneOf"][0]["type"]
+
if param_type in ["object", "array"]:
example_value = param.get("example", None)
if not example_value:
@@ -225,7 +226,9 @@ def get_openapi_param_example_value_as_string(
else:
example_value = param.get("example", DEFAULT_EXAMPLE[param_type])
if isinstance(example_value, bool):
- example_value = str(example_value).lower()
+ # Booleans are effectively JSON-encoded, in that we pass
+ # true/false, not the Python str(True) = "True"
+ jsonify = True
if jsonify:
example_value = json.dumps(example_value)
if curl_argument:
| openapi: Fix display of boolean types in examples.
Fixed boolean values being displayed with Python-style capitalization ("True") in the API documentation, so that they now match JSON format ("true").
Fixes #18010
**Testing plan:**
Manually tested in browser
**GIFs or screenshots:**

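The encoding difference driving this fix is easy to demonstrate: Python's `str()` capitalizes booleans, while `json.dumps()` produces valid JSON literals, and (per the comment added to the patch) it also quotes strings while leaving integers unchanged:

```python
import json

assert str(True) == "True"             # Python-style, not valid JSON
assert json.dumps(True) == "true"      # JSON boolean literal
assert json.dumps("demo") == '"demo"'  # strings gain quotes
assert json.dumps(3) == "3"            # integers are a no-op
```

This is why the patch simply JSON-encodes every `example` value instead of special-casing booleans with `str(example_value).lower()`.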
| 2021-04-07T12:52:50 |
||
zulip/zulip | 18,051 | zulip__zulip-18051 | [
"14499"
] | de6bd22ee91bae31d8f2c43d6fd222391b5943ad | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -143,6 +143,7 @@
access_stream_by_id,
access_stream_for_send_message,
can_access_stream_user_ids,
+ check_stream_access_based_on_stream_post_policy,
check_stream_name,
create_stream_if_needed,
get_default_value_for_history_public_to_subscribers,
@@ -2802,7 +2803,7 @@ def check_update_message(
if stream_id is not None:
if not message.is_stream_message():
raise JsonableError(_("Message must be a stream message"))
- if not user_profile.is_realm_admin:
+ if not user_profile.can_move_messages_between_streams():
raise JsonableError(_("You don't have permission to move this message"))
try:
access_stream_by_id(user_profile, message.recipient.type_id)
@@ -2816,6 +2817,7 @@ def check_update_message(
raise JsonableError(_("Cannot change message content while changing stream"))
new_stream = access_stream_by_id(user_profile, stream_id, require_active=True)[0]
+ check_stream_access_based_on_stream_post_policy(user_profile, new_stream)
number_changed = do_update_message(
user_profile,
| diff --git a/frontend_tests/node_tests/settings_data.js b/frontend_tests/node_tests/settings_data.js
--- a/frontend_tests/node_tests/settings_data.js
+++ b/frontend_tests/node_tests/settings_data.js
@@ -107,117 +107,58 @@ run_test("user_can_change_logo", () => {
assert.equal(can_change_logo(), false);
});
-run_test("user_can_invite_others_to_realm", () => {
- const can_invite_others_to_realm = settings_data.user_can_invite_others_to_realm;
-
- page_params.is_admin = true;
- page_params.realm_invite_to_realm_policy =
- settings_config.common_policy_values.by_admins_only.code;
- assert.equal(can_invite_others_to_realm(), true);
-
- page_params.is_admin = false;
- assert.equal(can_invite_others_to_realm(), false);
-
- page_params.is_moderator = true;
- page_params.realm_invite_to_realm_policy =
- settings_config.common_policy_values.by_moderators_only.code;
- assert.equal(can_invite_others_to_realm(), true);
-
- page_params.is_moderator = false;
- assert.equal(can_invite_others_to_realm(), false);
-
- page_params.is_guest = true;
- page_params.realm_invite_to_realm_policy = settings_config.common_policy_values.by_members.code;
- assert.equal(can_invite_others_to_realm(), false);
-
- page_params.is_guest = false;
- assert.equal(can_invite_others_to_realm(), true);
-
- page_params.realm_invite_to_realm_policy =
- settings_config.common_policy_values.by_full_members.code;
- page_params.user_id = 30;
- people.add_active_user(isaac);
- isaac.date_joined = new Date(Date.now());
- page_params.realm_waiting_period_threshold = 10;
- assert.equal(can_invite_others_to_realm(), false);
-
- isaac.date_joined = new Date(Date.now() - 20 * 86400000);
- assert.equal(can_invite_others_to_realm(), true);
-});
-
-run_test("user_can_subscribe_other_users", () => {
- const can_subscribe_other_users = settings_data.user_can_subscribe_other_users;
-
- page_params.is_admin = true;
- page_params.realm_invite_to_stream_policy =
- settings_config.common_policy_values.by_admins_only.code;
- assert.equal(can_subscribe_other_users(), true);
-
- page_params.is_admin = false;
- assert.equal(can_subscribe_other_users(), false);
-
- page_params.is_moderator = true;
- page_params.realm_invite_to_stream_policy =
- settings_config.common_policy_values.by_moderators_only.code;
- assert.equal(can_subscribe_other_users(), true);
-
- page_params.is_moderator = false;
- assert.equal(can_subscribe_other_users(), false);
-
- page_params.is_guest = true;
- page_params.realm_invite_to_stream_policy =
- settings_config.common_policy_values.by_members.code;
- assert.equal(can_subscribe_other_users(), false);
-
- page_params.is_guest = false;
- assert.equal(can_subscribe_other_users(), true);
-
- page_params.realm_invite_to_stream_policy =
- settings_config.common_policy_values.by_full_members.code;
- page_params.user_id = 30;
- people.add_active_user(isaac);
- isaac.date_joined = new Date(Date.now());
- page_params.realm_waiting_period_threshold = 10;
- assert.equal(can_subscribe_other_users(), false);
-
- isaac.date_joined = new Date(Date.now() - 20 * 86400000);
- assert.equal(can_subscribe_other_users(), true);
-});
-
-run_test("user_can_create_streams", () => {
- const can_create_streams = settings_data.user_can_create_streams;
-
- page_params.is_admin = true;
- page_params.realm_create_stream_policy =
- settings_config.common_policy_values.by_admins_only.code;
- assert.equal(can_create_streams(), true);
-
- page_params.is_admin = false;
- assert.equal(can_create_streams(), false);
-
- page_params.is_moderator = true;
- page_params.realm_create_stream_policy =
- settings_config.common_policy_values.by_moderators_only.code;
- assert.equal(can_create_streams(), true);
-
- page_params.is_moderator = false;
- assert.equal(can_create_streams(), false);
-
- page_params.is_guest = true;
- page_params.realm_create_stream_policy = settings_config.common_policy_values.by_members.code;
- assert.equal(can_create_streams(), false);
-
- page_params.is_guest = false;
- assert.equal(can_create_streams(), true);
-
- page_params.realm_create_stream_policy =
- settings_config.common_policy_values.by_full_members.code;
- page_params.user_id = 30;
- people.add_active_user(isaac);
- isaac.date_joined = new Date(Date.now());
- page_params.realm_waiting_period_threshold = 10;
- assert.equal(can_create_streams(), false);
-
- isaac.date_joined = new Date(Date.now() - 20 * 86400000);
- assert.equal(can_create_streams(), true);
-});
+function test_policy(label, policy, validation_func) {
+ run_test(label, () => {
+ page_params.is_admin = true;
+ page_params[policy] = settings_config.common_policy_values.by_admins_only.code;
+ assert.equal(validation_func(), true);
+
+ page_params.is_admin = false;
+ assert.equal(validation_func(), false);
+
+ page_params.is_moderator = true;
+ page_params[policy] = settings_config.common_policy_values.by_moderators_only.code;
+ assert.equal(validation_func(), true);
+
+ page_params.is_moderator = false;
+ assert.equal(validation_func(), false);
+
+ page_params.is_guest = true;
+ page_params[policy] = settings_config.common_policy_values.by_members.code;
+ assert.equal(validation_func(), false);
+
+ page_params.is_guest = false;
+ assert.equal(validation_func(), true);
+
+ page_params[policy] = settings_config.common_policy_values.by_full_members.code;
+ page_params.user_id = 30;
+ people.add_active_user(isaac);
+ isaac.date_joined = new Date(Date.now());
+ page_params.realm_waiting_period_threshold = 10;
+ assert.equal(validation_func(), false);
+
+ isaac.date_joined = new Date(Date.now() - 20 * 86400000);
+ assert.equal(validation_func(), true);
+ });
+}
+
+test_policy(
+ "user_can_create_streams",
+ "realm_create_stream_policy",
+ settings_data.user_can_create_streams,
+);
+test_policy(
+ "user_can_subscribe_other_users",
+ "realm_invite_to_stream_policy",
+ settings_data.user_can_subscribe_other_users,
+);
+test_policy(
+ "user_can_invite_others_to_realm",
+ "realm_invite_to_realm_policy",
+ settings_data.user_can_invite_others_to_realm,
+);
+test_policy(
+ "user_can_move_messages_between_streams",
+ "realm_move_messages_between_streams_policy",
+ settings_data.user_can_move_messages_between_streams,
+);
diff --git a/zerver/tests/test_message_edit.py b/zerver/tests/test_message_edit.py
--- a/zerver/tests/test_message_edit.py
+++ b/zerver/tests/test_message_edit.py
@@ -1,6 +1,6 @@
import datetime
from operator import itemgetter
-from typing import Any, Dict, List, Tuple
+from typing import Any, Dict, List, Optional, Tuple
from unittest import mock
import orjson
@@ -8,6 +8,8 @@
from django.http import HttpResponse
from zerver.lib.actions import (
+ do_change_stream_post_policy,
+ do_change_user_role,
do_set_realm_property,
do_update_message,
get_topic_messages,
@@ -17,7 +19,7 @@
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import cache_tries_captured, queries_captured
from zerver.lib.topic import LEGACY_PREV_TOPIC, TOPIC_NAME
-from zerver.models import Message, Stream, UserMessage, UserProfile, get_realm, get_stream
+from zerver.models import Message, Realm, Stream, UserMessage, UserProfile, get_realm, get_stream
class EditMessageTest(ZulipTestCase):
@@ -1214,6 +1216,58 @@ def test_move_message_realm_admin_cant_move_from_private_stream_without_subscrip
"You don't have permission to move this message due to missing access to its stream",
)
+ def test_move_message_from_private_stream_message_access_checks(
+ self,
+ ) -> None:
+ hamlet = self.example_user("hamlet")
+ user_profile = self.example_user("iago")
+ self.assertEqual(user_profile.role, UserProfile.ROLE_REALM_ADMINISTRATOR)
+ self.login("iago")
+
+ private_stream = self.make_stream(
+ "privatestream", invite_only=True, history_public_to_subscribers=False
+ )
+ self.subscribe(hamlet, "privatestream")
+ original_msg_id = self.send_stream_message(hamlet, "privatestream", topic_name="test123")
+ self.subscribe(user_profile, "privatestream")
+ new_msg_id = self.send_stream_message(user_profile, "privatestream", topic_name="test123")
+
+ # Now we unsub and hamlet sends a new message (we won't have access to it even after re-subbing!)
+ self.unsubscribe(user_profile, "privatestream")
+ new_inaccessible_msg_id = self.send_stream_message(
+ hamlet, "privatestream", topic_name="test123"
+ )
+
+ # Re-subscribe and send another message:
+ self.subscribe(user_profile, "privatestream")
+ newest_msg_id = self.send_stream_message(
+ user_profile, "privatestream", topic_name="test123"
+ )
+
+ verona = get_stream("Verona", user_profile.realm)
+
+ result = self.client_patch(
+ "/json/messages/" + str(new_msg_id),
+ {
+ "message_id": new_msg_id,
+ "stream_id": verona.id,
+ "propagate_mode": "change_all",
+ },
+ )
+
+ self.assert_json_success(result)
+ self.assertEqual(Message.objects.get(id=new_msg_id).recipient_id, verona.recipient_id)
+ self.assertEqual(Message.objects.get(id=newest_msg_id).recipient_id, verona.recipient_id)
+ # The original message and the new, inaccessible message weren't moved,
+ # because user_profile doesn't have access to them.
+ self.assertEqual(
+ Message.objects.get(id=original_msg_id).recipient_id, private_stream.recipient_id
+ )
+ self.assertEqual(
+ Message.objects.get(id=new_inaccessible_msg_id).recipient_id,
+ private_stream.recipient_id,
+ )
+
def test_move_message_cant_move_private_message(
self,
) -> None:
@@ -1268,26 +1322,174 @@ def test_move_message_to_stream_change_later(self) -> None:
f"This topic was moved here from #**test move stream>test** by @_**Iago|{user_profile.id}**",
)
- def test_move_message_to_stream_no_allowed(self) -> None:
+ def test_move_message_between_streams_policy_setting(self) -> None:
(user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
- "aaron", "test move stream", "new stream", "test"
+ "othello", "old_stream_1", "new_stream_1", "test"
)
- result = self.client_patch(
- "/json/messages/" + str(msg_id),
- {
- "message_id": msg_id,
- "stream_id": new_stream.id,
- "propagate_mode": "change_all",
- },
+ def check_move_message_according_to_policy(role: int, expect_fail: bool = False) -> None:
+ do_change_user_role(user_profile, role, acting_user=None)
+
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ "stream_id": new_stream.id,
+ "propagate_mode": "change_all",
+ },
+ )
+
+ if expect_fail:
+ self.assert_json_error(result, "You don't have permission to move this message")
+ messages = get_topic_messages(user_profile, old_stream, "test")
+ self.assertEqual(len(messages), 3)
+ messages = get_topic_messages(user_profile, new_stream, "test")
+ self.assertEqual(len(messages), 0)
+ else:
+ self.assert_json_success(result)
+ messages = get_topic_messages(user_profile, old_stream, "test")
+ self.assertEqual(len(messages), 1)
+ messages = get_topic_messages(user_profile, new_stream, "test")
+ self.assertEqual(len(messages), 4)
+
+ # Check sending messages when policy is Realm.POLICY_ADMINS_ONLY.
+ do_set_realm_property(
+ user_profile.realm,
+ "move_messages_between_streams_policy",
+ Realm.POLICY_ADMINS_ONLY,
+ acting_user=None,
+ )
+ check_move_message_according_to_policy(UserProfile.ROLE_MODERATOR, expect_fail=True)
+ check_move_message_according_to_policy(UserProfile.ROLE_REALM_ADMINISTRATOR)
+
+ (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
+ "othello", "old_stream_2", "new_stream_2", "test"
+ )
+ # Check sending messages when policy is Realm.POLICY_MODERATORS_ONLY.
+ do_set_realm_property(
+ user_profile.realm,
+ "move_messages_between_streams_policy",
+ Realm.POLICY_MODERATORS_ONLY,
+ acting_user=None,
)
- self.assert_json_error(result, "You don't have permission to move this message")
+ check_move_message_according_to_policy(UserProfile.ROLE_MEMBER, expect_fail=True)
+ check_move_message_according_to_policy(UserProfile.ROLE_MODERATOR)
- messages = get_topic_messages(user_profile, old_stream, "test")
- self.assertEqual(len(messages), 3)
+ (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
+ "othello", "old_stream_3", "new_stream_3", "test"
+ )
+ # Check sending messages when policy is Realm.POLICY_FULL_MEMBERS_ONLY.
+ do_set_realm_property(
+ user_profile.realm,
+ "move_messages_between_streams_policy",
+ Realm.POLICY_FULL_MEMBERS_ONLY,
+ acting_user=None,
+ )
+ do_set_realm_property(
+ user_profile.realm, "waiting_period_threshold", 100000, acting_user=None
+ )
+ check_move_message_according_to_policy(UserProfile.ROLE_MEMBER, expect_fail=True)
- messages = get_topic_messages(user_profile, new_stream, "test")
- self.assertEqual(len(messages), 0)
+ do_set_realm_property(user_profile.realm, "waiting_period_threshold", 0, acting_user=None)
+ check_move_message_according_to_policy(UserProfile.ROLE_MEMBER)
+
+ (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
+ "othello", "old_stream_4", "new_stream_4", "test"
+ )
+ # Check sending messages when policy is Realm.POLICY_MEMBERS_ONLY.
+ do_set_realm_property(
+ user_profile.realm,
+ "move_messages_between_streams_policy",
+ Realm.POLICY_MEMBERS_ONLY,
+ acting_user=None,
+ )
+ check_move_message_according_to_policy(UserProfile.ROLE_GUEST, expect_fail=True)
+ check_move_message_according_to_policy(UserProfile.ROLE_MEMBER)
+
+ def test_move_message_to_stream_based_on_stream_post_policy(self) -> None:
+ (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
+ "othello", "old_stream_1", "new_stream_1", "test"
+ )
+ do_set_realm_property(
+ user_profile.realm,
+ "move_messages_between_streams_policy",
+ Realm.POLICY_MEMBERS_ONLY,
+ acting_user=None,
+ )
+
+ def check_move_message_to_stream(role: int, error_msg: Optional[str] = None) -> None:
+ do_change_user_role(user_profile, role, acting_user=None)
+
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ "stream_id": new_stream.id,
+ "propagate_mode": "change_all",
+ },
+ )
+
+ if error_msg is not None:
+ self.assert_json_error(result, error_msg)
+ messages = get_topic_messages(user_profile, old_stream, "test")
+ self.assertEqual(len(messages), 3)
+ messages = get_topic_messages(user_profile, new_stream, "test")
+ self.assertEqual(len(messages), 0)
+ else:
+ self.assert_json_success(result)
+ messages = get_topic_messages(user_profile, old_stream, "test")
+ self.assertEqual(len(messages), 1)
+ messages = get_topic_messages(user_profile, new_stream, "test")
+ self.assertEqual(len(messages), 4)
+
+ # Check when stream_post_policy is STREAM_POST_POLICY_ADMINS.
+ do_change_stream_post_policy(new_stream, Stream.STREAM_POST_POLICY_ADMINS)
+ error_msg = "Only organization administrators can send to this stream."
+ check_move_message_to_stream(UserProfile.ROLE_MODERATOR, error_msg)
+ check_move_message_to_stream(UserProfile.ROLE_REALM_ADMINISTRATOR)
+
+ (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
+ "othello", "old_stream_2", "new_stream_2", "test"
+ )
+
+ # Check when stream_post_policy is STREAM_POST_POLICY_MODERATORS.
+ do_change_stream_post_policy(new_stream, Stream.STREAM_POST_POLICY_MODERATORS)
+ error_msg = "Only organization administrators and moderators can send to this stream."
+ check_move_message_to_stream(UserProfile.ROLE_MEMBER, error_msg)
+ check_move_message_to_stream(UserProfile.ROLE_MODERATOR)
+
+ (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
+ "othello", "old_stream_3", "new_stream_3", "test"
+ )
+
+ # Check when stream_post_policy is STREAM_POST_POLICY_RESTRICT_NEW_MEMBERS.
+ do_change_stream_post_policy(new_stream, Stream.STREAM_POST_POLICY_RESTRICT_NEW_MEMBERS)
+ error_msg = "New members cannot send to this stream."
+
+ do_set_realm_property(
+ user_profile.realm, "waiting_period_threshold", 100000, acting_user=None
+ )
+ check_move_message_to_stream(UserProfile.ROLE_MEMBER, error_msg)
+
+ do_set_realm_property(user_profile.realm, "waiting_period_threshold", 0, acting_user=None)
+ check_move_message_to_stream(UserProfile.ROLE_MEMBER)
+
+ (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
+ "othello", "old_stream_4", "new_stream_4", "test"
+ )
+
+ # Check when stream_post_policy is STREAM_POST_POLICY_EVERYONE.
+ # In this case also, guest is not allowed as we do not allow guest to move
+ # messages between streams in any case, so stream_post_policy of new stream does
+ # not matter.
+ do_change_stream_post_policy(new_stream, Stream.STREAM_POST_POLICY_EVERYONE)
+ do_set_realm_property(
+ user_profile.realm, "waiting_period_threshold", 100000, acting_user=None
+ )
+ check_move_message_to_stream(
+ UserProfile.ROLE_GUEST, "You don't have permission to move this message"
+ )
+ check_move_message_to_stream(UserProfile.ROLE_MEMBER)
def test_move_message_to_stream_with_content(self) -> None:
(user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
| Relax restrictions on moving messages to other streams
Following #6427 and #14498, we should be able to make the feature to move messages to new streams more accessible. There are several directions I'd expect this to take:
* [x] Allowing administrators (eventually perhaps stream owners) to move messages to-and-from private streams. We'll need to take some care about how to handle the announcement messages about these moves (From a security perspective, it perhaps should say "To a private stream" or "From a private stream" in the automated tombstone messages, rather than naming which stream it went to? Not sure.)
* [ ] Allowing non-administrators to use this feature. I'm guessing we'd extend the options for "who can edit message topics" to include values for who can move messages between streams.
* [x] Offering the option to move a topic to another stream in the main message-edit UI on a message (#13912 primarily implements an out-of-band version right now).
At the moment, I'd like to use this issue thread for feedback on what the semantics/options for this feature should be.
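For reference, the role-based policies this issue proposes extending all reduce to a check of roughly the following shape (a simplified, self-contained sketch; the constant values and function name are illustrative, not Zulip's actual `settings_data` code):

```python
from datetime import datetime, timedelta

# Illustrative policy codes; Zulip keeps these in settings_config.common_policy_values.
BY_MEMBERS, BY_ADMINS_ONLY, BY_FULL_MEMBERS, BY_MODERATORS = 1, 2, 3, 4

def user_matches_policy(policy, *, is_admin, is_moderator=False,
                        is_guest=False, date_joined=None,
                        waiting_period_days=0):
    """Simplified shape of the role checks the frontend tests above exercise."""
    if is_admin:
        return True
    if policy == BY_ADMINS_ONLY:
        return False
    if policy == BY_MODERATORS:
        return is_moderator
    if is_guest:
        return False
    if policy == BY_FULL_MEMBERS:
        # "Full members" are members whose tenure exceeds the realm's
        # waiting_period_threshold.
        tenure = datetime.now() - (date_joined or datetime.now())
        return tenure >= timedelta(days=waiting_period_days)
    return True  # BY_MEMBERS

print(user_matches_policy(BY_MODERATORS, is_admin=False, is_moderator=True))  # True
print(user_matches_policy(BY_MEMBERS, is_admin=False, is_guest=True))  # False
```

Each new `realm_*_policy` setting (such as `move_messages_between_streams_policy`) can then reuse one shared checker instead of duplicating this ladder per feature, which is exactly the deduplication the frontend test refactor above performs.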
| Hello @zulip/server-message-view members, this issue was labeled with the "area: message-editing" label, so you may want to check it out!
<!-- areaLabelAddition -->
Comment on 1)
I don't see a need for special care here:
If a topic is moved from a private stream to a public one, its content is becoming public anyway, so why hide where it came from? The same applies the other way around: if a topic was public before, i.e. everybody has seen it anyway, what's the point in hiding this fact once it's available only to a subset of those same people?
It would indeed be nice to allow administrators to decide whether normal users can use this feature or not. I could imagine some users are less careful/knowledgeable and might move topics by accident, not knowing the implications. Some administrators may decide to educate their users, but others may want to simply disable the functionality altogether/keep it in their hands.
> If a topic is moved from a private stream to a public one its content is anyway becoming public so why hide where this content came from? Same the other way around: If a topic was public before, i.e. everybody has seen it anyway, what's the point in hiding this fact once it's available only to a subset of those same people?
@andreas-bulling - just because the content of a message is public doesn't mean that you'd want to make the name of a private stream public. I think the "to/from a private stream" approach in the tombstones makes sense; it minimizes information leakage.
I agree that the scenario "name of a topic is more private than its content" is possible in theory, but highly unlikely in practice, I think (we are talking about moving a private topic to a public one, just to be on the same page; moving from public to private is not critical, I believe we can easily agree).
The question is whether this arguably small likelihood of practical relevance warrants making the UI more complex, but that is not up to me to decide. I simply think few people will ever use this feature, so as a user I'd prefer a leaner UI.
Another idea would be to make the tombstones optional (with a checkbox for whether to create tombstone messages or not). I think there are cases where the topic only has one message that's a quick question, and the tombstones might feel like excessive overhead (or where someone is doing an administrative merge of two streams).
I think making the tombstone messages say "from a private stream" would be reasonable.
I've been looking through the pull requests a bit.
How far is the implementation of this feature? Especially 1) would be very helpful for a workflow we're trying to build up right now.
To give a bit of feedback on the ideas: I think the newest idea (by Tim) summarizes this best: tombstones should be optional, and having them say "from/to a private stream" seems like a good idea to prevent the potential leaking of confidential information.
This issue is a bit out of date; we've already implemented making the tombstones optional in the 3.0 release, though we haven't added the "to a private stream" text change.
The much more complicated issue involves UserMessage objects (which track which users actually received the message, and is used in private streams without shared history to determine which users should have access to which messages). A wrong implementation would be the naive one that preserves those objects; the end result would be that users who were on the public stream when messages were sent could still access them via the API even though they were moved to a private stream (because they'd still be part of that user's private history). We resolved the analog of that issue for guest users in 3.0, but need to resolve the similar issue for private streams.
I think given the work we've already done, doing the groundwork to allow moving messages TO private streams won't actually be very hard; @amanagr are you up for making this happen?
(And then we can think about the FROM side as well -- that is simpler in many ways assuming we're OK with the security implications)
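The UserMessage subtlety described above can be illustrated with a minimal, self-contained sketch; the `Message` dataclass and the `user_message_rows` set below are stand-ins for Zulip's real models, not its actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    id: int

# Stand-in for Zulip's UserMessage table: (user_id, message_id) pairs
# recording which messages each user actually received.
user_message_rows = {(30, 1), (30, 3)}

def accessible_messages(user_id, messages, history_public=False):
    """Keep only the messages the acting user may access.

    In a private stream without shared history, access follows the
    per-user received-message records, not the current subscription,
    so a message sent while the user was unsubscribed stays out.
    """
    if history_public:
        return list(messages)
    return [m for m in messages if (user_id, m.id) in user_message_rows]

topic = [Message(1), Message(2), Message(3)]
movable = accessible_messages(30, topic)
print([m.id for m in movable])  # [1, 3]
```

The naive implementation would move all three messages; filtering first is what the `test_move_message_from_private_stream_message_access_checks` test in the patch verifies, where the inaccessible message stays behind in the private stream.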
Sure! I will try to pick this up soon.
Any new developments on this?
@Lithimlin thanks for pinging on this issue; I can try to get some positive work done on it this week. It seems simple enough.
This private stream feature was implemented via https://github.com/zulip/zulip/issues/16284. I think the last bit of this will likely be that we'll include stream administrators in who can move messages between streams.
Can this be used to rename a topic as well? | 2021-04-08T19:56:07 |
zulip/zulip | 18,054 | zulip__zulip-18054 | [
"17969"
] | f71b591f95c3883550adef090068d5523ab13cdf | diff --git a/zerver/lib/integrations.py b/zerver/lib/integrations.py
--- a/zerver/lib/integrations.py
+++ b/zerver/lib/integrations.py
@@ -398,6 +398,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
WebhookIntegration("intercom", ["customer-support"], display_name="Intercom"),
WebhookIntegration("jira", ["project-management"], display_name="JIRA"),
WebhookIntegration("jotform", ["misc"], display_name="Jotform"),
+ WebhookIntegration("json", ["misc"], display_name="JSON formatter"),
WebhookIntegration("librato", ["monitoring"]),
WebhookIntegration("mention", ["marketing"], display_name="Mention"),
WebhookIntegration("netlify", ["continuous-integration", "deployment"], display_name="Netlify"),
@@ -730,6 +731,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
"intercom": [ScreenshotConfig("conversation_admin_replied.json")],
"jira": [ScreenshotConfig("created_v1.json")],
"jotform": [ScreenshotConfig("response.json")],
+ "json": [ScreenshotConfig("json_github_push__1_commit.json")],
"librato": [ScreenshotConfig("three_conditions_alert.json", payload_as_query_param=True)],
"mention": [ScreenshotConfig("webfeeds.json")],
"netlify": [ScreenshotConfig("deploy_building.json")],
diff --git a/zerver/webhooks/json/__init__.py b/zerver/webhooks/json/__init__.py
new file mode 100644
diff --git a/zerver/webhooks/json/view.py b/zerver/webhooks/json/view.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/json/view.py
@@ -0,0 +1,39 @@
+import json
+from typing import Any, Dict
+
+from django.http import HttpRequest, HttpResponse
+
+from zerver.decorator import REQ, has_request_variables, webhook_view
+from zerver.lib.response import json_success
+from zerver.lib.webhooks.common import check_send_webhook_message
+from zerver.models import UserProfile
+
+JSON_MESSAGE_TEMPLATE = """
+```json
+{webhook_payload}
+```
+""".strip()
+
+
+@webhook_view("JSON")
+@has_request_variables
+def api_json_webhook(
+ request: HttpRequest,
+ user_profile: UserProfile,
+ payload: Dict[str, Any] = REQ(argument_type="body"),
+) -> HttpResponse:
+
+ body = get_body_for_http_request(payload)
+ subject = get_subject_for_http_request(payload)
+
+ check_send_webhook_message(request, user_profile, subject, body)
+ return json_success()
+
+
+def get_subject_for_http_request(payload: Dict[str, Any]) -> str:
+ return "JSON"
+
+
+def get_body_for_http_request(payload: Dict[str, Any]) -> str:
+ prettypayload = json.dumps(payload, indent=2)
+ return JSON_MESSAGE_TEMPLATE.format(webhook_payload=prettypayload, sort_keys=True)
| diff --git a/zerver/webhooks/json/tests.py b/zerver/webhooks/json/tests.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/json/tests.py
@@ -0,0 +1,54 @@
+import json
+
+from zerver.lib.test_classes import WebhookTestCase
+
+
+class JsonHookTests(WebhookTestCase):
+ STREAM_NAME = "json"
+ URL_TEMPLATE = "/api/v1/external/json?api_key={api_key}&stream={stream}"
+ FIXTURE_DIR_NAME = "json"
+
+ def test_json_github_push__1_commit_message(self) -> None:
+ """
+ Tests if json github push 1 commit is handled correctly
+ """
+ with open("zerver/webhooks/json/fixtures/json_github_push__1_commit.json") as f:
+ original_fixture = json.load(f)
+
+ expected_topic = "JSON"
+ expected_message = """```json
+{original_fixture}
+```""".format(
+ original_fixture=json.dumps(original_fixture, indent=2)
+ )
+ self.check_webhook("json_github_push__1_commit", expected_topic, expected_message)
+
+ def test_json_pingdom_http_up_to_down_message(self) -> None:
+ """
+ Tests if json pingdom http up to down is handled correctly
+ """
+ with open("zerver/webhooks/json/fixtures/json_pingdom_http_up_to_down.json") as f:
+ original_fixture = json.load(f)
+
+ expected_topic = "JSON"
+ expected_message = """```json
+{original_fixture}
+```""".format(
+ original_fixture=json.dumps(original_fixture, indent=2)
+ )
+ self.check_webhook("json_pingdom_http_up_to_down", expected_topic, expected_message)
+
+ def test_json_sentry_event_for_exception_js_message(self) -> None:
+ """
+ Tests if json sentry event for exception js is handled correctly
+ """
+ with open("zerver/webhooks/json/fixtures/json_sentry_event_for_exception_js.json") as f:
+ original_fixture = json.load(f)
+
+ expected_topic = "JSON"
+ expected_message = """```json
+{original_fixture}
+```""".format(
+ original_fixture=json.dumps(original_fixture, indent=2)
+ )
+ self.check_webhook("json_sentry_event_for_exception_js", expected_topic, expected_message)
| Create Integration for Json Printing any Webhook Payload
Useful for webhook testing
https://chat.zulip.org/#narrow/stream/127-integrations/topic/webhook.20testing
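The core of such an integration is just pretty-printing whatever payload arrives into a Markdown code block. A minimal sketch, with an illustrative function name rather than Zulip's actual handler:

```python
import json

def format_payload(payload: dict) -> str:
    """Render an arbitrary webhook payload as a Markdown json code block."""
    fence = "```"
    # Sorting keys keeps repeated deliveries of the same payload diffable.
    pretty = json.dumps(payload, indent=2, sort_keys=True)
    return f"{fence}json\n{pretty}\n{fence}"

print(format_payload({"action": "opened", "number": 7}))
```

Because the output is payload-agnostic, one fixed topic (e.g. "JSON") is enough, which is what the tests in the patch assert.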
| @zulipbot claim
@zulipbot add "area: integrations"
Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2021-04-09T01:53:28 |
zulip/zulip | 18,065 | zulip__zulip-18065 | [
"17928"
] | d31f01bd0f033fcc7c923ba44993b1da19fc6e47 | diff --git a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
--- a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
+++ b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
@@ -2,6 +2,7 @@
import os
import re
from typing import Callable, Iterator, List, Optional, Union
+from urllib.parse import urlparse
import scrapy
from scrapy.http import Request, Response
@@ -11,6 +12,15 @@
from scrapy.utils.url import url_has_any_extension
from twisted.python.failure import Failure
+EXCLUDED_DOMAINS = [
+ # Returns 429 Rate-Limited Errors
+ "github.com",
+ "gist.github.com",
+ # Returns 503 Errors
+ "www.amazon.com",
+ "gitlab.com",
+]
+
EXCLUDED_URLS = [
# Google calendar returns 404s on HEAD requests unconditionally
"https://calendar.google.com/calendar/[email protected]",
@@ -19,6 +29,8 @@
# Returns 404 to HEAD requests unconditionally
"https://www.git-tower.com/blog/command-line-cheat-sheet/",
"https://marketplace.visualstudio.com/items?itemName=rafaelmaiolla.remote-vscode",
+ "https://www.transifex.com/zulip/zulip/announcements/",
+ "https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh",
# Requires authentication
"https://circleci.com/gh/zulip/zulip/tree/master",
"https://circleci.com/gh/zulip/zulip/16617",
@@ -164,6 +176,10 @@ def _make_requests(self, url: str) -> Iterator[Request]:
callback = self.check_fragment
if getattr(self, "skip_external", False) and self._is_external_link(url):
return
+ if urlparse(url).netloc in EXCLUDED_DOMAINS:
+ return
+ if url in EXCLUDED_URLS:
+ return
yield Request(
url,
method=method,
@@ -204,13 +220,12 @@ def retry_request_with_get(self, request: Request) -> Iterator[Request]:
request.dont_filter = True
yield request
- def exclude_error(self, url: str) -> bool:
- return url in EXCLUDED_URLS
-
def error_callback(self, failure: Failure) -> Optional[Union[Failure, Iterator[Request]]]:
if isinstance(failure.value, HttpError):
response = failure.value.response
- if self.exclude_error(response.url):
+ # Hack: The filtering above does not catch this URL,
+ # likely due to a redirect.
+ if urlparse(response.url).netloc == "idmsa.apple.com":
return None
if response.status == 405 and response.request.method == "HEAD":
# Method 'HEAD' not allowed, repeat request with 'GET'
| test-documentation: Fix output spam from external links
Currently, a `test-documentation` run in a development environment (i.e. without `--skip-external-links`) prints about two screenfuls of errors like this:
```
2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip/commit/49dbd85a8985b12666087f9ea36acb6f7da0aa4f
2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip-desktop
2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip/issues/10976
```
I imagine this is really confusing for anyone new to our ReadTheDocs documentation.
Most of these are 429 errors because GitHub doesn't want automation hitting their servers all the time; we could probably just suppress those that fit a pattern we expect to be statically correct (e.g. anything in github.com/zulip/ that is not a tree path).
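The patch above filters by hostname before any request is issued; the same idea in isolation (the domain list here is illustrative):

```python
from urllib.parse import urlparse

# Hosts that rate-limit or otherwise reject crawler requests.
EXCLUDED_DOMAINS = {"github.com", "gist.github.com"}

def should_crawl(url: str) -> bool:
    """Skip rate-limited hosts instead of reporting their 429s as errors."""
    return urlparse(url).netloc not in EXCLUDED_DOMAINS

print(should_crawl("https://github.com/zulip/zulip/issues/10976"))  # False
print(should_crawl("https://zulip.readthedocs.io/en/latest/"))  # True
```

Filtering at request-creation time (rather than in the error callback) means the excluded links never consume a request slot, which also speeds the crawl up.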
| Hello @zulip/server-tooling, @zulip/server-user-docs members, this issue was labeled with the "area: documentation (user)", "area: tooling" labels, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim | 2021-04-09T17:43:28 |
|
zulip/zulip | 18,082 | zulip__zulip-18082 | [
"18022"
] | 1288dcbaafcad06307357172e3de37e06edc52a5 | diff --git a/zerver/webhooks/clubhouse/view.py b/zerver/webhooks/clubhouse/view.py
--- a/zerver/webhooks/clubhouse/view.py
+++ b/zerver/webhooks/clubhouse/view.py
@@ -1,5 +1,5 @@
from functools import partial
-from typing import Any, Callable, Dict, Optional
+from typing import Any, Callable, Dict, Generator, List, Optional
from django.http import HttpRequest, HttpResponse
@@ -30,20 +30,22 @@
"The name of the {entity} {name_template} was changed from:\n"
"``` quote\n{old}\n```\nto\n``` quote\n{new}\n```"
)
-ARCHIVED_TEMPLATE = "The {entity} {name_template} was {action}."
-STORY_TASK_TEMPLATE = "Task **{task_description}** was {action} the story {name_template}."
+ARCHIVED_TEMPLATE = "The {entity} {name_template} was {operation}."
+STORY_TASK_TEMPLATE = "Task **{task_description}** was {operation} the story {name_template}."
STORY_TASK_COMPLETED_TEMPLATE = (
"Task **{task_description}** ({name_template}) was completed. :tada:"
)
STORY_ADDED_REMOVED_EPIC_TEMPLATE = (
- "The story {story_name_template} was {action} the epic {epic_name_template}."
+ "The story {story_name_template} was {operation} the epic {epic_name_template}."
)
STORY_EPIC_CHANGED_TEMPLATE = "The story {story_name_template} was moved from {old_epic_name_template} to {new_epic_name_template}."
STORY_ESTIMATE_TEMPLATE = "The estimate for the story {story_name_template} was set to {estimate}."
FILE_ATTACHMENT_TEMPLATE = (
"A {type} attachment `{file_name}` was added to the story {name_template}."
)
-STORY_LABEL_TEMPLATE = "The label **{label_name}** was added to the story {name_template}."
+LABEL_TEMPLATE = "**{name}**"
+STORY_LABEL_TEMPLATE = "The label {labels} was added to the story {name_template}."
+STORY_LABEL_PLURAL_TEMPLATE = "The labels {labels} were added to the story {name_template}."
STORY_UPDATE_PROJECT_TEMPLATE = (
"The story {name_template} was moved from the **{old}** project to **{new}**."
)
@@ -52,12 +54,16 @@
)
DELETE_TEMPLATE = "The {entity_type} **{name}** was deleted."
STORY_UPDATE_OWNER_TEMPLATE = "New owner added to the story {name_template}."
+TRAILING_WORKFLOW_STATE_CHANGE_TEMPLATE = " ({old} -> {new})"
STORY_GITHUB_PR_TEMPLATE = (
- "New GitHub PR [#{name}]({url}) opened for story {name_template} ({old} -> {new})."
-)
-STORY_GITHUB_BRANCH_TEMPLATE = (
- "New GitHub branch [{name}]({url}) associated with story {name_template} ({old} -> {new})."
+ "New GitHub PR [#{name}]({url}) opened for story {name_template}{workflow_state_template}."
)
+STORY_GITHUB_COMMENT_PR_TEMPLATE = "Existing GitHub PR [#{name}]({url}) associated with story {name_template}{workflow_state_template}."
+STORY_GITHUB_BRANCH_TEMPLATE = "New GitHub branch [{name}]({url}) associated with story {name_template}{workflow_state_template}."
+STORY_UPDATE_BATCH_TEMPLATE = "The story {name_template} {templates}{workflow_state_template}."
+STORY_UPDATE_BATCH_CHANGED_TEMPLATE = "{operation} from {sub_templates}"
+STORY_UPDATE_BATCH_CHANGED_SUB_TEMPLATE = "{entity_type} **{old}** to **{new}**"
+STORY_UPDATE_BATCH_ADD_REMOVE_TEMPLATE = "{operation} with {entity}"
def get_action_with_primary_id(payload: Dict[str, Any]) -> Dict[str, Any]:
@@ -68,10 +74,13 @@ def get_action_with_primary_id(payload: Dict[str, Any]) -> Dict[str, Any]:
return action_with_primary_id
-def get_event(payload: Dict[str, Any]) -> Optional[str]:
- action = get_action_with_primary_id(payload)
+def get_event(payload: Dict[str, Any], action: Dict[str, Any]) -> Optional[str]:
event = "{}_{}".format(action["entity_type"], action["action"])
+ # We only consider the change to be a batch update only if there are multiple stories (thus there is no primary_id)
+ if event == "story_update" and payload.get("primary_id") is None:
+ return "{}_{}".format(event, "batch")
+
if event in IGNORED_EVENTS:
return None
@@ -107,19 +116,16 @@ def get_event(payload: Dict[str, Any]) -> Optional[str]:
return event
-def get_topic_function_based_on_type(payload: Dict[str, Any]) -> Any:
- entity_type = get_action_with_primary_id(payload)["entity_type"]
+def get_topic_function_based_on_type(payload: Dict[str, Any], action: Dict[str, Any]) -> Any:
+ entity_type = action["entity_type"]
return EVENT_TOPIC_FUNCTION_MAPPER.get(entity_type)
-def get_delete_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
+def get_delete_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
return DELETE_TEMPLATE.format(**action)
-def get_story_create_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
-
+def get_story_create_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
if action.get("epic_id") is None:
message = "New story [{name}]({app_url}) of type **{story_type}** was created."
kwargs = action
@@ -138,13 +144,12 @@ def get_story_create_body(payload: Dict[str, Any]) -> str:
return message.format(**kwargs)
-def get_epic_create_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
+def get_epic_create_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
message = "New epic **{name}**({state}) was created."
return message.format(**action)
-def get_comment_added_body(payload: Dict[str, Any], entity: str) -> str:
+def get_comment_added_body(payload: Dict[str, Any], action: Dict[str, Any], entity: str) -> str:
actions = payload["actions"]
kwargs = {"entity": entity}
for action in actions:
@@ -160,8 +165,9 @@ def get_comment_added_body(payload: Dict[str, Any], entity: str) -> str:
return COMMENT_ADDED_TEMPLATE.format(**kwargs)
-def get_update_description_body(payload: Dict[str, Any], entity: str) -> str:
- action = get_action_with_primary_id(payload)
+def get_update_description_body(
+ payload: Dict[str, Any], action: Dict[str, Any], entity: str
+) -> str:
desc = action["changes"]["description"]
kwargs = {
@@ -184,8 +190,7 @@ def get_update_description_body(payload: Dict[str, Any], entity: str) -> str:
return body
-def get_epic_update_state_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
+def get_epic_update_state_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
state = action["changes"]["state"]
kwargs = {
"entity": "epic",
@@ -197,8 +202,7 @@ def get_epic_update_state_body(payload: Dict[str, Any]) -> str:
return STATE_CHANGED_TEMPLATE.format(**kwargs)
-def get_story_update_state_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
+def get_story_update_state_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
workflow_state_id = action["changes"]["workflow_state_id"]
references = payload["references"]
@@ -222,8 +226,7 @@ def get_story_update_state_body(payload: Dict[str, Any]) -> str:
return STATE_CHANGED_TEMPLATE.format(**kwargs)
-def get_update_name_body(payload: Dict[str, Any], entity: str) -> str:
- action = get_action_with_primary_id(payload)
+def get_update_name_body(payload: Dict[str, Any], action: Dict[str, Any], entity: str) -> str:
name = action["changes"]["name"]
kwargs = {
"entity": entity,
@@ -238,32 +241,29 @@ def get_update_name_body(payload: Dict[str, Any], entity: str) -> str:
return NAME_CHANGED_TEMPLATE.format(**kwargs)
-def get_update_archived_body(payload: Dict[str, Any], entity: str) -> str:
- primary_action = get_action_with_primary_id(payload)
- archived = primary_action["changes"]["archived"]
+def get_update_archived_body(payload: Dict[str, Any], action: Dict[str, Any], entity: str) -> str:
+ archived = action["changes"]["archived"]
if archived["new"]:
- action = "archived"
+ operation = "archived"
else:
- action = "unarchived"
+ operation = "unarchived"
kwargs = {
"entity": entity,
"name_template": get_name_template(entity).format(
- name=primary_action["name"],
- app_url=primary_action.get("app_url"),
+ name=action["name"],
+ app_url=action.get("app_url"),
),
- "action": action,
+ "operation": operation,
}
return ARCHIVED_TEMPLATE.format(**kwargs)
-def get_story_task_body(payload: Dict[str, Any], action: str) -> str:
- primary_action = get_action_with_primary_id(payload)
-
+def get_story_task_body(payload: Dict[str, Any], action: Dict[str, Any], operation: str) -> str:
kwargs = {
- "task_description": primary_action["description"],
- "action": action,
+ "task_description": action["description"],
+ "operation": operation,
}
for a in payload["actions"]:
@@ -276,9 +276,7 @@ def get_story_task_body(payload: Dict[str, Any], action: str) -> str:
return STORY_TASK_TEMPLATE.format(**kwargs)
-def get_story_task_completed_body(payload: Dict[str, Any]) -> Optional[str]:
- action = get_action_with_primary_id(payload)
-
+def get_story_task_completed_body(payload: Dict[str, Any], action: Dict[str, Any]) -> Optional[str]:
kwargs = {
"task_description": action["description"],
}
@@ -297,9 +295,7 @@ def get_story_task_completed_body(payload: Dict[str, Any]) -> Optional[str]:
return None
-def get_story_update_epic_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
-
+def get_story_update_epic_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
kwargs = {
"story_name_template": STORY_NAME_TEMPLATE.format(
name=action["name"],
@@ -321,17 +317,15 @@ def get_story_update_epic_body(payload: Dict[str, Any]) -> str:
return STORY_EPIC_CHANGED_TEMPLATE.format(**kwargs)
elif new_id:
kwargs["epic_name_template"] = kwargs["new_epic_name_template"]
- kwargs["action"] = "added to"
+ kwargs["operation"] = "added to"
else:
kwargs["epic_name_template"] = kwargs["old_epic_name_template"]
- kwargs["action"] = "removed from"
+ kwargs["operation"] = "removed from"
return STORY_ADDED_REMOVED_EPIC_TEMPLATE.format(**kwargs)
-def get_story_update_estimate_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
-
+def get_story_update_estimate_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
kwargs = {
"story_name_template": STORY_NAME_TEMPLATE.format(
name=action["name"],
@@ -357,36 +351,52 @@ def get_reference_by_id(payload: Dict[str, Any], ref_id: int) -> Dict[str, Any]:
return ref
-def get_story_create_github_entity_body(payload: Dict[str, Any], entity: str) -> str:
- action = get_action_with_primary_id(payload)
+def get_secondary_actions_with_param(
+ payload: Dict[str, Any], entity: str, changed_attr: str
+) -> Generator[Dict[str, Any], None, None]:
+ # This function is a generator for secondary actions that have the required changed attributes,
+ # i.e. a "story" that has "pull_request_ids" changed.
+ for action in payload["actions"]:
+ if action["entity_type"] == entity and action["changes"].get(changed_attr) is not None:
+ yield action
- story: Dict[str, Any] = {}
- for a in payload["actions"]:
- if a["entity_type"] == "story" and a["changes"].get("workflow_state_id") is not None:
- story = a
- new_state_id = story["changes"]["workflow_state_id"]["new"]
- old_state_id = story["changes"]["workflow_state_id"]["old"]
- new_state = get_reference_by_id(payload, new_state_id)["name"]
- old_state = get_reference_by_id(payload, old_state_id)["name"]
+def get_story_create_github_entity_body(
+ payload: Dict[str, Any], action: Dict[str, Any], entity: str
+) -> str:
+ pull_request_action: Dict[str, Any] = get_action_with_primary_id(payload)
kwargs = {
- "name_template": STORY_NAME_TEMPLATE.format(**story),
- "name": action.get("number") if entity == "pull-request" else action.get("name"),
- "url": action["url"],
- "new": new_state,
- "old": old_state,
+ "name_template": STORY_NAME_TEMPLATE.format(**action),
+ "name": pull_request_action.get("number")
+ if entity == "pull-request" or entity == "pull-request-comment"
+ else pull_request_action.get("name"),
+ "url": pull_request_action["url"],
+ "workflow_state_template": "",
}
- template = (
- STORY_GITHUB_PR_TEMPLATE if entity == "pull-request" else STORY_GITHUB_BRANCH_TEMPLATE
- )
+ # Sometimes the workflow state of the story will not be changed when linking to a PR.
+ if action["changes"].get("workflow_state_id") is not None:
+ new_state_id = action["changes"]["workflow_state_id"]["new"]
+ old_state_id = action["changes"]["workflow_state_id"]["old"]
+ new_state = get_reference_by_id(payload, new_state_id)["name"]
+ old_state = get_reference_by_id(payload, old_state_id)["name"]
+ kwargs["workflow_state_template"] = TRAILING_WORKFLOW_STATE_CHANGE_TEMPLATE.format(
+ new=new_state, old=old_state
+ )
+
+ if entity == "pull-request":
+ template = STORY_GITHUB_PR_TEMPLATE
+ elif entity == "pull-request-comment":
+ template = STORY_GITHUB_COMMENT_PR_TEMPLATE
+ else:
+ template = STORY_GITHUB_BRANCH_TEMPLATE
return template.format(**kwargs)
-def get_story_update_attachment_body(payload: Dict[str, Any]) -> Optional[str]:
- action = get_action_with_primary_id(payload)
-
+def get_story_update_attachment_body(
+ payload: Dict[str, Any], action: Dict[str, Any]
+) -> Optional[str]:
kwargs = {
"name_template": STORY_NAME_TEMPLATE.format(
name=action["name"],
@@ -410,9 +420,27 @@ def get_story_update_attachment_body(payload: Dict[str, Any]) -> Optional[str]:
return FILE_ATTACHMENT_TEMPLATE.format(**kwargs)
-def get_story_label_body(payload: Dict[str, Any]) -> Optional[str]:
- action = get_action_with_primary_id(payload)
+def get_story_joined_label_list(
+ payload: Dict[str, Any], action: Dict[str, Any], label_ids_added: List[int]
+) -> str:
+ labels = []
+ for label_id in label_ids_added:
+ label_name = ""
+
+ for action in payload["actions"]:
+ if action.get("id") == label_id:
+ label_name = action.get("name", "")
+
+ if label_name == "":
+ label_name = get_reference_by_id(payload, label_id).get("name", "")
+
+ labels.append(LABEL_TEMPLATE.format(name=label_name))
+
+ return ", ".join(labels)
+
+
+def get_story_label_body(payload: Dict[str, Any], action: Dict[str, Any]) -> Optional[str]:
kwargs = {
"name_template": STORY_NAME_TEMPLATE.format(
name=action["name"],
@@ -421,29 +449,20 @@ def get_story_label_body(payload: Dict[str, Any]) -> Optional[str]:
}
label_ids_added = action["changes"]["label_ids"].get("adds")
- # If this is a payload for when a label is removed, ignore it
+ # If this is a payload for when no label is added, ignore it
if not label_ids_added:
return None
- label_id = label_ids_added[0]
-
- label_name = ""
- for action in payload["actions"]:
- if action["id"] == label_id:
- label_name = action.get("name", "")
-
- if not label_name:
- for reference in payload["references"]:
- if reference["id"] == label_id:
- label_name = reference.get("name", "")
+ kwargs.update(labels=get_story_joined_label_list(payload, action, label_ids_added))
- kwargs.update(label_name=label_name)
-
- return STORY_LABEL_TEMPLATE.format(**kwargs)
+ return (
+ STORY_LABEL_TEMPLATE.format(**kwargs)
+ if len(label_ids_added) == 1
+ else STORY_LABEL_PLURAL_TEMPLATE.format(**kwargs)
+ )
-def get_story_update_project_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
+def get_story_update_project_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
kwargs = {
"name_template": STORY_NAME_TEMPLATE.format(
name=action["name"],
@@ -462,8 +481,7 @@ def get_story_update_project_body(payload: Dict[str, Any]) -> str:
return STORY_UPDATE_PROJECT_TEMPLATE.format(**kwargs)
-def get_story_update_type_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
+def get_story_update_type_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
kwargs = {
"name_template": STORY_NAME_TEMPLATE.format(
name=action["name"],
@@ -476,8 +494,7 @@ def get_story_update_type_body(payload: Dict[str, Any]) -> str:
return STORY_UPDATE_TYPE_TEMPLATE.format(**kwargs)
-def get_story_update_owner_body(payload: Dict[str, Any]) -> str:
- action = get_action_with_primary_id(payload)
+def get_story_update_owner_body(payload: Dict[str, Any], action: Dict[str, Any]) -> str:
kwargs = {
"name_template": STORY_NAME_TEMPLATE.format(
name=action["name"],
@@ -488,8 +505,104 @@ def get_story_update_owner_body(payload: Dict[str, Any]) -> str:
return STORY_UPDATE_OWNER_TEMPLATE.format(**kwargs)
-def get_entity_name(payload: Dict[str, Any], entity: Optional[str] = None) -> Optional[str]:
- action = get_action_with_primary_id(payload)
+def get_story_update_batch_body(payload: Dict[str, Any], action: Dict[str, Any]) -> Optional[str]:
+ # When the user selects one or more stories with the checkbox, they can perform
+ # a batch update on multiple stories while changing multiple attribtues at the
+ # same time.
+ changes = action["changes"]
+ kwargs = {
+ "name_template": STORY_NAME_TEMPLATE.format(
+ name=action["name"],
+ app_url=action["app_url"],
+ ),
+ "workflow_state_template": "",
+ }
+
+ templates = []
+ last_change = "other"
+
+ move_sub_templates = []
+ if "epic_id" in changes:
+ last_change = "epic"
+ move_sub_templates.append(
+ STORY_UPDATE_BATCH_CHANGED_SUB_TEMPLATE.format(
+ entity_type="Epic",
+ old=get_reference_by_id(payload, changes["epic_id"].get("old")).get("name"),
+ new=get_reference_by_id(payload, changes["epic_id"].get("new")).get("name"),
+ )
+ )
+ if "project_id" in changes:
+ last_change = "project"
+ move_sub_templates.append(
+ STORY_UPDATE_BATCH_CHANGED_SUB_TEMPLATE.format(
+ entity_type="Project",
+ old=get_reference_by_id(payload, changes["project_id"].get("old")).get("name"),
+ new=get_reference_by_id(payload, changes["project_id"].get("new")).get("name"),
+ )
+ )
+ if len(move_sub_templates) > 0:
+ templates.append(
+ STORY_UPDATE_BATCH_CHANGED_TEMPLATE.format(
+ operation="was moved",
+ sub_templates=", ".join(move_sub_templates),
+ )
+ )
+
+ if "story_type" in changes:
+ last_change = "type"
+ templates.append(
+ STORY_UPDATE_BATCH_CHANGED_TEMPLATE.format(
+ operation="{} changed".format("was" if len(templates) == 0 else "and"),
+ sub_templates=STORY_UPDATE_BATCH_CHANGED_SUB_TEMPLATE.format(
+ entity_type="type",
+ old=changes["story_type"].get("old"),
+ new=changes["story_type"].get("new"),
+ ),
+ )
+ )
+
+ if "label_ids" in changes:
+ last_change = "label"
+ labels = get_story_joined_label_list(payload, action, changes["label_ids"].get("adds"))
+ templates.append(
+ STORY_UPDATE_BATCH_ADD_REMOVE_TEMPLATE.format(
+ operation="{} added".format("was" if len(templates) == 0 else "and"),
+ entity="the new label{plural} {labels}".format(
+ plural="s" if len(changes["label_ids"]) > 1 else "", labels=labels
+ ),
+ )
+ )
+
+ if "workflow_state_id" in changes:
+ last_change = "state"
+ kwargs.update(
+ workflow_state_template=TRAILING_WORKFLOW_STATE_CHANGE_TEMPLATE.format(
+ old=get_reference_by_id(payload, changes["workflow_state_id"].get("old")).get(
+ "name"
+ ),
+ new=get_reference_by_id(payload, changes["workflow_state_id"].get("new")).get(
+ "name"
+ ),
+ )
+ )
+
+ # Use the default template for a state change if it is the only change.
+ if len(templates) <= 1 or (len(templates) == 0 and last_change == "state"):
+ event: str = "{}_{}".format("story_update", last_change)
+ alternative_body_func = EVENT_BODY_FUNCTION_MAPPER.get(event)
+ # If last_change is not one of "epic", "project", "type", "label" and "state"
+ # we should ignore the action as there is no way for us to render the changes.
+ if alternative_body_func is None:
+ return None
+ return alternative_body_func(payload, action)
+
+ kwargs.update(templates=", ".join(templates))
+ return STORY_UPDATE_BATCH_TEMPLATE.format(**kwargs)
+
+
+def get_entity_name(
+ payload: Dict[str, Any], action: Dict[str, Any], entity: Optional[str] = None
+) -> Optional[str]:
name = action.get("name")
if name is None or action["entity_type"] == "branch":
@@ -511,16 +624,38 @@ def get_name_template(entity: str) -> str:
return EPIC_NAME_TEMPLATE
-EVENT_BODY_FUNCTION_MAPPER: Dict[str, Callable[[Dict[str, Any]], Optional[str]]] = {
+def send_stream_messages_for_actions(
+ request: HttpRequest,
+ user_profile: UserProfile,
+ payload: Dict[str, Any],
+ action: Dict[str, Any],
+ event: str,
+) -> None:
+ body_func = EVENT_BODY_FUNCTION_MAPPER.get(event)
+ topic_func = get_topic_function_based_on_type(payload, action)
+ if body_func is None or topic_func is None:
+ raise UnsupportedWebhookEventType(event)
+
+ topic = topic_func(payload, action)
+ body = body_func(payload, action)
+
+ if topic and body:
+ check_send_webhook_message(request, user_profile, topic, body)
+
+
+EVENT_BODY_FUNCTION_MAPPER: Dict[str, Callable[[Dict[str, Any], Dict[str, Any]], Optional[str]]] = {
"story_update_archived": partial(get_update_archived_body, entity="story"),
"epic_update_archived": partial(get_update_archived_body, entity="epic"),
"story_create": get_story_create_body,
"pull-request_create": partial(get_story_create_github_entity_body, entity="pull-request"),
+ "pull-request_comment": partial(
+ get_story_create_github_entity_body, entity="pull-request-comment"
+ ),
"branch_create": partial(get_story_create_github_entity_body, entity="branch"),
"story_delete": get_delete_body,
"epic_delete": get_delete_body,
- "story-task_create": partial(get_story_task_body, action="added to"),
- "story-task_delete": partial(get_story_task_body, action="removed from"),
+ "story-task_create": partial(get_story_task_body, operation="added to"),
+ "story-task_delete": partial(get_story_task_body, operation="removed from"),
"story-task_update_complete": get_story_task_completed_body,
"story_update_epic": get_story_update_epic_body,
"story_update_estimate": get_story_update_estimate_body,
@@ -538,6 +673,7 @@ def get_name_template(entity: str) -> str:
"story_update_state": get_story_update_state_body,
"epic_update_name": partial(get_update_name_body, entity="epic"),
"story_update_name": partial(get_update_name_body, entity="story"),
+ "story_update_batch": get_story_update_batch_body,
}
EVENT_TOPIC_FUNCTION_MAPPER = {
@@ -554,6 +690,20 @@ def get_name_template(entity: str) -> str:
"story-comment_update",
}
+EVENTS_SECONDARY_ACTIONS_FUNCTION_MAPPER: Dict[
+ str, Callable[[Dict[str, Any]], Generator[Dict[str, Any], None, None]]
+] = {
+ "pull-request_create": partial(
+ get_secondary_actions_with_param, entity="story", changed_attr="pull_request_ids"
+ ),
+ "branch_create": partial(
+ get_secondary_actions_with_param, entity="story", changed_attr="branch_ids"
+ ),
+ "pull-request_comment": partial(
+ get_secondary_actions_with_param, entity="story", changed_attr="pull_request_ids"
+ ),
+}
+
@webhook_view("ClubHouse")
@has_request_variables
@@ -570,18 +720,22 @@ def api_clubhouse_webhook(
if payload is None:
return json_success()
- event = get_event(payload)
- if event is None:
- return json_success()
-
- body_func = EVENT_BODY_FUNCTION_MAPPER.get(event)
- topic_func = get_topic_function_based_on_type(payload)
- if body_func is None or topic_func is None:
- raise UnsupportedWebhookEventType(event)
- topic = topic_func(payload)
- body = body_func(payload)
-
- if topic and body:
- check_send_webhook_message(request, user_profile, topic, body)
+ if payload.get("primary_id") is not None:
+ action = get_action_with_primary_id(payload)
+ primary_actions = [action]
+ else:
+ primary_actions = payload["actions"]
+
+ for primary_action in primary_actions:
+ event = get_event(payload, primary_action)
+ if event is None:
+ continue
+
+ if event in EVENTS_SECONDARY_ACTIONS_FUNCTION_MAPPER:
+ sec_actions_func = EVENTS_SECONDARY_ACTIONS_FUNCTION_MAPPER[event]
+ for sec_action in sec_actions_func(payload):
+ send_stream_messages_for_actions(request, user_profile, payload, sec_action, event)
+ else:
+ send_stream_messages_for_actions(request, user_profile, payload, primary_action, event)
return json_success()
| diff --git a/zerver/webhooks/clubhouse/tests.py b/zerver/webhooks/clubhouse/tests.py
--- a/zerver/webhooks/clubhouse/tests.py
+++ b/zerver/webhooks/clubhouse/tests.py
@@ -1,5 +1,5 @@
import json
-from unittest.mock import MagicMock, patch
+from unittest.mock import MagicMock, call, patch
from zerver.lib.test_classes import WebhookTestCase
@@ -121,7 +121,7 @@ def test_story_task_completed(self) -> None:
expected_message = "Task **A new task for this story** ([Add cool feature!](https://app.clubhouse.io/zulip/story/11)) was completed. :tada:"
self.check_webhook("story_task_complete", "Add cool feature!", expected_message)
- @patch("zerver.lib.webhooks.common.check_send_webhook_message")
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
def test_story_task_incomplete_ignore(self, check_send_webhook_message_mock: MagicMock) -> None:
payload = self.get_body("story_task_not_complete")
result = self.client_post(self.url, payload, content_type="application/json")
@@ -159,7 +159,7 @@ def test_story_file_attachment_added(self) -> None:
expected_message = "A file attachment `zuliprc` was added to the story [Add cool feature!](https://app.clubhouse.io/zulip/story/11)."
self.check_webhook("story_update_add_attachment", "Add cool feature!", expected_message)
- @patch("zerver.lib.webhooks.common.check_send_webhook_message")
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
def test_story_file_attachment_removed_ignore(
self, check_send_webhook_message_mock: MagicMock
) -> None:
@@ -172,13 +172,17 @@ def test_story_label_added(self) -> None:
expected_message = "The label **mockup** was added to the story [An epic story!](https://app.clubhouse.io/zulip/story/23)."
self.check_webhook("story_update_add_label", "An epic story!", expected_message)
+ def test_story_label_multiple_added(self) -> None:
+ expected_message = "The labels **mockup**, **label** were added to the story [An epic story!](https://app.clubhouse.io/zulip/story/23)."
+ self.check_webhook("story_update_add_multiple_labels", "An epic story!", expected_message)
+
def test_story_label_added_label_name_in_actions(self) -> None:
expected_message = "The label **sad** was added to the story [An emotional story!](https://app.clubhouse.io/zulip/story/28)."
self.check_webhook(
"story_update_add_label_name_in_action", "An emotional story!", expected_message
)
- @patch("zerver.lib.webhooks.common.check_send_webhook_message")
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
def test_story_label_removed_ignore(self, check_send_webhook_message_mock: MagicMock) -> None:
payload = self.get_body("story_update_remove_label")
result = self.client_post(self.url, payload, content_type="application/json")
@@ -201,13 +205,180 @@ def test_story_update_add_github_pull_request(self) -> None:
expected_message,
)
+ def test_story_update_add_github_pull_request_without_workflow_state(self) -> None:
+ expected_message = "New GitHub PR [#10](https://github.com/eeshangarg/Scheduler/pull/10) opened for story [Testing pull requests with Story](https://app.clubhouse.io/zulip/story/28)."
+ self.check_webhook(
+ "story_update_add_github_pull_request_without_workflow_state",
+ "Testing pull requests with Story",
+ expected_message,
+ )
+
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
+ def test_story_update_add_github_multiple_pull_requests(
+ self, check_send_webhook_message_mock: MagicMock
+ ) -> None:
+ payload = self.get_body("story_update_add_github_multiple_pull_requests")
+ self.client_post(self.url, payload, content_type="application/json")
+ expected_message = "New GitHub PR [#2](https://github.com/PIG208/test-clubhouse/pull/2) opened for story [{name}]({url}) (Unscheduled -> In Development)."
+ request, user_profile = (
+ check_send_webhook_message_mock.call_args_list[0][0][0],
+ check_send_webhook_message_mock.call_args_list[0][0][1],
+ )
+ expected_list = [
+ call(
+ request,
+ user_profile,
+ "Story1",
+ expected_message.format(
+ name="Story1", url="https://app.clubhouse.io/pig208/story/17"
+ ),
+ ),
+ call(
+ request,
+ user_profile,
+ "Story2",
+ expected_message.format(
+ name="Story2", url="https://app.clubhouse.io/pig208/story/18"
+ ),
+ ),
+ ]
+ self.assertEqual(check_send_webhook_message_mock.call_args_list, expected_list)
+
+ def test_story_update_add_github_pull_request_with_comment(self) -> None:
+ expected_message = "Existing GitHub PR [#2](https://github.com/PIG208/test-clubhouse/pull/2) associated with story [asd2](https://app.clubhouse.io/pig208/story/15)."
+ self.check_webhook(
+ "story_update_add_github_pull_request_with_comment",
+ "asd2",
+ expected_message,
+ )
+
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
+ def test_story_update_add_github_multiple_pull_requests_with_comment(
+ self, check_send_webhook_message_mock: MagicMock
+ ) -> None:
+ payload = self.get_body("story_update_add_github_multiple_pull_requests_with_comment")
+ self.client_post(self.url, payload, content_type="application/json")
+ expected_message = "Existing GitHub PR [#1](https://github.com/PIG208/test-clubhouse/pull/1) associated with story [{name}]({url}) (Unscheduled -> In Development)."
+ request, user_profile = (
+ check_send_webhook_message_mock.call_args_list[0][0][0],
+ check_send_webhook_message_mock.call_args_list[0][0][1],
+ )
+ expected_list = [
+ call(
+ request,
+ user_profile,
+ "new1",
+ expected_message.format(
+ name="new1", url="https://app.clubhouse.io/pig208/story/26"
+ ),
+ ),
+ call(
+ request,
+ user_profile,
+ "new2",
+ expected_message.format(
+ name="new2", url="https://app.clubhouse.io/pig208/story/27"
+ ),
+ ),
+ ]
+ self.assertEqual(check_send_webhook_message_mock.call_args_list, expected_list)
+
def test_story_update_add_github_branch(self) -> None:
expected_message = "New GitHub branch [eeshangarg/ch27/testing-pull-requests-with-story](https://github.com/eeshangarg/scheduler/tree/eeshangarg/ch27/testing-pull-requests-with-story) associated with story [Testing pull requests with Story](https://app.clubhouse.io/zulip/story/27) (Unscheduled -> In Development)."
self.check_webhook(
"story_update_add_github_branch", "Testing pull requests with Story", expected_message
)
- @patch("zerver.lib.webhooks.common.check_send_webhook_message")
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
+ def test_story_update_batch(self, check_send_webhook_message_mock: MagicMock) -> None:
+ payload = self.get_body("story_update_everything_at_once")
+ self.client_post(self.url, payload, content_type="application/json")
+ expected_message = "The story [{name}]({url}) was moved from Epic **epic** to **testeipc**, Project **Product Development** to **test2**, and changed from type **feature** to **bug**, and added with the new label **low priority** (In Development -> Ready for Review)."
+ request, user_profile = (
+ check_send_webhook_message_mock.call_args_list[0][0][0],
+ check_send_webhook_message_mock.call_args_list[0][0][1],
+ )
+ expected_list = [
+ call(
+ request,
+ user_profile,
+ "asd4",
+ expected_message.format(
+ name="asd4", url="https://app.clubhouse.io/pig208/story/17"
+ ),
+ ),
+ call(
+ request,
+ user_profile,
+ "new1",
+ expected_message.format(
+ name="new1", url="https://app.clubhouse.io/pig208/story/26"
+ ),
+ ),
+ call(
+ request,
+ user_profile,
+ "new2",
+ expected_message.format(
+ name="new2", url="https://app.clubhouse.io/pig208/story/27"
+ ),
+ ),
+ ]
+ self.assertEqual(check_send_webhook_message_mock.call_args_list, expected_list)
+
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
+ def test_story_update_batch_each_with_one_change(
+ self, check_send_webhook_message_mock: MagicMock
+ ) -> None:
+ payload = self.get_body("story_update_multiple_at_once")
+ self.client_post(self.url, payload, content_type="application/json")
+ expected_messages = [
+ (
+ "asd4",
+ "The type of the story [asd4](https://app.clubhouse.io/pig208/story/17) was changed from **feature** to **bug**.",
+ ),
+ (
+ "new1",
+ "The story [new1](https://app.clubhouse.io/pig208/story/26) was moved from **epic** to **testeipc**.",
+ ),
+ (
+ "new2",
+ "The label **low priority** was added to the story [new2](https://app.clubhouse.io/pig208/story/27).",
+ ),
+ (
+ "new3",
+ "State of the story [new3](https://app.clubhouse.io/pig208/story/28) was changed from **In Development** to **Ready for Review**.",
+ ),
+ (
+ "new4",
+ "The story [new4](https://app.clubhouse.io/pig208/story/29) was moved from the **Product Development** project to **test2**.",
+ ),
+ ]
+ request, user_profile = (
+ check_send_webhook_message_mock.call_args_list[0][0][0],
+ check_send_webhook_message_mock.call_args_list[0][0][1],
+ )
+ expected_list = [
+ call(
+ request,
+ user_profile,
+ expected_message[0],
+ expected_message[1],
+ )
+ for expected_message in expected_messages
+ ]
+ self.assertEqual(check_send_webhook_message_mock.call_args_list, expected_list)
+
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
+ def test_story_update_batch_not_supported_ignore(
+ self, check_send_webhook_message_mock: MagicMock
+ ) -> None:
+ payload = self.get_body("story_update_multiple_not_supported")
+ result = self.client_post(self.url, payload, content_type="application/json")
+ self.assertFalse(check_send_webhook_message_mock.called)
+ self.assert_json_success(result)
+
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
def test_empty_post_request_body_ignore(
self, check_send_webhook_message_mock: MagicMock
) -> None:
@@ -216,7 +387,7 @@ def test_empty_post_request_body_ignore(
self.assertFalse(check_send_webhook_message_mock.called)
self.assert_json_success(result)
- @patch("zerver.lib.webhooks.common.check_send_webhook_message")
+ @patch("zerver.webhooks.clubhouse.view.check_send_webhook_message")
def test_story_comment_updated_ignore(self, check_send_webhook_message_mock: MagicMock) -> None:
payload = self.get_body("story_comment_updated")
result = self.client_post(self.url, payload, content_type="application/json")
| Make clubhouse integration handle multiple relevant secondary actions
Each Clubhouse payload has a primary action and a number of secondary actions. The primary action can be, for example, creating a pull request, and a secondary action can be adding that pull request to a story. For each payload, the Clubhouse integration sends a message to a stream. E.g., when a pull request is created and added to a story, a message is sent under the story's name as the topic, saying that the pull request has been created.
Our integration currently assumes that there is only one relevant secondary action for a primary action. For example, when a new pull request is created (primary action), it assumes that the pull request is added (secondary action) to only one story. So the integration sends a single message, with the story name as the topic and a body containing the pull request details.

This is not fully correct, since there can be multiple stories associated with a given pull request [1][2].

The current implementation of the Clubhouse integration sends a message to only one story topic instead of sending messages to all of the story topics.
<details>
<summary>Click to show JSON payload</summary>
```json
{
"id":"70686928-f649-607f-b640-8b80310559f4",
"changed_at":"2021-04-02T18:57:41.201Z",
"version":"v1",
"primary_id":700000036,
"member_id":"20576d78-5b7a-5f88-b59c-a67506a49fa8",
"actions":[
{
"id":700000036,
"entity_type":"pull-request",
"action":"create",
"number":2,
"title":"Test 2",
"url":"https://github.com/hackerkid/test/pull/2"
},
{
"id":12,
"entity_type":"story",
"action":"update",
"name":"test story 1",
"story_type":"feature",
"app_url":"https://app.clubhouse.io/hackerkid/story/12",
"changes":{
"pull_request_ids":{
"adds":[
700000036
]
},
"started":{
"new":true,
"old":false
},
"position":{
"new":42147680257,
"old":12547680256
},
"workflow_state_id":{
"new":500000006,
"old":500000008
},
"started_at":{
"new":"2021-04-02T18:57:41Z"
},
"owner_ids":{
"adds":[
"20576d78-5b7a-5f88-b59c-a67506a49fa8"
]
}
}
},
{
"id":10,
"entity_type":"story",
"action":"update",
"name":"www",
"story_type":"feature",
"app_url":"https://app.clubhouse.io/hackerkid/story/10",
"changes":{
"pull_request_ids":{
"adds":[
700000036
]
},
"started":{
"new":true,
"old":false
},
"position":{
"new":32147549184,
"old":12147549184
},
"workflow_state_id":{
"new":500000006,
"old":500000008
},
"started_at":{
"new":"2021-04-02T18:57:41Z"
},
"owner_ids":{
"adds":[
"20576d78-5b7a-5f88-b59c-a67506a49fa8"
]
}
}
}
],
"references":[
{
"id":500000006,
"entity_type":"workflow-state",
"name":"In Development",
"type":"started"
},
{
"id":500000008,
"entity_type":"workflow-state",
"name":"Unscheduled",
"type":"unstarted"
}
]
}
```
</details>

The pull request case is just one example of how the integration fails to handle multiple secondary actions. There are likely other cases as well; I have not investigated the other scenarios.
We should rewrite the clubhouse integration to make sure that all the pull requests are handled.
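Such a rewrite could iterate over every story action in the payload rather than assuming a single one. A hypothetical sketch (the helper name and message body are illustrative, not the actual integration code), using only the fields visible in the JSON payload above:

```python
# Hypothetical sketch: build one message per story that the new pull
# request was added to, instead of assuming a single relevant
# secondary action.
def messages_for_new_pull_request(payload):
    actions = payload["actions"]
    # Primary action: the pull-request entity in the payload.
    pr = next(a for a in actions if a["entity_type"] == "pull-request")
    messages = []
    for action in actions:
        if action["entity_type"] != "story":
            continue
        adds = action.get("changes", {}).get("pull_request_ids", {}).get("adds", [])
        if pr["id"] in adds:
            # One (topic, body) pair per affected story topic.
            topic = action["name"]
            body = "Pull request [{}]({}) was created.".format(pr["title"], pr["url"])
            messages.append((topic, body))
    return messages
```

With the sample payload above, this would yield two messages, one under the topic `test story 1` and one under `www`.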
[1]https://help.clubhouse.io/hc/en-us/articles/207540323-Using-Branches-and-Pull-Requests-with-the-Clubhouse-VCS-Integrations
[2]https://github.com/hackerkid/test/pull/2
| Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim | 2021-04-10T18:27:29 |
zulip/zulip | 18,100 | zulip__zulip-18100 | [
"13939"
] | 831b7ca965c77103a31f0bf56e5b04bf527c368b | diff --git a/zerver/lib/integrations.py b/zerver/lib/integrations.py
--- a/zerver/lib/integrations.py
+++ b/zerver/lib/integrations.py
@@ -445,6 +445,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
WebhookIntegration("travis", ["continuous-integration"], display_name="Travis CI"),
WebhookIntegration("trello", ["project-management"]),
WebhookIntegration("updown", ["monitoring"]),
+ WebhookIntegration("uptimerobot", ["monitoring"], display_name="Uptime Robot"),
WebhookIntegration(
"yo",
["communication"],
@@ -781,6 +782,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
"travis": [ScreenshotConfig("build.json", payload_as_query_param=True)],
"trello": [ScreenshotConfig("adding_comment_to_card.json")],
"updown": [ScreenshotConfig("check_multiple_events.json")],
+ "uptimerobot": [ScreenshotConfig("uptimerobot_monitor_up.json")],
"wordpress": [ScreenshotConfig("publish_post.txt", "wordpress_post_created.png")],
"yo": [
ScreenshotConfig(
diff --git a/zerver/webhooks/uptimerobot/__init__.py b/zerver/webhooks/uptimerobot/__init__.py
new file mode 100644
diff --git a/zerver/webhooks/uptimerobot/view.py b/zerver/webhooks/uptimerobot/view.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/uptimerobot/view.py
@@ -0,0 +1,65 @@
+from typing import Any, Dict
+
+from django.http import HttpRequest, HttpResponse
+from django.utils.translation import ugettext as _
+
+from zerver.decorator import REQ, has_request_variables, webhook_view
+from zerver.lib.actions import send_rate_limited_pm_notification_to_bot_owner
+from zerver.lib.response import json_error, json_success
+from zerver.lib.send_email import FromAddress
+from zerver.lib.webhooks.common import check_send_webhook_message
+from zerver.models import UserProfile
+
+MISCONFIGURED_PAYLOAD_ERROR_MESSAGE = """
+Hi there! Your bot {bot_name} just received a Uptime Robot payload that is missing
+some data that Zulip requires. This usually indicates a configuration issue
+in your Uptime Robot webhook settings. Please make sure that you set the required parameters
+when configuring the Uptime Robot webhook. Contact {support_email} if you
+need further help!
+"""
+
+UPTIMEROBOT_TOPIC_TEMPLATE = "{monitor_friendly_name}"
+UPTIMEROBOT_MESSAGE_UP_TEMPLATE = """
+{monitor_friendly_name} ({monitor_url}) is back UP ({alert_details}).
+It was down for {alert_friendly_duration}.
+""".strip()
+UPTIMEROBOT_MESSAGE_DOWN_TEMPLATE = (
+ "{monitor_friendly_name} ({monitor_url}) is DOWN ({alert_details})."
+)
+
+
+@webhook_view("UptimeRobot")
+@has_request_variables
+def api_uptimerobot_webhook(
+ request: HttpRequest,
+ user_profile: UserProfile,
+ payload: Dict[str, Any] = REQ(argument_type="body"),
+) -> HttpResponse:
+
+ try:
+ body = get_body_for_http_request(payload)
+ subject = get_subject_for_http_request(payload)
+ except KeyError:
+ message = MISCONFIGURED_PAYLOAD_ERROR_MESSAGE.format(
+ bot_name=user_profile.full_name,
+ support_email=FromAddress.SUPPORT,
+ ).strip()
+ send_rate_limited_pm_notification_to_bot_owner(user_profile, user_profile.realm, message)
+
+ return json_error(_("Invalid payload"))
+
+ check_send_webhook_message(request, user_profile, subject, body)
+ return json_success()
+
+
+def get_subject_for_http_request(payload: Dict[str, Any]) -> str:
+ return UPTIMEROBOT_TOPIC_TEMPLATE.format(monitor_friendly_name=payload["monitor_friendly_name"])
+
+
+def get_body_for_http_request(payload: Dict[str, Any]) -> str:
+ if payload["alert_type_friendly_name"] == "Up":
+ body = UPTIMEROBOT_MESSAGE_UP_TEMPLATE.format(**payload)
+ elif payload["alert_type_friendly_name"] == "Down":
+ body = UPTIMEROBOT_MESSAGE_DOWN_TEMPLATE.format(**payload)
+
+ return body
| diff --git a/zerver/webhooks/uptimerobot/tests.py b/zerver/webhooks/uptimerobot/tests.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/uptimerobot/tests.py
@@ -0,0 +1,47 @@
+from zerver.lib.send_email import FromAddress
+from zerver.lib.test_classes import WebhookTestCase
+from zerver.models import Recipient
+from zerver.webhooks.uptimerobot.view import MISCONFIGURED_PAYLOAD_ERROR_MESSAGE
+
+
+class UptimeRobotHookTests(WebhookTestCase):
+ STREAM_NAME = "uptimerobot"
+ URL_TEMPLATE = "/api/v1/external/uptimerobot?stream={stream}&api_key={api_key}"
+ FIXTURE_DIR_NAME = "uptimerobot"
+
+ def test_uptimerobot_monitor_down(self) -> None:
+ """
+ Tests if uptimerobot monitor down is handled correctly
+ """
+ expected_topic = "Web Server"
+ expected_message = "Web Server (server1.example.com) is DOWN (Host Is Unreachable)."
+ self.check_webhook("uptimerobot_monitor_down", expected_topic, expected_message)
+
+ def test_uptimerobot_monitor_up(self) -> None:
+ """
+ Tests if uptimerobot monitor up is handled correctly
+ """
+ expected_topic = "Mail Server"
+ expected_message = """
+Mail Server (server2.example.com) is back UP (Host Is Reachable).
+It was down for 44 minutes and 37 seconds.
+""".strip()
+ self.check_webhook("uptimerobot_monitor_up", expected_topic, expected_message)
+
+ def test_uptimerobot_invalid_payload_with_missing_data(self) -> None:
+ """
+ Tests if invalid uptime robot payloads are handled correctly
+ """
+ self.url = self.build_webhook_url()
+ payload = self.get_body("uptimerobot_invalid_payload_with_missing_data")
+ result = self.client_post(self.url, payload, content_type="application/json")
+ self.assert_json_error(result, "Invalid payload")
+
+ expected_message = MISCONFIGURED_PAYLOAD_ERROR_MESSAGE.format(
+ bot_name=self.test_user.full_name,
+ support_email=FromAddress.SUPPORT,
+ ).strip()
+
+ msg = self.get_last_message()
+ self.assertEqual(msg.content, expected_message)
+ self.assertEqual(msg.recipient.type, Recipient.PERSONAL)
| integrations: Add Uptime Robot integration.
Add Uptime Robot Integration.
The Uptime Robot webhook informs users whether their site is up or down, so we send an appropriate message for each of these two events.
Fixes #13854.
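A rough sketch of the two message shapes. The payload keys used here (`monitor_friendly_name`, `monitor_url`, `alert_details`, `alert_friendly_duration`, `alert_type_friendly_name`) are taken from the fields exercised in the accompanying patch; treat the function itself as illustrative, not the shipped view code:

```python
# Illustrative message formatting for the two Uptime Robot events.
UP_TEMPLATE = (
    "{monitor_friendly_name} ({monitor_url}) is back UP ({alert_details}).\n"
    "It was down for {alert_friendly_duration}."
)
DOWN_TEMPLATE = "{monitor_friendly_name} ({monitor_url}) is DOWN ({alert_details})."

def format_alert(payload):
    # Pick the template based on whether the monitor came back up or went down.
    if payload["alert_type_friendly_name"] == "Up":
        return UP_TEMPLATE.format(**payload)
    return DOWN_TEMPLATE.format(**payload)
```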
| 2021-04-12T01:04:25 |
|
zulip/zulip | 18,120 | zulip__zulip-18120 | [
"18116"
] | 2da4443cc5948ace06c5d0418fbb7e9d9eba1c86 | diff --git a/tools/setup/emoji/emoji_names.py b/tools/setup/emoji/emoji_names.py
--- a/tools/setup/emoji/emoji_names.py
+++ b/tools/setup/emoji/emoji_names.py
@@ -121,8 +121,7 @@
# queasy seemed like a natural addition
"1f922": {"canonical_name": "nauseated", "aliases": ["queasy"]},
"1f927": {"canonical_name": "sneezing", "aliases": []},
- # cant_talk from https://beebom.com/emoji-meanings/
- "1f637": {"canonical_name": "cant_talk", "aliases": ["mask"]},
+ "1f637": {"canonical_name": "mask", "aliases": []},
# flu from https://mashable.com/2015/10/23/ios-9-1-emoji-guide/, sick from
# https://emojipedia.org/face-with-thermometer/, face_with_thermometer so
# it shows up in typeahead (thermometer taken by Objects/82)
| Incorrect short code for mask emoji
The mask emoji (:mask:) is aliased as `:cant_talk:` in Zulip. This is incorrect for several reasons. First, this is not the name recognized by Unicode (see https://unicode.org/emoji/charts-13.1/emoji-list.html). The official name is "face with medical mask". Recognized aliases include: cold | doctor | face | face with medical mask | mask | sick. It's also not aliased this way on any other platform. GitHub, GitLab, and Gitter, for example, only use `:mask:`. Second, the alias implies that you can't talk with a mask on, which is categorically false. A person with a mask can talk and be heard just fine, as we've all become accustomed to in the last year. Third, using the alias "cant_talk" puts a stigma on wearing masks, which is not good. We need to frame it as something positive, as it will help us curb the spread of this virus (and other ones in the future).
Please consider dropping the alias "cant_talk" (since `:mask:` also works).
I looked into the code and found a reference to an article. But the [article](https://beebom.com/emoji-meanings/) doesn't say anything about the "cant_talk" alias. So I guess it might have been a mistake.
```
# cant_talk from https://beebom.com/emoji-meanings/
"1f637": {"canonical_name": "cant_talk", "aliases": ["mask"]},
``` | 2021-04-13T01:15:10 |
|
zulip/zulip | 18,259 | zulip__zulip-18259 | [
"15964"
] | c3247128ff5860e5f9ba13d4531ced182fbbb579 | diff --git a/scripts/lib/clean_yarn_cache.py b/scripts/lib/clean_yarn_cache.py
new file mode 100644
--- /dev/null
+++ b/scripts/lib/clean_yarn_cache.py
@@ -0,0 +1,47 @@
+#!/usr/bin/env python3
+import argparse
+import os
+import sys
+
+ZULIP_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+sys.path.append(ZULIP_PATH)
+
+from scripts.lib.zulip_tools import may_be_perform_purging, parse_cache_script_args
+
+YARN_CACHE_PATH = os.path.expanduser("~/.cache/yarn/")
+CURRENT_VERSION = "v6"
+
+
+def remove_unused_versions_dir(args: argparse.Namespace) -> None:
+ """Deletes cache data from obsolete Yarn versions.
+
+ Yarn does not provide an interface for removing obsolete data from
+ ~/.cache/yarn for packages that you haven't installed in years; but one
+ can always remove the cache entirely.
+ """
+ current_version_dir = os.path.join(YARN_CACHE_PATH, CURRENT_VERSION)
+ dirs_to_purge = set(
+ [
+ os.path.join(YARN_CACHE_PATH, directory)
+ for directory in os.listdir(YARN_CACHE_PATH)
+ if directory != CURRENT_VERSION
+ ]
+ )
+
+ may_be_perform_purging(
+ dirs_to_purge,
+ {current_version_dir},
+ "yarn cache",
+ args.dry_run,
+ args.verbose,
+ args.no_headings,
+ )
+
+
+def main(args: argparse.Namespace) -> None:
+ remove_unused_versions_dir(args)
+
+
+if __name__ == "__main__":
+ args = parse_cache_script_args("This script cleans redundant Zulip yarn caches.")
+ main(args)
| Avoid leaking huge amounts of disk in the yarn cache in old production servers
On chat.zulip.org, `/home/zulip/.cache` is 7GB; some other users have reported sizes more like 2.6GB. This is large compared to the rest of a Zulip installation, and probably deserves some attention.
The main cause of this is `yarn` caching, which sadly doesn't have any feature to auto-prune its caches.
```
zulip@zulip:~/.cache/yarn$ du -shc *
1.8G v1
1.9G v4
962M v5
2.5G v6
7.0G total
```
There are 2 things we should do to fix this, both in `scripts/lib/clean-unused-caches`:
* Remove the old directories under `~/.cache/yarn/v{1,4,5}`, since those will never be used.
* Remove individual files under `~/.cache/yarn/v6` that have a modification time 6 months or more ago.
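The second step could look roughly like the following (a hypothetical sketch only — the shipped script handles just the obsolete version directories, and `stale_cache_files` is an invented helper name):

```python
# Hypothetical sketch: find files under the current yarn cache directory
# whose modification time is older than ~6 months, so they can be pruned.
import os
import time

def stale_cache_files(cache_dir, max_age_days=180, now=None):
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 24 * 60 * 60
    stale = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            # Anything last touched before the cutoff is a pruning candidate.
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale
```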
| Hello @zulip/server-production, @zulip/server-tooling members, this issue was labeled with the "area: production", "area: tooling" labels, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim | 2021-04-25T10:25:16 |
|
zulip/zulip | 18,264 | zulip__zulip-18264 | [
"18223"
] | 4c4c2e46fbe8a99ce94b1b60fb5749a1444ba733 | diff --git a/zerver/lib/outgoing_webhook.py b/zerver/lib/outgoing_webhook.py
--- a/zerver/lib/outgoing_webhook.py
+++ b/zerver/lib/outgoing_webhook.py
@@ -288,8 +288,10 @@ def process_success_response(
try:
response_json = json.loads(response.text)
except json.JSONDecodeError:
- fail_with_message(event, "Invalid JSON in response")
- return
+ raise JsonableError(_("Invalid JSON in response"))
+
+ if not isinstance(response_json, dict):
+ raise JsonableError(_("Invalid response format"))
success_data = service_handler.process_success(response_json)
| diff --git a/zerver/tests/test_outgoing_webhook_interfaces.py b/zerver/tests/test_outgoing_webhook_interfaces.py
--- a/zerver/tests/test_outgoing_webhook_interfaces.py
+++ b/zerver/tests/test_outgoing_webhook_interfaces.py
@@ -5,6 +5,7 @@
import requests
from zerver.lib.avatar import get_gravatar_url
+from zerver.lib.exceptions import JsonableError
from zerver.lib.message import MessageDict
from zerver.lib.outgoing_webhook import get_service_interface_class, process_success_response
from zerver.lib.test_classes import ZulipTestCase
@@ -47,13 +48,12 @@ def test_process_success_response(self) -> None:
response.status_code = 200
response.text = "unparsable text"
- with mock.patch("zerver.lib.outgoing_webhook.fail_with_message") as m:
+ with self.assertRaisesRegex(JsonableError, "Invalid JSON in response"):
process_success_response(
event=event,
service_handler=service_handler,
response=response,
)
- self.assertTrue(m.called)
def test_make_request(self) -> None:
othello = self.example_user("othello")
diff --git a/zerver/tests/test_outgoing_webhook_system.py b/zerver/tests/test_outgoing_webhook_system.py
--- a/zerver/tests/test_outgoing_webhook_system.py
+++ b/zerver/tests/test_outgoing_webhook_system.py
@@ -86,7 +86,7 @@ def test_successful_request(self) -> None:
for service_class in [GenericOutgoingWebhookService, SlackOutgoingWebhookService]:
handler = service_class("token", bot_user, "service")
with mock.patch.object(handler, "session") as session:
- session.post.return_value = ResponseMock(200)
+ session.post.return_value = ResponseMock(200, b"{}")
do_rest_call("", mock_event, handler)
session.post.assert_called_once()
@@ -158,7 +158,7 @@ def test_headers(self) -> None:
session = service_handler.session
with mock.patch.object(session, "send") as mock_send:
- mock_send.return_value = ResponseMock(200)
+ mock_send.return_value = ResponseMock(200, b"{}")
final_response = do_rest_call("https://example.com/", mock_event, service_handler)
assert final_response is not None
@@ -261,6 +261,62 @@ def test_jsonable_exception(self) -> None:
assert bot_user.bot_owner is not None
self.assertEqual(bot_owner_notification.recipient_id, bot_user.bot_owner.recipient_id)
+ def test_invalid_response_format(self) -> None:
+ bot_user = self.example_user("outgoing_webhook_bot")
+ mock_event = self.mock_event(bot_user)
+ service_handler = GenericOutgoingWebhookService("token", bot_user, "service")
+
+ expect_logging_info = self.assertLogs(level="INFO")
+ expect_fail = mock.patch("zerver.lib.outgoing_webhook.fail_with_message")
+
+ with responses.RequestsMock(assert_all_requests_are_fired=True) as requests_mock:
+ # We mock the endpoint to return response with valid json which doesn't
+ # translate to a dict like is expected,
+ requests_mock.add(
+ requests_mock.POST, "https://example.zulip.com", status=200, json=True
+ )
+ with expect_logging_info, expect_fail as mock_fail:
+ do_rest_call("https://example.zulip.com", mock_event, service_handler)
+ self.assertTrue(mock_fail.called)
+ bot_owner_notification = self.get_last_message()
+ self.assertEqual(
+ bot_owner_notification.content,
+ """[A message](http://zulip.testserver/#narrow/stream/999-Verona/topic/Foo/near/) to your bot @_**Outgoing Webhook** triggered an outgoing webhook.
+The outgoing webhook server attempted to send a message in Zulip, but that request resulted in the following error:
+> Invalid response format""",
+ )
+ assert bot_user.bot_owner is not None
+ self.assertEqual(bot_owner_notification.recipient_id, bot_user.bot_owner.recipient_id)
+
+ def test_invalid_json_in_response(self) -> None:
+ bot_user = self.example_user("outgoing_webhook_bot")
+ mock_event = self.mock_event(bot_user)
+ service_handler = GenericOutgoingWebhookService("token", bot_user, "service")
+
+ expect_logging_info = self.assertLogs(level="INFO")
+ expect_fail = mock.patch("zerver.lib.outgoing_webhook.fail_with_message")
+
+ with responses.RequestsMock(assert_all_requests_are_fired=True) as requests_mock:
+ # We mock the endpoint to return response with a body which isn't valid json.
+ requests_mock.add(
+ requests_mock.POST,
+ "https://example.zulip.com",
+ status=200,
+ body="this isn't valid json",
+ )
+ with expect_logging_info, expect_fail as mock_fail:
+ do_rest_call("https://example.zulip.com", mock_event, service_handler)
+ self.assertTrue(mock_fail.called)
+ bot_owner_notification = self.get_last_message()
+ self.assertEqual(
+ bot_owner_notification.content,
+ """[A message](http://zulip.testserver/#narrow/stream/999-Verona/topic/Foo/near/) to your bot @_**Outgoing Webhook** triggered an outgoing webhook.
+The outgoing webhook server attempted to send a message in Zulip, but that request resulted in the following error:
+> Invalid JSON in response""",
+ )
+ assert bot_user.bot_owner is not None
+ self.assertEqual(bot_owner_notification.recipient_id, bot_user.bot_owner.recipient_id)
+
class TestOutgoingWebhookMessaging(ZulipTestCase):
def create_outgoing_bot(self, bot_owner: UserProfile) -> UserProfile:
@@ -305,7 +361,7 @@ def test_multiple_services(self) -> None:
session = mock.Mock(spec=requests.Session)
session.headers = {}
- session.post.return_value = ResponseMock(200)
+ session.post.return_value = ResponseMock(200, b"{}")
with mock.patch("zerver.lib.outgoing_webhook.Session") as sessionmaker:
sessionmaker.return_value = session
self.send_personal_message(
| TypeError: argument of type 'NoneType' is not iterable
https://sentry.io/share/issue/71c35f8aca6444b4a8e1d3ab6880ef71/
```
TypeError: argument of type 'NoneType' is not iterable
(1 additional frame(s) were not displayed)
...
File "zerver/worker/queue_processors.py", line 335, in <lambda>
consume_func = lambda events: self.consume(events[0])
File "zerver/worker/queue_processors.py", line 749, in consume
do_rest_call(service.base_url, event, service_handler)
File "zerver/lib/outgoing_webhook.py", line 325, in do_rest_call
process_success_response(event, service_handler, response)
File "zerver/lib/outgoing_webhook.py", line 293, in process_success_response
success_data = service_handler.process_success(response_json)
File "zerver/lib/outgoing_webhook.py", line 79, in process_success
if "response_not_required" in response_json and response_json["response_not_required"]:
```
| Hello @zulip/server-production members, this issue was labeled with the "area: production" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2021-04-25T19:44:48 |
zulip/zulip | 18,274 | zulip__zulip-18274 | [
"16090"
] | 8e2042d37837f1fe16b8eb8db1c0860e7eddfa48 | diff --git a/zerver/views/user_settings.py b/zerver/views/user_settings.py
--- a/zerver/views/user_settings.py
+++ b/zerver/views/user_settings.py
@@ -272,6 +272,7 @@ def json_change_notify_settings(
if (
notification_sound is not None
and notification_sound not in get_available_notification_sounds()
+ and notification_sound != "none"
):
raise JsonableError(_("Invalid notification sound '{}'").format(notification_sound))
| diff --git a/frontend_tests/node_tests/notifications.js b/frontend_tests/node_tests/notifications.js
--- a/frontend_tests/node_tests/notifications.js
+++ b/frontend_tests/node_tests/notifications.js
@@ -59,6 +59,7 @@ function test(label, f) {
page_params.enable_desktop_notifications = true;
page_params.enable_sounds = true;
page_params.wildcard_mentions_notify = true;
+ page_params.notification_sound = "ding";
f(override);
});
}
@@ -239,13 +240,35 @@ test("message_is_notifiable", () => {
assert.equal(notifications.should_send_audible_notification(message), true);
assert.equal(notifications.message_is_notifiable(message), false);
+ // Case 9: If `None` is selected as the notification sound, send no
+ // audible notification, no matter what other user configurations are.
+ message = {
+ id: 50,
+ content: "message number 7",
+ sent_by_me: false,
+ notification_sent: false,
+ mentioned: true,
+ mentioned_me_directly: true,
+ type: "stream",
+ stream: "general",
+ stream_id: general.stream_id,
+ topic: "whatever",
+ };
+ page_params.notification_sound = "none";
+ assert.equal(notifications.should_send_desktop_notification(message), true);
+ assert.equal(notifications.should_send_audible_notification(message), false);
+ assert.equal(notifications.message_is_notifiable(message), true);
+
+ // Reset state
+ page_params.notification_sound = "ding";
+
// If none of the above cases apply
// (ie: topic is not muted, message does not mention user,
// no notification sent before, message not sent by user),
// return true to pass it to notifications settings, which will return false.
message = {
id: 60,
- content: "message number 7",
+ content: "message number 8",
sent_by_me: false,
notification_sent: false,
mentioned: false,
| Disable Notification Sound Not available
**Describe the bug**
I could not disable the notification sound, because the page only offers a choice of two sounds and no "no sound" option.
**To Reproduce**
Go to settings -> notifications -> other notification settings -> notification sound.
**Expected behavior**
Either a check option to disable the sound or no sound as one of options
**Screenshots**

**Desktop (please complete the following information):**
- Operating System: Win 10 Home (18362)
- Zulip Desktop Version: 5.4.0
| Hello @zulip/server-settings members, this issue was labeled with the "area: settings (user)" label, so you may want to check it out!
<!-- areaLabelAddition -->
This seems worth supporting, and is likely quite easy. The main decision is how to represent "no sound" in the API -- maybe the empty string would be most consistent? I could also imagine the string "none" to avoid having multiple types here.
@timabbott @Gittenburg
Hey, there are two ways: (a) either I write "silent" into the array "available_notification_sounds", or (b) I add the files "silent.mp3" and "silent.ogg" in "static/audio/notification.sounds".
Which should I do?
I think adding a special value would be preferable to playing empty audio files (because it might confuse users if the browser indicates that the tab is playing some sounds when it's actually not).
> I think adding a special value would be preferable to playing empty audio files (because it might confuse users if the browser indicates that the tab is playing some sounds when it's actually not).
what do you mean by "special value".
Something like `"silent"`.
> I think adding a special value would be preferable to playing empty audio files (because it might confuse users if the browser indicates that the tab is playing some sounds when it's actually not).
But why? We know that it's an empty string that makes the notification sound "silent" when a message comes. So why would it create confusion for users, when all they want is for their notification sound to be "silent" or no sound?
If we would play empty audio files browsers would probably still briefly display a speaker symbol in the tab. Don't worry about it you don't have any empty audio files in your PR anyway.
> This seems worth supporting, and is likely quite easy. The main decision is how to represent "no sound" in the API -- maybe the empty string would be most consistent? I could also imagine the string "none" to avoid having multiple types here.
> I think adding a special value would be preferable to playing empty audio files (because it might confuse users if the browser indicates that the tab is playing some sounds when it's actually not).
What about a new checkbox called `Disable notification sounds` instead? Seems cleaner both in terms of implementation (no special strings) and the settings UI.

Checking the box would either hide or disable the dropdown + notification sound label. I'd like to help out with this.
@areebbeigh, we just want to have an option ("silent") inside the notification-sound setting, as Tim said above.
@zulipbot claim
So I just realized we already have an independent mechanism for disabling notification sounds in Zulip:

So I think the actual issue here is a documentation/communication issue, not a feature we need to add.
@parthi2929 do you remember what you checked when trying to figure out how to disable sounds? I'm wondering if it'd be useful to add a Help Center article one could find via google on disabling notification sounds.
I think, since there is a specific option about the notification sound, I went there to disable it, looking for a silence option. Yes, better documentation could help here. But the UI could also be improved so it's more intuitive.
Hello @tushar912, you claimed this issue to work on it, but this issue and any referenced pull requests haven't been updated for 10 days. Are you still working on this issue?
If so, please update this issue by leaving a comment on this issue to let me know that you're still working on it. Otherwise, I'll automatically remove you from this issue in 4 days.
If you've decided to work on something else, simply comment `@zulipbot abandon` so that someone else can claim it and continue from where you left off.
Thank you for your valuable contributions to Zulip!
<!-- inactiveWarning --> | 2021-04-27T04:30:38 |
zulip/zulip | 18,314 | zulip__zulip-18314 | [
"18310"
] | 8711ab7676a7f8e720bc6b070ce7fc2a76a8047f | diff --git a/zerver/lib/events.py b/zerver/lib/events.py
--- a/zerver/lib/events.py
+++ b/zerver/lib/events.py
@@ -1107,25 +1107,6 @@ def do_events_register(
# clients cannot compute gravatars, so we force-set it to false.
client_gravatar = False
- # Note that we pass event_types, not fetch_event_types here, since
- # that's what controls which future events are sent.
- queue_id = request_event_queue(
- user_profile,
- user_client,
- apply_markdown,
- client_gravatar,
- slim_presence,
- queue_lifespan_secs,
- event_types,
- all_public_streams,
- narrow=narrow,
- bulk_message_deletion=bulk_message_deletion,
- stream_typing_notifications=stream_typing_notifications,
- )
-
- if queue_id is None:
- raise JsonableError(_("Could not allocate event queue"))
-
if fetch_event_types is not None:
event_types_set: Optional[Set[str]] = set(fetch_event_types)
elif event_types is not None:
@@ -1136,52 +1117,58 @@ def do_events_register(
# Fill up the UserMessage rows if a soft-deactivated user has returned
reactivate_user_if_soft_deactivated(user_profile)
- ret = fetch_initial_state_data(
- user_profile,
- event_types=event_types_set,
- queue_id=queue_id,
- client_gravatar=client_gravatar,
- user_avatar_url_field_optional=user_avatar_url_field_optional,
- slim_presence=slim_presence,
- include_subscribers=include_subscribers,
- include_streams=include_streams,
- )
-
- # Apply events that came in while we were fetching initial data
- events = get_user_events(user_profile, queue_id, -1)
- try:
- apply_events(
- user_profile,
- state=ret,
- events=events,
- fetch_event_types=fetch_event_types,
- client_gravatar=client_gravatar,
- slim_presence=slim_presence,
- include_subscribers=include_subscribers,
- )
- except RestartEventException:
- # This represents a rare race condition, where Tornado
- # restarted (and sent `restart` events) while we were waiting
- # for fetch_initial_state_data to return. To avoid the client
- # needing to reload shortly after loading, we recursively call
- # do_events_register here.
- ret = do_events_register(
+ while True:
+ # Note that we pass event_types, not fetch_event_types here, since
+ # that's what controls which future events are sent.
+ queue_id = request_event_queue(
user_profile,
user_client,
apply_markdown,
client_gravatar,
slim_presence,
- event_types,
queue_lifespan_secs,
+ event_types,
all_public_streams,
- include_subscribers,
- include_streams,
- client_capabilities,
- narrow,
- fetch_event_types,
+ narrow=narrow,
+ bulk_message_deletion=bulk_message_deletion,
+ stream_typing_notifications=stream_typing_notifications,
)
- return ret
+ if queue_id is None:
+ raise JsonableError(_("Could not allocate event queue"))
+
+ ret = fetch_initial_state_data(
+ user_profile,
+ event_types=event_types_set,
+ queue_id=queue_id,
+ client_gravatar=client_gravatar,
+ user_avatar_url_field_optional=user_avatar_url_field_optional,
+ slim_presence=slim_presence,
+ include_subscribers=include_subscribers,
+ include_streams=include_streams,
+ )
+
+ # Apply events that came in while we were fetching initial data
+ events = get_user_events(user_profile, queue_id, -1)
+ try:
+ apply_events(
+ user_profile,
+ state=ret,
+ events=events,
+ fetch_event_types=fetch_event_types,
+ client_gravatar=client_gravatar,
+ slim_presence=slim_presence,
+ include_subscribers=include_subscribers,
+ )
+ except RestartEventException:
+ # This represents a rare race condition, where Tornado
+ # restarted (and sent `restart` events) while we were waiting
+ # for fetch_initial_state_data to return. To avoid the client
+ # needing to reload shortly after loading, we recursively call
+ # do_events_register here.
+ continue
+ else:
+ break
post_process_state(user_profile, ret, notification_settings_null)
| Make it impossible for the do_events_register recursive call to have missing parameters
There's a high-value follow-up we need to do following https://github.com/zulip/zulip/pull/18208, which is making it impossible for someone to add new parameters to do_events_register without updating the recursive call, since that would introduce a new nasty bug.
We can perhaps use the "mandatory kwargs" technique from this commit 0a21476, at the cost of somewhat annoying repetition. Another idea might be to create some sort of decorator that handles doing this and takes `*args, **kwargs` as parameters.
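A sketch of the keyword-only-arguments variant of that idea (illustrative signature, not the real `do_events_register`): a bare `*` with no defaults forces every call site, including the recursive retry call, to name each argument, so a newly added parameter that someone forgets to pass through fails loudly.

```python
# Illustrative only: with keyword-only, no-default parameters, a recursive
# call that omits a newly added parameter raises TypeError immediately,
# instead of silently falling back to a default or a shifted positional value.
def do_events_register(*, user_profile, event_types, narrow):
    return {"user_profile": user_profile, "event_types": event_types, "narrow": narrow}
```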
| Hello @zulip/server-refactoring members, this issue was labeled with the "area: refactoring" label, so you may want to check it out!
<!-- areaLabelAddition -->
Started a thread here: https://chat.zulip.org/#narrow/stream/49-development-help/topic/recursive.20call.20question/near/1171371 | 2021-04-29T21:06:28 |
|
zulip/zulip | 18,399 | zulip__zulip-18399 | [
"18393"
] | ad0be6cea1db64d43a0e1f402a03632535ab9863 | diff --git a/zerver/webhooks/github/view.py b/zerver/webhooks/github/view.py
--- a/zerver/webhooks/github/view.py
+++ b/zerver/webhooks/github/view.py
@@ -466,10 +466,8 @@ def get_pull_request_review_requested_body(helper: Helper) -> str:
payload = helper.payload
include_title = helper.include_title
requested_reviewer = [payload["requested_reviewer"]] if "requested_reviewer" in payload else []
- requested_reviewers = payload["pull_request"]["requested_reviewers"] or requested_reviewer
requested_team = [payload["requested_team"]] if "requested_team" in payload else []
- requested_team_reviewers = payload["pull_request"]["requested_teams"] or requested_team
sender = get_sender_name(payload)
pr_number = payload["pull_request"]["number"]
@@ -482,17 +480,14 @@ def get_pull_request_review_requested_body(helper: Helper) -> str:
all_reviewers = []
- for reviewer in requested_reviewers:
+ for reviewer in requested_reviewer:
all_reviewers.append("[{login}]({html_url})".format(**reviewer))
- for team_reviewer in requested_team_reviewers:
+ for team_reviewer in requested_team:
all_reviewers.append("[{name}]({html_url})".format(**team_reviewer))
reviewers = ""
- if len(all_reviewers) == 1:
- reviewers = all_reviewers[0]
- else:
- reviewers = "{} and {}".format(", ".join(all_reviewers[:-1]), all_reviewers[-1])
+ reviewers = all_reviewers[0]
return body.format(
sender=sender,
| diff --git a/zerver/webhooks/github/tests.py b/zerver/webhooks/github/tests.py
--- a/zerver/webhooks/github/tests.py
+++ b/zerver/webhooks/github/tests.py
@@ -320,24 +320,8 @@ def test_pull_request_review_requested_msg(self) -> None:
expected_message,
)
- def test_pull_request_review_requested_singular_key_msg(self) -> None:
- expected_message = "**eeshangarg** requested [rishig](https://github.com/rishig) for a review on [PR #6](https://github.com/eeshangarg/Scheduler/pull/6)."
- self.check_webhook(
- "pull_request__review_requested_singular_key",
- "Scheduler / PR #6 Mention how awesome this project is in ...",
- expected_message,
- )
-
- def test_pull_request_review_requested_multiple_reviwers_msg(self) -> None:
- expected_message = "**eeshangarg** requested [showell](https://github.com/showell) and [timabbott](https://github.com/timabbott) for a review on [PR #1](https://github.com/eeshangarg/Scheduler/pull/1)."
- self.check_webhook(
- "pull_request__review_requested_multiple_reviewers",
- "Scheduler / PR #1 This is just a test commit",
- expected_message,
- )
-
def test_pull_request__review_requested_team_reviewer_msg(self) -> None:
- expected_message = "**singhsourabh** requested [shreyaskargit](https://github.com/shreyaskargit), [bajaj99prashant](https://github.com/bajaj99prashant), [review-team](https://github.com/orgs/test-org965/teams/review-team), [authority](https://github.com/orgs/test-org965/teams/authority) and [management](https://github.com/orgs/test-org965/teams/management) for a review on [PR #4](https://github.com/test-org965/webhook-test/pull/4)."
+ expected_message = "**singhsourabh** requested [authority](https://github.com/orgs/test-org965/teams/authority) for a review on [PR #4](https://github.com/test-org965/webhook-test/pull/4)."
self.check_webhook(
"pull_request__review_requested_team_reviewer",
"webhook-test / PR #4 testing webhook",
| github integration: Duplicate requested_reviewer notifications when adding two reviewers in new PR
When you open a new pull request with 2 requested reviewers, GitHub's outgoing webhook API sends you 3 events in immediate succession:
* A new pull request event
* A new reviewer 1 event
* A new reviewer 2 event
GitHub's `pull_request/review_requested` payloads appear to have two copies of the data; one inside the `pull_request` key and the other at top level. Only the one at top level tells you which one is new; the other is a summary of the overall state of the PR, and appears to have both reviewers in both events. Our current processing ends up with ugly duplicate output like this:

Here is our code; I think the bug is that we should be ignoring the `pull_request` part of the event bodies.
```
def get_pull_request_review_requested_body(helper: Helper) -> str:
payload = helper.payload
include_title = helper.include_title
requested_reviewer = [payload["requested_reviewer"]] if "requested_reviewer" in payload else []
requested_reviewers = payload["pull_request"]["requested_reviewers"] or requested_reviewer
requested_team = [payload["requested_team"]] if "requested_team" in payload else []
requested_team_reviewers = payload["pull_request"]["requested_teams"] or requested_team
sender = get_sender_name(payload)
pr_number = payload["pull_request"]["number"]
pr_url = payload["pull_request"]["html_url"]
message = "**{sender}** requested {reviewers} for a review on [PR #{pr_number}]({pr_url})."
message_with_title = (
"**{sender}** requested {reviewers} for a review on [PR #{pr_number} {title}]({pr_url})."
)
body = message_with_title if include_title else message
```
Our last functional change to this code was 1b3cfecf2acef7fa2fc0ce5f4eedbb488361af91; 4c0890e8b0e67e9fc38ec11512d45149d89b2924 is also relevant history. So I think my conclusion is that we can just stop looking at the `pull_request` key and that will fix this bug.
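A minimal sketch of that conclusion, using hypothetical payloads rather than the real webhook handler: build the reviewer list only from the top-level keys, which name the single reviewer that is new in this event.

```python
def get_new_reviewers(payload: dict) -> list:
    """Collect only the reviewer/team named at top level. The copy under
    payload["pull_request"] summarizes the whole PR, so using it would
    repeat every reviewer in each per-reviewer event."""
    reviewers = []
    if "requested_reviewer" in payload:
        reviewers.append(payload["requested_reviewer"]["login"])
    if "requested_team" in payload:
        reviewers.append(payload["requested_team"]["name"])
    return reviewers

# Illustrative event for a PR that already has two requested reviewers;
# only "alice" is new in this particular webhook delivery.
event_1 = {
    "requested_reviewer": {"login": "alice"},
    "pull_request": {
        "requested_reviewers": [{"login": "alice"}, {"login": "bob"}],
    },
}
```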
| Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim | 2021-05-08T02:13:46 |
zulip/zulip | 18,413 | zulip__zulip-18413 | [
"18390"
] | 27d964327496f4f6236db2f21fb580e4136e0a83 | diff --git a/zerver/lib/markdown/__init__.py b/zerver/lib/markdown/__init__.py
--- a/zerver/lib/markdown/__init__.py
+++ b/zerver/lib/markdown/__init__.py
@@ -1570,7 +1570,7 @@ def sanitize_url(url: str) -> Optional[str]:
if not scheme:
return sanitize_url("http://" + url)
- locless_schemes = ["mailto", "news", "file", "bitcoin"]
+ locless_schemes = ["mailto", "news", "file", "bitcoin", "sms", "tel"]
if netloc == "" and scheme not in locless_schemes:
# This fails regardless of anything else.
# Return immediately to save additional processing
@@ -1580,7 +1580,7 @@ def sanitize_url(url: str) -> Optional[str]:
# appears to have a netloc. Additionally there are plenty of other
# schemes that do weird things like launch external programs. To be
# on the safe side, we whitelist the scheme.
- if scheme not in ("http", "https", "ftp", "mailto", "file", "bitcoin"):
+ if scheme not in ("http", "https", "ftp", "mailto", "file", "bitcoin", "sms", "tel"):
return None
# Upstream code scans path, parameters, and query for colon characters
| diff --git a/zerver/tests/fixtures/markdown_test_cases.json b/zerver/tests/fixtures/markdown_test_cases.json
--- a/zerver/tests/fixtures/markdown_test_cases.json
+++ b/zerver/tests/fixtures/markdown_test_cases.json
@@ -948,6 +948,11 @@
"input": "<h1>*<h1>[<h2>Static types in Python</h2>](https://blog.zulip.com/2016/10/13/static-types-in-python-oh-mypy)</h1>*</h1>",
"expected_output": "<p><h1><em><h1><a href=\"https://blog.zulip.com/2016/10/13/static-types-in-python-oh-mypy\"><h2>Static types in Python</h2></a></h1></em></h1></p>",
"marked_expected_output": "<p><h1><em><h1><a href=\"https://blog.zulip.com/2016/10/13/static-types-in-python-oh-mypy\"><h2>Static types in Python</h2></a></h1></em></h1>\n\n</p>"
+ },
+ {
+ "name": "telephone_sms_link",
+ "input": "[call me](tel:+14155551234) [or maybe not](sms:+14155551234)",
+ "expected_output": "<p><a href=\"tel:+14155551234\">call me</a> <a href=\"sms:+14155551234\">or maybe not</a></p>"
}
],
"linkify_tests": [
| Support `tel:` and `sms:` schemes for Markdown links
When using the Markdown format, I can embed links in my messages using Markdown syntax, like so:
```
If you want to email me, [click here](mailto:[email protected]).
If you want to see our site, [click here](https://example.com).
```
These links are then invoked in the relevant operating-system-specified handler for their relevant URL schemes. This is great, and I like it a lot!
However, today I tried to send a message that contained my phone number, and wanted someone clicking it to invoke their operating-system-specified handler for the `tel` and `sms` schemes, which should prompt the user to start a call or compose a text message respectively. I wrote the following Markdown:
```
Hey, I just met you, and this is crazy,
but here's my number, so [call me](tel:+14155551234), [maybe](sms:+14155551234).
```
Immediately after sending the message, the relevant pieces of text showed as links (I assume with the appropriate schemes), but shortly after, they fell back to showing my unformatted Markdown.
I verified that this appears in both the current Zulip Desktop and the in-browser version. I did not check the mobile application.
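The patch above works because `tel:` and `sms:` URLs carry the number in the path and have an empty netloc, so a sanitizer that requires a netloc must list them among the "locless" schemes. This can be seen directly with the standard library:

```python
from urllib.parse import urlparse

# tel:/sms: URLs have no "//host" part: the scheme is followed directly by
# the number, which urlparse exposes as the path with an empty netloc.
for url in ("tel:+14155551234", "sms:+14155551234", "https://example.com"):
    parts = urlparse(url)
    print(parts.scheme, repr(parts.netloc), repr(parts.path))
```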
| Hello @zulip/server-markdown members, this issue was labeled with the "area: markdown" label, so you may want to check it out!
<!-- areaLabelAddition -->
Thanks for the feedback! This seems like a reasonable feature to have. | 2021-05-09T05:05:29 |
zulip/zulip | 18,589 | zulip__zulip-18589 | [
"18256"
] | 64bd461bad57a4bbacfe2b890628ea1247323bf7 | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -6480,11 +6480,9 @@ def do_send_confirmation_email(invitee: PreregistrationUser, referrer: UserProfi
"activate_url": activation_url,
"referrer_realm_name": referrer.realm.name,
}
- from_name = f"{referrer.full_name} (via Zulip)"
send_email(
"zerver/emails/invitation",
to_emails=[invitee.email],
- from_name=from_name,
from_address=FromAddress.tokenized_no_reply_address(),
language=referrer.realm.default_language,
context=context,
| diff --git a/zerver/tests/test_signup.py b/zerver/tests/test_signup.py
--- a/zerver/tests/test_signup.py
+++ b/zerver/tests/test_signup.py
@@ -928,9 +928,7 @@ def test_zulip_default_context_does_not_load_inline_previews(self) -> None:
class InviteUserBase(ZulipTestCase):
- def check_sent_emails(
- self, correct_recipients: List[str], custom_from_name: Optional[str] = None
- ) -> None:
+ def check_sent_emails(self, correct_recipients: List[str]) -> None:
from django.core.mail import outbox
self.assert_length(outbox, len(correct_recipients))
@@ -939,8 +937,7 @@ def check_sent_emails(
if len(outbox) == 0:
return
- if custom_from_name is not None:
- self.assertIn(custom_from_name, self.email_display_from(outbox[0]))
+ self.assertIn("Zulip", self.email_display_from(outbox[0]))
self.assertEqual(self.email_envelope_from(outbox[0]), settings.NOREPLY_EMAIL_ADDRESS)
self.assertRegex(
@@ -987,7 +984,7 @@ def test_successful_invite_user(self) -> None:
invitee = "[email protected]"
self.assert_json_success(self.invite(invitee, ["Denmark"]))
self.assertTrue(find_key_by_email(invitee))
- self.check_sent_emails([invitee], custom_from_name="Hamlet")
+ self.check_sent_emails([invitee])
def test_newbie_restrictions(self) -> None:
user_profile = self.example_user("hamlet")
@@ -1274,7 +1271,7 @@ def test_successful_invite_user_with_name(self) -> None:
invitee = f"Alice Test <{email}>"
self.assert_json_success(self.invite(invitee, ["Denmark"]))
self.assertTrue(find_key_by_email(email))
- self.check_sent_emails([email], custom_from_name="Hamlet")
+ self.check_sent_emails([email])
def test_successful_invite_user_with_name_and_normal_one(self) -> None:
"""
@@ -1288,7 +1285,7 @@ def test_successful_invite_user_with_name_and_normal_one(self) -> None:
self.assert_json_success(self.invite(invitee, ["Denmark"]))
self.assertTrue(find_key_by_email(email))
self.assertTrue(find_key_by_email(email2))
- self.check_sent_emails([email, email2], custom_from_name="Hamlet")
+ self.check_sent_emails([email, email2])
def test_can_invite_others_to_realm(self) -> None:
def validation_func(user_profile: UserProfile) -> bool:
@@ -2230,7 +2227,7 @@ def test_successful_resend_invitation(self) -> None:
prereg_user = PreregistrationUser.objects.get(email=invitee)
# Verify and then clear from the outbox the original invite email
- self.check_sent_emails([invitee], custom_from_name="Zulip")
+ self.check_sent_emails([invitee])
from django.core.mail import outbox
outbox.pop()
@@ -2261,7 +2258,7 @@ def test_successful_resend_invitation(self) -> None:
error_result = self.client_post("/json/invites/" + str(9999) + "/resend")
self.assert_json_error(error_result, "No such invitation")
- self.check_sent_emails([invitee], custom_from_name="Zulip")
+ self.check_sent_emails([invitee])
def test_successful_member_resend_invitation(self) -> None:
"""A POST call from member a account to /json/invites/<ID>/resend
@@ -2278,7 +2275,7 @@ def test_successful_member_resend_invitation(self) -> None:
prereg_user = PreregistrationUser.objects.get(email=invitee)
# Verify and then clear from the outbox the original invite email
- self.check_sent_emails([invitee], custom_from_name="Zulip")
+ self.check_sent_emails([invitee])
from django.core.mail import outbox
outbox.pop()
@@ -2309,7 +2306,7 @@ def test_successful_member_resend_invitation(self) -> None:
error_result = self.client_post("/json/invites/" + str(9999) + "/resend")
self.assert_json_error(error_result, "No such invitation")
- self.check_sent_emails([invitee], custom_from_name="Zulip")
+ self.check_sent_emails([invitee])
self.logout()
self.login("othello")
@@ -2329,7 +2326,7 @@ def test_resend_owner_invitation(self) -> None:
invitee, ["Denmark"], invite_as=PreregistrationUser.INVITE_AS["REALM_OWNER"]
)
)
- self.check_sent_emails([invitee], custom_from_name="Zulip")
+ self.check_sent_emails([invitee])
scheduledemail_filter = ScheduledEmail.objects.filter(
address__iexact=invitee, type=ScheduledEmail.INVITATION_REMINDER
)
| Add a setting to disable sender names in email notifications
When sending email notifications, we currently put a user's name in the From header of the email message when relevant. E.g. if I trigger an invitation email to someone, my name will be put in the From header. For some organizations this can be undesirable, because some phishing detection systems (Google's Advanced phishing and malware protection system is the case we've run into) can mark such messages as suspicious - due to the fact that the name of the company's employee is indicated as the sender, even though the email is coming from an unrelated domain.
We should add an organization-level setting `Disable sender names in email notifications` that would disable this behavior and instead use a fixed, generic string in the From field - `Zulip` or `Zulip notifications` are the two ideas that popped up.
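A sketch of what the display-name choice looks like, using the `"(via Zulip)"` format from the patch above and the proposed generic string; the helper name and the boolean setting here are hypothetical, not the real Zulip API:

```python
from email.utils import formataddr

def from_header(referrer_name: str, address: str, hide_sender_name: bool) -> str:
    # With the proposed setting enabled, always use a fixed generic display
    # name, so phishing filters never see an employee's name paired with an
    # external-domain address.
    if hide_sender_name:
        display = "Zulip notifications"
    else:
        display = f"{referrer_name} (via Zulip)"
    # formataddr quotes the display name when it contains specials such as
    # parentheses, producing an RFC-compliant From value.
    return formataddr((display, address))
```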
| Hello @zulip/server-development, @zulip/server-settings members, this issue was labeled with the "area: emails", "area: settings (admin/org)" labels, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim
Hello @tushar912, you claimed this issue to work on it, but this issue and any referenced pull requests haven't been updated for 10 days. Are you still working on this issue?
If so, please update this issue by leaving a comment on this issue to let me know that you're still working on it. Otherwise, I'll automatically remove you from this issue in 4 days.
If you've decided to work on something else, simply comment `@zulipbot abandon` so that someone else can claim it and continue from where you left off.
Thank you for your valuable contributions to Zulip!
<!-- inactiveWarning -->
@sahil839 this might be a good project for you to pick up -- we've been seeing this issue an increasing amount of late.
Okay, will start working on this along with other issues.
@zulipbot claim | 2021-05-25T06:58:25 |
zulip/zulip | 18,598 | zulip__zulip-18598 | [
"18580"
] | 0e42a3f11729b61f5168a90ef903557c3d4265be | diff --git a/zerver/webhooks/pivotal/view.py b/zerver/webhooks/pivotal/view.py
--- a/zerver/webhooks/pivotal/view.py
+++ b/zerver/webhooks/pivotal/view.py
@@ -72,6 +72,7 @@ def get_text(attrs: List[str]) -> str:
"story_delete_activity",
"story_move_into_project_activity",
"epic_update_activity",
+ "label_create_activity",
]
| Pivotal integration exception
Hi,
I've added Pivotal integration and from time to time I receive those two e-mails when working in Pivotal:
I'm running ubuntu 20.04
If you need more information, I'd be happy to help.
```
Logger django.request, from module django.utils.log line 224:
Error generated by PivotalMessenger <pivotal-bot@***> (Member) on *** deployment
No stack trace available
Deployed code:
- git: None
- ZULIP_VERSION: 4.2
Request info:
- path: /api/v1/external/pivotal
- POST: {}
- REMOTE_ADDR: "35.184.18.147"
- QUERY_STRING: "api_key=******&stream=******&topic=******"
- SERVER_NAME: ""
```
```
Logger zerver.middleware.json_error_handler, from module zerver.middleware line 450:
Error generated by PivotalMessenger <pivotal-bot@***> (Member) on *** deployment
Traceback (most recent call last):
File "/usr/lib/python3.8/xml/etree/ElementTree.py", line 1693, in feed
self.parser.Parse(data, 0)
xml.parsers.expat.ExpatError: not well-formed (invalid token): line 1, column 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./zerver/webhooks/pivotal/view.py", line 172, in api_pivotal_webhook
subject, content = api_pivotal_webhook_v3(request, user_profile)
File "./zerver/webhooks/pivotal/view.py", line 19, in api_pivotal_webhook_v3
payload = xml_fromstring(request.body)
File "/srv/zulip-venv-cache/9d0f5ac272f4e644b222ed65b0b5a996616a215f/zulip-py3-venv/lib/python3.8/site-packages/defusedxml/common.py", line 131, in fromstring
parser.feed(text)
File "/usr/lib/python3.8/xml/etree/ElementTree.py", line 1695, in feed
self._raiseerror(v)
File "/usr/lib/python3.8/xml/etree/ElementTree.py", line 1602, in _raiseerror
raise err
File "<string>", line None
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 1, column 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/srv/zulip-venv-cache/9d0f5ac272f4e644b222ed65b0b5a996616a215f/zulip-py3-venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/srv/zulip-venv-cache/9d0f5ac272f4e644b222ed65b0b5a996616a215f/zulip-py3-venv/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "./zerver/lib/request.py", line 390, in _wrapped_view_func
return view_func(request, *args, **kwargs)
File "./zerver/decorator.py", line 354, in _wrapped_func_arguments
raise err
File "./zerver/decorator.py", line 334, in _wrapped_func_arguments
return view_func(request, user_profile, *args, **kwargs)
File "./zerver/lib/request.py", line 390, in _wrapped_view_func
return view_func(request, *args, **kwargs)
File "./zerver/webhooks/pivotal/view.py", line 175, in api_pivotal_webhook
subject, content = api_pivotal_webhook_v5(request, user_profile)
File "./zerver/webhooks/pivotal/view.py", line 87, in api_pivotal_webhook_v5
story_url = primary_resources["url"]
KeyError: 'url'
Deployed code:
- git: None
- ZULIP_VERSION: 4.2
Request info:
- path: /api/v1/external/pivotal
- POST: {}
- REMOTE_ADDR: "35.184.18.147"
- QUERY_STRING: "api_key=******&stream=******&topic=******"
- SERVER_NAME: ""
```
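The fix in the patch above simply adds `"label_create_activity"` to a list of Pivotal activity kinds that the webhook ignores rather than alerting on. A self-contained sketch of that pattern (the function and payload shape here are illustrative, not the actual Zulip handler):

```python
UNSUPPORTED_EVENT_TYPES = frozenset([
    "story_delete_activity",
    "epic_update_activity",
    "label_create_activity",  # the event type from this report
])

def handle_event(payload: dict):
    kind = payload.get("kind")
    if kind in UNSUPPORTED_EVENT_TYPES:
        # Ignore silently instead of raising and emailing the admins.
        return None
    return f"notify: {kind}"
```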
| Thanks for the report @vrozkovec! Error payloads from webhooks should be logged to `/var/log/zulip/webhooks_errors.log`; can you capture that to figure out what event type is causing the exception?
(Sharing the whole payload, potentially after anonymizing any data with `example.com` type values, is ideal for being able to fix it quickly, but even just knowing the event type makes it a lot easier to reproduce)
Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
<!-- areaLabelAddition -->
Hi @timabbott, here are few last payloads:
[log.txt](https://github.com/zulip/zulip/files/6536711/log.txt)
@vrozkovec thanks! That should make it possible for someone to investigate and fix this.
@zulipbot claim
@vrozkovec are you wanting to alert on this event or not? As we have a list of a bunch of unsupported events that we just ignore, we can just add to this if you aren't specifically wanting to alert on this event.
As the content of that payload seems pretty useless to be fair. | 2021-05-25T21:30:57 |
|
zulip/zulip | 18,606 | zulip__zulip-18606 | [
"18604"
] | d37ddf13a42dee2e7b545dde0c02bebfeeef8608 | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -2703,6 +2703,39 @@ def check_schedule_message(
return do_schedule_messages([send_request])[0]
+def validate_message_edit_payload(
+ message: Message,
+ stream_id: Optional[int],
+ topic_name: Optional[str],
+ propagate_mode: Optional[str],
+ content: Optional[str],
+) -> None:
+ """
+ Checks that the data sent is well-formed. Does not handle editability, permissions etc.
+ """
+ if topic_name is None and content is None and stream_id is None:
+ raise JsonableError(_("Nothing to change"))
+
+ if not message.is_stream_message():
+ if stream_id is not None:
+ raise JsonableError(_("Private messages cannot be moved to streams."))
+ if topic_name is not None:
+ raise JsonableError(_("Private messages cannot have topics."))
+
+ if propagate_mode != "change_one" and topic_name is None and stream_id is None:
+ raise JsonableError(_("Invalid propagate_mode without topic edit"))
+
+ if topic_name == "":
+ raise JsonableError(_("Topic can't be empty"))
+
+ if stream_id is not None and content is not None:
+ raise JsonableError(_("Cannot change message content while changing stream"))
+
+ # Right now, we prevent users from editing widgets.
+ if content is not None and is_widget_message(message):
+ raise JsonableError(_("Widgets cannot be edited."))
+
+
def check_update_message(
user_profile: UserProfile,
message_id: int,
@@ -2718,13 +2751,19 @@ def check_update_message(
and raises a JsonableError if otherwise.
It returns the number changed.
"""
+ message, ignored_user_message = access_message(user_profile, message_id)
+
if not user_profile.realm.allow_message_editing:
raise JsonableError(_("Your organization has turned off message editing"))
- if propagate_mode != "change_one" and topic_name is None and stream_id is None:
- raise JsonableError(_("Invalid propagate_mode without topic edit"))
+ # The zerver/views/message_edit.py callpoint already strips this
+ # via REQ_topic; so we can delete this line if we arrange a
+ # contract where future callers in the embedded bots system strip
+ # use REQ_topic as well (or otherwise are guaranteed to strip input).
+ if topic_name is not None:
+ topic_name = topic_name.strip()
+ validate_message_edit_payload(message, stream_id, topic_name, propagate_mode, content)
- message, ignored_user_message = access_message(user_profile, message_id)
is_no_topic_msg = message.topic_name() == "(no topic)"
# You only have permission to edit a message if:
@@ -2744,10 +2783,6 @@ def check_update_message(
else:
raise JsonableError(_("You don't have permission to edit this message"))
- # Right now, we prevent users from editing widgets.
- if content is not None and is_widget_message(message):
- raise JsonableError(_("Widgets cannot be edited."))
-
# If there is a change to the content, check that it hasn't been too long
# Allow an extra 20 seconds since we potentially allow editing 15 seconds
# past the limit, and in case there are network issues, etc. The 15 comes
@@ -2773,12 +2808,6 @@ def check_update_message(
if (timezone_now() - message.date_sent) > datetime.timedelta(seconds=deadline_seconds):
raise JsonableError(_("The time limit for editing this message has passed"))
- if topic_name is None and content is None and stream_id is None:
- raise JsonableError(_("Nothing to change"))
- if topic_name is not None:
- topic_name = topic_name.strip()
- if topic_name == "":
- raise JsonableError(_("Topic can't be empty"))
rendered_content = None
links_for_embed: Set[str] = set()
prior_mention_user_ids: Set[int] = set()
@@ -2815,8 +2844,7 @@ def check_update_message(
number_changed = 0
if stream_id is not None:
- if not message.is_stream_message():
- raise JsonableError(_("Message must be a stream message"))
+ assert message.is_stream_message()
if not user_profile.can_move_messages_between_streams():
raise JsonableError(_("You don't have permission to move this message"))
try:
@@ -2827,8 +2855,6 @@ def check_update_message(
"You don't have permission to move this message due to missing access to its stream"
)
)
- if content is not None:
- raise JsonableError(_("Cannot change message content while changing stream"))
new_stream = access_stream_by_id(user_profile, stream_id, require_active=True)[0]
check_stream_access_based_on_stream_post_policy(user_profile, new_stream)
| diff --git a/zerver/tests/test_message_edit.py b/zerver/tests/test_message_edit.py
--- a/zerver/tests/test_message_edit.py
+++ b/zerver/tests/test_message_edit.py
@@ -22,7 +22,7 @@
from zerver.models import Message, Realm, Stream, UserMessage, UserProfile, get_realm, get_stream
-class EditMessageTest(ZulipTestCase):
+class EditMessageTestCase(ZulipTestCase):
def check_topic(self, msg_id: int, topic_name: str) -> None:
msg = Message.objects.get(id=msg_id)
self.assertEqual(msg.topic_name(), topic_name)
@@ -74,6 +74,155 @@ def check_message(self, msg_id: int, topic_name: str, content: str) -> None:
orjson.loads(msg.edit_history),
)
+ def prepare_move_topics(
+ self, user_email: str, old_stream: str, new_stream: str, topic: str
+ ) -> Tuple[UserProfile, Stream, Stream, int, int]:
+ user_profile = self.example_user(user_email)
+ self.login(user_email)
+ stream = self.make_stream(old_stream)
+ new_stream = self.make_stream(new_stream)
+ self.subscribe(user_profile, stream.name)
+ self.subscribe(user_profile, new_stream.name)
+ msg_id = self.send_stream_message(
+ user_profile, stream.name, topic_name=topic, content="First"
+ )
+ msg_id_lt = self.send_stream_message(
+ user_profile, stream.name, topic_name=topic, content="Second"
+ )
+
+ self.send_stream_message(user_profile, stream.name, topic_name=topic, content="third")
+
+ return (user_profile, stream, new_stream, msg_id, msg_id_lt)
+
+
+class EditMessagePayloadTest(EditMessageTestCase):
+ def test_edit_message_no_changes(self) -> None:
+ self.login("hamlet")
+ msg_id = self.send_stream_message(
+ self.example_user("hamlet"), "Scotland", topic_name="editing", content="before edit"
+ )
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ },
+ )
+ self.assert_json_error(result, "Nothing to change")
+
+ def test_move_message_cant_move_private_message(self) -> None:
+ hamlet = self.example_user("hamlet")
+ self.login("hamlet")
+ cordelia = self.example_user("cordelia")
+ msg_id = self.send_personal_message(hamlet, cordelia)
+
+ verona = get_stream("Verona", hamlet.realm)
+
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ "stream_id": verona.id,
+ },
+ )
+
+ self.assert_json_error(result, "Private messages cannot be moved to streams.")
+
+ def test_private_message_edit_topic(self) -> None:
+ hamlet = self.example_user("hamlet")
+ self.login("hamlet")
+ cordelia = self.example_user("cordelia")
+ msg_id = self.send_personal_message(hamlet, cordelia)
+
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ "topic": "Should not exist",
+ },
+ )
+
+ self.assert_json_error(result, "Private messages cannot have topics.")
+
+ def test_propagate_invalid(self) -> None:
+ self.login("hamlet")
+ id1 = self.send_stream_message(self.example_user("hamlet"), "Scotland", topic_name="topic1")
+
+ result = self.client_patch(
+ "/json/messages/" + str(id1),
+ {
+ "topic": "edited",
+ "propagate_mode": "invalid",
+ },
+ )
+ self.assert_json_error(result, "Invalid propagate_mode")
+ self.check_topic(id1, topic_name="topic1")
+
+ result = self.client_patch(
+ "/json/messages/" + str(id1),
+ {
+ "content": "edited",
+ "propagate_mode": "change_all",
+ },
+ )
+ self.assert_json_error(result, "Invalid propagate_mode without topic edit")
+ self.check_topic(id1, topic_name="topic1")
+
+ def test_edit_message_no_topic(self) -> None:
+ self.login("hamlet")
+ msg_id = self.send_stream_message(
+ self.example_user("hamlet"), "Scotland", topic_name="editing", content="before edit"
+ )
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ "topic": " ",
+ },
+ )
+ self.assert_json_error(result, "Topic can't be empty")
+
+ def test_move_message_to_stream_with_content(self) -> None:
+ (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
+ "iago", "test move stream", "new stream", "test"
+ )
+
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ "stream_id": new_stream.id,
+ "propagate_mode": "change_all",
+ "content": "Not allowed",
+ },
+ )
+ self.assert_json_error(result, "Cannot change message content while changing stream")
+
+ messages = get_topic_messages(user_profile, old_stream, "test")
+ self.assert_length(messages, 3)
+
+ messages = get_topic_messages(user_profile, new_stream, "test")
+ self.assert_length(messages, 0)
+
+ # Right now, we prevent users from editing widgets.
+ def test_edit_submessage(self) -> None:
+ self.login("hamlet")
+ msg_id = self.send_stream_message(
+ self.example_user("hamlet"),
+ "Scotland",
+ topic_name="editing",
+ content="/poll Games?\nYES\nNO",
+ )
+ result = self.client_patch(
+ "/json/messages/" + str(msg_id),
+ {
+ "message_id": msg_id,
+ "content": "/poll Games?\nYES\nNO\nMaybe",
+ },
+ )
+ self.assert_json_error(result, "Widgets cannot be edited.")
+
+
+class EditMessageTest(EditMessageTestCase):
def test_query_count_on_to_dict_uncached(self) -> None:
# `to_dict_uncached` method is used by the mechanisms
# tested in this class. Hence, its performance is tested here.
@@ -189,24 +338,6 @@ def test_fetch_raw_message_private_stream(self) -> None:
result = self.client_get("/json/messages/" + str(msg_id))
self.assert_json_error(result, "Invalid message(s)")
- # Right now, we prevent users from editing widgets.
- def test_edit_submessage(self) -> None:
- self.login("hamlet")
- msg_id = self.send_stream_message(
- self.example_user("hamlet"),
- "Scotland",
- topic_name="editing",
- content="/poll Games?\nYES\nNO",
- )
- result = self.client_patch(
- "/json/messages/" + str(msg_id),
- {
- "message_id": msg_id,
- "content": "/poll Games?\nYES\nNO\nMaybe",
- },
- )
- self.assert_json_error(result, "Widgets cannot be edited.")
-
def test_edit_message_no_permission(self) -> None:
self.login("hamlet")
msg_id = self.send_stream_message(
@@ -221,33 +352,6 @@ def test_edit_message_no_permission(self) -> None:
)
self.assert_json_error(result, "You don't have permission to edit this message")
- def test_edit_message_no_changes(self) -> None:
- self.login("hamlet")
- msg_id = self.send_stream_message(
- self.example_user("hamlet"), "Scotland", topic_name="editing", content="before edit"
- )
- result = self.client_patch(
- "/json/messages/" + str(msg_id),
- {
- "message_id": msg_id,
- },
- )
- self.assert_json_error(result, "Nothing to change")
-
- def test_edit_message_no_topic(self) -> None:
- self.login("hamlet")
- msg_id = self.send_stream_message(
- self.example_user("hamlet"), "Scotland", topic_name="editing", content="before edit"
- )
- result = self.client_patch(
- "/json/messages/" + str(msg_id),
- {
- "message_id": msg_id,
- "topic": " ",
- },
- )
- self.assert_json_error(result, "Topic can't be empty")
-
def test_edit_message_no_content(self) -> None:
self.login("hamlet")
msg_id = self.send_stream_message(
@@ -1072,50 +1176,6 @@ def test_propagate_all_topics_with_different_uppercase_letters(self) -> None:
self.check_topic(id3, topic_name="topiC1")
self.check_topic(id4, topic_name="edited")
- def test_propagate_invalid(self) -> None:
- self.login("hamlet")
- id1 = self.send_stream_message(self.example_user("hamlet"), "Scotland", topic_name="topic1")
-
- result = self.client_patch(
- "/json/messages/" + str(id1),
- {
- "topic": "edited",
- "propagate_mode": "invalid",
- },
- )
- self.assert_json_error(result, "Invalid propagate_mode")
- self.check_topic(id1, topic_name="topic1")
-
- result = self.client_patch(
- "/json/messages/" + str(id1),
- {
- "content": "edited",
- "propagate_mode": "change_all",
- },
- )
- self.assert_json_error(result, "Invalid propagate_mode without topic edit")
- self.check_topic(id1, topic_name="topic1")
-
- def prepare_move_topics(
- self, user_email: str, old_stream: str, new_stream: str, topic: str
- ) -> Tuple[UserProfile, Stream, Stream, int, int]:
- user_profile = self.example_user(user_email)
- self.login(user_email)
- stream = self.make_stream(old_stream)
- new_stream = self.make_stream(new_stream)
- self.subscribe(user_profile, stream.name)
- self.subscribe(user_profile, new_stream.name)
- msg_id = self.send_stream_message(
- user_profile, stream.name, topic_name=topic, content="First"
- )
- msg_id_lt = self.send_stream_message(
- user_profile, stream.name, topic_name=topic, content="Second"
- )
-
- self.send_stream_message(user_profile, stream.name, topic_name=topic, content="third")
-
- return (user_profile, stream, new_stream, msg_id, msg_id_lt)
-
def test_move_message_to_stream(self) -> None:
(user_profile, old_stream, new_stream, msg_id, msg_id_lt) = self.prepare_move_topics(
"iago", "test move stream", "new stream", "test"
@@ -1268,29 +1328,6 @@ def test_move_message_from_private_stream_message_access_checks(
private_stream.recipient_id,
)
- def test_move_message_cant_move_private_message(
- self,
- ) -> None:
- user_profile = self.example_user("iago")
- self.assertEqual(user_profile.role, UserProfile.ROLE_REALM_ADMINISTRATOR)
- self.login("iago")
-
- hamlet = self.example_user("hamlet")
- msg_id = self.send_personal_message(user_profile, hamlet)
-
- verona = get_stream("Verona", user_profile.realm)
-
- result = self.client_patch(
- "/json/messages/" + str(msg_id),
- {
- "message_id": msg_id,
- "stream_id": verona.id,
- "propagate_mode": "change_all",
- },
- )
-
- self.assert_json_error(result, "Message must be a stream message")
-
def test_move_message_to_stream_change_later(self) -> None:
(user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
"iago", "test move stream", "new stream", "test"
@@ -1491,28 +1528,6 @@ def check_move_message_to_stream(role: int, error_msg: Optional[str] = None) ->
)
check_move_message_to_stream(UserProfile.ROLE_MEMBER)
- def test_move_message_to_stream_with_content(self) -> None:
- (user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
- "iago", "test move stream", "new stream", "test"
- )
-
- result = self.client_patch(
- "/json/messages/" + str(msg_id),
- {
- "message_id": msg_id,
- "stream_id": new_stream.id,
- "propagate_mode": "change_all",
- "content": "Not allowed",
- },
- )
- self.assert_json_error(result, "Cannot change message content while changing stream")
-
- messages = get_topic_messages(user_profile, old_stream, "test")
- self.assert_length(messages, 3)
-
- messages = get_topic_messages(user_profile, new_stream, "test")
- self.assert_length(messages, 0)
-
def test_move_message_to_stream_and_topic(self) -> None:
(user_profile, old_stream, new_stream, msg_id, msg_id_later) = self.prepare_move_topics(
"iago", "test move stream", "new stream", "test"
| Passing a 'topic_name' when editing a private message using API raises AssertionError
When a user passes `topic_name` while editing a private message using the API, the second assertion below fails (for a private message, `stream_being_edited` is `None`), raising an AssertionError.
```python
update_edit_history(target_message, timestamp, edit_history_event)
delete_event_notify_user_ids: List[int] = []
if propagate_mode in ["change_later", "change_all"]:
assert topic_name is not None or new_stream is not None
assert stream_being_edited is not None
```
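A minimal sketch (not the actual Zulip implementation) of validating the edit request up front, so that a private-message edit with a `topic_name` produces a client error — using the error strings the test suite expects — instead of tripping the assertion. `JsonableError` here is a stand-in for Zulip's 4xx error class.

```python
from typing import Optional


class JsonableError(Exception):
    """Stand-in for Zulip's JsonableError (rendered as a 4xx JSON response)."""


def check_message_edit(
    propagate_mode: str,
    topic_name: Optional[str],
    new_stream_id: Optional[int],
    stream_being_edited: Optional[int],
) -> None:
    """Reject edit parameters that only make sense for stream messages."""
    if propagate_mode not in ("change_one", "change_later", "change_all"):
        raise JsonableError("Invalid propagate_mode")
    if propagate_mode != "change_one" and topic_name is None and new_stream_id is None:
        # Propagating an edit requires a topic or stream change.
        raise JsonableError("Invalid propagate_mode without topic edit")
    if (topic_name is not None or new_stream_id is not None) and stream_being_edited is None:
        # The message being edited is a private message, not a stream message.
        raise JsonableError("Message must be a stream message")
```

With a guard like this, the `assert stream_being_edited is not None` in the propagation code becomes unreachable for private messages.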
| Tagging as a priority because it's a 500 error in production, and we have a goal of having none of those. | 2021-05-26T10:37:19 |
zulip/zulip | 18,655 | zulip__zulip-18655 | [
"17786"
] | bf179b7d2f04e8085277b9e6f6fc7651cea06d34 | diff --git a/zerver/views/registration.py b/zerver/views/registration.py
--- a/zerver/views/registration.py
+++ b/zerver/views/registration.py
@@ -534,7 +534,10 @@ def send_confirm_registration_email(
to_emails=[email],
from_address=FromAddress.tokenized_no_reply_address(),
language=language,
- context={"activate_url": activation_url},
+ context={
+ "create_realm": (realm is None),
+ "activate_url": activation_url,
+ },
realm=realm,
)
| diff --git a/zerver/lib/test_classes.py b/zerver/lib/test_classes.py
--- a/zerver/lib/test_classes.py
+++ b/zerver/lib/test_classes.py
@@ -649,6 +649,8 @@ def get_confirmation_url_from_outbox(
email_address: str,
*,
url_pattern: Optional[str] = None,
+ email_subject_contains: Optional[str] = None,
+ email_body_contains: Optional[str] = None,
) -> str:
from django.core.mail import outbox
@@ -661,6 +663,13 @@ def get_confirmation_url_from_outbox(
):
match = re.search(url_pattern, message.body)
assert match is not None
+
+ if email_subject_contains:
+ self.assertIn(email_subject_contains, message.subject)
+
+ if email_body_contains:
+ self.assertIn(email_body_contains, message.body)
+
[confirmation_url] = match.groups()
return confirmation_url
else:
diff --git a/zerver/tests/test_signup.py b/zerver/tests/test_signup.py
--- a/zerver/tests/test_signup.py
+++ b/zerver/tests/test_signup.py
@@ -2769,8 +2769,13 @@ def check_able_to_create_realm(self, email: str, password: str = "test") -> None
result = self.client_get(result["Location"])
self.assert_in_response("Check your email so we can get started.", result)
- # Visit the confirmation link.
- confirmation_url = self.get_confirmation_url_from_outbox(email)
+ # Check confirmation email has the correct subject and body, extract
+ # confirmation link and visit it
+ confirmation_url = self.get_confirmation_url_from_outbox(
+ email,
+ email_subject_contains="Create your Zulip organization",
+ email_body_contains="You have requested a new Zulip organization",
+ )
result = self.client_get(confirmation_url)
self.assertEqual(result.status_code, 200)
| Customize "Activate your Zulip account" email when creating a new org
The email generated for creating a new org does not say anywhere that you'll be creating a new org.

We should probably change:
* subject line -> "Create your Zulip organization"
* text above the button
Draft text (minimally different from current version):
> You have requested a new Zulip organization. Awesome!
> Click the button below to create the organization and register your account.
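The real change threads a `create_realm` flag into the Django email template context (as in the patch above); this standalone helper only illustrates the intended branching, with both subject strings taken from this issue.

```python
def confirmation_email_subject(create_realm: bool) -> str:
    """Pick the confirmation email subject based on whether the link
    creates a new organization or activates an account in an existing one."""
    if create_realm:
        return "Create your Zulip organization"
    return "Activate your Zulip account"
```

The test patch correspondingly checks that the realm-creation email's subject contains "Create your Zulip organization".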
| Hello @zulip/server-development, @zulip/server-onboarding members, this issue was labeled with the "area: emails", "area: onboarding" labels, so you may want to check it out!
Thanks for the report! That'd be a nice improvement.
@zulipbot claim
Welcome to Zulip, @gupta-piyush19! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
@timabbott can you please help me find the code for this? It is a really large codebase, and I am not able to find it.
@gupta-piyush19 Zulip's only a medium-size codebase. But you can generally find anything in a codebase using `git grep` -- e.g. `git grep 'Complete registration'`:
```
...
templates/zerver/emails/confirm_registration.source.html: <a class="button" href="{{ activate_url }}">{{ _('Complete registration') }}</a>
templates/zerver/emails/invitation.source.html: <a class="button" href="{{ activate_url }}">{{ _("Complete registration") }}</a>
templates/zerver/emails/invitation_reminder.source.html: <a class="button" href="{{ activate_url }}">{{ _("Complete registration") }}</a>
```
Check out our [GitHub guide](https://zulip.readthedocs.io/en/latest/git-guide.html) for more useful advice.
Hello Everyone, I am new here #newmember. I am Roushan Kumar from India. Please guide me, I am interested in working on this project.
@zulipbot claim
Hello @weilirs, it looks like someone has already claimed this issue! Since we believe multiple assignments to the same issue may cause some confusion, we encourage you to search for other unclaimed issues to work on. However, you can always reclaim this issue if no one is working on it.
We look forward to your valuable contributions!
Hello @gupta-piyush19, you claimed this issue to work on it, but this issue and any referenced pull requests haven't been updated for 10 days. Are you still working on this issue?
If so, please update this issue by leaving a comment on this issue to let me know that you're still working on it. Otherwise, I'll automatically remove you from this issue in 4 days.
If you've decided to work on something else, simply comment `@zulipbot abandon` so that someone else can claim it and continue from where you left off.
Thank you for your valuable contributions to Zulip!
@zulipbot claim
@zulipbot claim
Hello @stefanieelling, it looks like we've already sent you a collaboration invite at https://github.com/zulip/zulip/invitations, but you haven't accepted it yet!
Please accept the invite and try to claim this issue again afterwards. We look forward to your contributions!
> Hello @stefanieelling, it looks like we've already sent you a collaboration invite at https://github.com/zulip/zulip/invitations, but you haven't accepted it yet!
>
> Please accept the invite and try to claim this issue again afterwards. We look forward to your contributions!
@zulipbot can you resend the invitation please?
@zulipbot claim
Welcome to Zulip, @lawynnj! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
@zulipbot claim
@zulipbot claim
@zulipbot claim
Welcome to Zulip, @MayurDeshmukh10! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
@zulipbot claim
Welcome to Zulip, @gilbertbw! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
See you on the other side (that is, the pull request side)! | 2021-05-31T17:39:01 |
zulip/zulip | 18,775 | zulip__zulip-18775 | [
"18687"
] | 0eb33b70ddc6eced84283581bb3509c6052cd563 | diff --git a/tools/linter_lib/custom_check.py b/tools/linter_lib/custom_check.py
--- a/tools/linter_lib/custom_check.py
+++ b/tools/linter_lib/custom_check.py
@@ -212,6 +212,12 @@
"good_lines": ["#my-style {color: blue;}"],
"bad_lines": ['<p style="color: blue;">Foo</p>', 'style = "color: blue;"'],
},
+ {
+ "pattern": r"assert\(",
+ "description": "Use 'assert.ok' instead of 'assert'. We avoid the use of 'assert' as it can easily be confused with 'assert.equal'.",
+ "good_lines": ["assert.ok(...)"],
+ "bad_lines": ["assert(...)"],
+ },
*whitespace_rules,
],
)
| diff --git a/frontend_tests/node_tests/activity.js b/frontend_tests/node_tests/activity.js
--- a/frontend_tests/node_tests/activity.js
+++ b/frontend_tests/node_tests/activity.js
@@ -340,7 +340,7 @@ test("handlers", (override) => {
narrowed = false;
activity.user_cursor.go_to(alice.user_id);
filter_key_handlers.Enter();
- assert(narrowed);
+ assert.ok(narrowed);
// get line coverage for cleared case
activity.user_cursor.clear();
@@ -353,7 +353,7 @@ test("handlers", (override) => {
// so this just tests the called function.
narrowed = false;
activity.narrow_for_user({li: alice_li});
- assert(narrowed);
+ assert.ok(narrowed);
})();
(function test_blur_filter() {
@@ -393,8 +393,8 @@ test("insert_one_user_into_empty_list", (override) => {
});
activity.redraw_user(alice.user_id);
- assert(appended_html.indexOf('data-user-id="1"') > 0);
- assert(appended_html.indexOf("user_circle_green") > 0);
+ assert.ok(appended_html.indexOf('data-user-id="1"') > 0);
+ assert.ok(appended_html.indexOf("user_circle_green") > 0);
});
test("insert_alice_then_fred", (override) => {
@@ -405,12 +405,12 @@ test("insert_alice_then_fred", (override) => {
override(padded_widget, "update_padding", () => {});
activity.redraw_user(alice.user_id);
- assert(appended_html.indexOf('data-user-id="1"') > 0);
- assert(appended_html.indexOf("user_circle_green") > 0);
+ assert.ok(appended_html.indexOf('data-user-id="1"') > 0);
+ assert.ok(appended_html.indexOf("user_circle_green") > 0);
activity.redraw_user(fred.user_id);
- assert(appended_html.indexOf('data-user-id="2"') > 0);
- assert(appended_html.indexOf("user_circle_green") > 0);
+ assert.ok(appended_html.indexOf('data-user-id="2"') > 0);
+ assert.ok(appended_html.indexOf("user_circle_green") > 0);
});
test("insert_fred_then_alice_then_rename", (override) => {
@@ -421,8 +421,8 @@ test("insert_fred_then_alice_then_rename", (override) => {
override(padded_widget, "update_padding", () => {});
activity.redraw_user(fred.user_id);
- assert(appended_html.indexOf('data-user-id="2"') > 0);
- assert(appended_html.indexOf("user_circle_green") > 0);
+ assert.ok(appended_html.indexOf('data-user-id="2"') > 0);
+ assert.ok(appended_html.indexOf("user_circle_green") > 0);
const fred_stub = $.create("fred-first");
buddy_list_add(fred.user_id, fred_stub);
@@ -438,8 +438,8 @@ test("insert_fred_then_alice_then_rename", (override) => {
};
activity.redraw_user(alice.user_id);
- assert(inserted_html.indexOf('data-user-id="1"') > 0);
- assert(inserted_html.indexOf("user_circle_green") > 0);
+ assert.ok(inserted_html.indexOf('data-user-id="1"') > 0);
+ assert.ok(inserted_html.indexOf("user_circle_green") > 0);
// Next rename fred to Aaron.
const fred_with_new_name = {
@@ -457,8 +457,8 @@ test("insert_fred_then_alice_then_rename", (override) => {
};
activity.redraw_user(fred_with_new_name.user_id);
- assert(fred_removed);
- assert(appended_html.indexOf('data-user-id="2"') > 0);
+ assert.ok(fred_removed);
+ assert.ok(appended_html.indexOf('data-user-id="2"') > 0);
// restore old Fred data
people.add_active_user(fred);
@@ -516,12 +516,12 @@ test("update_presence_info", (override) => {
presence.presence_info.delete(me.user_id);
activity.update_presence_info(me.user_id, info, server_time);
- assert(inserted);
+ assert.ok(inserted);
assert.deepEqual(presence.presence_info.get(me.user_id).status, "active");
presence.presence_info.delete(alice.user_id);
activity.update_presence_info(alice.user_id, info, server_time);
- assert(inserted);
+ assert.ok(inserted);
const expected = {status: "active", last_active: 500};
assert.deepEqual(presence.presence_info.get(alice.user_id), expected);
@@ -563,9 +563,9 @@ test("initialize", (override) => {
$(window).trigger("focus");
clear();
- assert(scroll_handler_started);
- assert(!activity.new_user_input);
- assert(!$("#zephyr-mirror-error").hasClass("show"));
+ assert.ok(scroll_handler_started);
+ assert.ok(!activity.new_user_input);
+ assert.ok(!$("#zephyr-mirror-error").hasClass("show"));
assert.equal(activity.compute_active_status(), "active");
$(window).idle = (params) => {
@@ -581,8 +581,8 @@ test("initialize", (override) => {
presences: {},
});
- assert($("#zephyr-mirror-error").hasClass("show"));
- assert(!activity.new_user_input);
+ assert.ok($("#zephyr-mirror-error").hasClass("show"));
+ assert.ok(!activity.new_user_input);
assert.equal(activity.compute_active_status(), "idle");
// Exercise the mousemove handler, which just
@@ -596,11 +596,11 @@ test("away_status", (override) => {
override(pm_list, "update_private_messages", () => {});
override(buddy_list, "insert_or_move", () => {});
- assert(!user_status.is_away(alice.user_id));
+ assert.ok(!user_status.is_away(alice.user_id));
activity.on_set_away(alice.user_id);
- assert(user_status.is_away(alice.user_id));
+ assert.ok(user_status.is_away(alice.user_id));
activity.on_revoke_away(alice.user_id);
- assert(!user_status.is_away(alice.user_id));
+ assert.ok(!user_status.is_away(alice.user_id));
});
test("electron_bridge", (override) => {
diff --git a/frontend_tests/node_tests/alert_words.js b/frontend_tests/node_tests/alert_words.js
--- a/frontend_tests/node_tests/alert_words.js
+++ b/frontend_tests/node_tests/alert_words.js
@@ -88,15 +88,15 @@ const message_with_emoji = {
};
run_test("notifications", () => {
- assert(!alert_words.notifies(regular_message));
- assert(!alert_words.notifies(own_message));
- assert(alert_words.notifies(other_message));
- assert(alert_words.notifies(caps_message));
- assert(!alert_words.notifies(alertwordboundary_message));
- assert(alert_words.notifies(multialert_message));
- assert(alert_words.notifies(unsafe_word_message));
- assert(alert_words.notifies(alert_domain_message));
- assert(alert_words.notifies(message_with_emoji));
+ assert.ok(!alert_words.notifies(regular_message));
+ assert.ok(!alert_words.notifies(own_message));
+ assert.ok(alert_words.notifies(other_message));
+ assert.ok(alert_words.notifies(caps_message));
+ assert.ok(!alert_words.notifies(alertwordboundary_message));
+ assert.ok(alert_words.notifies(multialert_message));
+ assert.ok(alert_words.notifies(unsafe_word_message));
+ assert.ok(alert_words.notifies(alert_domain_message));
+ assert.ok(alert_words.notifies(message_with_emoji));
});
run_test("munging", () => {
@@ -161,10 +161,10 @@ run_test("munging", () => {
run_test("basic get/set operations", () => {
alert_words.initialize({alert_words: []});
- assert(!alert_words.has_alert_word("breakfast"));
- assert(!alert_words.has_alert_word("lunch"));
+ assert.ok(!alert_words.has_alert_word("breakfast"));
+ assert.ok(!alert_words.has_alert_word("lunch"));
alert_words.set_words(["breakfast", "lunch"]);
- assert(alert_words.has_alert_word("breakfast"));
- assert(alert_words.has_alert_word("lunch"));
- assert(!alert_words.has_alert_word("dinner"));
+ assert.ok(alert_words.has_alert_word("breakfast"));
+ assert.ok(alert_words.has_alert_word("lunch"));
+ assert.ok(!alert_words.has_alert_word("dinner"));
});
diff --git a/frontend_tests/node_tests/alert_words_ui.js b/frontend_tests/node_tests/alert_words_ui.js
--- a/frontend_tests/node_tests/alert_words_ui.js
+++ b/frontend_tests/node_tests/alert_words_ui.js
@@ -36,12 +36,12 @@ run_test("render_alert_words_ui", () => {
});
const new_alert_word = $("#create_alert_word_name");
- assert(!new_alert_word.is_focused());
+ assert.ok(!new_alert_word.is_focused());
alert_words_ui.render_alert_words_ui();
assert.deepEqual(appended, ["stub-bar", "stub-foo"]);
- assert(new_alert_word.is_focused());
+ assert.ok(new_alert_word.is_focused());
});
run_test("add_alert_word", (override) => {
@@ -60,17 +60,17 @@ run_test("add_alert_word", (override) => {
// add '' as alert word
add_func();
assert.equal(new_alert_word.val(), "");
- assert(alert_word_status.hasClass("alert-danger"));
+ assert.ok(alert_word_status.hasClass("alert-danger"));
assert.equal(alert_word_status_text.text(), "translated: Alert word can't be empty!");
- assert(alert_word_status.visible());
+ assert.ok(alert_word_status.visible());
// add 'foo' as alert word (existing word)
new_alert_word.val("foo");
add_func();
- assert(alert_word_status.hasClass("alert-danger"));
+ assert.ok(alert_word_status.hasClass("alert-danger"));
assert.equal(alert_word_status_text.text(), "translated: Alert word already exists!");
- assert(alert_word_status.visible());
+ assert.ok(alert_word_status.visible());
// add 'zot' as alert word (new word)
new_alert_word.val("zot");
@@ -88,15 +88,15 @@ run_test("add_alert_word", (override) => {
// test failure
fail_func();
- assert(alert_word_status.hasClass("alert-danger"));
+ assert.ok(alert_word_status.hasClass("alert-danger"));
assert.equal(alert_word_status_text.text(), "translated: Error adding alert word!");
- assert(alert_word_status.visible());
+ assert.ok(alert_word_status.visible());
// test success
success_func();
- assert(alert_word_status.hasClass("alert-success"));
+ assert.ok(alert_word_status.hasClass("alert-success"));
assert.equal(alert_word_status_text.text(), 'translated: Alert word "zot" added successfully!');
- assert(alert_word_status.visible());
+ assert.ok(alert_word_status.visible());
});
run_test("add_alert_word_keypress", (override) => {
@@ -122,7 +122,7 @@ run_test("add_alert_word_keypress", (override) => {
};
keypress_func(event);
- assert(called);
+ assert.ok(called);
});
run_test("remove_alert_word", (override) => {
@@ -161,15 +161,15 @@ run_test("remove_alert_word", (override) => {
// test failure
fail_func();
- assert(alert_word_status.hasClass("alert-danger"));
+ assert.ok(alert_word_status.hasClass("alert-danger"));
assert.equal(alert_word_status_text.text(), "translated: Error removing alert word!");
- assert(alert_word_status.visible());
+ assert.ok(alert_word_status.visible());
// test success
success_func();
- assert(alert_word_status.hasClass("alert-success"));
+ assert.ok(alert_word_status.hasClass("alert-success"));
assert.equal(alert_word_status_text.text(), "translated: Alert word removed successfully!");
- assert(alert_word_status.visible());
+ assert.ok(alert_word_status.visible());
});
run_test("close_status_message", (override) => {
@@ -190,7 +190,7 @@ run_test("close_status_message", (override) => {
currentTarget: ".close-alert-word-status",
};
- assert(alert.visible());
+ assert.ok(alert.visible());
close(event);
- assert(!alert.visible());
+ assert.ok(!alert.visible());
});
diff --git a/frontend_tests/node_tests/billing.js b/frontend_tests/node_tests/billing.js
--- a/frontend_tests/node_tests/billing.js
+++ b/frontend_tests/node_tests/billing.js
@@ -76,16 +76,16 @@ run_test("initialize", (override) => {
$.get_initialize_function()();
- assert(set_tab_called);
- assert(stripe_checkout_configure_called);
+ assert.ok(set_tab_called);
+ assert.ok(stripe_checkout_configure_called);
const e = {
preventDefault: () => {},
};
const update_card_click_handler = $("#update-card-button").get_on_handler("click");
with_field(helpers, "create_ajax_request", card_change_ajax, () => {
update_card_click_handler(e);
- assert(create_ajax_request_called);
- assert(open_func_called);
+ assert.ok(create_ajax_request_called);
+ assert.ok(open_func_called);
});
create_ajax_request_called = false;
@@ -103,7 +103,7 @@ run_test("initialize", (override) => {
with_field(helpers, "create_ajax_request", plan_change_ajax, () => {
change_plan_status_click_handler(e);
- assert(create_ajax_request_called);
+ assert.ok(create_ajax_request_called);
});
create_ajax_request_called = false;
@@ -125,7 +125,7 @@ run_test("initialize", (override) => {
}
with_field(helpers, "create_ajax_request", license_change_ajax, () => {
billing.create_update_license_request();
- assert(create_ajax_request_called);
+ assert.ok(create_ajax_request_called);
});
let create_update_license_request_called = false;
@@ -137,7 +137,7 @@ run_test("initialize", (override) => {
"click",
);
confirm_license_update_click_handler(e);
- assert(create_update_license_request_called);
+ assert.ok(create_update_license_request_called);
let confirm_license_modal_shown = false;
override(helpers, "is_valid_input", () => true);
@@ -154,14 +154,14 @@ run_test("initialize", (override) => {
const update_licenses_button_click_handler =
$("#update-licenses-button").get_on_handler("click");
update_licenses_button_click_handler(e);
- assert(create_update_license_request_called);
- assert(!confirm_license_modal_shown);
+ assert.ok(create_update_license_request_called);
+ assert.ok(!confirm_license_modal_shown);
$("#new_licenses_input").val = () => 25;
create_update_license_request_called = false;
update_licenses_button_click_handler(e);
- assert(!create_update_license_request_called);
- assert(confirm_license_modal_shown);
+ assert.ok(!create_update_license_request_called);
+ assert.ok(confirm_license_modal_shown);
override(helpers, "is_valid_input", () => false);
let prevent_default_called = false;
@@ -171,7 +171,7 @@ run_test("initialize", (override) => {
},
};
update_licenses_button_click_handler(event);
- assert(!prevent_default_called);
+ assert.ok(!prevent_default_called);
const update_next_renewal_licenses_button_click_handler = $(
"#update-licenses-at-next-renewal-button",
@@ -195,26 +195,26 @@ run_test("initialize", (override) => {
}
with_field(helpers, "create_ajax_request", licenses_at_next_renewal_change_ajax, () => {
update_next_renewal_licenses_button_click_handler(e);
- assert(create_ajax_request_called);
+ assert.ok(create_ajax_request_called);
});
});
run_test("billing_template", () => {
// Elements necessary for create_ajax_request
- assert(document.querySelector("#cardchange-error"));
- assert(document.querySelector("#cardchange-loading"));
- assert(document.querySelector("#cardchange_loading_indicator"));
- assert(document.querySelector("#cardchange-success"));
-
- assert(document.querySelector("#licensechange-error"));
- assert(document.querySelector("#licensechange-loading"));
- assert(document.querySelector("#licensechange_loading_indicator"));
- assert(document.querySelector("#licensechange-success"));
-
- assert(document.querySelector("#planchange-error"));
- assert(document.querySelector("#planchange-loading"));
- assert(document.querySelector("#planchange_loading_indicator"));
- assert(document.querySelector("#planchange-success"));
-
- assert(document.querySelector("input[name=csrfmiddlewaretoken]"));
+ assert.ok(document.querySelector("#cardchange-error"));
+ assert.ok(document.querySelector("#cardchange-loading"));
+ assert.ok(document.querySelector("#cardchange_loading_indicator"));
+ assert.ok(document.querySelector("#cardchange-success"));
+
+ assert.ok(document.querySelector("#licensechange-error"));
+ assert.ok(document.querySelector("#licensechange-loading"));
+ assert.ok(document.querySelector("#licensechange_loading_indicator"));
+ assert.ok(document.querySelector("#licensechange-success"));
+
+ assert.ok(document.querySelector("#planchange-error"));
+ assert.ok(document.querySelector("#planchange-loading"));
+ assert.ok(document.querySelector("#planchange_loading_indicator"));
+ assert.ok(document.querySelector("#planchange-success"));
+
+ assert.ok(document.querySelector("input[name=csrfmiddlewaretoken]"));
});
diff --git a/frontend_tests/node_tests/billing_helpers.js b/frontend_tests/node_tests/billing_helpers.js
--- a/frontend_tests/node_tests/billing_helpers.js
+++ b/frontend_tests/node_tests/billing_helpers.js
@@ -136,7 +136,7 @@ run_test("create_ajax_request", (override) => {
assert.equal(data.schedule, "monthly");
assert.equal(data.licenses, "");
- assert(!("license_management" in data));
+ assert.ok(!("license_management" in data));
history.pushState = (state_object, title, path) => {
state.pushState += 1;
diff --git a/frontend_tests/node_tests/bot_data.js b/frontend_tests/node_tests/bot_data.js
--- a/frontend_tests/node_tests/bot_data.js
+++ b/frontend_tests/node_tests/bot_data.js
@@ -116,7 +116,7 @@ test("test_basics", () => {
bot = bot_data.get(43);
assert.equal("Bot 1", bot.full_name);
- assert(bot.is_active);
+ assert.ok(bot.is_active);
bot_data.deactivate(43);
bot = bot_data.get(43);
assert.equal(bot.is_active, false);
@@ -135,7 +135,7 @@ test("test_basics", () => {
bot = bot_data.get(43);
assert.equal("Bot 1", bot.full_name);
- assert(bot.is_active);
+ assert.ok(bot.is_active);
bot_data.del(43);
bot = bot_data.get(43);
assert.equal(bot, undefined);
diff --git a/frontend_tests/node_tests/browser_history.js b/frontend_tests/node_tests/browser_history.js
--- a/frontend_tests/node_tests/browser_history.js
+++ b/frontend_tests/node_tests/browser_history.js
@@ -32,7 +32,7 @@ test("basics", () => {
assert.equal(browser_history.old_hash(), hash1);
const was_internal_change = browser_history.save_old_hash();
- assert(was_internal_change);
+ assert.ok(was_internal_change);
assert.equal(browser_history.old_hash(), hash2);
});
diff --git a/frontend_tests/node_tests/buddy_data.js b/frontend_tests/node_tests/buddy_data.js
--- a/frontend_tests/node_tests/buddy_data.js
+++ b/frontend_tests/node_tests/buddy_data.js
@@ -394,12 +394,12 @@ test("muted users excluded from search", () => {
assert.equal(user_ids.includes(selma.user_id), false);
user_ids = buddy_data.get_filtered_and_sorted_user_ids("sel");
assert.deepEqual(user_ids, []);
- assert(!buddy_data.matches_filter("sel", selma.user_id));
+ assert.ok(!buddy_data.matches_filter("sel", selma.user_id));
muting.remove_muted_user(selma.user_id);
user_ids = buddy_data.get_filtered_and_sorted_user_ids("sel");
assert.deepEqual(user_ids, [selma.user_id]);
- assert(buddy_data.matches_filter("sel", selma.user_id));
+ assert.ok(buddy_data.matches_filter("sel", selma.user_id));
});
test("bulk_data_hacks", () => {
diff --git a/frontend_tests/node_tests/buddy_list.js b/frontend_tests/node_tests/buddy_list.js
--- a/frontend_tests/node_tests/buddy_list.js
+++ b/frontend_tests/node_tests/buddy_list.js
@@ -83,7 +83,7 @@ run_test("basics", (override) => {
buddy_list.populate({
keys: [alice.user_id],
});
- assert(appended);
+ assert.ok(appended);
const alice_li = {length: 1};
@@ -185,7 +185,7 @@ run_test("find_li w/force_render", (override) => {
key,
});
assert.equal(empty_li, stub_li);
- assert(!shown);
+ assert.ok(!shown);
const li = buddy_list.find_li({
key,
@@ -193,7 +193,7 @@ run_test("find_li w/force_render", (override) => {
});
assert.equal(li, stub_li);
- assert(shown);
+ assert.ok(shown);
});
run_test("find_li w/bad key", (override) => {
@@ -224,10 +224,10 @@ run_test("scrolling", (override) => {
tried_to_fill = true;
});
- assert(!tried_to_fill);
+ assert.ok(!tried_to_fill);
buddy_list.start_scroll_handler();
$(buddy_list.scroll_container_sel).trigger("scroll");
- assert(tried_to_fill);
+ assert.ok(tried_to_fill);
});
diff --git a/frontend_tests/node_tests/channel.js b/frontend_tests/node_tests/channel.js
--- a/frontend_tests/node_tests/channel.js
+++ b/frontend_tests/node_tests/channel.js
@@ -42,7 +42,7 @@ function test_with_mock_ajax(test_params) {
};
run_code();
- assert(ajax_called);
+ assert.ok(ajax_called);
check_ajax_options(ajax_options);
}
@@ -176,10 +176,10 @@ test("normal_post", () => {
assert.equal(options.url, "/json/endpoint");
options.simulate_success("response data", "success");
- assert(orig_success_called);
+ assert.ok(orig_success_called);
options.simulate_error();
- assert(orig_error_called);
+ assert.ok(orig_error_called);
},
});
});
@@ -201,7 +201,7 @@ test("patch_with_form_data", () => {
data,
processData: false,
});
- assert(appended);
+ assert.ok(appended);
},
check_ajax_options(options) {
@@ -233,7 +233,7 @@ test("reload_on_403_error", () => {
});
options.simulate_error();
- assert(handler_called);
+ assert.ok(handler_called);
},
});
});
@@ -339,11 +339,11 @@ test("while_reloading", () => {
check_ajax_options(options) {
blueslip.expect("log", "Ignoring DELETE /json/endpoint response while reloading");
options.simulate_success();
- assert(!orig_success_called);
+ assert.ok(!orig_success_called);
blueslip.expect("log", "Ignoring DELETE /json/endpoint error response while reloading");
options.simulate_error();
- assert(!orig_error_called);
+ assert.ok(!orig_error_called);
},
});
});
diff --git a/frontend_tests/node_tests/common.js b/frontend_tests/node_tests/common.js
--- a/frontend_tests/node_tests/common.js
+++ b/frontend_tests/node_tests/common.js
@@ -24,18 +24,18 @@ const common = zrequire("common");
run_test("basics", () => {
common.autofocus("#home");
$.get_initialize_function()();
- assert($("#home").is_focused());
+ assert.ok($("#home").is_focused());
$.clear_initialize_function();
});
run_test("phrase_match", () => {
- assert(common.phrase_match("tes", "test"));
- assert(common.phrase_match("Tes", "test"));
- assert(common.phrase_match("Tes", "Test"));
- assert(common.phrase_match("tes", "Stream Test"));
+ assert.ok(common.phrase_match("tes", "test"));
+ assert.ok(common.phrase_match("Tes", "test"));
+ assert.ok(common.phrase_match("Tes", "Test"));
+ assert.ok(common.phrase_match("tes", "Stream Test"));
- assert(!common.phrase_match("tests", "test"));
- assert(!common.phrase_match("tes", "hostess"));
+ assert.ok(!common.phrase_match("tests", "test"));
+ assert.ok(!common.phrase_match("tes", "hostess"));
});
run_test("copy_data_attribute_value", (override) => {
@@ -76,9 +76,9 @@ run_test("copy_data_attribute_value", (override) => {
faded_in = true;
};
common.copy_data_attribute_value(elem, "admin-emails");
- assert(removed);
- assert(faded_in);
- assert(faded_out);
+ assert.ok(removed);
+ assert.ok(faded_in);
+ assert.ok(faded_out);
});
run_test("adjust_mac_shortcuts non-mac", (override) => {
@@ -147,8 +147,8 @@ run_test("show password", () => {
function check_assertion(type, present_class, absent_class) {
assert.equal($("#id_password").attr("type"), type);
- assert($(password_selector).hasClass(present_class));
- assert(!$(password_selector).hasClass(absent_class));
+ assert.ok($(password_selector).hasClass(present_class));
+ assert.ok(!$(password_selector).hasClass(absent_class));
}
const ev = {
diff --git a/frontend_tests/node_tests/compose.js b/frontend_tests/node_tests/compose.js
--- a/frontend_tests/node_tests/compose.js
+++ b/frontend_tests/node_tests/compose.js
@@ -124,39 +124,39 @@ test_ui("test_wildcard_mention_allowed", () => {
settings_config.wildcard_mention_policy_values.by_everyone.code;
page_params.is_guest = true;
page_params.is_admin = false;
- assert(compose.wildcard_mention_allowed());
+ assert.ok(compose.wildcard_mention_allowed());
page_params.realm_wildcard_mention_policy =
settings_config.wildcard_mention_policy_values.nobody.code;
page_params.is_admin = true;
- assert(!compose.wildcard_mention_allowed());
+ assert.ok(!compose.wildcard_mention_allowed());
page_params.realm_wildcard_mention_policy =
settings_config.wildcard_mention_policy_values.by_members.code;
page_params.is_guest = true;
page_params.is_admin = false;
- assert(!compose.wildcard_mention_allowed());
+ assert.ok(!compose.wildcard_mention_allowed());
page_params.is_guest = false;
- assert(compose.wildcard_mention_allowed());
+ assert.ok(compose.wildcard_mention_allowed());
page_params.realm_wildcard_mention_policy =
settings_config.wildcard_mention_policy_values.by_moderators_only.code;
page_params.is_moderator = false;
- assert(!compose.wildcard_mention_allowed());
+ assert.ok(!compose.wildcard_mention_allowed());
page_params.is_moderator = true;
- assert(compose.wildcard_mention_allowed());
+ assert.ok(compose.wildcard_mention_allowed());
page_params.realm_wildcard_mention_policy =
settings_config.wildcard_mention_policy_values.by_stream_admins_only.code;
page_params.is_admin = false;
- assert(!compose.wildcard_mention_allowed());
+ assert.ok(!compose.wildcard_mention_allowed());
// TODO: Add a by_admins_only case when we implement stream-level administrators.
page_params.is_admin = true;
- assert(compose.wildcard_mention_allowed());
+ assert.ok(compose.wildcard_mention_allowed());
page_params.realm_wildcard_mention_policy =
settings_config.wildcard_mention_policy_values.by_full_members.code;
@@ -164,9 +164,9 @@ test_ui("test_wildcard_mention_allowed", () => {
person.date_joined = new Date(Date.now());
page_params.realm_waiting_period_threshold = 10;
- assert(compose.wildcard_mention_allowed());
+ assert.ok(compose.wildcard_mention_allowed());
page_params.is_admin = false;
- assert(!compose.wildcard_mention_allowed());
+ assert.ok(!compose.wildcard_mention_allowed());
});
test_ui("right-to-left", () => {
@@ -370,12 +370,12 @@ test_ui("send_message_success", (override) => {
compose.send_message_success("1001", 12, false);
assert.equal($("#compose-textarea").val(), "");
- assert($("#compose-textarea").is_focused());
- assert(!$("#compose-send-status").visible());
+ assert.ok($("#compose-textarea").is_focused());
+ assert.ok(!$("#compose-send-status").visible());
assert.equal($("#compose-send-button").prop("disabled"), false);
- assert(!$("#sending-indicator").visible());
+ assert.ok(!$("#sending-indicator").visible());
- assert(reify_message_id_checked);
+ assert.ok(reify_message_id_checked);
});
test_ui("send_message", (override) => {
@@ -469,10 +469,10 @@ test_ui("send_message", (override) => {
};
assert.deepEqual(stub_state, state);
assert.equal($("#compose-textarea").val(), "");
- assert($("#compose-textarea").is_focused());
- assert(!$("#compose-send-status").visible());
+ assert.ok($("#compose-textarea").is_focused());
+ assert.ok(!$("#compose-send-status").visible());
assert.equal($("#compose-send-button").prop("disabled"), false);
- assert(!$("#sending-indicator").visible());
+ assert.ok(!$("#sending-indicator").visible());
})();
// This is the additional setup which is common to both the tests below.
@@ -501,7 +501,7 @@ test_ui("send_message", (override) => {
send_msg_called: 1,
};
assert.deepEqual(stub_state, state);
- assert(echo_error_msg_checked);
+ assert.ok(echo_error_msg_checked);
})();
(function test_error_codepath_local_id_undefined() {
@@ -525,14 +525,14 @@ test_ui("send_message", (override) => {
send_msg_called: 1,
};
assert.deepEqual(stub_state, state);
- assert(!echo_error_msg_checked);
+ assert.ok(!echo_error_msg_checked);
assert.equal($("#compose-send-button").prop("disabled"), false);
assert.equal($("#compose-error-msg").html(), "Error sending message: Server says 408");
assert.equal($("#compose-textarea").val(), "foobarfoobar");
- assert($("#compose-textarea").is_focused());
- assert($("#compose-send-status").visible());
+ assert.ok($("#compose-textarea").is_focused());
+ assert.ok($("#compose-send-status").visible());
assert.equal($("#compose-send-button").prop("disabled"), false);
- assert(!$("#sending-indicator").visible());
+ assert.ok(!$("#sending-indicator").visible());
})();
});
@@ -558,16 +558,16 @@ test_ui("enter_with_preview_open", (override) => {
send_message_called = true;
});
compose.enter_with_preview_open();
- assert($("#compose-textarea").visible());
- assert(!$("#compose .undo_markdown_preview").visible());
- assert(!$("#compose .preview_message_area").visible());
- assert($("#compose .markdown_preview").visible());
- assert(send_message_called);
+ assert.ok($("#compose-textarea").visible());
+ assert.ok(!$("#compose .undo_markdown_preview").visible());
+ assert.ok(!$("#compose .preview_message_area").visible());
+ assert.ok($("#compose .markdown_preview").visible());
+ assert.ok(send_message_called);
page_params.enter_sends = false;
$("#compose-textarea").trigger("blur");
compose.enter_with_preview_open();
- assert($("#compose-textarea").is_focused());
+ assert.ok($("#compose-textarea").is_focused());
// Test sending a message without content.
$("#compose-textarea").val("");
@@ -577,7 +577,7 @@ test_ui("enter_with_preview_open", (override) => {
compose.enter_with_preview_open();
- assert($("#enter_sends").prop("checked"));
+ assert.ok($("#enter_sends").prop("checked"));
assert.equal(
$("#compose-error-msg").html(),
$t_html({defaultMessage: "You have nothing to send!"}),
@@ -598,9 +598,9 @@ test_ui("finish", (override) => {
$("#compose-textarea").val("");
const res = compose.finish();
assert.equal(res, false);
- assert(!$("#compose_invite_users").visible());
- assert(!$("#sending-indicator").visible());
- assert(!$("#compose-send-button").is_focused());
+ assert.ok(!$("#compose_invite_users").visible());
+ assert.ok(!$("#sending-indicator").visible());
+ assert.ok(!$("#compose-send-button").is_focused());
assert.equal($("#compose-send-button").prop("disabled"), false);
assert.equal(
$("#compose-error-msg").html(),
@@ -625,13 +625,13 @@ test_ui("finish", (override) => {
override(compose, "send_message", () => {
send_message_called = true;
});
- assert(compose.finish());
- assert($("#compose-textarea").visible());
- assert(!$("#compose .undo_markdown_preview").visible());
- assert(!$("#compose .preview_message_area").visible());
- assert($("#compose .markdown_preview").visible());
- assert(send_message_called);
- assert(compose_finished_event_checked);
+ assert.ok(compose.finish());
+ assert.ok($("#compose-textarea").visible());
+ assert.ok(!$("#compose .undo_markdown_preview").visible());
+ assert.ok(!$("#compose .preview_message_area").visible());
+ assert.ok($("#compose .markdown_preview").visible());
+ assert.ok(send_message_called);
+ assert.ok(compose_finished_event_checked);
})();
});
@@ -677,7 +677,7 @@ test_ui("warn_if_private_stream_is_linked", () => {
return "fake-compose_private_stream_alert-template";
});
return function () {
- assert(called);
+ assert.ok(called);
};
})(),
@@ -688,7 +688,7 @@ test_ui("warn_if_private_stream_is_linked", () => {
assert.equal(html, "fake-compose_private_stream_alert-template");
};
return function () {
- assert(called);
+ assert.ok(called);
};
})(),
];
@@ -763,10 +763,10 @@ test_ui("initialize", (override) => {
compose.initialize();
- assert(resize_watch_manual_resize_checked);
- assert(xmlhttprequest_checked);
- assert(!$("#compose .compose_upload_file").hasClass("notdisplayed"));
- assert(setup_upload_called);
+ assert.ok(resize_watch_manual_resize_checked);
+ assert.ok(xmlhttprequest_checked);
+ assert.ok(!$("#compose .compose_upload_file").hasClass("notdisplayed"));
+ assert.ok(setup_upload_called);
function set_up_compose_start_mock(expected_opts) {
compose_actions_start_checked = false;
@@ -781,7 +781,7 @@ test_ui("initialize", (override) => {
compose.initialize();
- assert(compose_actions_start_checked);
+ assert.ok(compose_actions_start_checked);
})();
(function test_page_params_narrow_topic() {
@@ -792,7 +792,7 @@ test_ui("initialize", (override) => {
compose.initialize();
- assert(compose_actions_start_checked);
+ assert.ok(compose_actions_start_checked);
})();
(function test_abort_xhr() {
@@ -804,7 +804,7 @@ test_ui("initialize", (override) => {
compose.abort_xhr();
assert.equal($("#compose-send-button").attr(), undefined);
- assert(uppy_cancel_all_called);
+ assert.ok(uppy_cancel_all_called);
})();
});
@@ -829,13 +829,13 @@ test_ui("update_fade", (override) => {
compose_state.set_message_type(false);
keyup_handler_func();
- assert(!set_focused_recipient_checked);
- assert(!update_all_called);
+ assert.ok(!set_focused_recipient_checked);
+ assert.ok(!update_all_called);
compose_state.set_message_type("private");
keyup_handler_func();
- assert(set_focused_recipient_checked);
- assert(update_all_called);
+ assert.ok(set_focused_recipient_checked);
+ assert.ok(update_all_called);
});
test_ui("trigger_submit_compose_form", (override) => {
@@ -856,8 +856,8 @@ test_ui("trigger_submit_compose_form", (override) => {
submit_handler(e);
- assert(prevent_default_checked);
- assert(compose_finish_checked);
+ assert.ok(prevent_default_checked);
+ assert.ok(compose_finish_checked);
});
test_ui("needs_subscribe_warning", () => {
@@ -947,7 +947,7 @@ test_ui("warn_if_mentioning_unsubscribed_user", (override) => {
return true;
});
return function () {
- assert(called);
+ assert.ok(called);
};
})(),
@@ -962,7 +962,7 @@ test_ui("warn_if_mentioning_unsubscribed_user", (override) => {
return "fake-compose-invite-user-template";
});
return function () {
- assert(called);
+ assert.ok(called);
};
})(),
@@ -973,7 +973,7 @@ test_ui("warn_if_mentioning_unsubscribed_user", (override) => {
assert.equal(html, "fake-compose-invite-user-template");
};
return function () {
- assert(called);
+ assert.ok(called);
};
})(),
];
@@ -1017,7 +1017,7 @@ test_ui("warn_if_mentioning_unsubscribed_user", (override) => {
stub_templates(noop);
compose.warn_if_mentioning_unsubscribed_user(mentioned);
assert.equal($("#compose_invite_users").visible(), true);
- assert(looked_for_existing);
+ assert.ok(looked_for_existing);
});
test_ui("on_events", (override) => {
@@ -1074,10 +1074,10 @@ test_ui("on_events", (override) => {
handler(helper.event);
- assert(helper.container_was_removed());
- assert(compose_finish_checked);
- assert(!$("#compose-all-everyone").visible());
- assert(!$("#compose-send-status").visible());
+ assert.ok(helper.container_was_removed());
+ assert.ok(compose_finish_checked);
+ assert.ok(!$("#compose-all-everyone").visible());
+ assert.ok(!$("#compose-send-status").visible());
})();
(function test_compose_invite_users_clicked() {
@@ -1131,10 +1131,10 @@ test_ui("on_events", (override) => {
handler(helper.event);
- assert(helper.container_was_removed());
- assert(!$("#compose_invite_users").visible());
- assert(invite_user_to_stream_called);
- assert(all_invite_children_called);
+ assert.ok(helper.container_was_removed());
+ assert.ok(!$("#compose_invite_users").visible());
+ assert.ok(invite_user_to_stream_called);
+ assert.ok(all_invite_children_called);
})();
(function test_compose_invite_close_clicked() {
@@ -1155,9 +1155,9 @@ test_ui("on_events", (override) => {
handler(helper.event);
- assert(helper.container_was_removed());
- assert(all_invite_children_called);
- assert(!$("#compose_invite_users").visible());
+ assert.ok(helper.container_was_removed());
+ assert.ok(all_invite_children_called);
+ assert.ok(!$("#compose_invite_users").visible());
})();
(function test_compose_not_subscribed_clicked() {
@@ -1180,7 +1180,7 @@ test_ui("on_events", (override) => {
handler(helper.event);
- assert(compose_not_subscribed_called);
+ assert.ok(compose_not_subscribed_called);
stream_data.add_sub(subscription);
$("#stream_message_recipient_stream").val("test");
@@ -1188,7 +1188,7 @@ test_ui("on_events", (override) => {
handler(helper.event);
- assert(!$("#compose-send-status").visible());
+ assert.ok(!$("#compose-send-status").visible());
})();
(function test_compose_not_subscribed_close_clicked() {
@@ -1207,13 +1207,13 @@ test_ui("on_events", (override) => {
handler(helper.event);
- assert(!$("#compose-send-status").visible());
+ assert.ok(!$("#compose-send-status").visible());
})();
(function test_attach_files_compose_clicked() {
const handler = $("#compose").get_on_handler("click", ".compose_upload_file");
$("#compose .file_input").clone = (param) => {
- assert(param);
+ assert.ok(param);
};
let compose_file_input_clicked = false;
$("#compose .file_input").on("click", () => {
@@ -1225,7 +1225,7 @@ test_ui("on_events", (override) => {
};
handler(event);
- assert(compose_file_input_clicked);
+ assert.ok(compose_file_input_clicked);
})();
(function test_markdown_preview_compose_clicked() {
@@ -1238,10 +1238,10 @@ test_ui("on_events", (override) => {
}
function assert_visibilities() {
- assert(!$("#compose-textarea").visible());
- assert(!$("#compose .markdown_preview").visible());
- assert($("#compose .undo_markdown_preview").visible());
- assert($("#compose .preview_message_area").visible());
+ assert.ok(!$("#compose-textarea").visible());
+ assert.ok(!$("#compose .markdown_preview").visible());
+ assert.ok($("#compose .undo_markdown_preview").visible());
+ assert.ok($("#compose .preview_message_area").visible());
}
function setup_mock_markdown_contains_backend_only_syntax(msg_content, return_val) {
@@ -1278,8 +1278,8 @@ test_ui("on_events", (override) => {
override(channel, "post", (payload) => {
assert.equal(payload.url, "/json/messages/render");
- assert(payload.idempotent);
- assert(payload.data);
+ assert.ok(payload.idempotent);
+ assert.ok(payload.data);
assert.deepEqual(payload.data.content, current_message);
function test(func, param) {
@@ -1292,7 +1292,7 @@ test_ui("on_events", (override) => {
func(param);
- assert(destroy_indicator_called);
+ assert.ok(destroy_indicator_called);
}
test(test_post_error, payload.error);
@@ -1329,7 +1329,7 @@ test_ui("on_events", (override) => {
handler(event);
- assert(make_indicator_called);
+ assert.ok(make_indicator_called);
assert_visibilities();
let apply_markdown_called = false;
@@ -1348,7 +1348,7 @@ test_ui("on_events", (override) => {
handler(event);
- assert(apply_markdown_called);
+ assert.ok(apply_markdown_called);
assert_visibilities();
assert.equal($("#compose .preview_content").html(), "Server: foobarfoobar");
})();
@@ -1367,10 +1367,10 @@ test_ui("on_events", (override) => {
handler(event);
- assert($("#compose-textarea").visible());
- assert(!$("#compose .undo_markdown_preview").visible());
- assert(!$("#compose .preview_message_area").visible());
- assert($("#compose .markdown_preview").visible());
+ assert.ok($("#compose-textarea").visible());
+ assert.ok(!$("#compose .undo_markdown_preview").visible());
+ assert.ok(!$("#compose .preview_message_area").visible());
+ assert.ok($("#compose .markdown_preview").visible());
})();
});
diff --git a/frontend_tests/node_tests/compose_actions.js b/frontend_tests/node_tests/compose_actions.js
--- a/frontend_tests/node_tests/compose_actions.js
+++ b/frontend_tests/node_tests/compose_actions.js
@@ -66,11 +66,11 @@ const reply_with_mention = compose_actions.reply_with_mention;
const quote_and_reply = compose_actions.quote_and_reply;
function assert_visible(sel) {
- assert($(sel).visible());
+ assert.ok($(sel).visible());
}
function assert_hidden(sel) {
- assert(!$(sel).visible());
+ assert.ok(!$(sel).visible());
}
function override_private_message_recipient(override) {
@@ -137,7 +137,7 @@ test("start", (override) => {
assert.equal($("#stream_message_recipient_stream").val(), "stream1");
assert.equal($("#stream_message_recipient_topic").val(), "topic1");
assert.equal(compose_state.get_message_type(), "stream");
- assert(compose_state.composing());
+ assert.ok(compose_state.composing());
// Autofill stream field for single subscription
const denmark = {
@@ -198,7 +198,7 @@ test("start", (override) => {
assert.equal(compose_state.private_message_recipient(), "[email protected]");
assert.equal($("#compose-textarea").val(), "hello");
assert.equal(compose_state.get_message_type(), "private");
- assert(compose_state.composing());
+ assert.ok(compose_state.composing());
// Cancel compose.
let pill_cleared;
@@ -215,11 +215,11 @@ test("start", (override) => {
assert_hidden("#compose_controls");
cancel();
- assert(abort_xhr_called);
- assert(pill_cleared);
+ assert.ok(abort_xhr_called);
+ assert.ok(pill_cleared);
assert_visible("#compose_controls");
assert_hidden("#private-message");
- assert(!compose_state.composing());
+ assert.ok(!compose_state.composing());
});
test("respond_to_message", (override) => {
@@ -371,7 +371,7 @@ test("quote_and_reply", (override) => {
success_function({
raw_content: "Testing.",
});
- assert(replaced);
+ assert.ok(replaced);
selected_message = {
type: "stream",
@@ -390,7 +390,7 @@ test("quote_and_reply", (override) => {
with_field(channel, "get", whiny_get, () => {
quote_and_reply(opts);
});
- assert(replaced);
+ assert.ok(replaced);
selected_message = {
type: "stream",
@@ -405,7 +405,7 @@ test("quote_and_reply", (override) => {
expected_replacement =
"translated: @_**Steve Stephenson|90** [said](https://chat.zulip.org/#narrow/stream/92-learning/topic/Tornado):\n````quote\n```\nmultiline code block\nshoudln't mess with quotes\n```\n````";
quote_and_reply(opts);
- assert(replaced);
+ assert.ok(replaced);
});
test("get_focus_area", () => {
@@ -434,16 +434,16 @@ test("focus_in_empty_compose", (override) => {
override(compose_state, "composing", () => true);
$("#compose-textarea").val("");
$("#compose-textarea").trigger("focus");
- assert(compose_state.focus_in_empty_compose());
+ assert.ok(compose_state.focus_in_empty_compose());
override(compose_state, "composing", () => false);
- assert(!compose_state.focus_in_empty_compose());
+ assert.ok(!compose_state.focus_in_empty_compose());
$("#compose-textarea").val("foo");
- assert(!compose_state.focus_in_empty_compose());
+ assert.ok(!compose_state.focus_in_empty_compose());
$("#compose-textarea").trigger("blur");
- assert(!compose_state.focus_in_empty_compose());
+ assert.ok(!compose_state.focus_in_empty_compose());
});
test("on_narrow", (override) => {
@@ -463,7 +463,7 @@ test("on_narrow", (override) => {
compose_actions.on_narrow({
force_close: true,
});
- assert(cancel_called);
+ assert.ok(cancel_called);
let on_topic_narrow_called = false;
override(compose_actions, "on_topic_narrow", () => {
@@ -473,7 +473,7 @@ test("on_narrow", (override) => {
compose_actions.on_narrow({
force_close: false,
});
- assert(on_topic_narrow_called);
+ assert.ok(on_topic_narrow_called);
let update_message_list_called = false;
narrowed_by_topic_reply = false;
@@ -484,7 +484,7 @@ test("on_narrow", (override) => {
compose_actions.on_narrow({
force_close: false,
});
- assert(update_message_list_called);
+ assert.ok(update_message_list_called);
has_message_content = false;
let start_called = false;
@@ -497,7 +497,7 @@ test("on_narrow", (override) => {
trigger: "not-search",
private_message_recipient: "[email protected]",
});
- assert(start_called);
+ assert.ok(start_called);
start_called = false;
compose_actions.on_narrow({
@@ -505,12 +505,12 @@ test("on_narrow", (override) => {
trigger: "search",
private_message_recipient: "",
});
- assert(!start_called);
+ assert.ok(!start_called);
narrowed_by_pm_reply = false;
cancel_called = false;
compose_actions.on_narrow({
force_close: false,
});
- assert(cancel_called);
+ assert.ok(cancel_called);
});
diff --git a/frontend_tests/node_tests/compose_fade.js b/frontend_tests/node_tests/compose_fade.js
--- a/frontend_tests/node_tests/compose_fade.js
+++ b/frontend_tests/node_tests/compose_fade.js
@@ -87,6 +87,6 @@ run_test("set_focused_recipient", () => {
stream_id: 999,
topic: "lunch",
};
- assert(!compose_fade_helper.should_fade_message(good_msg));
- assert(compose_fade_helper.should_fade_message(bad_msg));
+ assert.ok(!compose_fade_helper.should_fade_message(good_msg));
+ assert.ok(compose_fade_helper.should_fade_message(bad_msg));
});
diff --git a/frontend_tests/node_tests/compose_pm_pill.js b/frontend_tests/node_tests/compose_pm_pill.js
--- a/frontend_tests/node_tests/compose_pm_pill.js
+++ b/frontend_tests/node_tests/compose_pm_pill.js
@@ -49,7 +49,7 @@ run_test("pills", (override) => {
pills.appendValidatedData = (item) => {
const id = item.user_id;
- assert(!all_pills.has(id));
+ assert.ok(!all_pills.has(id));
all_pills.set(id, item);
};
pills.items = () => Array.from(all_pills.values());
@@ -102,14 +102,14 @@ run_test("pills", (override) => {
function test_create_item(handler) {
(function test_rejection_path() {
const item = handler(othello.email, pills.items());
- assert(get_by_email_called);
+ assert.ok(get_by_email_called);
assert.equal(item, undefined);
})();
(function test_success_path() {
get_by_email_called = false;
const res = handler(iago.email, pills.items());
- assert(get_by_email_called);
+ assert.ok(get_by_email_called);
assert.equal(typeof res, "object");
assert.equal(res.user_id, iago.user_id);
assert.equal(res.display_value, iago.full_name);
@@ -119,7 +119,7 @@ run_test("pills", (override) => {
function input_pill_stub(opts) {
assert.equal(opts.container, pill_container_stub);
create_item_handler = opts.create_item_from_text;
- assert(create_item_handler);
+ assert.ok(create_item_handler);
return pills;
}
@@ -137,7 +137,7 @@ run_test("pills", (override) => {
};
compose_pm_pill.initialize();
- assert(compose_pm_pill.widget);
+ assert.ok(compose_pm_pill.widget);
compose_pm_pill.set_from_typeahead(othello);
compose_pm_pill.set_from_typeahead(hamlet);
@@ -158,12 +158,12 @@ run_test("pills", (override) => {
test_create_item(create_item_handler);
compose_pm_pill.set_from_emails("[email protected]");
- assert(compose_pm_pill.widget);
+ assert.ok(compose_pm_pill.widget);
- assert(get_by_user_id_called);
- assert(pills_cleared);
- assert(appendValue_called);
- assert(text_cleared);
+ assert.ok(get_by_user_id_called);
+ assert.ok(pills_cleared);
+ assert.ok(appendValue_called);
+ assert.ok(text_cleared);
});
run_test("has_unconverted_data", () => {
diff --git a/frontend_tests/node_tests/compose_ui.js b/frontend_tests/node_tests/compose_ui.js
--- a/frontend_tests/node_tests/compose_ui.js
+++ b/frontend_tests/node_tests/compose_ui.js
@@ -93,7 +93,7 @@ run_test("autosize_textarea", (override) => {
const container = "container-stub";
compose_ui.autosize_textarea(container);
assert.equal(textarea_autosized.textarea, container);
- assert(textarea_autosized.autosized);
+ assert.ok(textarea_autosized.autosized);
});
run_test("insert_syntax_and_focus", () => {
@@ -108,7 +108,7 @@ run_test("insert_syntax_and_focus", () => {
compose_ui.insert_syntax_and_focus(":octopus:");
assert.equal($("#compose-textarea").caret(), 4);
assert.equal($("#compose-textarea").val(), "xyz :octopus: ");
- assert($("#compose-textarea").is_focused());
+ assert.ok($("#compose-textarea").is_focused());
});
run_test("smart_insert", () => {
@@ -119,27 +119,27 @@ run_test("smart_insert", () => {
assert.equal(textbox.insert_pos, 4);
assert.equal(textbox.insert_text, " :smile: ");
assert.equal(textbox.val(), "abc :smile: ");
- assert(textbox.focused);
+ assert.ok(textbox.focused);
textbox.trigger("blur");
compose_ui.smart_insert(textbox, ":airplane:");
assert.equal(textbox.insert_text, ":airplane: ");
assert.equal(textbox.val(), "abc :smile: :airplane: ");
- assert(textbox.focused);
+ assert.ok(textbox.focused);
textbox.caret(0);
textbox.trigger("blur");
compose_ui.smart_insert(textbox, ":octopus:");
assert.equal(textbox.insert_text, ":octopus: ");
assert.equal(textbox.val(), ":octopus: abc :smile: :airplane: ");
- assert(textbox.focused);
+ assert.ok(textbox.focused);
textbox.caret(textbox.val().length);
textbox.trigger("blur");
compose_ui.smart_insert(textbox, ":heart:");
assert.equal(textbox.insert_text, ":heart: ");
assert.equal(textbox.val(), ":octopus: abc :smile: :airplane: :heart: ");
- assert(textbox.focused);
+ assert.ok(textbox.focused);
// Test handling of spaces for ```quote
textbox = make_textbox("");
@@ -148,7 +148,7 @@ run_test("smart_insert", () => {
compose_ui.smart_insert(textbox, "```quote\nquoted message\n```\n");
assert.equal(textbox.insert_text, "```quote\nquoted message\n```\n");
assert.equal(textbox.val(), "```quote\nquoted message\n```\n");
- assert(textbox.focused);
+ assert.ok(textbox.focused);
textbox = make_textbox("");
textbox.caret(0);
@@ -156,7 +156,7 @@ run_test("smart_insert", () => {
compose_ui.smart_insert(textbox, "[Quoting…]\n");
assert.equal(textbox.insert_text, "[Quoting…]\n");
assert.equal(textbox.val(), "[Quoting…]\n");
- assert(textbox.focused);
+ assert.ok(textbox.focused);
textbox = make_textbox("abc");
textbox.caret(3);
@@ -164,7 +164,7 @@ run_test("smart_insert", () => {
compose_ui.smart_insert(textbox, " test with space");
assert.equal(textbox.insert_text, " test with space ");
assert.equal(textbox.val(), "abc test with space ");
- assert(textbox.focused);
+ assert.ok(textbox.focused);
// Note that we don't have any special logic for strings that are
// already surrounded by spaces, since we are usually inserting things
diff --git a/frontend_tests/node_tests/compose_validate.js b/frontend_tests/node_tests/compose_validate.js
--- a/frontend_tests/node_tests/compose_validate.js
+++ b/frontend_tests/node_tests/compose_validate.js
@@ -65,7 +65,7 @@ test_ui("validate_stream_message_address_info", () => {
subscribed: true,
};
stream_data.add_sub(sub);
- assert(compose.validate_stream_message_address_info("social"));
+ assert.ok(compose.validate_stream_message_address_info("social"));
sub.subscribed = false;
stream_data.add_sub(sub);
@@ -73,7 +73,7 @@ test_ui("validate_stream_message_address_info", () => {
assert.equal(template_name, "compose_not_subscribed");
return "compose_not_subscribed_stub";
});
- assert(!compose.validate_stream_message_address_info("social"));
+ assert.ok(!compose.validate_stream_message_address_info("social"));
assert.equal($("#compose-error-msg").html(), "compose_not_subscribed_stub");
page_params.narrow_stream = false;
@@ -82,7 +82,7 @@ test_ui("validate_stream_message_address_info", () => {
payload.data.subscribed = true;
payload.success(payload.data);
};
- assert(compose.validate_stream_message_address_info("social"));
+ assert.ok(compose.validate_stream_message_address_info("social"));
sub.name = "Frontend";
sub.stream_id = 102;
@@ -92,14 +92,14 @@ test_ui("validate_stream_message_address_info", () => {
payload.data.subscribed = false;
payload.success(payload.data);
};
- assert(!compose.validate_stream_message_address_info("Frontend"));
+ assert.ok(!compose.validate_stream_message_address_info("Frontend"));
assert.equal($("#compose-error-msg").html(), "compose_not_subscribed_stub");
channel.post = (payload) => {
assert.equal(payload.data.stream, "Frontend");
payload.error({status: 404});
};
- assert(!compose.validate_stream_message_address_info("Frontend"));
+ assert.ok(!compose.validate_stream_message_address_info("Frontend"));
assert.equal(
$("#compose-error-msg").html(),
"translated HTML: <p>The stream <b>Frontend</b> does not exist.</p><p>Manage your subscriptions <a href='#streams/all'>on your Streams page</a>.</p>",
@@ -109,7 +109,7 @@ test_ui("validate_stream_message_address_info", () => {
assert.equal(payload.data.stream, "social");
payload.error({status: 500});
};
- assert(!compose.validate_stream_message_address_info("social"));
+ assert.ok(!compose.validate_stream_message_address_info("social"));
assert.equal(
$("#compose-error-msg").html(),
$t_html({defaultMessage: "Error checking subscription"}),
@@ -150,9 +150,9 @@ test_ui("validate", (override) => {
}
initialize_pm_pill();
- assert(!compose.validate());
- assert(!$("#sending-indicator").visible());
- assert(!$("#compose-send-button").is_focused());
+ assert.ok(!compose.validate());
+ assert.ok(!$("#sending-indicator").visible());
+ assert.ok(!$("#compose-send-button").is_focused());
assert.equal($("#compose-send-button").prop("disabled"), false);
assert.equal(
$("#compose-error-msg").html(),
@@ -173,8 +173,8 @@ test_ui("validate", (override) => {
}
return false;
};
- assert(!compose.validate());
- assert(zephyr_checked);
+ assert.ok(!compose.validate());
+ assert.ok(zephyr_checked);
assert.equal(
$("#compose-error-msg").html(),
$t_html({
@@ -189,7 +189,7 @@ test_ui("validate", (override) => {
compose_state.set_message_type("private");
compose_state.private_message_recipient("");
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
$t_html({defaultMessage: "Please specify at least one valid recipient"}),
@@ -199,7 +199,7 @@ test_ui("validate", (override) => {
add_content_to_compose_box();
compose_state.private_message_recipient("[email protected]");
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
@@ -207,7 +207,7 @@ test_ui("validate", (override) => {
);
compose_state.private_message_recipient("[email protected],[email protected]");
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
@@ -216,15 +216,15 @@ test_ui("validate", (override) => {
people.add_active_user(bob);
compose_state.private_message_recipient("[email protected]");
- assert(compose.validate());
+ assert.ok(compose.validate());
page_params.realm_is_zephyr_mirror_realm = true;
- assert(compose.validate());
+ assert.ok(compose.validate());
page_params.realm_is_zephyr_mirror_realm = false;
compose_state.set_message_type("stream");
compose_state.stream_name("");
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
$t_html({defaultMessage: "Please specify a stream"}),
@@ -233,7 +233,7 @@ test_ui("validate", (override) => {
compose_state.stream_name("Denmark");
page_params.realm_mandatory_topics = true;
compose_state.topic("");
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
$t_html({defaultMessage: "Please specify a topic"}),
@@ -276,9 +276,9 @@ test_ui("validate_stream_message", (override) => {
};
stream_data.add_sub(sub);
compose_state.stream_name("social");
- assert(compose.validate());
- assert(!$("#compose-all-everyone").visible());
- assert(!$("#compose-send-status").visible());
+ assert.ok(compose.validate());
+ assert.ok(!$("#compose-all-everyone").visible());
+ assert.ok(!$("#compose-send-status").visible());
peer_data.get_subscriber_count = (stream_id) => {
assert.equal(stream_id, 101);
@@ -296,14 +296,14 @@ test_ui("validate_stream_message", (override) => {
override(compose, "wildcard_mention_allowed", () => true);
compose_state.message_content("Hey @**all**");
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal($("#compose-send-button").prop("disabled"), false);
- assert(!$("#compose-send-status").visible());
+ assert.ok(!$("#compose-send-status").visible());
assert.equal(compose_content, "compose_all_everyone_stub");
- assert($("#compose-all-everyone").visible());
+ assert.ok($("#compose-all-everyone").visible());
override(compose, "wildcard_mention_allowed", () => false);
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
$t_html({
@@ -330,7 +330,7 @@ test_ui("test_validate_stream_message_post_policy_admin_only", (override) => {
compose_state.topic("subject102");
compose_state.stream_name("stream102");
stream_data.add_sub(sub);
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
$t_html({defaultMessage: "Only organization admins are allowed to post to this stream."}),
@@ -344,7 +344,7 @@ test_ui("test_validate_stream_message_post_policy_admin_only", (override) => {
compose_state.topic("subject102");
compose_state.stream_name("stream102");
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
$t_html({defaultMessage: "Only organization admins are allowed to post to this stream."}),
@@ -368,7 +368,7 @@ test_ui("test_validate_stream_message_post_policy_moderators_only", (override) =
compose_state.topic("subject104");
compose_state.stream_name("stream104");
stream_data.add_sub(sub);
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
$t_html({
@@ -405,7 +405,7 @@ test_ui("test_validate_stream_message_post_policy_full_members_only", (override)
compose_state.topic("subject103");
compose_state.stream_name("stream103");
stream_data.add_sub(sub);
- assert(!compose.validate());
+ assert.ok(!compose.validate());
assert.equal(
$("#compose-error-msg").html(),
$t_html({defaultMessage: "Guests are not allowed to post to this stream."}),
diff --git a/frontend_tests/node_tests/compose_video.js b/frontend_tests/node_tests/compose_video.js
--- a/frontend_tests/node_tests/compose_video.js
+++ b/frontend_tests/node_tests/compose_video.js
@@ -102,7 +102,7 @@ test("videos", (override) => {
$("#compose-textarea").val("");
handler(ev);
- assert(!called);
+ assert.ok(!called);
})();
(function test_jitsi_video_link_compose_clicked() {
@@ -131,14 +131,14 @@ test("videos", (override) => {
page_params.jitsi_server_url = null;
handler(ev);
- assert(!called);
+ assert.ok(!called);
page_params.jitsi_server_url = "https://meet.jit.si";
handler(ev);
// video link ids consist of 15 random digits
const video_link_regex =
/\[translated: Click to join video call]\(https:\/\/meet.jit.si\/\d{15}\)/;
- assert(called);
+ assert.ok(called);
assert.match(syntax_to_insert, video_link_regex);
})();
@@ -168,7 +168,7 @@ test("videos", (override) => {
page_params.has_zoom_token = false;
window.open = (url) => {
- assert(url.endsWith("/calls/zoom/register"));
+ assert.ok(url.endsWith("/calls/zoom/register"));
// The event here has value=true. We keep it in events.js to
// allow our tooling to verify its schema.
@@ -183,7 +183,7 @@ test("videos", (override) => {
handler(ev);
const video_link_regex = /\[translated: Click to join video call]\(example\.zoom\.com\)/;
- assert(called);
+ assert.ok(called);
assert.match(syntax_to_insert, video_link_regex);
})();
@@ -222,7 +222,7 @@ test("videos", (override) => {
handler(ev);
const video_link_regex =
/\[translated: Click to join video call]\(\/calls\/bigbluebutton\/join\?meeting_id=%22zulip-1%22&password=%22AAAAAAAAAA%22&checksum=%2232702220bff2a22a44aee72e96cfdb4c4091752e%22\)/;
- assert(called);
+ assert.ok(called);
assert.match(syntax_to_insert, video_link_regex);
})();
});
diff --git a/frontend_tests/node_tests/composebox_typeahead.js b/frontend_tests/node_tests/composebox_typeahead.js
--- a/frontend_tests/node_tests/composebox_typeahead.js
+++ b/frontend_tests/node_tests/composebox_typeahead.js
@@ -401,7 +401,7 @@ test("content_typeahead_selected", (override) => {
actual_value = ct.content_typeahead_selected.call(fake_this, othello);
expected_value = "@**Othello, the Moor of Venice** ";
assert.equal(actual_value, expected_value);
- assert(warned_for_mention);
+ assert.ok(warned_for_mention);
fake_this.query = "Hello @oth";
fake_this.token = "oth";
@@ -565,11 +565,11 @@ test("content_typeahead_selected", (override) => {
expected_value = fake_this.query;
assert.equal(actual_value, expected_value);
- assert(caret_called1);
- assert(caret_called2);
- assert(autosize_called);
- assert(set_timeout_called);
- assert(warned_for_stream_link);
+ assert.ok(caret_called1);
+ assert.ok(caret_called2);
+ assert.ok(autosize_called);
+ assert.ok(set_timeout_called);
+ assert.ok(warned_for_stream_link);
});
function sorted_names_from(subs) {
@@ -831,14 +831,14 @@ test("initialize", (override) => {
options.query = "hamletchar";
options.updater(hamletcharacters, event);
assert.deepEqual(appended_names, ["King Lear"]);
- assert(cleared);
+ assert.ok(cleared);
inserted_users = [lear.user_id];
appended_names = [];
cleared = false;
options.updater(hamletcharacters, event);
assert.deepEqual(appended_names, []);
- assert(cleared);
+ assert.ok(cleared);
pm_recipient_typeahead_called = true;
};
@@ -862,7 +862,7 @@ test("initialize", (override) => {
fake_this.options = options;
let actual_value = options.source.call(fake_this, "test #s");
assert.deepEqual(sorted_names_from(actual_value), ["Denmark", "Sweden", "The Netherlands"]);
- assert(caret_called);
+ assert.ok(caret_called);
// options.highlighter()
//
@@ -1048,7 +1048,7 @@ test("initialize", (override) => {
});
$("form#send_message_form").trigger(event);
- assert(compose_finish_called);
+ assert.ok(compose_finish_called);
event.metaKey = false;
event.ctrlKey = true;
$("form#send_message_form").trigger(event);
@@ -1122,11 +1122,11 @@ test("initialize", (override) => {
// Now let's make sure that all the stub functions have been called
// during the initialization.
- assert(stream_typeahead_called);
- assert(subject_typeahead_called);
- assert(pm_recipient_typeahead_called);
- assert(channel_post_called);
- assert(compose_textarea_typeahead_called);
+ assert.ok(stream_typeahead_called);
+ assert.ok(subject_typeahead_called);
+ assert.ok(pm_recipient_typeahead_called);
+ assert.ok(channel_post_called);
+ assert.ok(compose_textarea_typeahead_called);
});
test("begins_typeahead", (override) => {
@@ -1419,15 +1419,15 @@ test("content_highlighter", (override) => {
ct.content_highlighter.call(fake_this, "py");
fake_this = {completing: "something-else"};
- assert(!ct.content_highlighter.call(fake_this));
+ assert.ok(!ct.content_highlighter.call(fake_this));
// Verify that all stub functions have been called.
- assert(th_render_typeahead_item_called);
- assert(th_render_person_called);
- assert(th_render_user_group_called);
- assert(th_render_stream_called);
- assert(th_render_typeahead_item_called);
- assert(th_render_slash_command_called);
+ assert.ok(th_render_typeahead_item_called);
+ assert.ok(th_render_person_called);
+ assert.ok(th_render_user_group_called);
+ assert.ok(th_render_stream_called);
+ assert.ok(th_render_typeahead_item_called);
+ assert.ok(th_render_slash_command_called);
});
test("filter_and_sort_mentions (normal)", () => {
diff --git a/frontend_tests/node_tests/copy_and_paste.js b/frontend_tests/node_tests/copy_and_paste.js
--- a/frontend_tests/node_tests/copy_and_paste.js
+++ b/frontend_tests/node_tests/copy_and_paste.js
@@ -100,7 +100,7 @@ run_test("paste_handler", () => {
insert_syntax_and_focus_called = true;
};
copy_and_paste.paste_handler(event);
- assert(insert_syntax_and_focus_called);
+ assert.ok(insert_syntax_and_focus_called);
data =
'<meta http-equiv="content-type" content="text/html; charset=utf-8"><img src="http://localhost:9991/thumbnail?url=user_uploads%2F1%2Fe2%2FHPMCcGWOG9rS2M4ybHN8sEzh%2Fpasted_image.png&size=full"/>';
@@ -108,5 +108,5 @@ run_test("paste_handler", () => {
event.originalEvent.clipboardData.setData("text/html", data);
insert_syntax_and_focus_called = false;
copy_and_paste.paste_handler(event);
- assert(!insert_syntax_and_focus_called);
+ assert.ok(!insert_syntax_and_focus_called);
});
diff --git a/frontend_tests/node_tests/dispatch.js b/frontend_tests/node_tests/dispatch.js
--- a/frontend_tests/node_tests/dispatch.js
+++ b/frontend_tests/node_tests/dispatch.js
@@ -115,16 +115,16 @@ function assert_same(actual, expected) {
run_test("alert_words", (override) => {
alert_words.initialize({alert_words: []});
- assert(!alert_words.has_alert_word("fire"));
- assert(!alert_words.has_alert_word("lunch"));
+ assert.ok(!alert_words.has_alert_word("fire"));
+ assert.ok(!alert_words.has_alert_word("lunch"));
override(alert_words_ui, "render_alert_words_ui", noop);
const event = event_fixtures.alert_words;
dispatch(event);
assert.deepEqual(alert_words.get_word_list(), ["fire", "lunch"]);
- assert(alert_words.has_alert_word("fire"));
- assert(alert_words.has_alert_word("lunch"));
+ assert.ok(alert_words.has_alert_word("fire"));
+ assert.ok(alert_words.has_alert_word("lunch"));
});
run_test("attachments", (override) => {
@@ -305,7 +305,7 @@ run_test("realm settings", (override) => {
called = true;
});
- assert(called);
+ assert.ok(called);
}
// realm
@@ -567,7 +567,7 @@ run_test("realm_user", (override) => {
// manipulation
assert.deepEqual(added_person, event.person);
- assert(people.is_active_user_for_popover(event.person.user_id));
+ assert.ok(people.is_active_user_for_popover(event.person.user_id));
event = event_fixtures.realm_user__remove;
override(stream_events, "remove_deactivated_user_from_all_streams", noop);
@@ -576,7 +576,7 @@ run_test("realm_user", (override) => {
// We don't actually remove the person, we just deactivate them.
const removed_person = people.get_by_user_id(event.person.user_id);
assert.equal(removed_person.full_name, "Test User");
- assert(!people.is_active_user_for_popover(event.person.user_id));
+ assert.ok(!people.is_active_user_for_popover(event.person.user_id));
event = event_fixtures.realm_user__update;
const stub = make_stub();
diff --git a/frontend_tests/node_tests/dispatch_subs.js b/frontend_tests/node_tests/dispatch_subs.js
--- a/frontend_tests/node_tests/dispatch_subs.js
+++ b/frontend_tests/node_tests/dispatch_subs.js
@@ -88,14 +88,14 @@ test("peer add/remove", (override) => {
assert.equal(compose_fade_stub.num_calls, 1);
assert.equal(subs_stub.num_calls, 1);
- assert(peer_data.is_user_subscribed(event.stream_ids[0], event.user_ids[0]));
+ assert.ok(peer_data.is_user_subscribed(event.stream_ids[0], event.user_ids[0]));
event = event_fixtures.subscription__peer_remove;
dispatch(event);
assert.equal(compose_fade_stub.num_calls, 2);
assert.equal(subs_stub.num_calls, 2);
- assert(!peer_data.is_user_subscribed(event.stream_ids[0], event.user_ids[0]));
+ assert.ok(!peer_data.is_user_subscribed(event.stream_ids[0], event.user_ids[0]));
});
test("remove", (override) => {
diff --git a/frontend_tests/node_tests/drafts.js b/frontend_tests/node_tests/drafts.js
--- a/frontend_tests/node_tests/drafts.js
+++ b/frontend_tests/node_tests/drafts.js
@@ -171,7 +171,7 @@ test("initialize", (override) => {
called = true;
});
f();
- assert(called);
+ assert.ok(called);
};
drafts.initialize();
diff --git a/frontend_tests/node_tests/dropdown_list_widget.js b/frontend_tests/node_tests/dropdown_list_widget.js
--- a/frontend_tests/node_tests/dropdown_list_widget.js
+++ b/frontend_tests/node_tests/dropdown_list_widget.js
@@ -43,25 +43,25 @@ run_test("basic_functions", () => {
assert.equal(widget.value(), "one");
assert.equal(updated_value, undefined); // We haven't 'updated' the widget yet.
- assert(reset_button.visible());
+ assert.ok(reset_button.visible());
widget.update("two");
assert.equal($widget.text(), "rendered: two");
assert.equal(widget.value(), "two");
assert.equal(updated_value, "two");
- assert(reset_button.visible());
+ assert.ok(reset_button.visible());
widget.update(null);
assert.equal($widget.text(), "translated: not set");
assert.equal(widget.value(), "");
assert.equal(updated_value, null);
- assert(!reset_button.visible());
+ assert.ok(!reset_button.visible());
widget.update("four");
assert.equal($widget.text(), "translated: not set");
assert.equal(widget.value(), "four");
assert.equal(updated_value, "four");
- assert(!reset_button.visible());
+ assert.ok(!reset_button.visible());
});
run_test("no_default_value", () => {
diff --git a/frontend_tests/node_tests/echo.js b/frontend_tests/node_tests/echo.js
--- a/frontend_tests/node_tests/echo.js
+++ b/frontend_tests/node_tests/echo.js
@@ -226,9 +226,9 @@ run_test("insert_local_message streams", (override) => {
};
echo.insert_local_message(message_request, local_id_float);
- assert(apply_markdown_called);
- assert(add_topic_links_called);
- assert(insert_message_called);
+ assert.ok(apply_markdown_called);
+ assert.ok(add_topic_links_called);
+ assert.ok(insert_message_called);
});
run_test("insert_local_message PM", (override) => {
@@ -273,9 +273,9 @@ run_test("insert_local_message PM", (override) => {
sender_id: 123,
};
echo.insert_local_message(message_request, local_id_float);
- assert(add_topic_links_called);
- assert(apply_markdown_called);
- assert(insert_message_called);
+ assert.ok(add_topic_links_called);
+ assert.ok(apply_markdown_called);
+ assert.ok(insert_message_called);
});
MockDate.reset();
diff --git a/frontend_tests/node_tests/example1.js b/frontend_tests/node_tests/example1.js
--- a/frontend_tests/node_tests/example1.js
+++ b/frontend_tests/node_tests/example1.js
@@ -25,8 +25,8 @@ const util = zrequire("util");
// The most basic unit tests load up code, call functions,
// and assert truths:
-assert(!util.find_wildcard_mentions("boring text"));
-assert(util.find_wildcard_mentions("mention @**everyone**"));
+assert.ok(!util.find_wildcard_mentions("boring text"));
+assert.ok(util.find_wildcard_mentions("mention @**everyone**"));
// Let's test with people.js next. We'll show this technique:
// * get a false value
@@ -44,9 +44,9 @@ const isaac = {
// the tests in people.js in the same directory as this file.
// Let's exercise the code and use assert to verify it works!
-assert(!people.is_known_user_id(isaac.user_id));
+assert.ok(!people.is_known_user_id(isaac.user_id));
people.add_active_user(isaac);
-assert(people.is_known_user_id(isaac.user_id));
+assert.ok(people.is_known_user_id(isaac.user_id));
// Let's look at stream_data next, and we will start by putting
// some data at module scope. (You could also declare this inside
diff --git a/frontend_tests/node_tests/example4.js b/frontend_tests/node_tests/example4.js
--- a/frontend_tests/node_tests/example4.js
+++ b/frontend_tests/node_tests/example4.js
@@ -79,13 +79,13 @@ run_test("add users with event", () => {
person: bob,
};
- assert(!people.is_known_user_id(bob.user_id));
+ assert.ok(!people.is_known_user_id(bob.user_id));
// Let's simulate dispatching our event!
server_events_dispatch.dispatch_normal_event(event);
// And it works!
- assert(people.is_known_user_id(bob.user_id));
+ assert.ok(people.is_known_user_id(bob.user_id));
});
/*
diff --git a/frontend_tests/node_tests/filter.js b/frontend_tests/node_tests/filter.js
--- a/frontend_tests/node_tests/filter.js
+++ b/frontend_tests/node_tests/filter.js
@@ -78,20 +78,20 @@ test("basics", () => {
assert_same_operators(filter.operators(), operators);
assert.deepEqual(filter.operands("stream"), ["foo"]);
- assert(filter.has_operator("stream"));
- assert(!filter.has_operator("search"));
+ assert.ok(filter.has_operator("stream"));
+ assert.ok(!filter.has_operator("search"));
- assert(filter.has_operand("stream", "foo"));
- assert(!filter.has_operand("stream", "exclude_stream"));
- assert(!filter.has_operand("stream", "nada"));
+ assert.ok(filter.has_operand("stream", "foo"));
+ assert.ok(!filter.has_operand("stream", "exclude_stream"));
+ assert.ok(!filter.has_operand("stream", "nada"));
- assert(!filter.is_search());
- assert(!filter.can_mark_messages_read());
- assert(!filter.contains_only_private_messages());
- assert(!filter.allow_use_first_unread_when_narrowing());
- assert(filter.includes_full_stream_history());
- assert(filter.can_apply_locally());
- assert(!filter.is_personal_filter());
+ assert.ok(!filter.is_search());
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(!filter.contains_only_private_messages());
+ assert.ok(!filter.allow_use_first_unread_when_narrowing());
+ assert.ok(filter.includes_full_stream_history());
+ assert.ok(filter.can_apply_locally());
+ assert.ok(!filter.is_personal_filter());
operators = [
{operator: "stream", operand: "foo"},
@@ -100,184 +100,184 @@ test("basics", () => {
];
filter = new Filter(operators);
- assert(filter.is_search());
- assert(!filter.can_mark_messages_read());
- assert(!filter.contains_only_private_messages());
- assert(!filter.allow_use_first_unread_when_narrowing());
- assert(!filter.can_apply_locally());
- assert(!filter.is_personal_filter());
- assert(filter.can_bucket_by("stream"));
- assert(filter.can_bucket_by("stream", "topic"));
+ assert.ok(filter.is_search());
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(!filter.contains_only_private_messages());
+ assert.ok(!filter.allow_use_first_unread_when_narrowing());
+ assert.ok(!filter.can_apply_locally());
+ assert.ok(!filter.is_personal_filter());
+ assert.ok(filter.can_bucket_by("stream"));
+ assert.ok(filter.can_bucket_by("stream", "topic"));
// If our only stream operator is negated, then for all intents and purposes,
// we don't consider ourselves to have a stream operator, because we don't
// want to have the stream in the tab bar or unsubscribe messaging, etc.
operators = [{operator: "stream", operand: "exclude", negated: true}];
filter = new Filter(operators);
- assert(!filter.contains_only_private_messages());
- assert(!filter.has_operator("stream"));
- assert(!filter.can_mark_messages_read());
- assert(!filter.is_personal_filter());
+ assert.ok(!filter.contains_only_private_messages());
+ assert.ok(!filter.has_operator("stream"));
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(!filter.is_personal_filter());
// Negated searches are just like positive searches for our purposes, since
// the search logic happens on the backend and we need to have can_apply_locally()
// be false, and we want "Search results" in the tab bar.
operators = [{operator: "search", operand: "stop_word", negated: true}];
filter = new Filter(operators);
- assert(!filter.contains_only_private_messages());
- assert(filter.has_operator("search"));
- assert(!filter.can_apply_locally());
- assert(!filter.can_mark_messages_read());
- assert(!filter.is_personal_filter());
+ assert.ok(!filter.contains_only_private_messages());
+ assert.ok(filter.has_operator("search"));
+ assert.ok(!filter.can_apply_locally());
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(!filter.is_personal_filter());
// Similar logic applies to negated "has" searches.
operators = [{operator: "has", operand: "images", negated: true}];
filter = new Filter(operators);
- assert(filter.has_operator("has"));
- assert(filter.can_apply_locally());
- assert(!filter.can_apply_locally(true));
- assert(!filter.includes_full_stream_history());
- assert(!filter.can_mark_messages_read());
- assert(!filter.is_personal_filter());
+ assert.ok(filter.has_operator("has"));
+ assert.ok(filter.can_apply_locally());
+ assert.ok(!filter.can_apply_locally(true));
+ assert.ok(!filter.includes_full_stream_history());
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(!filter.is_personal_filter());
operators = [{operator: "streams", operand: "public", negated: true}];
filter = new Filter(operators);
- assert(!filter.contains_only_private_messages());
- assert(!filter.has_operator("streams"));
- assert(!filter.can_mark_messages_read());
- assert(filter.has_negated_operand("streams", "public"));
- assert(!filter.can_apply_locally());
- assert(!filter.is_personal_filter());
+ assert.ok(!filter.contains_only_private_messages());
+ assert.ok(!filter.has_operator("streams"));
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(filter.has_negated_operand("streams", "public"));
+ assert.ok(!filter.can_apply_locally());
+ assert.ok(!filter.is_personal_filter());
operators = [{operator: "streams", operand: "public"}];
filter = new Filter(operators);
- assert(!filter.contains_only_private_messages());
- assert(filter.has_operator("streams"));
- assert(!filter.can_mark_messages_read());
- assert(!filter.has_negated_operand("streams", "public"));
- assert(!filter.can_apply_locally());
- assert(filter.includes_full_stream_history());
- assert(!filter.is_personal_filter());
+ assert.ok(!filter.contains_only_private_messages());
+ assert.ok(filter.has_operator("streams"));
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(!filter.has_negated_operand("streams", "public"));
+ assert.ok(!filter.can_apply_locally());
+ assert.ok(filter.includes_full_stream_history());
+ assert.ok(!filter.is_personal_filter());
operators = [{operator: "is", operand: "private"}];
filter = new Filter(operators);
- assert(filter.contains_only_private_messages());
- assert(filter.can_mark_messages_read());
- assert(!filter.has_operator("search"));
- assert(filter.can_apply_locally());
- assert(!filter.is_personal_filter());
+ assert.ok(filter.contains_only_private_messages());
+ assert.ok(filter.can_mark_messages_read());
+ assert.ok(!filter.has_operator("search"));
+ assert.ok(filter.can_apply_locally());
+ assert.ok(!filter.is_personal_filter());
operators = [{operator: "is", operand: "mentioned"}];
filter = new Filter(operators);
- assert(!filter.contains_only_private_messages());
- assert(filter.can_mark_messages_read());
- assert(!filter.has_operator("search"));
- assert(filter.can_apply_locally());
- assert(filter.is_personal_filter());
+ assert.ok(!filter.contains_only_private_messages());
+ assert.ok(filter.can_mark_messages_read());
+ assert.ok(!filter.has_operator("search"));
+ assert.ok(filter.can_apply_locally());
+ assert.ok(filter.is_personal_filter());
operators = [{operator: "is", operand: "starred"}];
filter = new Filter(operators);
- assert(!filter.contains_only_private_messages());
- assert(!filter.can_mark_messages_read());
- assert(!filter.has_operator("search"));
- assert(filter.can_apply_locally());
- assert(filter.is_personal_filter());
+ assert.ok(!filter.contains_only_private_messages());
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(!filter.has_operator("search"));
+ assert.ok(filter.can_apply_locally());
+ assert.ok(filter.is_personal_filter());
operators = [{operator: "pm-with", operand: "[email protected]"}];
filter = new Filter(operators);
- assert(filter.is_non_huddle_pm());
- assert(filter.contains_only_private_messages());
- assert(!filter.has_operator("search"));
- assert(filter.can_apply_locally());
- assert(!filter.is_personal_filter());
+ assert.ok(filter.is_non_huddle_pm());
+ assert.ok(filter.contains_only_private_messages());
+ assert.ok(!filter.has_operator("search"));
+ assert.ok(filter.can_apply_locally());
+ assert.ok(!filter.is_personal_filter());
operators = [{operator: "pm-with", operand: "[email protected],[email protected]"}];
filter = new Filter(operators);
- assert(!filter.is_non_huddle_pm());
- assert(filter.contains_only_private_messages());
+ assert.ok(!filter.is_non_huddle_pm());
+ assert.ok(filter.contains_only_private_messages());
operators = [{operator: "group-pm-with", operand: "[email protected]"}];
filter = new Filter(operators);
- assert(!filter.is_non_huddle_pm());
- assert(filter.contains_only_private_messages());
- assert(!filter.has_operator("search"));
- assert(filter.can_apply_locally());
+ assert.ok(!filter.is_non_huddle_pm());
+ assert.ok(filter.contains_only_private_messages());
+ assert.ok(!filter.has_operator("search"));
+ assert.ok(filter.can_apply_locally());
});
function assert_not_mark_read_with_has_operands(additional_operators_to_test) {
additional_operators_to_test = additional_operators_to_test || [];
let has_operator = [{operator: "has", operand: "link"}];
let filter = new Filter(additional_operators_to_test.concat(has_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
has_operator = [{operator: "has", operand: "link", negated: true}];
filter = new Filter(additional_operators_to_test.concat(has_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
has_operator = [{operator: "has", operand: "image"}];
filter = new Filter(additional_operators_to_test.concat(has_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
has_operator = [{operator: "has", operand: "image", negated: true}];
filter = new Filter(additional_operators_to_test.concat(has_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
has_operator = [{operator: "has", operand: "attachment", negated: true}];
filter = new Filter(additional_operators_to_test.concat(has_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
has_operator = [{operator: "has", operand: "attachment"}];
filter = new Filter(additional_operators_to_test.concat(has_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
}
function assert_not_mark_read_with_is_operands(additional_operators_to_test) {
additional_operators_to_test = additional_operators_to_test || [];
let is_operator = [{operator: "is", operand: "starred"}];
let filter = new Filter(additional_operators_to_test.concat(is_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
is_operator = [{operator: "is", operand: "starred", negated: true}];
filter = new Filter(additional_operators_to_test.concat(is_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
is_operator = [{operator: "is", operand: "mentioned"}];
filter = new Filter(additional_operators_to_test.concat(is_operator));
if (additional_operators_to_test.length === 0) {
- assert(filter.can_mark_messages_read());
+ assert.ok(filter.can_mark_messages_read());
} else {
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
}
is_operator = [{operator: "is", operand: "mentioned", negated: true}];
filter = new Filter(additional_operators_to_test.concat(is_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
is_operator = [{operator: "is", operand: "alerted"}];
filter = new Filter(additional_operators_to_test.concat(is_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
is_operator = [{operator: "is", operand: "alerted", negated: true}];
filter = new Filter(additional_operators_to_test.concat(is_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
is_operator = [{operator: "is", operand: "unread"}];
filter = new Filter(additional_operators_to_test.concat(is_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
is_operator = [{operator: "is", operand: "unread", negated: true}];
filter = new Filter(additional_operators_to_test.concat(is_operator));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
}
function assert_not_mark_read_when_searching(additional_operators_to_test) {
additional_operators_to_test = additional_operators_to_test || [];
let search_op = [{operator: "search", operand: "keyword"}];
let filter = new Filter(additional_operators_to_test.concat(search_op));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
search_op = [{operator: "search", operand: "keyword", negated: true}];
filter = new Filter(additional_operators_to_test.concat(search_op));
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
}
test("can_mark_messages_read", () => {
@@ -287,21 +287,21 @@ test("can_mark_messages_read", () => {
const stream_operator = [{operator: "stream", operand: "foo"}];
let filter = new Filter(stream_operator);
- assert(filter.can_mark_messages_read());
+ assert.ok(filter.can_mark_messages_read());
assert_not_mark_read_with_has_operands(stream_operator);
assert_not_mark_read_with_is_operands(stream_operator);
assert_not_mark_read_when_searching(stream_operator);
const stream_negated_operator = [{operator: "stream", operand: "foo", negated: true}];
filter = new Filter(stream_negated_operator);
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
const stream_topic_operators = [
{operator: "stream", operand: "foo"},
{operator: "topic", operand: "bar"},
];
filter = new Filter(stream_topic_operators);
- assert(filter.can_mark_messages_read());
+ assert.ok(filter.can_mark_messages_read());
assert_not_mark_read_with_has_operands(stream_topic_operators);
assert_not_mark_read_with_is_operands(stream_topic_operators);
assert_not_mark_read_when_searching(stream_topic_operators);
@@ -311,7 +311,7 @@ test("can_mark_messages_read", () => {
{operator: "topic", operand: "bar", negated: true},
];
filter = new Filter(stream_negated_topic_operators);
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
const pm_with = [{operator: "pm-with", operand: "[email protected],"}];
@@ -319,11 +319,11 @@ test("can_mark_messages_read", () => {
const group_pm = [{operator: "pm-with", operand: "[email protected],[email protected]"}];
filter = new Filter(pm_with);
- assert(filter.can_mark_messages_read());
+ assert.ok(filter.can_mark_messages_read());
filter = new Filter(pm_with_negated);
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
filter = new Filter(group_pm);
- assert(filter.can_mark_messages_read());
+ assert.ok(filter.can_mark_messages_read());
assert_not_mark_read_with_is_operands(group_pm);
assert_not_mark_read_with_is_operands(pm_with);
assert_not_mark_read_with_has_operands(group_pm);
@@ -333,14 +333,14 @@ test("can_mark_messages_read", () => {
const is_private = [{operator: "is", operand: "private"}];
filter = new Filter(is_private);
- assert(filter.can_mark_messages_read());
+ assert.ok(filter.can_mark_messages_read());
assert_not_mark_read_with_is_operands(is_private);
assert_not_mark_read_with_has_operands(is_private);
assert_not_mark_read_when_searching(is_private);
const in_all = [{operator: "in", operand: "all"}];
filter = new Filter(in_all);
- assert(filter.can_mark_messages_read());
+ assert.ok(filter.can_mark_messages_read());
assert_not_mark_read_with_is_operands(in_all);
assert_not_mark_read_with_has_operands(in_all);
assert_not_mark_read_when_searching(in_all);
@@ -348,20 +348,20 @@ test("can_mark_messages_read", () => {
const in_home = [{operator: "in", operand: "home"}];
const in_home_negated = [{operator: "in", operand: "home", negated: true}];
filter = new Filter(in_home);
- assert(filter.can_mark_messages_read());
+ assert.ok(filter.can_mark_messages_read());
assert_not_mark_read_with_is_operands(in_home);
assert_not_mark_read_with_has_operands(in_home);
assert_not_mark_read_when_searching(in_home);
filter = new Filter(in_home_negated);
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
// Do not mark messages as read when in an unsupported 'in:*' filter.
const in_random = [{operator: "in", operand: "xxxxxxxxx"}];
const in_random_negated = [{operator: "in", operand: "xxxxxxxxx", negated: true}];
filter = new Filter(in_random);
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
filter = new Filter(in_random_negated);
- assert(!filter.can_mark_messages_read());
+ assert.ok(!filter.can_mark_messages_read());
// test caching of term types
// init and stub
@@ -374,33 +374,33 @@ test("can_mark_messages_read", () => {
// uncached trial
filter.calc_can_mark_messages_read_called = false;
- assert(filter.can_mark_messages_read());
- assert(filter.calc_can_mark_messages_read_called);
+ assert.ok(filter.can_mark_messages_read());
+ assert.ok(filter.calc_can_mark_messages_read_called);
// cached trial
filter.calc_can_mark_messages_read_called = false;
- assert(filter.can_mark_messages_read());
- assert(!filter.calc_can_mark_messages_read_called);
+ assert.ok(filter.can_mark_messages_read());
+ assert.ok(!filter.calc_can_mark_messages_read_called);
});
test("show_first_unread", () => {
let operators = [{operator: "is", operand: "any"}];
let filter = new Filter(operators);
- assert(filter.allow_use_first_unread_when_narrowing());
+ assert.ok(filter.allow_use_first_unread_when_narrowing());
operators = [{operator: "search", operand: "query to search"}];
filter = new Filter(operators);
- assert(!filter.allow_use_first_unread_when_narrowing());
+ assert.ok(!filter.allow_use_first_unread_when_narrowing());
filter = new Filter();
- assert(filter.can_mark_messages_read());
- assert(filter.allow_use_first_unread_when_narrowing());
+ assert.ok(filter.can_mark_messages_read());
+ assert.ok(filter.allow_use_first_unread_when_narrowing());
// Side case
operators = [{operator: "is", operand: "any"}];
filter = new Filter(operators);
- assert(!filter.can_mark_messages_read());
- assert(filter.allow_use_first_unread_when_narrowing());
+ assert.ok(!filter.can_mark_messages_read());
+ assert.ok(filter.allow_use_first_unread_when_narrowing());
});
test("filter_with_new_params_topic", () => {
@@ -410,9 +410,9 @@ test("filter_with_new_params_topic", () => {
];
const filter = new Filter(operators);
- assert(filter.has_topic("foo", "old topic"));
- assert(!filter.has_topic("wrong", "old topic"));
- assert(!filter.has_topic("foo", "wrong"));
+ assert.ok(filter.has_topic("foo", "old topic"));
+ assert.ok(!filter.has_topic("wrong", "old topic"));
+ assert.ok(!filter.has_topic("foo", "wrong"));
const new_filter = filter.filter_with_new_params({
operator: "topic",
@@ -430,9 +430,9 @@ test("filter_with_new_params_stream", () => {
];
const filter = new Filter(operators);
- assert(filter.has_topic("foo", "old topic"));
- assert(!filter.has_topic("wrong", "old topic"));
- assert(!filter.has_topic("foo", "wrong"));
+ assert.ok(filter.has_topic("foo", "old topic"));
+ assert.ok(!filter.has_topic("wrong", "old topic"));
+ assert.ok(!filter.has_topic("foo", "wrong"));
const new_filter = filter.filter_with_new_params({
operator: "stream",
@@ -452,7 +452,7 @@ test("new_style_operators", () => {
const filter = new Filter(operators);
assert.deepEqual(filter.operands("stream"), ["foo"]);
- assert(filter.can_bucket_by("stream"));
+ assert.ok(filter.can_bucket_by("stream"));
});
test("public_operators", () => {
@@ -467,7 +467,7 @@ test("public_operators", () => {
with_field(page_params, "narrow_stream", undefined, () => {
assert_same_operators(filter.public_operators(), operators);
});
- assert(filter.can_bucket_by("stream"));
+ assert.ok(filter.can_bucket_by("stream"));
operators = [{operator: "stream", operand: "default"}];
filter = new Filter(operators);
@@ -485,14 +485,14 @@ test("redundancies", () => {
{operator: "is", operand: "private"},
];
filter = new Filter(terms);
- assert(filter.can_bucket_by("pm-with"));
+ assert.ok(filter.can_bucket_by("pm-with"));
terms = [
{operator: "pm-with", operand: "[email protected],", negated: true},
{operator: "is", operand: "private"},
];
filter = new Filter(terms);
- assert(filter.can_bucket_by("is-private", "not-pm-with"));
+ assert.ok(filter.can_bucket_by("is-private", "not-pm-with"));
});
test("canonicalization", () => {
@@ -559,10 +559,10 @@ test("predicate_basics", () => {
["topic", "Bar"],
]);
- assert(predicate({type: "stream", stream_id, topic: "bar"}));
- assert(!predicate({type: "stream", stream_id, topic: "whatever"}));
- assert(!predicate({type: "stream", stream_id: 9999999}));
- assert(!predicate({type: "private"}));
+ assert.ok(predicate({type: "stream", stream_id, topic: "bar"}));
+ assert.ok(!predicate({type: "stream", stream_id, topic: "whatever"}));
+ assert.ok(!predicate({type: "stream", stream_id: 9999999}));
+ assert.ok(!predicate({type: "private"}));
// For old streams that we are no longer subscribed to, we may not have
// a sub, but these should still match by stream name.
@@ -570,92 +570,92 @@ test("predicate_basics", () => {
["stream", "old-Stream"],
["topic", "Bar"],
]);
- assert(predicate({type: "stream", stream: "Old-stream", topic: "bar"}));
- assert(!predicate({type: "stream", stream: "no-match", topic: "whatever"}));
+ assert.ok(predicate({type: "stream", stream: "Old-stream", topic: "bar"}));
+ assert.ok(!predicate({type: "stream", stream: "no-match", topic: "whatever"}));
predicate = get_predicate([["search", "emoji"]]);
- assert(predicate({}));
+ assert.ok(predicate({}));
predicate = get_predicate([["topic", "Bar"]]);
- assert(!predicate({type: "private"}));
+ assert.ok(!predicate({type: "private"}));
predicate = get_predicate([["is", "private"]]);
- assert(predicate({type: "private"}));
- assert(!predicate({type: "stream"}));
+ assert.ok(predicate({type: "private"}));
+ assert.ok(!predicate({type: "stream"}));
predicate = get_predicate([["streams", "public"]]);
- assert(predicate({}));
+ assert.ok(predicate({}));
predicate = get_predicate([["is", "starred"]]);
- assert(predicate({starred: true}));
- assert(!predicate({starred: false}));
+ assert.ok(predicate({starred: true}));
+ assert.ok(!predicate({starred: false}));
predicate = get_predicate([["is", "unread"]]);
- assert(predicate({unread: true}));
- assert(!predicate({unread: false}));
+ assert.ok(predicate({unread: true}));
+ assert.ok(!predicate({unread: false}));
predicate = get_predicate([["is", "alerted"]]);
- assert(predicate({alerted: true}));
- assert(!predicate({alerted: false}));
- assert(!predicate({}));
+ assert.ok(predicate({alerted: true}));
+ assert.ok(!predicate({alerted: false}));
+ assert.ok(!predicate({}));
predicate = get_predicate([["is", "mentioned"]]);
- assert(predicate({mentioned: true}));
- assert(!predicate({mentioned: false}));
+ assert.ok(predicate({mentioned: true}));
+ assert.ok(!predicate({mentioned: false}));
predicate = get_predicate([["in", "all"]]);
- assert(predicate({}));
+ assert.ok(predicate({}));
const unknown_stream_id = 999;
predicate = get_predicate([["in", "home"]]);
- assert(!predicate({stream_id: unknown_stream_id, stream: "unknown"}));
- assert(predicate({type: "private"}));
+ assert.ok(!predicate({stream_id: unknown_stream_id, stream: "unknown"}));
+ assert.ok(predicate({type: "private"}));
with_field(page_params, "narrow_stream", "kiosk", () => {
- assert(predicate({stream: "kiosk"}));
+ assert.ok(predicate({stream: "kiosk"}));
});
predicate = get_predicate([["near", 5]]);
- assert(predicate({}));
+ assert.ok(predicate({}));
predicate = get_predicate([["id", 5]]);
- assert(predicate({id: 5}));
- assert(!predicate({id: 6}));
+ assert.ok(predicate({id: 5}));
+ assert.ok(!predicate({id: 6}));
predicate = get_predicate([
["id", 5],
["topic", "lunch"],
]);
- assert(predicate({type: "stream", id: 5, topic: "lunch"}));
- assert(!predicate({type: "stream", id: 5, topic: "dinner"}));
+ assert.ok(predicate({type: "stream", id: 5, topic: "lunch"}));
+ assert.ok(!predicate({type: "stream", id: 5, topic: "dinner"}));
predicate = get_predicate([["sender", "[email protected]"]]);
- assert(predicate({sender_id: joe.user_id}));
- assert(!predicate({sender_email: steve.user_id}));
+ assert.ok(predicate({sender_id: joe.user_id}));
+ assert.ok(!predicate({sender_email: steve.user_id}));
predicate = get_predicate([["pm-with", "[email protected]"]]);
- assert(
+ assert.ok(
predicate({
type: "private",
display_recipient: [{id: joe.user_id}],
}),
);
- assert(
+ assert.ok(
!predicate({
type: "private",
display_recipient: [{id: steve.user_id}],
}),
);
- assert(
+ assert.ok(
!predicate({
type: "private",
display_recipient: [{id: 999999}],
}),
);
- assert(!predicate({type: "stream"}));
+ assert.ok(!predicate({type: "stream"}));
predicate = get_predicate([["pm-with", "[email protected],[email protected]"]]);
- assert(
+ assert.ok(
predicate({
type: "private",
display_recipient: [{id: joe.user_id}, {id: steve.user_id}],
@@ -664,7 +664,7 @@ test("predicate_basics", () => {
// Make sure your own email is ignored
predicate = get_predicate([["pm-with", "[email protected],[email protected],[email protected]"]]);
- assert(
+ assert.ok(
predicate({
type: "private",
display_recipient: [{id: joe.user_id}, {id: steve.user_id}],
@@ -672,7 +672,7 @@ test("predicate_basics", () => {
);
predicate = get_predicate([["pm-with", "[email protected]"]]);
- assert(
+ assert.ok(
!predicate({
type: "private",
display_recipient: [{id: joe.user_id}],
@@ -680,7 +680,7 @@ test("predicate_basics", () => {
);
predicate = get_predicate([["group-pm-with", "[email protected]"]]);
- assert(
+ assert.ok(
!predicate({
type: "private",
display_recipient: [{id: joe.user_id}],
@@ -688,26 +688,26 @@ test("predicate_basics", () => {
);
predicate = get_predicate([["group-pm-with", "[email protected]"]]);
- assert(
+ assert.ok(
predicate({
type: "private",
display_recipient: [{id: joe.user_id}, {id: steve.user_id}, {id: me.user_id}],
}),
);
- assert(
+ assert.ok(
!predicate({
// you must be a part of the group pm
type: "private",
display_recipient: [{id: joe.user_id}, {id: steve.user_id}],
}),
);
- assert(
+ assert.ok(
!predicate({
type: "private",
display_recipient: [{id: steve.user_id}, {id: me.user_id}],
}),
);
- assert(!predicate({type: "stream"}));
+ assert.ok(!predicate({type: "stream"}));
const img_msg = {
content:
@@ -727,10 +727,10 @@ test("predicate_basics", () => {
};
predicate = get_predicate([["has", "non_valid_operand"]]);
- assert(!predicate(img_msg));
- assert(!predicate(non_img_attachment_msg));
- assert(!predicate(link_msg));
- assert(!predicate(no_has_filter_matching_msg));
+ assert.ok(!predicate(img_msg));
+ assert.ok(!predicate(non_img_attachment_msg));
+ assert.ok(!predicate(link_msg));
+ assert.ok(!predicate(no_has_filter_matching_msg));
// HTML content of message is used to determine if image have link, image or attachment.
// We are using jquery to parse the html and find existence of relevant tags/elements.
@@ -741,33 +741,33 @@ test("predicate_basics", () => {
const has_link = get_predicate([["has", "link"]]);
set_find_results_for_msg_content(img_msg, "a", ["stub"]);
- assert(has_link(img_msg));
+ assert.ok(has_link(img_msg));
set_find_results_for_msg_content(non_img_attachment_msg, "a", ["stub"]);
- assert(has_link(non_img_attachment_msg));
+ assert.ok(has_link(non_img_attachment_msg));
set_find_results_for_msg_content(link_msg, "a", ["stub"]);
- assert(has_link(link_msg));
+ assert.ok(has_link(link_msg));
set_find_results_for_msg_content(no_has_filter_matching_msg, "a", false);
- assert(!has_link(no_has_filter_matching_msg));
+ assert.ok(!has_link(no_has_filter_matching_msg));
const has_attachment = get_predicate([["has", "attachment"]]);
set_find_results_for_msg_content(img_msg, "a[href^='/user_uploads']", ["stub"]);
- assert(has_attachment(img_msg));
+ assert.ok(has_attachment(img_msg));
set_find_results_for_msg_content(non_img_attachment_msg, "a[href^='/user_uploads']", ["stub"]);
- assert(has_attachment(non_img_attachment_msg));
+ assert.ok(has_attachment(non_img_attachment_msg));
set_find_results_for_msg_content(link_msg, "a[href^='/user_uploads']", false);
- assert(!has_attachment(link_msg));
+ assert.ok(!has_attachment(link_msg));
set_find_results_for_msg_content(no_has_filter_matching_msg, "a[href^='/user_uploads']", false);
- assert(!has_attachment(no_has_filter_matching_msg));
+ assert.ok(!has_attachment(no_has_filter_matching_msg));
const has_image = get_predicate([["has", "image"]]);
set_find_results_for_msg_content(img_msg, ".message_inline_image", ["stub"]);
- assert(has_image(img_msg));
+ assert.ok(has_image(img_msg));
set_find_results_for_msg_content(non_img_attachment_msg, ".message_inline_image", false);
- assert(!has_image(non_img_attachment_msg));
+ assert.ok(!has_image(non_img_attachment_msg));
set_find_results_for_msg_content(link_msg, ".message_inline_image", false);
- assert(!has_image(link_msg));
+ assert.ok(!has_image(link_msg));
set_find_results_for_msg_content(no_has_filter_matching_msg, ".message_inline_image", false);
- assert(!has_image(no_has_filter_matching_msg));
+ assert.ok(!has_image(no_has_filter_matching_msg));
});
test("negated_predicates", () => {
@@ -779,12 +779,12 @@ test("negated_predicates", () => {
narrow = [{operator: "stream", operand: "social", negated: true}];
predicate = new Filter(narrow).predicate();
- assert(predicate({type: "stream", stream_id: 999999}));
- assert(!predicate({type: "stream", stream_id: social_stream_id}));
+ assert.ok(predicate({type: "stream", stream_id: 999999}));
+ assert.ok(!predicate({type: "stream", stream_id: social_stream_id}));
narrow = [{operator: "streams", operand: "public", negated: true}];
predicate = new Filter(narrow).predicate();
- assert(predicate({}));
+ assert.ok(predicate({}));
});
function test_mit_exceptions() {
@@ -792,18 +792,18 @@ function test_mit_exceptions() {
["stream", "Foo"],
["topic", "personal"],
]);
- assert(predicate({type: "stream", stream: "foo", topic: "personal"}));
- assert(predicate({type: "stream", stream: "foo.d", topic: "personal"}));
- assert(predicate({type: "stream", stream: "foo.d", topic: ""}));
- assert(!predicate({type: "stream", stream: "wrong"}));
- assert(!predicate({type: "stream", stream: "foo", topic: "whatever"}));
- assert(!predicate({type: "private"}));
+ assert.ok(predicate({type: "stream", stream: "foo", topic: "personal"}));
+ assert.ok(predicate({type: "stream", stream: "foo.d", topic: "personal"}));
+ assert.ok(predicate({type: "stream", stream: "foo.d", topic: ""}));
+ assert.ok(!predicate({type: "stream", stream: "wrong"}));
+ assert.ok(!predicate({type: "stream", stream: "foo", topic: "whatever"}));
+ assert.ok(!predicate({type: "private"}));
predicate = get_predicate([
["stream", "Foo"],
["topic", "bar"],
]);
- assert(predicate({type: "stream", stream: "foo", topic: "bar.d"}));
+ assert.ok(predicate({type: "stream", stream: "foo", topic: "bar.d"}));
// Try to get the MIT regex to explode for an empty stream.
let terms = [
@@ -811,7 +811,7 @@ function test_mit_exceptions() {
{operator: "topic", operand: "bar"},
];
predicate = new Filter(terms).predicate();
- assert(!predicate({type: "stream", stream: "foo", topic: "bar"}));
+ assert.ok(!predicate({type: "stream", stream: "foo", topic: "bar"}));
// Try to get the MIT regex to explode for an empty topic.
terms = [
@@ -819,7 +819,7 @@ function test_mit_exceptions() {
{operator: "topic", operand: ""},
];
predicate = new Filter(terms).predicate();
- assert(!predicate({type: "stream", stream: "foo", topic: "bar"}));
+ assert.ok(!predicate({type: "stream", stream: "foo", topic: "bar"}));
}
test("mit_exceptions", () => {
@@ -833,19 +833,19 @@ test("predicate_edge_cases", () => {
// The code supports undefined as an operator to Filter, which results
// in a predicate that accepts any message.
predicate = new Filter().predicate();
- assert(predicate({}));
+ assert.ok(predicate({}));
// Upstream code should prevent Filter.predicate from being called with
// invalid operator/operand combinations, but right now we just silently
// return a function that accepts all messages.
predicate = get_predicate([["in", "bogus"]]);
- assert(!predicate({}));
+ assert.ok(!predicate({}));
predicate = get_predicate([["bogus", 33]]);
- assert(predicate({}));
+ assert.ok(predicate({}));
predicate = get_predicate([["is", "bogus"]]);
- assert(!predicate({}));
+ assert.ok(!predicate({}));
// Exercise caching feature.
const stream_id = 101;
@@ -857,7 +857,7 @@ test("predicate_edge_cases", () => {
const filter = new Filter(terms);
filter.predicate();
predicate = filter.predicate(); // get cached version
- assert(predicate({type: "stream", stream_id, topic: "Mars"}));
+ assert.ok(predicate({type: "stream", stream_id, topic: "Mars"}));
});
test("parse", () => {
@@ -1254,13 +1254,13 @@ test("term_type", () => {
filter._build_sorted_term_types_called = false;
const built_terms = filter.sorted_term_types();
assert.deepEqual(built_terms, ["stream", "topic", "sender"]);
- assert(filter._build_sorted_term_types_called);
+ assert.ok(filter._build_sorted_term_types_called);
// cached trial
filter._build_sorted_term_types_called = false;
const cached_terms = filter.sorted_term_types();
assert.deepEqual(cached_terms, ["stream", "topic", "sender"]);
- assert(!filter._build_sorted_term_types_called);
+ assert.ok(!filter._build_sorted_term_types_called);
});
test("first_valid_id_from", (override) => {
@@ -1570,5 +1570,5 @@ test("error_cases", (override) => {
override(people, "pm_with_user_ids", () => {});
const predicate = get_predicate([["pm-with", "[email protected]"]]);
- assert(!predicate({type: "private"}));
+ assert.ok(!predicate({type: "private"}));
});
diff --git a/frontend_tests/node_tests/fold_dict.js b/frontend_tests/node_tests/fold_dict.js
--- a/frontend_tests/node_tests/fold_dict.js
+++ b/frontend_tests/node_tests/fold_dict.js
@@ -53,12 +53,12 @@ run_test("case insensitivity", () => {
assert.deepEqual(Array.from(d.keys()), []);
- assert(!d.has("foo"));
+ assert.ok(!d.has("foo"));
d.set("fOO", "Hello world");
assert.equal(d.get("foo"), "Hello world");
- assert(d.has("foo"));
- assert(d.has("FOO"));
- assert(!d.has("not_a_key"));
+ assert.ok(d.has("foo"));
+ assert.ok(d.has("FOO"));
+ assert.ok(!d.has("not_a_key"));
assert.deepEqual(Array.from(d.keys()), ["fOO"]);
diff --git a/frontend_tests/node_tests/hotkey.js b/frontend_tests/node_tests/hotkey.js
--- a/frontend_tests/node_tests/hotkey.js
+++ b/frontend_tests/node_tests/hotkey.js
@@ -233,7 +233,7 @@ function process(s) {
function assert_mapping(c, module, func_name, shiftKey) {
stubbing(module, func_name, (stub) => {
- assert(process(c, shiftKey));
+ assert.ok(process(c, shiftKey));
assert.equal(stub.num_calls, 1);
});
}
@@ -432,7 +432,7 @@ run_test("motion_keys", () => {
function assert_mapping(key_name, module, func_name) {
stubbing(module, func_name, (stub) => {
- assert(process(key_name));
+ assert.ok(process(key_name));
assert.equal(stub.num_calls, 1);
});
}
diff --git a/frontend_tests/node_tests/i18n.js b/frontend_tests/node_tests/i18n.js
--- a/frontend_tests/node_tests/i18n.js
+++ b/frontend_tests/node_tests/i18n.js
@@ -79,7 +79,7 @@ run_test("t_tag", () => {
};
const html = require("../../static/templates/actions_popover_content.hbs")(args);
- assert(html.indexOf("Citer et répondre ou transférer") > 0);
+ assert.ok(html.indexOf("Citer et répondre ou transférer") > 0);
});
run_test("tr_tag", () => {
@@ -103,5 +103,5 @@ run_test("tr_tag", () => {
};
const html = require("../../static/templates/settings_tab.hbs")(args);
- assert(html.indexOf("Déclencheurs de notification") > 0);
+ assert.ok(html.indexOf("Déclencheurs de notification") > 0);
});
diff --git a/frontend_tests/node_tests/input_pill.js b/frontend_tests/node_tests/input_pill.js
--- a/frontend_tests/node_tests/input_pill.js
+++ b/frontend_tests/node_tests/input_pill.js
@@ -93,7 +93,7 @@ run_test("basics", (override) => {
};
widget.appendValidatedData(item);
- assert(inserted_before);
+ assert.ok(inserted_before);
assert.deepEqual(widget.items(), [item]);
});
@@ -222,7 +222,7 @@ run_test("paste to input", () => {
});
paste_handler(e);
- assert(entered);
+ assert.ok(entered);
});
run_test("arrows on pills", () => {
@@ -267,10 +267,10 @@ run_test("arrows on pills", () => {
// actually cause any real state changes here. We stub out
// the only interaction, which is to move the focus.
test_key("ArrowLeft");
- assert(prev_focused);
+ assert.ok(prev_focused);
test_key("ArrowRight");
- assert(next_focused);
+ assert.ok(next_focused);
});
run_test("left arrow on input", () => {
@@ -299,7 +299,7 @@ run_test("left arrow on input", () => {
key: "ArrowLeft",
});
- assert(last_pill_focused);
+ assert.ok(last_pill_focused);
});
run_test("comma", () => {
@@ -389,8 +389,8 @@ run_test("insert_remove", (override) => {
widget.appendValue("blue,chartreuse,red,yellow,mauve");
- assert(created);
- assert(!removed);
+ assert.ok(created);
+ assert.ok(!removed);
assert.deepEqual(inserted_html, [
pill_html("BLUE", "some_id1", example_img_link),
@@ -429,7 +429,7 @@ run_test("insert_remove", (override) => {
preventDefault: noop,
});
- assert(removed);
+ assert.ok(removed);
assert.equal(color_removed, "YELLOW");
assert.deepEqual(widget.items(), [items.blue, items.red]);
@@ -461,7 +461,7 @@ run_test("insert_remove", (override) => {
});
assert.equal(color_removed, "BLUE");
- assert(next_pill_focused);
+ assert.ok(next_pill_focused);
});
run_test("exit button on pill", (override) => {
@@ -515,7 +515,7 @@ run_test("exit button on pill", (override) => {
exit_click_handler.call(exit_button_stub, e);
- assert(next_pill_focused);
+ assert.ok(next_pill_focused);
assert.deepEqual(widget.items(), [items.red]);
});
@@ -544,7 +544,7 @@ run_test("misc things", () => {
};
animation_end_handler.call(input_stub);
- assert(shake_class_removed);
+ assert.ok(shake_class_removed);
// bad data
blueslip.expect("error", "no display_value returned");
diff --git a/frontend_tests/node_tests/lazy_set.js b/frontend_tests/node_tests/lazy_set.js
--- a/frontend_tests/node_tests/lazy_set.js
+++ b/frontend_tests/node_tests/lazy_set.js
@@ -27,5 +27,5 @@ run_test("conversions", () => {
blueslip.expect("error", "not a number", 2);
const ls = new LazySet([1, 2]);
ls.add("3");
- assert(ls.has("3"));
+ assert.ok(ls.has("3"));
});
diff --git a/frontend_tests/node_tests/list_cursor.js b/frontend_tests/node_tests/list_cursor.js
--- a/frontend_tests/node_tests/list_cursor.js
+++ b/frontend_tests/node_tests/list_cursor.js
@@ -112,30 +112,30 @@ run_test("multiple item list", (override) => {
cursor.go_to(2);
assert.equal(cursor.get_key(), 2);
- assert(!list_items[1].hasClass("highlight"));
- assert(list_items[2].hasClass("highlight"));
- assert(!list_items[3].hasClass("highlight"));
+ assert.ok(!list_items[1].hasClass("highlight"));
+ assert.ok(list_items[2].hasClass("highlight"));
+ assert.ok(!list_items[3].hasClass("highlight"));
cursor.next();
cursor.next();
cursor.next();
assert.equal(cursor.get_key(), 3);
- assert(!list_items[1].hasClass("highlight"));
- assert(!list_items[2].hasClass("highlight"));
- assert(list_items[3].hasClass("highlight"));
+ assert.ok(!list_items[1].hasClass("highlight"));
+ assert.ok(!list_items[2].hasClass("highlight"));
+ assert.ok(list_items[3].hasClass("highlight"));
cursor.prev();
cursor.prev();
cursor.prev();
assert.equal(cursor.get_key(), 1);
- assert(list_items[1].hasClass("highlight"));
- assert(!list_items[2].hasClass("highlight"));
- assert(!list_items[3].hasClass("highlight"));
+ assert.ok(list_items[1].hasClass("highlight"));
+ assert.ok(!list_items[2].hasClass("highlight"));
+ assert.ok(!list_items[3].hasClass("highlight"));
cursor.clear();
assert.equal(cursor.get_key(), undefined);
cursor.redraw();
- assert(!list_items[1].hasClass("highlight"));
+ assert.ok(!list_items[1].hasClass("highlight"));
});
diff --git a/frontend_tests/node_tests/list_widget.js b/frontend_tests/node_tests/list_widget.js
--- a/frontend_tests/node_tests/list_widget.js
+++ b/frontend_tests/node_tests/list_widget.js
@@ -435,8 +435,8 @@ run_test("sorting", () => {
sort_container.f.apply(button);
- assert(cleared);
- assert(button.siblings_deactivated);
+ assert.ok(cleared);
+ assert.ok(button.siblings_deactivated);
expected_html = html_for([alice, bob, cal, dave, ellen]);
assert.deepEqual(container.appended_data.html(), expected_html);
@@ -444,18 +444,18 @@ run_test("sorting", () => {
// Hit same button again to reverse the data.
cleared = false;
sort_container.f.apply(button);
- assert(cleared);
+ assert.ok(cleared);
expected_html = html_for([ellen, dave, cal, bob, alice]);
assert.deepEqual(container.appended_data.html(), expected_html);
- assert(button.hasClass("descend"));
+ assert.ok(button.hasClass("descend"));
// And then hit a third time to go back to the forward sort.
cleared = false;
sort_container.f.apply(button);
- assert(cleared);
+ assert.ok(cleared);
expected_html = html_for([alice, bob, cal, dave, ellen]);
assert.deepEqual(container.appended_data.html(), expected_html);
- assert(!button.hasClass("descend"));
+ assert.ok(!button.hasClass("descend"));
// Now try a numeric sort.
button_opts = {
@@ -472,8 +472,8 @@ run_test("sorting", () => {
sort_container.f.apply(button);
- assert(cleared);
- assert(button.siblings_deactivated);
+ assert.ok(cleared);
+ assert.ok(button.siblings_deactivated);
expected_html = html_for([dave, cal, bob, alice, ellen]);
assert.deepEqual(container.appended_data.html(), expected_html);
@@ -481,10 +481,10 @@ run_test("sorting", () => {
// Hit same button again to reverse the numeric sort.
cleared = false;
sort_container.f.apply(button);
- assert(cleared);
+ assert.ok(cleared);
expected_html = html_for([ellen, alice, bob, cal, dave]);
assert.deepEqual(container.appended_data.html(), expected_html);
- assert(button.hasClass("descend"));
+ assert.ok(button.hasClass("descend"));
});
run_test("custom sort", () => {
@@ -735,7 +735,7 @@ run_test("render item", () => {
const item = INITIAL_RENDER_COUNT - 1;
const new_html = `<tr data-item=${item}>updated: ${item}</tr>\n`;
const regex = new RegExp(`\\<tr data-item=${item}\\>.*?<\\/tr\\>`);
- assert(expected_queries.includes(query));
+ assert.ok(expected_queries.includes(query));
if (query.includes(`data-item='${INITIAL_RENDER_COUNT}'`)) {
return undefined; // This item is not rendered, so we find nothing
}
@@ -765,17 +765,19 @@ run_test("render item", () => {
});
const item = INITIAL_RENDER_COUNT - 1;
- assert(container.appended_data.html().includes("<tr data-item=2>initial: 2</tr>"));
- assert(container.appended_data.html().includes("<tr data-item=3>initial: 3</tr>"));
+ assert.ok(container.appended_data.html().includes("<tr data-item=2>initial: 2</tr>"));
+ assert.ok(container.appended_data.html().includes("<tr data-item=3>initial: 3</tr>"));
text = "updated";
called = false;
widget.render_item(INITIAL_RENDER_COUNT - 1);
- assert(called);
- assert(container.appended_data.html().includes("<tr data-item=2>initial: 2</tr>"));
- assert(container.appended_data.html().includes(`<tr data-item=${item}>updated: ${item}</tr>`));
+ assert.ok(called);
+ assert.ok(container.appended_data.html().includes("<tr data-item=2>initial: 2</tr>"));
+ assert.ok(
+ container.appended_data.html().includes(`<tr data-item=${item}>updated: ${item}</tr>`),
+ );
// Item 80 should not be in the rendered list. (0 indexed)
- assert(
+ assert.ok(
!container.appended_data
.html()
.includes(
@@ -784,9 +786,9 @@ run_test("render item", () => {
);
called = false;
widget.render_item(INITIAL_RENDER_COUNT);
- assert(!called);
+ assert.ok(!called);
widget.render_item(INITIAL_RENDER_COUNT - 1);
- assert(called);
+ assert.ok(called);
// Tests below this are for the corner cases, where we abort the rerender.
@@ -813,7 +815,7 @@ run_test("render item", () => {
get_item_called = false;
widget_2.render_item(item);
// Test that we didn't try to render the item.
- assert(!get_item_called);
+ assert.ok(!get_item_called);
let rendering_item = false;
const widget_3 = ListWidget.create(container, list, {
diff --git a/frontend_tests/node_tests/markdown.js b/frontend_tests/node_tests/markdown.js
--- a/frontend_tests/node_tests/markdown.js
+++ b/frontend_tests/node_tests/markdown.js
@@ -272,18 +272,18 @@ test("marked_shared", () => {
test("message_flags", () => {
let message = {raw_content: "@**Leo**"};
markdown.apply_markdown(message);
- assert(!message.mentioned);
- assert(!message.mentioned_me_directly);
+ assert.ok(!message.mentioned);
+ assert.ok(!message.mentioned_me_directly);
message = {raw_content: "@**Cordelia, Lear's daughter**"};
markdown.apply_markdown(message);
- assert(message.mentioned);
- assert(message.mentioned_me_directly);
+ assert.ok(message.mentioned);
+ assert.ok(message.mentioned_me_directly);
message = {raw_content: "@**all**"};
markdown.apply_markdown(message);
- assert(message.mentioned);
- assert(!message.mentioned_me_directly);
+ assert.ok(message.mentioned);
+ assert.ok(!message.mentioned_me_directly);
});
test("marked", () => {
@@ -671,7 +671,7 @@ test("message_flags", () => {
markdown.apply_markdown(message);
assert.equal(message.is_me_message, true);
- assert(!message.unread);
+ assert.ok(!message.unread);
input = "/me is testing\nthis";
message = {topic: "No links here", raw_content: input};
diff --git a/frontend_tests/node_tests/message_events.js b/frontend_tests/node_tests/message_events.js
--- a/frontend_tests/node_tests/message_events.js
+++ b/frontend_tests/node_tests/message_events.js
@@ -79,7 +79,7 @@ run_test("update_messages", () => {
assert.deepEqual(stream_topic_history.get_recent_topic_names(denmark.stream_id), ["lunch"]);
unread.update_message_for_mention(original_message);
- assert(unread.unread_mentions_counter.has(original_message.id));
+ assert.ok(unread.unread_mentions_counter.has(original_message.id));
const events = [
{
@@ -121,7 +121,7 @@ run_test("update_messages", () => {
// TEST THIS:
message_events.update_messages(events);
- assert(!unread.unread_mentions_counter.has(original_message.id));
+ assert.ok(!unread.unread_mentions_counter.has(original_message.id));
helper.verify();
diff --git a/frontend_tests/node_tests/message_fetch.js b/frontend_tests/node_tests/message_fetch.js
--- a/frontend_tests/node_tests/message_fetch.js
+++ b/frontend_tests/node_tests/message_fetch.js
@@ -293,9 +293,9 @@ run_test("initialize", () => {
step2.prep();
step1.finish();
- assert(!home_loaded);
+ assert.ok(!home_loaded);
const idle_config = step2.finish();
- assert(home_loaded);
+ assert.ok(home_loaded);
test_backfill_idle(idle_config);
});
diff --git a/frontend_tests/node_tests/message_flags.js b/frontend_tests/node_tests/message_flags.js
--- a/frontend_tests/node_tests/message_flags.js
+++ b/frontend_tests/node_tests/message_flags.js
@@ -35,7 +35,7 @@ run_test("starred", (override) => {
message_flags.toggle_starred_and_update_server(message);
- assert(ui_updated);
+ assert.ok(ui_updated);
assert.deepEqual(posted_data, {
messages: "[50]",
@@ -52,7 +52,7 @@ run_test("starred", (override) => {
message_flags.toggle_starred_and_update_server(message);
- assert(ui_updated);
+ assert.ok(ui_updated);
assert.deepEqual(posted_data, {
messages: "[50]",
@@ -236,7 +236,7 @@ run_test("read", (override) => {
messages: [3, 4, 5, 6, 7],
};
channel_post_opts.success(success_response_data);
- assert(events.timer_set);
+ assert.ok(events.timer_set);
// Mark them non local
local_msg_1.locally_echoed = false;
diff --git a/frontend_tests/node_tests/message_list_data.js b/frontend_tests/node_tests/message_list_data.js
--- a/frontend_tests/node_tests/message_list_data.js
+++ b/frontend_tests/node_tests/message_list_data.js
@@ -46,7 +46,7 @@ run_test("basics", () => {
});
assert.equal(mld.is_search(), false);
- assert(mld.can_mark_messages_read());
+ assert.ok(mld.can_mark_messages_read());
mld.add_anywhere(make_msgs([35, 25, 15, 45]));
assert_contents(mld, [15, 25, 35, 45]);
diff --git a/frontend_tests/node_tests/message_list_view.js b/frontend_tests/node_tests/message_list_view.js
--- a/frontend_tests/node_tests/message_list_view.js
+++ b/frontend_tests/node_tests/message_list_view.js
@@ -282,7 +282,7 @@ test("merge_message_groups", () => {
function assert_message_list_equal(list1, list2) {
const ids1 = extract_message_ids(list1);
const ids2 = extract_message_ids(list2);
- assert(ids1.length);
+ assert.ok(ids1.length);
assert.deepEqual(ids1, ids2);
}
@@ -293,7 +293,7 @@ test("merge_message_groups", () => {
function assert_message_groups_list_equal(list1, list2) {
const ids1 = list1.map((group) => extract_group(group));
const ids2 = list2.map((group) => extract_group(group));
- assert(ids1.length);
+ assert.ok(ids1.length);
assert.deepEqual(ids1, ids2);
}
@@ -341,7 +341,7 @@ test("merge_message_groups", () => {
const list = build_list([message_group1]);
const result = list.merge_message_groups([message_group2], "bottom");
- assert(!message_group2.group_date_divider_html);
+ assert.ok(!message_group2.group_date_divider_html);
assert_message_groups_list_equal(list._message_groups, [message_group1, message_group2]);
assert_message_groups_list_equal(result.append_groups, [message_group2]);
assert.deepEqual(result.prepend_groups, []);
@@ -385,7 +385,7 @@ test("merge_message_groups", () => {
assert.deepEqual(result.rerender_groups, []);
assert.deepEqual(result.append_messages, [message2]);
assert.deepEqual(result.rerender_messages_next_same_sender, [message1]);
- assert(list._message_groups[0].message_containers[1].want_date_divider);
+ assert.ok(list._message_groups[0].message_containers[1].want_date_divider);
})();
(function test_append_message_historical() {
@@ -398,7 +398,7 @@ test("merge_message_groups", () => {
const list = build_list([message_group1]);
const result = list.merge_message_groups([message_group2], "bottom");
- assert(message_group2.bookend_top);
+ assert.ok(message_group2.bookend_top);
assert_message_groups_list_equal(list._message_groups, [message_group1, message_group2]);
assert_message_groups_list_equal(result.append_groups, [message_group2]);
assert.deepEqual(result.prepend_groups, []);
@@ -417,7 +417,7 @@ test("merge_message_groups", () => {
const list = build_list([message_group1]);
const result = list.merge_message_groups([message_group2], "bottom");
- assert(message2.include_sender);
+ assert.ok(message2.include_sender);
assert_message_groups_list_equal(list._message_groups, [
build_message_group([message1, message2]),
]);
@@ -518,7 +518,7 @@ test("merge_message_groups", () => {
const list = build_list([message_group1]);
const result = list.merge_message_groups([message_group2], "top");
- assert(message_group1.bookend_top);
+ assert.ok(message_group1.bookend_top);
assert_message_groups_list_equal(list._message_groups, [message_group2, message_group1]);
assert.deepEqual(result.append_groups, []);
assert_message_groups_list_equal(result.prepend_groups, [message_group2]);
diff --git a/frontend_tests/node_tests/muting.js b/frontend_tests/node_tests/muting.js
--- a/frontend_tests/node_tests/muting.js
+++ b/frontend_tests/node_tests/muting.js
@@ -50,46 +50,46 @@ function test(label, f) {
test("edge_cases", () => {
// private messages
- assert(!muting.is_topic_muted(undefined, undefined));
+ assert.ok(!muting.is_topic_muted(undefined, undefined));
// invalid user
- assert(!muting.is_user_muted(undefined));
+ assert.ok(!muting.is_user_muted(undefined));
});
test("add_and_remove_mutes", () => {
- assert(!muting.is_topic_muted(devel.stream_id, "java"));
+ assert.ok(!muting.is_topic_muted(devel.stream_id, "java"));
muting.add_muted_topic(devel.stream_id, "java");
- assert(muting.is_topic_muted(devel.stream_id, "java"));
+ assert.ok(muting.is_topic_muted(devel.stream_id, "java"));
    // test idempotency
muting.add_muted_topic(devel.stream_id, "java");
- assert(muting.is_topic_muted(devel.stream_id, "java"));
+ assert.ok(muting.is_topic_muted(devel.stream_id, "java"));
muting.remove_muted_topic(devel.stream_id, "java");
- assert(!muting.is_topic_muted(devel.stream_id, "java"));
+ assert.ok(!muting.is_topic_muted(devel.stream_id, "java"));
    // test idempotency
muting.remove_muted_topic(devel.stream_id, "java");
- assert(!muting.is_topic_muted(devel.stream_id, "java"));
+ assert.ok(!muting.is_topic_muted(devel.stream_id, "java"));
// test unknown stream is harmless too
muting.remove_muted_topic(unknown.stream_id, "java");
- assert(!muting.is_topic_muted(unknown.stream_id, "java"));
+ assert.ok(!muting.is_topic_muted(unknown.stream_id, "java"));
- assert(!muting.is_user_muted(1));
+ assert.ok(!muting.is_user_muted(1));
muting.add_muted_user(1);
- assert(muting.is_user_muted(1));
+ assert.ok(muting.is_user_muted(1));
    // test idempotency
muting.add_muted_user(1);
- assert(muting.is_user_muted(1));
+ assert.ok(muting.is_user_muted(1));
muting.remove_muted_user(1);
- assert(!muting.is_user_muted(1));
+ assert.ok(!muting.is_user_muted(1));
    // test idempotency
muting.remove_muted_user(1);
- assert(!muting.is_user_muted(1));
+ assert.ok(!muting.is_user_muted(1));
});
test("get_unmuted_users", () => {
@@ -204,8 +204,8 @@ test("unknown streams", () => {
test("case_insensitivity", () => {
muting.set_muted_topics([]);
- assert(!muting.is_topic_muted(social.stream_id, "breakfast"));
+ assert.ok(!muting.is_topic_muted(social.stream_id, "breakfast"));
muting.set_muted_topics([["SOCial", "breakfast"]]);
- assert(muting.is_topic_muted(social.stream_id, "breakfast"));
- assert(muting.is_topic_muted(social.stream_id, "breakFAST"));
+ assert.ok(muting.is_topic_muted(social.stream_id, "breakfast"));
+ assert.ok(muting.is_topic_muted(social.stream_id, "breakFAST"));
});
diff --git a/frontend_tests/node_tests/narrow.js b/frontend_tests/node_tests/narrow.js
--- a/frontend_tests/node_tests/narrow.js
+++ b/frontend_tests/node_tests/narrow.js
@@ -101,83 +101,83 @@ run_test("show_empty_narrow_message", () => {
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
assert.equal($(".empty_feed_notice").visible(), false);
- assert($("#empty_narrow_message").visible());
+ assert.ok($("#empty_narrow_message").visible());
// for non-existent or private stream
set_filter([["stream", "Foo"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#nonsubbed_private_nonexistent_stream_narrow_message").visible());
+ assert.ok($("#nonsubbed_private_nonexistent_stream_narrow_message").visible());
// for non sub public stream
stream_data.add_sub({name: "ROME", stream_id: 99});
set_filter([["stream", "Rome"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#nonsubbed_stream_narrow_message").visible());
+ assert.ok($("#nonsubbed_stream_narrow_message").visible());
set_filter([["is", "starred"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_star_narrow_message").visible());
+ assert.ok($("#empty_star_narrow_message").visible());
set_filter([["is", "mentioned"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_all_mentioned").visible());
+ assert.ok($("#empty_narrow_all_mentioned").visible());
set_filter([["is", "private"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_all_private_message").visible());
+ assert.ok($("#empty_narrow_all_private_message").visible());
set_filter([["is", "unread"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#no_unread_narrow_message").visible());
+ assert.ok($("#no_unread_narrow_message").visible());
set_filter([["pm-with", ["Yo"]]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#non_existing_user").visible());
+ assert.ok($("#non_existing_user").visible());
people.add_active_user(alice);
set_filter([["pm-with", ["[email protected]", "Yo"]]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#non_existing_users").visible());
+ assert.ok($("#non_existing_users").visible());
set_filter([["pm-with", "[email protected]"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_private_message").visible());
+ assert.ok($("#empty_narrow_private_message").visible());
people.add_active_user(me);
people.initialize_current_user(me.user_id);
set_filter([["pm-with", me.email]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_self_private_message").visible());
+ assert.ok($("#empty_narrow_self_private_message").visible());
set_filter([["pm-with", me.email + "," + alice.email]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_multi_private_message").visible());
+ assert.ok($("#empty_narrow_multi_private_message").visible());
set_filter([["group-pm-with", "[email protected]"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_group_private_message").visible());
+ assert.ok($("#empty_narrow_group_private_message").visible());
set_filter([["sender", "[email protected]"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#silent_user").visible());
+ assert.ok($("#silent_user").visible());
set_filter([["sender", "[email protected]"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#non_existing_user").visible());
+ assert.ok($("#non_existing_user").visible());
const display = $("#empty_search_stop_words_string");
@@ -189,7 +189,7 @@ run_test("show_empty_narrow_message", () => {
set_filter([["search", "grail"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_search_narrow_message").visible());
+ assert.ok($("#empty_search_narrow_message").visible());
assert.equal(items.length, 2);
assert.equal(items[0], " ");
@@ -201,12 +201,12 @@ run_test("show_empty_narrow_message", () => {
]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_message").visible());
+ assert.ok($("#empty_narrow_message").visible());
set_filter([["is", "invalid"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_message").visible());
+ assert.ok($("#empty_narrow_message").visible());
const my_stream = {
name: "my stream",
@@ -218,18 +218,18 @@ run_test("show_empty_narrow_message", () => {
set_filter([["stream", "my stream"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_narrow_message").visible());
+ assert.ok($("#empty_narrow_message").visible());
set_filter([["stream", ""]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#nonsubbed_private_nonexistent_stream_narrow_message").visible());
+ assert.ok($("#nonsubbed_private_nonexistent_stream_narrow_message").visible());
});
run_test("hide_empty_narrow_message", () => {
$(".empty_feed_notice").show();
narrow_banner.hide_empty_narrow_message();
- assert(!$(".empty_feed_notice").visible());
+ assert.ok(!$(".empty_feed_notice").visible());
});
run_test("show_search_stopwords", () => {
@@ -249,7 +249,7 @@ run_test("show_search_stopwords", () => {
set_filter([["search", "what about grail"]]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_search_narrow_message").visible());
+ assert.ok($("#empty_search_narrow_message").visible());
assert.equal(items.length, 3);
assert.equal(items[0], "<del>what");
@@ -263,7 +263,7 @@ run_test("show_search_stopwords", () => {
]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_search_narrow_message").visible());
+ assert.ok($("#empty_search_narrow_message").visible());
assert.equal(items.length, 4);
assert.equal(items[0], "<span>stream: streamA");
@@ -279,7 +279,7 @@ run_test("show_search_stopwords", () => {
]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_search_narrow_message").visible());
+ assert.ok($("#empty_search_narrow_message").visible());
assert.equal(items.length, 4);
assert.equal(items[0], "<span>stream: streamA topic: topicA");
@@ -301,7 +301,7 @@ run_test("show_invalid_narrow_message", () => {
]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_search_narrow_message").visible());
+ assert.ok($("#empty_search_narrow_message").visible());
assert.equal(
display.text(),
"translated: You are searching for messages that belong to more than one stream, which is not possible.",
@@ -313,7 +313,7 @@ run_test("show_invalid_narrow_message", () => {
]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_search_narrow_message").visible());
+ assert.ok($("#empty_search_narrow_message").visible());
assert.equal(
display.text(),
"translated: You are searching for messages that belong to more than one topic, which is not possible.",
@@ -328,7 +328,7 @@ run_test("show_invalid_narrow_message", () => {
]);
hide_all_empty_narrow_messages();
narrow_banner.show_empty_narrow_message();
- assert($("#empty_search_narrow_message").visible());
+ assert.ok($("#empty_search_narrow_message").visible());
assert.equal(
display.text(),
"translated: You are searching for messages that are sent by more than one person, which is not possible.",
diff --git a/frontend_tests/node_tests/narrow_local.js b/frontend_tests/node_tests/narrow_local.js
--- a/frontend_tests/node_tests/narrow_local.js
+++ b/frontend_tests/node_tests/narrow_local.js
@@ -22,7 +22,7 @@ function test_with(fixture) {
if (fixture.unread_info.flavor === "found") {
for (const msg of fixture.all_messages) {
if (msg.id === fixture.unread_info.msg_id) {
- assert(filter.predicate()(msg));
+ assert.ok(filter.predicate()(msg));
}
}
}
diff --git a/frontend_tests/node_tests/narrow_state.js b/frontend_tests/node_tests/narrow_state.js
--- a/frontend_tests/node_tests/narrow_state.js
+++ b/frontend_tests/node_tests/narrow_state.js
@@ -28,24 +28,24 @@ function test(label, f) {
test("stream", () => {
assert.equal(narrow_state.public_operators(), undefined);
- assert(!narrow_state.active());
+ assert.ok(!narrow_state.active());
const test_stream = {name: "Test", stream_id: 15};
stream_data.add_sub(test_stream);
- assert(!narrow_state.is_for_stream_id(test_stream.stream_id));
+ assert.ok(!narrow_state.is_for_stream_id(test_stream.stream_id));
set_filter([
["stream", "Test"],
["topic", "Bar"],
["search", "yo"],
]);
- assert(narrow_state.active());
+ assert.ok(narrow_state.active());
assert.equal(narrow_state.stream(), "Test");
assert.equal(narrow_state.stream_sub().stream_id, test_stream.stream_id);
assert.equal(narrow_state.topic(), "Bar");
- assert(narrow_state.is_for_stream_id(test_stream.stream_id));
+ assert.ok(narrow_state.is_for_stream_id(test_stream.stream_id));
const expected_operators = [
{negated: false, operator: "stream", operand: "Test"},
@@ -59,68 +59,68 @@ test("stream", () => {
});
test("narrowed", () => {
- assert(!narrow_state.narrowed_to_pms());
- assert(!narrow_state.narrowed_by_reply());
- assert(!narrow_state.narrowed_by_pm_reply());
- assert(!narrow_state.narrowed_by_topic_reply());
- assert(!narrow_state.narrowed_to_search());
- assert(!narrow_state.narrowed_to_topic());
- assert(!narrow_state.narrowed_by_stream_reply());
+ assert.ok(!narrow_state.narrowed_to_pms());
+ assert.ok(!narrow_state.narrowed_by_reply());
+ assert.ok(!narrow_state.narrowed_by_pm_reply());
+ assert.ok(!narrow_state.narrowed_by_topic_reply());
+ assert.ok(!narrow_state.narrowed_to_search());
+ assert.ok(!narrow_state.narrowed_to_topic());
+ assert.ok(!narrow_state.narrowed_by_stream_reply());
assert.equal(narrow_state.stream_sub(), undefined);
- assert(!narrow_state.narrowed_to_starred());
+ assert.ok(!narrow_state.narrowed_to_starred());
set_filter([["stream", "Foo"]]);
- assert(!narrow_state.narrowed_to_pms());
- assert(!narrow_state.narrowed_by_reply());
- assert(!narrow_state.narrowed_by_pm_reply());
- assert(!narrow_state.narrowed_by_topic_reply());
- assert(!narrow_state.narrowed_to_search());
- assert(!narrow_state.narrowed_to_topic());
- assert(narrow_state.narrowed_by_stream_reply());
- assert(!narrow_state.narrowed_to_starred());
+ assert.ok(!narrow_state.narrowed_to_pms());
+ assert.ok(!narrow_state.narrowed_by_reply());
+ assert.ok(!narrow_state.narrowed_by_pm_reply());
+ assert.ok(!narrow_state.narrowed_by_topic_reply());
+ assert.ok(!narrow_state.narrowed_to_search());
+ assert.ok(!narrow_state.narrowed_to_topic());
+ assert.ok(narrow_state.narrowed_by_stream_reply());
+ assert.ok(!narrow_state.narrowed_to_starred());
set_filter([["pm-with", "[email protected]"]]);
- assert(narrow_state.narrowed_to_pms());
- assert(narrow_state.narrowed_by_reply());
- assert(narrow_state.narrowed_by_pm_reply());
- assert(!narrow_state.narrowed_by_topic_reply());
- assert(!narrow_state.narrowed_to_search());
- assert(!narrow_state.narrowed_to_topic());
- assert(!narrow_state.narrowed_by_stream_reply());
- assert(!narrow_state.narrowed_to_starred());
+ assert.ok(narrow_state.narrowed_to_pms());
+ assert.ok(narrow_state.narrowed_by_reply());
+ assert.ok(narrow_state.narrowed_by_pm_reply());
+ assert.ok(!narrow_state.narrowed_by_topic_reply());
+ assert.ok(!narrow_state.narrowed_to_search());
+ assert.ok(!narrow_state.narrowed_to_topic());
+ assert.ok(!narrow_state.narrowed_by_stream_reply());
+ assert.ok(!narrow_state.narrowed_to_starred());
set_filter([
["stream", "Foo"],
["topic", "bar"],
]);
- assert(!narrow_state.narrowed_to_pms());
- assert(narrow_state.narrowed_by_reply());
- assert(!narrow_state.narrowed_by_pm_reply());
- assert(narrow_state.narrowed_by_topic_reply());
- assert(!narrow_state.narrowed_to_search());
- assert(narrow_state.narrowed_to_topic());
- assert(!narrow_state.narrowed_by_stream_reply());
- assert(!narrow_state.narrowed_to_starred());
+ assert.ok(!narrow_state.narrowed_to_pms());
+ assert.ok(narrow_state.narrowed_by_reply());
+ assert.ok(!narrow_state.narrowed_by_pm_reply());
+ assert.ok(narrow_state.narrowed_by_topic_reply());
+ assert.ok(!narrow_state.narrowed_to_search());
+ assert.ok(narrow_state.narrowed_to_topic());
+ assert.ok(!narrow_state.narrowed_by_stream_reply());
+ assert.ok(!narrow_state.narrowed_to_starred());
set_filter([["search", "grail"]]);
- assert(!narrow_state.narrowed_to_pms());
- assert(!narrow_state.narrowed_by_reply());
- assert(!narrow_state.narrowed_by_pm_reply());
- assert(!narrow_state.narrowed_by_topic_reply());
- assert(narrow_state.narrowed_to_search());
- assert(!narrow_state.narrowed_to_topic());
- assert(!narrow_state.narrowed_by_stream_reply());
- assert(!narrow_state.narrowed_to_starred());
+ assert.ok(!narrow_state.narrowed_to_pms());
+ assert.ok(!narrow_state.narrowed_by_reply());
+ assert.ok(!narrow_state.narrowed_by_pm_reply());
+ assert.ok(!narrow_state.narrowed_by_topic_reply());
+ assert.ok(narrow_state.narrowed_to_search());
+ assert.ok(!narrow_state.narrowed_to_topic());
+ assert.ok(!narrow_state.narrowed_by_stream_reply());
+ assert.ok(!narrow_state.narrowed_to_starred());
set_filter([["is", "starred"]]);
- assert(!narrow_state.narrowed_to_pms());
- assert(!narrow_state.narrowed_by_reply());
- assert(!narrow_state.narrowed_by_pm_reply());
- assert(!narrow_state.narrowed_by_topic_reply());
- assert(!narrow_state.narrowed_to_search());
- assert(!narrow_state.narrowed_to_topic());
- assert(!narrow_state.narrowed_by_stream_reply());
- assert(narrow_state.narrowed_to_starred());
+ assert.ok(!narrow_state.narrowed_to_pms());
+ assert.ok(!narrow_state.narrowed_by_reply());
+ assert.ok(!narrow_state.narrowed_by_pm_reply());
+ assert.ok(!narrow_state.narrowed_by_topic_reply());
+ assert.ok(!narrow_state.narrowed_to_search());
+ assert.ok(!narrow_state.narrowed_to_topic());
+ assert.ok(!narrow_state.narrowed_by_stream_reply());
+ assert.ok(narrow_state.narrowed_to_starred());
});
test("operators", () => {
@@ -147,25 +147,25 @@ test("operators", () => {
test("excludes_muted_topics", () => {
set_filter([["stream", "devel"]]);
- assert(narrow_state.excludes_muted_topics());
+ assert.ok(narrow_state.excludes_muted_topics());
narrow_state.reset_current_filter(); // not narrowed, basically
- assert(narrow_state.excludes_muted_topics());
+ assert.ok(narrow_state.excludes_muted_topics());
set_filter([
["stream", "devel"],
["topic", "mac"],
]);
- assert(!narrow_state.excludes_muted_topics());
+ assert.ok(!narrow_state.excludes_muted_topics());
set_filter([["search", "whatever"]]);
- assert(!narrow_state.excludes_muted_topics());
+ assert.ok(!narrow_state.excludes_muted_topics());
set_filter([["is", "private"]]);
- assert(!narrow_state.excludes_muted_topics());
+ assert.ok(!narrow_state.excludes_muted_topics());
set_filter([["is", "starred"]]);
- assert(!narrow_state.excludes_muted_topics());
+ assert.ok(!narrow_state.excludes_muted_topics());
});
test("set_compose_defaults", () => {
diff --git a/frontend_tests/node_tests/password.js b/frontend_tests/node_tests/password.js
--- a/frontend_tests/node_tests/password.js
+++ b/frontend_tests/node_tests/password.js
@@ -50,7 +50,7 @@ run_test("basics w/progress bar", () => {
password = "z!X4@S_&";
accepted = password_quality(password, bar, password_field(10, 80000));
- assert(!accepted);
+ assert.ok(!accepted);
assert.equal(bar.w, "39.7%");
assert.equal(bar.added_class, "bar-danger");
warning = password_warning(password, password_field(10));
@@ -58,7 +58,7 @@ run_test("basics w/progress bar", () => {
password = "foo";
accepted = password_quality(password, bar, password_field(2, 200));
- assert(accepted);
+ assert.ok(accepted);
assert.equal(bar.w, "10.390277164940581%");
assert.equal(bar.added_class, "bar-success");
warning = password_warning(password, password_field(2));
@@ -66,7 +66,7 @@ run_test("basics w/progress bar", () => {
password = "aaaaaaaa";
accepted = password_quality(password, bar, password_field(6, 1e100));
- assert(!accepted);
+ assert.ok(!accepted);
assert.equal(bar.added_class, "bar-danger");
warning = password_warning(password, password_field(6));
assert.equal(warning, 'Repeats like "aaa" are easy to guess');
diff --git a/frontend_tests/node_tests/peer_data.js b/frontend_tests/node_tests/peer_data.js
--- a/frontend_tests/node_tests/peer_data.js
+++ b/frontend_tests/node_tests/peer_data.js
@@ -66,25 +66,25 @@ test("unsubscribe", () => {
stream_data.add_sub(devel);
// verify clean slate
- assert(!stream_data.is_subscribed("devel"));
+ assert.ok(!stream_data.is_subscribed("devel"));
// set up our subscription
devel.subscribed = true;
peer_data.set_subscribers(devel.stream_id, [me.user_id]);
// ensure our setup is accurate
- assert(stream_data.is_subscribed("devel"));
+ assert.ok(stream_data.is_subscribed("devel"));
// DO THE UNSUBSCRIBE HERE
stream_data.unsubscribe_myself(devel);
- assert(!devel.subscribed);
- assert(!stream_data.is_subscribed("devel"));
- assert(!contains_sub(stream_data.subscribed_subs(), devel));
- assert(contains_sub(stream_data.unsubscribed_subs(), devel));
+ assert.ok(!devel.subscribed);
+ assert.ok(!stream_data.is_subscribed("devel"));
+ assert.ok(!contains_sub(stream_data.subscribed_subs(), devel));
+ assert.ok(contains_sub(stream_data.unsubscribed_subs(), devel));
// make sure subsequent calls work
const sub = stream_data.get_sub("devel");
- assert(!sub.subscribed);
+ assert.ok(!sub.subscribed);
});
test("subscribers", () => {
@@ -96,7 +96,7 @@ test("subscribers", () => {
people.add_active_user(george);
// verify setup
- assert(stream_data.is_subscribed(sub.name));
+ assert.ok(stream_data.is_subscribed(sub.name));
const stream_id = sub.stream_id;
@@ -113,10 +113,10 @@ test("subscribers", () => {
]);
peer_data.set_subscribers(stream_id, [me.user_id, fred.user_id, george.user_id]);
- assert(stream_data.is_user_subscribed(stream_id, me.user_id));
- assert(stream_data.is_user_subscribed(stream_id, fred.user_id));
- assert(stream_data.is_user_subscribed(stream_id, george.user_id));
- assert(!stream_data.is_user_subscribed(stream_id, gail.user_id));
+ assert.ok(stream_data.is_user_subscribed(stream_id, me.user_id));
+ assert.ok(stream_data.is_user_subscribed(stream_id, fred.user_id));
+ assert.ok(stream_data.is_user_subscribed(stream_id, george.user_id));
+ assert.ok(!stream_data.is_user_subscribed(stream_id, gail.user_id));
assert.deepEqual(potential_subscriber_ids(), [gail.user_id]);
@@ -128,11 +128,11 @@ test("subscribers", () => {
user_id: 104,
};
people.add_active_user(brutus);
- assert(!stream_data.is_user_subscribed(stream_id, brutus.user_id));
+ assert.ok(!stream_data.is_user_subscribed(stream_id, brutus.user_id));
// add
peer_data.add_subscriber(stream_id, brutus.user_id);
- assert(stream_data.is_user_subscribed(stream_id, brutus.user_id));
+ assert.ok(stream_data.is_user_subscribed(stream_id, brutus.user_id));
assert.equal(peer_data.get_subscriber_count(stream_id), 1);
const sub_email = "Rome:[email protected]:9991";
stream_data.update_stream_email_address(sub, sub_email);
@@ -140,13 +140,13 @@ test("subscribers", () => {
// verify that adding an already-added subscriber is a noop
peer_data.add_subscriber(stream_id, brutus.user_id);
- assert(stream_data.is_user_subscribed(stream_id, brutus.user_id));
+ assert.ok(stream_data.is_user_subscribed(stream_id, brutus.user_id));
assert.equal(peer_data.get_subscriber_count(stream_id), 1);
// remove
let ok = peer_data.remove_subscriber(stream_id, brutus.user_id);
- assert(ok);
- assert(!stream_data.is_user_subscribed(stream_id, brutus.user_id));
+ assert.ok(ok);
+ assert.ok(!stream_data.is_user_subscribed(stream_id, brutus.user_id));
assert.equal(peer_data.get_subscriber_count(stream_id), 0);
// verify that checking subscription with undefined user id
@@ -160,14 +160,14 @@ test("subscribers", () => {
blueslip.expect("warn", "We called get_user_set for an untracked stream: " + bad_stream_id);
blueslip.expect("warn", "We tried to remove invalid subscriber: 104");
ok = peer_data.remove_subscriber(bad_stream_id, brutus.user_id);
- assert(!ok);
+ assert.ok(!ok);
blueslip.reset();
// verify that removing an already-removed subscriber is a noop
blueslip.expect("warn", "We tried to remove invalid subscriber: 104");
ok = peer_data.remove_subscriber(stream_id, brutus.user_id);
- assert(!ok);
- assert(!stream_data.is_user_subscribed(stream_id, brutus.user_id));
+ assert.ok(!ok);
+ assert.ok(!stream_data.is_user_subscribed(stream_id, brutus.user_id));
assert.equal(peer_data.get_subscriber_count(stream_id), 0);
blueslip.reset();
@@ -176,7 +176,7 @@ test("subscribers", () => {
stream_data.add_sub(sub);
peer_data.add_subscriber(stream_id, brutus.user_id);
sub.subscribed = true;
- assert(stream_data.is_user_subscribed(stream_id, brutus.user_id));
+ assert.ok(stream_data.is_user_subscribed(stream_id, brutus.user_id));
// Verify that we noop and don't crash when unsubscribed.
sub.subscribed = false;
diff --git a/frontend_tests/node_tests/people.js b/frontend_tests/node_tests/people.js
--- a/frontend_tests/node_tests/people.js
+++ b/frontend_tests/node_tests/people.js
@@ -279,19 +279,19 @@ test_people("basics", () => {
const full_name = "Isaac Newton";
const email = "[email protected]";
- assert(!people.is_known_user_id(32));
- assert(!people.is_known_user(isaac));
- assert(!people.is_known_user(undefined));
- assert(!people.is_valid_full_name_and_user_id(full_name, 32));
+ assert.ok(!people.is_known_user_id(32));
+ assert.ok(!people.is_known_user(isaac));
+ assert.ok(!people.is_known_user(undefined));
+ assert.ok(!people.is_valid_full_name_and_user_id(full_name, 32));
assert.equal(people.get_user_id_from_name(full_name), undefined);
people.add_active_user(isaac);
assert.equal(people.get_actual_name_from_user_id(32), full_name);
- assert(people.is_valid_full_name_and_user_id(full_name, 32));
- assert(people.is_known_user_id(32));
- assert(people.is_known_user(isaac));
+ assert.ok(people.is_valid_full_name_and_user_id(full_name, 32));
+ assert.ok(people.is_known_user_id(32));
+ assert.ok(people.is_known_user(isaac));
assert.equal(people.get_active_human_count(), 2);
assert.equal(people.get_user_id_from_name(full_name), 32);
@@ -305,7 +305,7 @@ test_people("basics", () => {
const active_user_ids = people.get_active_user_ids().sort();
assert.deepEqual(active_user_ids, [me.user_id, isaac.user_id]);
assert.equal(people.is_active_user_for_popover(isaac.user_id), true);
- assert(people.is_valid_email_for_compose(isaac.email));
+ assert.ok(people.is_valid_email_for_compose(isaac.email));
// Now deactivate isaac
people.deactivate(isaac);
@@ -606,40 +606,40 @@ test_people("filtered_users", () => {
const users = people.get_people_for_stream_create();
let filtered_people = people.filter_people_by_search_terms(users, [search_term]);
assert.equal(filtered_people.size, 2);
- assert(filtered_people.has(ashton.user_id));
- assert(filtered_people.has(maria.user_id));
- assert(!filtered_people.has(charles.user_id));
+ assert.ok(filtered_people.has(ashton.user_id));
+ assert.ok(filtered_people.has(maria.user_id));
+ assert.ok(!filtered_people.has(charles.user_id));
filtered_people = people.filter_people_by_search_terms(users, []);
assert.equal(filtered_people.size, 0);
filtered_people = people.filter_people_by_search_terms(users, ["ltorv"]);
assert.equal(filtered_people.size, 1);
- assert(filtered_people.has(linus.user_id));
+ assert.ok(filtered_people.has(linus.user_id));
filtered_people = people.filter_people_by_search_terms(users, ["ch di", "maria"]);
assert.equal(filtered_people.size, 2);
- assert(filtered_people.has(charles.user_id));
- assert(filtered_people.has(maria.user_id));
+ assert.ok(filtered_people.has(charles.user_id));
+ assert.ok(filtered_people.has(maria.user_id));
// Test filtering of names with diacritics
// This should match Nöôáàh by ignoring diacritics, and also match Nooaah
filtered_people = people.filter_people_by_search_terms(users, ["noOa"]);
assert.equal(filtered_people.size, 2);
- assert(filtered_people.has(noah.user_id));
- assert(filtered_people.has(plain_noah.user_id));
+ assert.ok(filtered_people.has(noah.user_id));
+ assert.ok(filtered_people.has(plain_noah.user_id));
// This should match ëmerson, but not emerson
filtered_people = people.filter_people_by_search_terms(users, ["ëm"]);
assert.equal(filtered_people.size, 1);
- assert(filtered_people.has(noah.user_id));
+ assert.ok(filtered_people.has(noah.user_id));
// Test filtering with undefined user
users.push(unknown_user);
filtered_people = people.filter_people_by_search_terms(users, ["ltorv"]);
assert.equal(filtered_people.size, 1);
- assert(filtered_people.has(linus.user_id));
+ assert.ok(filtered_people.has(linus.user_id));
});
people.init();
@@ -821,7 +821,7 @@ test_people("extract_people_from_message", (override) => {
sender_id: maria.user_id,
sender_email: maria.email,
};
- assert(!people.is_known_user_id(maria.user_id));
+ assert.ok(!people.is_known_user_id(maria.user_id));
let reported;
override(people, "report_late_add", (user_id, email) => {
@@ -831,8 +831,8 @@ test_people("extract_people_from_message", (override) => {
});
people.extract_people_from_message(message);
- assert(people.is_known_user_id(maria.user_id));
- assert(reported);
+ assert.ok(people.is_known_user_id(maria.user_id));
+ assert.ok(reported);
// Get line coverage
people.__Rewire__("report_late_add", () => {
@@ -947,7 +947,7 @@ test_people("updates", () => {
// Do sanity checks on our data.
assert.equal(people.get_by_email(old_email).user_id, user_id);
- assert(!people.is_cross_realm_email(old_email));
+ assert.ok(!people.is_cross_realm_email(old_email));
assert.equal(people.get_by_email(new_email), undefined);
@@ -956,7 +956,7 @@ test_people("updates", () => {
// Now look up using the new email.
assert.equal(people.get_by_email(new_email).user_id, user_id);
- assert(!people.is_cross_realm_email(new_email));
+ assert.ok(!people.is_cross_realm_email(new_email));
const all_people = get_all_persons();
assert.equal(all_people.length, 2);
@@ -993,7 +993,7 @@ test_people("track_duplicate_full_names", () => {
people.add_active_user(maria);
people.add_active_user(stephen1);
- assert(!people.is_duplicate_full_name("Stephen King"));
+ assert.ok(!people.is_duplicate_full_name("Stephen King"));
assert.equal(people.get_user_id_from_name("Stephen King"), stephen1.user_id);
// Now duplicate the Stephen King name.
@@ -1004,16 +1004,16 @@ test_people("track_duplicate_full_names", () => {
// other codepaths for disambiguation.
assert.equal(people.get_user_id_from_name("Stephen King"), undefined);
- assert(people.is_duplicate_full_name("Stephen King"));
- assert(!people.is_duplicate_full_name("Maria Athens"));
- assert(!people.is_duplicate_full_name("Some Random Name"));
+ assert.ok(people.is_duplicate_full_name("Stephen King"));
+ assert.ok(!people.is_duplicate_full_name("Maria Athens"));
+ assert.ok(!people.is_duplicate_full_name("Some Random Name"));
// It is somewhat janky that we have to clone
// stephen2 here. It would be nice if people.set_full_name
// just took a user_id as the first parameter.
people.set_full_name({...stephen2}, "Stephen King JP");
- assert(!people.is_duplicate_full_name("Stephen King"));
- assert(!people.is_duplicate_full_name("Stephen King JP"));
+ assert.ok(!people.is_duplicate_full_name("Stephen King"));
+ assert.ok(!people.is_duplicate_full_name("Stephen King JP"));
});
test_people("get_mention_syntax", () => {
@@ -1021,7 +1021,7 @@ test_people("get_mention_syntax", () => {
people.add_active_user(stephen2);
people.add_active_user(maria);
- assert(people.is_duplicate_full_name("Stephen King"));
+ assert.ok(people.is_duplicate_full_name("Stephen King"));
blueslip.expect("warn", "get_mention_syntax called without user_id.");
assert.equal(people.get_mention_syntax("Stephen King"), "@**Stephen King**");
@@ -1074,14 +1074,14 @@ test_people("initialize", () => {
people.initialize(my_user_id, params);
assert.equal(people.is_active_user_for_popover(17), true);
- assert(people.is_cross_realm_email("[email protected]"));
- assert(people.is_valid_email_for_compose("[email protected]"));
- assert(people.is_valid_email_for_compose("[email protected]"));
- assert(!people.is_valid_email_for_compose("[email protected]"));
- assert(!people.is_valid_email_for_compose("[email protected]"));
- assert(people.is_valid_bulk_emails_for_compose(["[email protected]", "[email protected]"]));
- assert(!people.is_valid_bulk_emails_for_compose(["[email protected]", "[email protected]"]));
- assert(people.is_my_user_id(42));
+ assert.ok(people.is_cross_realm_email("[email protected]"));
+ assert.ok(people.is_valid_email_for_compose("[email protected]"));
+ assert.ok(people.is_valid_email_for_compose("[email protected]"));
+ assert.ok(!people.is_valid_email_for_compose("[email protected]"));
+ assert.ok(!people.is_valid_email_for_compose("[email protected]"));
+ assert.ok(people.is_valid_bulk_emails_for_compose(["[email protected]", "[email protected]"]));
+ assert.ok(!people.is_valid_bulk_emails_for_compose(["[email protected]", "[email protected]"]));
+ assert.ok(people.is_my_user_id(42));
const fetched_retiree = people.get_by_user_id(15);
assert.equal(fetched_retiree.full_name, "Retiree");
@@ -1149,9 +1149,9 @@ test_people("matches_user_settings_search", () => {
});
test_people("is_valid_full_name_and_user_id", () => {
- assert(!people.is_valid_full_name_and_user_id("bogus", 99));
- assert(!people.is_valid_full_name_and_user_id(me.full_name, 99));
- assert(people.is_valid_full_name_and_user_id(me.full_name, me.user_id));
+ assert.ok(!people.is_valid_full_name_and_user_id("bogus", 99));
+ assert.ok(!people.is_valid_full_name_and_user_id(me.full_name, 99));
+ assert.ok(people.is_valid_full_name_and_user_id(me.full_name, me.user_id));
});
test_people("emails_strings_to_user_ids_array", () => {
diff --git a/frontend_tests/node_tests/people_errors.js b/frontend_tests/node_tests/people_errors.js
--- a/frontend_tests/node_tests/people_errors.js
+++ b/frontend_tests/node_tests/people_errors.js
@@ -102,7 +102,7 @@ run_test("blueslip", (override) => {
};
blueslip.expect("error", "Unknown user id in message: 42");
const reply_to = people.pm_reply_to(message);
- assert(reply_to.includes("?"));
+ assert.ok(reply_to.includes("?"));
override(people, "pm_with_user_ids", () => [42]);
override(people, "get_by_user_id", () => {});
diff --git a/frontend_tests/node_tests/pill_typeahead.js b/frontend_tests/node_tests/pill_typeahead.js
--- a/frontend_tests/node_tests/pill_typeahead.js
+++ b/frontend_tests/node_tests/pill_typeahead.js
@@ -17,9 +17,9 @@ run_test("set_up", () => {
const fake_input = $.create(".input");
fake_input.typeahead = (config) => {
assert.equal(config.items, 5);
- assert(config.fixed);
- assert(config.dropup);
- assert(config.stopAdvance);
+ assert.ok(config.fixed);
+ assert.ok(config.dropup);
+ assert.ok(config.stopAdvance);
// Working of functions that are part of config
// is tested separately based on the widgets that
@@ -47,31 +47,31 @@ run_test("set_up", () => {
// call set_up with only user type in opts.
pill_typeahead.set_up(fake_input, pill_widget, {user: true});
- assert(input_pill_typeahead_called);
+ assert.ok(input_pill_typeahead_called);
// call set_up with only stream type in opts.
input_pill_typeahead_called = false;
pill_typeahead.set_up(fake_input, pill_widget, {stream: true});
- assert(input_pill_typeahead_called);
+ assert.ok(input_pill_typeahead_called);
// call set_up with only user_group type in opts.
input_pill_typeahead_called = false;
pill_typeahead.set_up(fake_input, pill_widget, {user_group: true});
- assert(input_pill_typeahead_called);
+ assert.ok(input_pill_typeahead_called);
// call set_up with combination two types in opts.
input_pill_typeahead_called = false;
pill_typeahead.set_up(fake_input, pill_widget, {user_group: true, stream: true});
- assert(input_pill_typeahead_called);
+ assert.ok(input_pill_typeahead_called);
// call set_up with all three types in opts.
input_pill_typeahead_called = false;
pill_typeahead.set_up(fake_input, pill_widget, {user_group: true, stream: true, user: true});
- assert(input_pill_typeahead_called);
+ assert.ok(input_pill_typeahead_called);
// call set_up without specifying type in opts.
input_pill_typeahead_called = false;
blueslip.expect("error", "Unspecified possible item types");
pill_typeahead.set_up(fake_input, pill_widget, {});
- assert(!input_pill_typeahead_called);
+ assert.ok(!input_pill_typeahead_called);
});
diff --git a/frontend_tests/node_tests/pm_list.js b/frontend_tests/node_tests/pm_list.js
--- a/frontend_tests/node_tests/pm_list.js
+++ b/frontend_tests/node_tests/pm_list.js
@@ -67,7 +67,7 @@ test("close", () => {
collapsed = true;
};
pm_list.close();
- assert(collapsed);
+ assert.ok(collapsed);
});
test("build_private_messages_list", (override) => {
@@ -167,7 +167,7 @@ test("update_dom_with_unread_counts", (override) => {
pm_list.update_dom_with_unread_counts(counts);
assert.equal(total_count.text(), "10");
- assert(total_count.visible());
+ assert.ok(total_count.visible());
counts = {
private_message_count: 0,
@@ -175,7 +175,7 @@ test("update_dom_with_unread_counts", (override) => {
pm_list.update_dom_with_unread_counts(counts);
assert.equal(total_count.text(), "");
- assert(!total_count.visible());
+ assert.ok(!total_count.visible());
});
test("get_active_user_ids_string", (override) => {
@@ -230,11 +230,11 @@ test("expand", (override) => {
html_updated = true;
});
- assert(!$(".top_left_private_messages").hasClass("active-filter"));
+ assert.ok(!$(".top_left_private_messages").hasClass("active-filter"));
pm_list.expand();
- assert(html_updated);
- assert($(".top_left_private_messages").hasClass("active-filter"));
+ assert.ok(html_updated);
+ assert.ok($(".top_left_private_messages").hasClass("active-filter"));
});
test("update_private_messages", (override) => {
@@ -260,8 +260,8 @@ test("update_private_messages", (override) => {
pm_list.expand();
pm_list.update_private_messages();
- assert(html_updated);
- assert(container_found);
+ assert.ok(html_updated);
+ assert.ok(container_found);
});
test("ensure coverage", (override) => {
diff --git a/frontend_tests/node_tests/poll_widget.js b/frontend_tests/node_tests/poll_widget.js
--- a/frontend_tests/node_tests/poll_widget.js
+++ b/frontend_tests/node_tests/poll_widget.js
@@ -233,14 +233,14 @@ run_test("activate another person poll", () => {
poll_widget.activate(opts);
- assert(poll_option_container.visible());
- assert(poll_question_header.visible());
+ assert.ok(poll_option_container.visible());
+ assert.ok(poll_question_header.visible());
- assert(!poll_question_container.visible());
- assert(!poll_question_submit.visible());
- assert(!poll_edit_question.visible());
- assert(!poll_please_wait.visible());
- assert(!poll_author_help.visible());
+ assert.ok(!poll_question_container.visible());
+ assert.ok(!poll_question_submit.visible());
+ assert.ok(!poll_edit_question.visible());
+ assert.ok(!poll_please_wait.visible());
+ assert.ok(!poll_author_help.visible());
assert.equal(widget_elem.html(), "widgets/poll_widget");
assert.equal(widget_option_container.html(), "widgets/poll_widget_results");
@@ -352,18 +352,18 @@ run_test("activate own poll", () => {
set_widget_find_result("button.poll-question-remove");
function assert_visibility() {
- assert(poll_option_container.visible());
- assert(poll_question_header.visible());
- assert(!poll_question_container.visible());
- assert(poll_edit_question.visible());
- assert(!poll_please_wait.visible());
- assert(!poll_author_help.visible());
+ assert.ok(poll_option_container.visible());
+ assert.ok(poll_question_header.visible());
+ assert.ok(!poll_question_container.visible());
+ assert.ok(poll_edit_question.visible());
+ assert.ok(!poll_please_wait.visible());
+ assert.ok(!poll_author_help.visible());
}
poll_widget.activate(opts);
assert_visibility();
- assert(!poll_question_submit.visible());
+ assert.ok(!poll_question_submit.visible());
assert.equal(widget_elem.html(), "widgets/poll_widget");
assert.equal(widget_option_container.html(), "widgets/poll_widget_results");
@@ -377,7 +377,7 @@ run_test("activate own poll", () => {
assert.deepEqual(out_data, {type: "question", question: "Is it new?"});
assert_visibility();
- assert(poll_question_submit.visible());
+ assert.ok(poll_question_submit.visible());
poll_option_input.val("");
out_data = undefined;
diff --git a/frontend_tests/node_tests/presence.js b/frontend_tests/node_tests/presence.js
--- a/frontend_tests/node_tests/presence.js
+++ b/frontend_tests/node_tests/presence.js
@@ -210,7 +210,7 @@ test("set_presence_info", () => {
assert.equal(presence.get_status(zoe.user_id), "offline");
assert.equal(presence.last_active_date(zoe.user_id), undefined);
- assert(!presence.presence_info.has(bot.user_id));
+ assert.ok(!presence.presence_info.has(bot.user_id));
assert.equal(presence.get_status(bot.user_id), "offline");
assert.deepEqual(presence.presence_info.get(john.user_id), {
@@ -279,8 +279,8 @@ test("big realms", () => {
const get_active_human_count = people.get_active_human_count;
people.get_active_human_count = () => 1000;
presence.set_info(presences, now);
- assert(presence.presence_info.has(sally.user_id));
- assert(!presence.presence_info.has(zoe.user_id));
+ assert.ok(presence.presence_info.has(sally.user_id));
+ assert.ok(!presence.presence_info.has(zoe.user_id));
people.get_active_human_count = get_active_human_count;
});
diff --git a/frontend_tests/node_tests/reactions.js b/frontend_tests/node_tests/reactions.js
--- a/frontend_tests/node_tests/reactions.js
+++ b/frontend_tests/node_tests/reactions.js
@@ -134,8 +134,8 @@ test("open_reactions_popover (sent by me)", () => {
assert.equal(target, "action-stub");
};
- assert(reactions.open_reactions_popover());
- assert(called);
+ assert.ok(reactions.open_reactions_popover());
+ assert.ok(called);
});
test("open_reactions_popover (not sent by me)", () => {
@@ -149,16 +149,16 @@ test("open_reactions_popover (not sent by me)", () => {
assert.equal(target, "reaction-stub");
};
- assert(reactions.open_reactions_popover());
- assert(called);
+ assert.ok(reactions.open_reactions_popover());
+ assert.ok(called);
});
test("basics", () => {
const message = {...sample_message};
const result = reactions.get_message_reactions(message);
- assert(reactions.current_user_has_reacted_to_emoji(message, "unicode_emoji,1f642"));
- assert(!reactions.current_user_has_reacted_to_emoji(message, "bogus"));
+ assert.ok(reactions.current_user_has_reacted_to_emoji(message, "unicode_emoji,1f642"));
+ assert.ok(!reactions.current_user_has_reacted_to_emoji(message, "bogus"));
result.sort((a, b) => a.count - b.count);
@@ -616,8 +616,8 @@ test("view.insert_new_reaction (me w/unicode emoji)", (override) => {
};
reactions.view.insert_new_reaction(opts);
- assert(template_called);
- assert(insert_called);
+ assert.ok(template_called);
+ assert.ok(insert_called);
});
test("view.insert_new_reaction (them w/zulip emoji)", (override) => {
@@ -669,8 +669,8 @@ test("view.insert_new_reaction (them w/zulip emoji)", (override) => {
};
reactions.view.insert_new_reaction(opts);
- assert(template_called);
- assert(insert_called);
+ assert.ok(template_called);
+ assert.ok(insert_called);
});
test("view.update_existing_reaction (me)", (override) => {
@@ -698,7 +698,7 @@ test("view.update_existing_reaction (me)", (override) => {
reactions.view.update_existing_reaction(opts);
- assert(our_reaction.hasClass("reacted"));
+ assert.ok(our_reaction.hasClass("reacted"));
assert.equal(
our_reaction.attr("aria-label"),
"translated: You (click to remove) and Bob van Roberts reacted with :8ball:",
@@ -730,7 +730,7 @@ test("view.update_existing_reaction (them)", (override) => {
reactions.view.update_existing_reaction(opts);
- assert(!our_reaction.hasClass("reacted"));
+ assert.ok(!our_reaction.hasClass("reacted"));
assert.equal(
our_reaction.attr("aria-label"),
"translated: You (click to remove), Bob van Roberts, Cali and Alexus reacted with :8ball:",
@@ -763,7 +763,7 @@ test("view.remove_reaction (me)", (override) => {
reactions.view.remove_reaction(opts);
- assert(!our_reaction.hasClass("reacted"));
+ assert.ok(!our_reaction.hasClass("reacted"));
assert.equal(
our_reaction.attr("aria-label"),
"translated: Bob van Roberts and Cali reacted with :8ball:",
@@ -797,7 +797,7 @@ test("view.remove_reaction (them)", (override) => {
our_reaction.addClass("reacted");
reactions.view.remove_reaction(opts);
- assert(our_reaction.hasClass("reacted"));
+ assert.ok(our_reaction.hasClass("reacted"));
assert.equal(
our_reaction.attr("aria-label"),
"translated: You (click to remove) reacted with :8ball:",
@@ -827,7 +827,7 @@ test("view.remove_reaction (last person)", (override) => {
removed = true;
};
reactions.view.remove_reaction(opts);
- assert(removed);
+ assert.ok(removed);
});
test("error_handling", (override) => {
diff --git a/frontend_tests/node_tests/reload_state.js b/frontend_tests/node_tests/reload_state.js
--- a/frontend_tests/node_tests/reload_state.js
+++ b/frontend_tests/node_tests/reload_state.js
@@ -15,13 +15,13 @@ function test(label, f) {
}
test("set_state_to_pending", () => {
- assert(!reload_state.is_pending());
+ assert.ok(!reload_state.is_pending());
reload_state.set_state_to_pending();
- assert(reload_state.is_pending());
+ assert.ok(reload_state.is_pending());
});
test("set_state_to_in_progress", () => {
- assert(!reload_state.is_in_progress());
+ assert.ok(!reload_state.is_in_progress());
reload_state.set_state_to_in_progress();
- assert(reload_state.is_in_progress());
+ assert.ok(reload_state.is_in_progress());
});
diff --git a/frontend_tests/node_tests/rendered_markdown.js b/frontend_tests/node_tests/rendered_markdown.js
--- a/frontend_tests/node_tests/rendered_markdown.js
+++ b/frontend_tests/node_tests/rendered_markdown.js
@@ -126,14 +126,14 @@ run_test("user-mention", () => {
$content.set_find_results(".user-mention", $array([$iago, $cordelia]));
// Initial asserts
- assert(!$iago.hasClass("user-mention-me"));
+ assert.ok(!$iago.hasClass("user-mention-me"));
assert.equal($iago.text(), "never-been-set");
assert.equal($cordelia.text(), "never-been-set");
rm.update_elements($content);
// Final asserts
- assert($iago.hasClass("user-mention-me"));
+ assert.ok($iago.hasClass("user-mention-me"));
assert.equal($iago.text(), `@${iago.full_name}`);
assert.equal($cordelia.text(), `@${cordelia.full_name}`);
});
@@ -145,9 +145,9 @@ run_test("user-mention (wildcard)", () => {
$mention.attr("data-user-id", "*");
$content.set_find_results(".user-mention", $array([$mention]));
- assert(!$mention.hasClass("user-mention-me"));
+ assert.ok(!$mention.hasClass("user-mention-me"));
rm.update_elements($content);
- assert($mention.hasClass("user-mention-me"));
+ assert.ok($mention.hasClass("user-mention-me"));
});
run_test("user-mention (email)", () => {
@@ -159,7 +159,7 @@ run_test("user-mention (email)", () => {
$content.set_find_results(".user-mention", $array([$mention]));
rm.update_elements($content);
- assert(!$mention.hasClass("user-mention-me"));
+ assert.ok(!$mention.hasClass("user-mention-me"));
assert.equal($mention.text(), "@Cordelia Lear");
});
@@ -169,7 +169,7 @@ run_test("user-mention (missing)", () => {
$content.set_find_results(".user-mention", $array([$mention]));
rm.update_elements($content);
- assert(!$mention.hasClass("user-mention-me"));
+ assert.ok(!$mention.hasClass("user-mention-me"));
});
run_test("user-group-mention", () => {
@@ -184,14 +184,14 @@ run_test("user-group-mention", () => {
$content.set_find_results(".user-group-mention", $array([$group_me, $group_other]));
// Initial asserts
- assert(!$group_me.hasClass("user-mention-me"));
+ assert.ok(!$group_me.hasClass("user-mention-me"));
assert.equal($group_me.text(), "never-been-set");
assert.equal($group_other.text(), "never-been-set");
rm.update_elements($content);
// Final asserts
- assert($group_me.hasClass("user-mention-me"));
+ assert.ok($group_me.hasClass("user-mention-me"));
assert.equal($group_me.text(), `@${group_me.name}`);
assert.equal($group_other.text(), `@${group_other.name}`);
});
@@ -204,7 +204,7 @@ run_test("user-group-mention (error)", () => {
rm.update_elements($content);
- assert(!$group.hasClass("user-mention-me"));
+ assert.ok(!$group.hasClass("user-mention-me"));
});
run_test("user-group-mention (missing)", () => {
@@ -214,7 +214,7 @@ run_test("user-group-mention (missing)", () => {
rm.update_elements($content);
- assert(!$group.hasClass("user-mention-me"));
+ assert.ok(!$group.hasClass("user-mention-me"));
});
run_test("stream-links", () => {
@@ -328,7 +328,7 @@ run_test("emoji", () => {
rm.update_elements($content);
- assert(called);
+ assert.ok(called);
// Set page parameters back so that test run order is independent
page_params.emojiset = "apple";
@@ -476,7 +476,7 @@ run_test("rtl", () => {
$content.text("مرحبا");
- assert(!$content.hasClass("rtl"));
+ assert.ok(!$content.hasClass("rtl"));
rm.update_elements($content);
- assert($content.hasClass("rtl"));
+ assert.ok($content.hasClass("rtl"));
});
diff --git a/frontend_tests/node_tests/rtl.js b/frontend_tests/node_tests/rtl.js
--- a/frontend_tests/node_tests/rtl.js
+++ b/frontend_tests/node_tests/rtl.js
@@ -130,19 +130,19 @@ run_test("get_direction", () => {
run_test("set_rtl_class_for_textarea rtl", () => {
const textarea = $.create("some-textarea");
- assert(!textarea.hasClass("rtl"));
+ assert.ok(!textarea.hasClass("rtl"));
const text = "```quote\nمرحبا";
textarea.val(text);
rtl.set_rtl_class_for_textarea(textarea);
- assert(textarea.hasClass("rtl"));
+ assert.ok(textarea.hasClass("rtl"));
});
run_test("set_rtl_class_for_textarea ltr", () => {
const textarea = $.create("some-textarea");
textarea.addClass("rtl");
- assert(textarea.hasClass("rtl"));
+ assert.ok(textarea.hasClass("rtl"));
const text = "```quote\nEnglish text";
textarea.val(text);
rtl.set_rtl_class_for_textarea(textarea);
- assert(!textarea.hasClass("rtl"));
+ assert.ok(!textarea.hasClass("rtl"));
});
diff --git a/frontend_tests/node_tests/search.js b/frontend_tests/node_tests/search.js
--- a/frontend_tests/node_tests/search.js
+++ b/frontend_tests/node_tests/search.js
@@ -60,28 +60,28 @@ test("update_button_visibility", () => {
narrow_state.active = () => false;
search_button.prop("disabled", true);
search.update_button_visibility();
- assert(search_button.prop("disabled"));
+ assert.ok(search_button.prop("disabled"));
search_query.is = () => true;
search_query.val("");
narrow_state.active = () => false;
search_button.prop("disabled", true);
search.update_button_visibility();
- assert(!search_button.prop("disabled"));
+ assert.ok(!search_button.prop("disabled"));
search_query.is = () => false;
search_query.val("Test search term");
narrow_state.active = () => false;
search_button.prop("disabled", true);
search.update_button_visibility();
- assert(!search_button.prop("disabled"));
+ assert.ok(!search_button.prop("disabled"));
search_query.is = () => false;
search_query.val("");
narrow_state.active = () => true;
search_button.prop("disabled", true);
search.update_button_visibility();
- assert(!search_button.prop("disabled"));
+ assert.ok(!search_button.prop("disabled"));
});
test("initialize", () => {
@@ -177,8 +177,8 @@ test("initialize", () => {
_setup("ver");
assert.equal(opts.updater("ver"), "ver");
- assert(!is_blurred);
- assert(is_append_search_string_called);
+ assert.ok(!is_blurred);
+ assert.ok(is_append_search_string_called);
operators = [
{
@@ -190,15 +190,15 @@ test("initialize", () => {
_setup("stream:Verona");
assert.equal(opts.updater("stream:Verona"), "stream:Verona");
- assert(!is_blurred);
- assert(is_append_search_string_called);
+ assert.ok(!is_blurred);
+ assert.ok(is_append_search_string_called);
search.__Rewire__("is_using_input_method", true);
_setup("stream:Verona");
assert.equal(opts.updater("stream:Verona"), "stream:Verona");
- assert(!is_blurred);
- assert(is_append_search_string_called);
+ assert.ok(!is_blurred);
+ assert.ok(is_append_search_string_called);
search_query_box.off("blur");
}
@@ -225,7 +225,7 @@ test("initialize", () => {
search.__Rewire__("is_using_input_method", false);
searchbox_form.trigger("compositionend");
- assert(search.is_using_input_method);
+ assert.ok(search.is_using_input_method);
const keydown = searchbox_form.get_on_handler("keydown");
let default_prevented = false;
@@ -238,16 +238,16 @@ test("initialize", () => {
};
search_query_box.is = () => false;
assert.equal(keydown(ev), undefined);
- assert(!default_prevented);
+ assert.ok(!default_prevented);
ev.key = "Enter";
assert.equal(keydown(ev), undefined);
- assert(!default_prevented);
+ assert.ok(!default_prevented);
ev.key = "Enter";
search_query_box.is = () => true;
assert.equal(keydown(ev), undefined);
- assert(default_prevented);
+ assert.ok(default_prevented);
let operators;
let is_blurred;
@@ -288,38 +288,38 @@ test("initialize", () => {
search_query_box.is = () => false;
searchbox_form.trigger(ev);
- assert(!is_blurred);
- assert(!search_button.prop("disabled"));
+ assert.ok(!is_blurred);
+ assert.ok(!search_button.prop("disabled"));
ev.key = "Enter";
search_query_box.is = () => false;
searchbox_form.trigger(ev);
- assert(!is_blurred);
- assert(!search_button.prop("disabled"));
+ assert.ok(!is_blurred);
+ assert.ok(!search_button.prop("disabled"));
ev.key = "Enter";
search_query_box.is = () => true;
searchbox_form.trigger(ev);
- assert(is_blurred);
+ assert.ok(is_blurred);
_setup("ver");
search.__Rewire__("is_using_input_method", true);
searchbox_form.trigger(ev);
// No change on Enter keyup event when using input tool
- assert(!is_blurred);
- assert(!search_button.prop("disabled"));
+ assert.ok(!is_blurred);
+ assert.ok(!search_button.prop("disabled"));
_setup("ver");
ev.key = "Enter";
search_query_box.is = () => true;
searchbox_form.trigger(ev);
- assert(is_blurred);
- assert(!search_button.prop("disabled"));
+ assert.ok(is_blurred);
+ assert.ok(!search_button.prop("disabled"));
search_button.prop("disabled", true);
search_query_box.trigger("focus");
- assert(!search_button.prop("disabled"));
+ assert.ok(!search_button.prop("disabled"));
});
test("initiate_search", () => {
@@ -353,8 +353,8 @@ test("initiate_search", () => {
};
search.initiate_search();
- assert(typeahead_forced_open);
- assert(is_searchbox_text_selected);
- assert(is_searchbox_focused);
+ assert.ok(typeahead_forced_open);
+ assert.ok(is_searchbox_text_selected);
+ assert.ok(is_searchbox_focused);
assert.deepEqual(css_args, {"box-shadow": "inset 0px 0px 0px 2px hsl(204, 20%, 74%)"});
});
diff --git a/frontend_tests/node_tests/search_legacy.js b/frontend_tests/node_tests/search_legacy.js
--- a/frontend_tests/node_tests/search_legacy.js
+++ b/frontend_tests/node_tests/search_legacy.js
@@ -38,28 +38,28 @@ run_test("update_button_visibility", () => {
narrow_state.active = () => false;
search_button.prop("disabled", true);
search.update_button_visibility();
- assert(search_button.prop("disabled"));
+ assert.ok(search_button.prop("disabled"));
search_query.is = () => true;
search_query.val("");
narrow_state.active = () => false;
search_button.prop("disabled", true);
search.update_button_visibility();
- assert(!search_button.prop("disabled"));
+ assert.ok(!search_button.prop("disabled"));
search_query.is = () => false;
search_query.val("Test search term");
narrow_state.active = () => false;
search_button.prop("disabled", true);
search.update_button_visibility();
- assert(!search_button.prop("disabled"));
+ assert.ok(!search_button.prop("disabled"));
search_query.is = () => false;
search_query.val("");
narrow_state.active = () => true;
search_button.prop("disabled", true);
search.update_button_visibility();
- assert(!search_button.prop("disabled"));
+ assert.ok(!search_button.prop("disabled"));
});
run_test("initialize", () => {
@@ -142,7 +142,7 @@ run_test("initialize", () => {
];
_setup("ver");
assert.equal(opts.updater("ver"), "ver");
- assert(is_blurred);
+ assert.ok(is_blurred);
operators = [
{
@@ -153,12 +153,12 @@ run_test("initialize", () => {
];
_setup("stream:Verona");
assert.equal(opts.updater("stream:Verona"), "stream:Verona");
- assert(is_blurred);
+ assert.ok(is_blurred);
search.__Rewire__("is_using_input_method", true);
_setup("stream:Verona");
assert.equal(opts.updater("stream:Verona"), "stream:Verona");
- assert(!is_blurred);
+ assert.ok(!is_blurred);
search_query_box.off("blur");
}
@@ -168,7 +168,7 @@ run_test("initialize", () => {
search_button.prop("disabled", true);
search_query_box.trigger("focus");
- assert(!search_button.prop("disabled"));
+ assert.ok(!search_button.prop("disabled"));
search_query_box.val("test string");
narrow_state.search_string = () => "ver";
@@ -177,7 +177,7 @@ run_test("initialize", () => {
search.__Rewire__("is_using_input_method", false);
searchbox_form.trigger("compositionend");
- assert(search.is_using_input_method);
+ assert.ok(search.is_using_input_method);
const keydown = searchbox_form.get_on_handler("keydown");
let default_prevented = false;
@@ -190,16 +190,16 @@ run_test("initialize", () => {
};
search_query_box.is = () => false;
assert.equal(keydown(ev), undefined);
- assert(!default_prevented);
+ assert.ok(!default_prevented);
ev.key = "Enter";
assert.equal(keydown(ev), undefined);
- assert(!default_prevented);
+ assert.ok(!default_prevented);
ev.key = "Enter";
search_query_box.is = () => true;
assert.equal(keydown(ev), undefined);
- assert(default_prevented);
+ assert.ok(default_prevented);
ev = {
type: "keyup",
@@ -239,34 +239,34 @@ run_test("initialize", () => {
search_query_box.is = () => false;
searchbox_form.trigger(ev);
- assert(!is_blurred);
- assert(!search_button.prop("disabled"));
+ assert.ok(!is_blurred);
+ assert.ok(!search_button.prop("disabled"));
ev.key = "Enter";
search_query_box.is = () => false;
searchbox_form.trigger(ev);
- assert(!is_blurred);
- assert(!search_button.prop("disabled"));
+ assert.ok(!is_blurred);
+ assert.ok(!search_button.prop("disabled"));
ev.key = "Enter";
search_query_box.is = () => true;
searchbox_form.trigger(ev);
- assert(is_blurred);
+ assert.ok(is_blurred);
_setup("ver");
search.__Rewire__("is_using_input_method", true);
searchbox_form.trigger(ev);
// No change on Enter keyup event when using input tool
- assert(!is_blurred);
- assert(!search_button.prop("disabled"));
+ assert.ok(!is_blurred);
+ assert.ok(!search_button.prop("disabled"));
_setup("ver");
ev.key = "Enter";
search_query_box.is = () => true;
searchbox_form.trigger(ev);
- assert(is_blurred);
- assert(!search_button.prop("disabled"));
+ assert.ok(is_blurred);
+ assert.ok(!search_button.prop("disabled"));
});
run_test("initiate_search", () => {
@@ -294,8 +294,8 @@ run_test("initiate_search", () => {
};
search.initiate_search();
- assert(typeahead_forced_open);
- assert(is_searchbox_text_selected);
+ assert.ok(typeahead_forced_open);
+ assert.ok(is_searchbox_text_selected);
assert.equal($("#search_query").val(), "ver");
assert.deepEqual(searchbox_css_args, {
diff --git a/frontend_tests/node_tests/search_pill.js b/frontend_tests/node_tests/search_pill.js
--- a/frontend_tests/node_tests/search_pill.js
+++ b/frontend_tests/node_tests/search_pill.js
@@ -51,8 +51,8 @@ run_test("append", () => {
search_pill.append_search_string(is_starred_item.display_value, pill_widget);
- assert(appended);
- assert(cleared);
+ assert.ok(appended);
+ assert.ok(cleared);
});
run_test("get_items", () => {
@@ -79,6 +79,6 @@ run_test("create_pills", (override) => {
});
const pills = search_pill.create_pills({});
- assert(input_pill_create_called);
+ assert.ok(input_pill_create_called);
assert.deepEqual(pills, {dummy: "dummy"});
});
diff --git a/frontend_tests/node_tests/search_suggestion.js b/frontend_tests/node_tests/search_suggestion.js
--- a/frontend_tests/node_tests/search_suggestion.js
+++ b/frontend_tests/node_tests/search_suggestion.js
@@ -558,7 +558,7 @@ test("sent_by_me_suggestions", (override) => {
let query = "";
let suggestions = get_suggestions("", query);
- assert(suggestions.strings.includes("sender:[email protected]"));
+ assert.ok(suggestions.strings.includes("sender:[email protected]"));
assert.equal(suggestions.lookup_table.get("sender:[email protected]").description, "Sent by me");
query = "sender";
diff --git a/frontend_tests/node_tests/search_suggestion_legacy.js b/frontend_tests/node_tests/search_suggestion_legacy.js
--- a/frontend_tests/node_tests/search_suggestion_legacy.js
+++ b/frontend_tests/node_tests/search_suggestion_legacy.js
@@ -530,7 +530,7 @@ test("sent_by_me_suggestions", (override) => {
let query = "";
let suggestions = get_suggestions("", query);
- assert(suggestions.strings.includes("sender:[email protected]"));
+ assert.ok(suggestions.strings.includes("sender:[email protected]"));
assert.equal(suggestions.lookup_table.get("sender:[email protected]").description, "Sent by me");
query = "sender";
diff --git a/frontend_tests/node_tests/server_events.js b/frontend_tests/node_tests/server_events.js
--- a/frontend_tests/node_tests/server_events.js
+++ b/frontend_tests/node_tests/server_events.js
@@ -82,7 +82,7 @@ run_test("message_event", (override) => {
});
server_events._get_events_success([event]);
- assert(inserted);
+ assert.ok(inserted);
});
// Start blueslip tests here
diff --git a/frontend_tests/node_tests/settings_bots.js b/frontend_tests/node_tests/settings_bots.js
--- a/frontend_tests/node_tests/settings_bots.js
+++ b/frontend_tests/node_tests/settings_bots.js
@@ -101,23 +101,23 @@ function test_create_bot_type_input_box_toggle(f) {
$("#create_bot_type :selected").val(EMBEDDED_BOT_TYPE);
f();
- assert(!create_payload_url.hasClass("required"));
- assert(!payload_url_inputbox.visible());
- assert($("#select_service_name").hasClass("required"));
- assert($("#service_name_list").visible());
- assert(config_inputbox.visible());
+ assert.ok(!create_payload_url.hasClass("required"));
+ assert.ok(!payload_url_inputbox.visible());
+ assert.ok($("#select_service_name").hasClass("required"));
+ assert.ok($("#service_name_list").visible());
+ assert.ok(config_inputbox.visible());
$("#create_bot_type :selected").val(OUTGOING_WEBHOOK_BOT_TYPE);
f();
- assert(create_payload_url.hasClass("required"));
- assert(payload_url_inputbox.visible());
- assert(!config_inputbox.visible());
+ assert.ok(create_payload_url.hasClass("required"));
+ assert.ok(payload_url_inputbox.visible());
+ assert.ok(!config_inputbox.visible());
$("#create_bot_type :selected").val(GENERIC_BOT_TYPE);
f();
- assert(!create_payload_url.hasClass("required"));
- assert(!payload_url_inputbox.visible());
- assert(!config_inputbox.visible());
+ assert.ok(!create_payload_url.hasClass("required"));
+ assert.ok(!payload_url_inputbox.visible());
+ assert.ok(!config_inputbox.visible());
}
test("test tab clicks", (override) => {
@@ -162,41 +162,41 @@ test("test tab clicks", (override) => {
};
click_on_tab(tabs.add);
- assert(tabs.add.hasClass("active"));
- assert(!tabs.active.hasClass("active"));
- assert(!tabs.inactive.hasClass("active"));
+ assert.ok(tabs.add.hasClass("active"));
+ assert.ok(!tabs.active.hasClass("active"));
+ assert.ok(!tabs.inactive.hasClass("active"));
- assert(forms.add.visible());
- assert(!forms.active.visible());
- assert(!forms.inactive.visible());
+ assert.ok(forms.add.visible());
+ assert.ok(!forms.active.visible());
+ assert.ok(!forms.inactive.visible());
click_on_tab(tabs.active);
- assert(!tabs.add.hasClass("active"));
- assert(tabs.active.hasClass("active"));
- assert(!tabs.inactive.hasClass("active"));
+ assert.ok(!tabs.add.hasClass("active"));
+ assert.ok(tabs.active.hasClass("active"));
+ assert.ok(!tabs.inactive.hasClass("active"));
- assert(!forms.add.visible());
- assert(forms.active.visible());
- assert(!forms.inactive.visible());
+ assert.ok(!forms.add.visible());
+ assert.ok(forms.active.visible());
+ assert.ok(!forms.inactive.visible());
click_on_tab(tabs.inactive);
- assert(!tabs.add.hasClass("active"));
- assert(!tabs.active.hasClass("active"));
- assert(tabs.inactive.hasClass("active"));
+ assert.ok(!tabs.add.hasClass("active"));
+ assert.ok(!tabs.active.hasClass("active"));
+ assert.ok(tabs.inactive.hasClass("active"));
- assert(!forms.add.visible());
- assert(!forms.active.visible());
- assert(forms.inactive.visible());
+ assert.ok(!forms.add.visible());
+ assert.ok(!forms.active.visible());
+ assert.ok(forms.inactive.visible());
});
test("can_create_new_bots", () => {
page_params.is_admin = true;
- assert(settings_bots.can_create_new_bots());
+ assert.ok(settings_bots.can_create_new_bots());
page_params.is_admin = false;
page_params.realm_bot_creation_policy = 1;
- assert(settings_bots.can_create_new_bots());
+ assert.ok(settings_bots.can_create_new_bots());
page_params.realm_bot_creation_policy = 3;
- assert(!settings_bots.can_create_new_bots());
+ assert.ok(!settings_bots.can_create_new_bots());
});
diff --git a/frontend_tests/node_tests/settings_emoji.js b/frontend_tests/node_tests/settings_emoji.js
--- a/frontend_tests/node_tests/settings_emoji.js
+++ b/frontend_tests/node_tests/settings_emoji.js
@@ -27,5 +27,5 @@ run_test("build_emoji_upload_widget", () => {
build_widget_stub = true;
};
settings_emoji.build_emoji_upload_widget();
- assert(build_widget_stub);
+ assert.ok(build_widget_stub);
});
diff --git a/frontend_tests/node_tests/settings_muted_topics.js b/frontend_tests/node_tests/settings_muted_topics.js
--- a/frontend_tests/node_tests/settings_muted_topics.js
+++ b/frontend_tests/node_tests/settings_muted_topics.js
@@ -43,7 +43,7 @@ run_test("settings", (override) => {
settings_muted_topics.set_up();
assert.equal(settings_muted_topics.loaded, true);
- assert(populate_list_called);
+ assert.ok(populate_list_called);
const topic_click_handler = $("body").get_on_handler("click", ".settings-unmute-topic");
assert.equal(typeof topic_click_handler, "function");
@@ -79,6 +79,6 @@ run_test("settings", (override) => {
unmute_topic_called = true;
};
topic_click_handler.call(topic_fake_this, event);
- assert(unmute_topic_called);
+ assert.ok(unmute_topic_called);
assert.equal(topic_data_called, 2);
});
diff --git a/frontend_tests/node_tests/settings_muted_users.js b/frontend_tests/node_tests/settings_muted_users.js
--- a/frontend_tests/node_tests/settings_muted_users.js
+++ b/frontend_tests/node_tests/settings_muted_users.js
@@ -34,7 +34,7 @@ run_test("settings", (override) => {
settings_muted_users.set_up();
assert.equal(settings_muted_users.loaded, true);
- assert(populate_list_called);
+ assert.ok(populate_list_called);
const unmute_click_handler = $("body").get_on_handler("click", ".settings-unmute-user");
assert.equal(typeof unmute_click_handler, "function");
@@ -66,6 +66,6 @@ run_test("settings", (override) => {
};
unmute_click_handler.call(unmute_button, event);
- assert(unmute_user_called);
- assert(row_attribute_fetched);
+ assert.ok(unmute_user_called);
+ assert.ok(row_attribute_fetched);
});
diff --git a/frontend_tests/node_tests/settings_org.js b/frontend_tests/node_tests/settings_org.js
--- a/frontend_tests/node_tests/settings_org.js
+++ b/frontend_tests/node_tests/settings_org.js
@@ -23,7 +23,7 @@ const realm_icon = mock_esm("../../static/js/realm_icon");
stub_templates((name, data) => {
if (name === "settings/admin_realm_domains_list") {
- assert(data.realm_domain.domain);
+ assert.ok(data.realm_domain.domain);
return "stub-domains-list";
}
throw new Error(`Unknown template ${name}`);
@@ -95,7 +95,7 @@ function simulate_realm_domains_table() {
};
return function verify() {
- assert(appended);
+ assert.ok(appended);
};
}
@@ -124,7 +124,7 @@ function test_realms_domain_modal(override, add_realm_domain) {
add_realm_domain();
- assert(posted);
+ assert.ok(posted);
success_callback();
assert.equal(info.val(), "translated HTML: Added successfully!");
@@ -250,7 +250,7 @@ function test_submit_settings_form(override, submit_form) {
patched = false;
submit_form(ev);
- assert(patched);
+ assert.ok(patched);
let expected_value = {
bot_creation_policy: 1,
@@ -284,7 +284,7 @@ function test_submit_settings_form(override, submit_form) {
]);
submit_form(ev);
- assert(patched);
+ assert.ok(patched);
expected_value = {
default_language: "en",
@@ -361,7 +361,7 @@ function test_upload_realm_icon(override, upload_realm_logo_or_icon) {
});
upload_realm_logo_or_icon(file_input, null, true);
- assert(posted);
+ assert.ok(posted);
}
function test_change_allow_subdomains(change_allow_subdomains) {
@@ -842,64 +842,64 @@ test("misc", (override) => {
page_params.realm_name_changes_disabled = false;
page_params.server_name_changes_disabled = false;
settings_account.update_name_change_display();
- assert(!$("#full_name").prop("disabled"));
+ assert.ok(!$("#full_name").prop("disabled"));
assert.equal($(".change_name_tooltip").is(":visible"), false);
page_params.realm_name_changes_disabled = true;
page_params.server_name_changes_disabled = false;
settings_account.update_name_change_display();
- assert($("#full_name").prop("disabled"));
- assert($(".change_name_tooltip").is(":visible"));
+ assert.ok($("#full_name").prop("disabled"));
+ assert.ok($(".change_name_tooltip").is(":visible"));
page_params.realm_name_changes_disabled = true;
page_params.server_name_changes_disabled = true;
settings_account.update_name_change_display();
- assert($("#full_name").prop("disabled"));
- assert($(".change_name_tooltip").is(":visible"));
+ assert.ok($("#full_name").prop("disabled"));
+ assert.ok($(".change_name_tooltip").is(":visible"));
page_params.realm_name_changes_disabled = false;
page_params.server_name_changes_disabled = true;
settings_account.update_name_change_display();
- assert($("#full_name").prop("disabled"));
- assert($(".change_name_tooltip").is(":visible"));
+ assert.ok($("#full_name").prop("disabled"));
+ assert.ok($(".change_name_tooltip").is(":visible"));
page_params.realm_email_changes_disabled = false;
settings_account.update_email_change_display();
- assert(!$("#change_email .button").prop("disabled"));
+ assert.ok(!$("#change_email .button").prop("disabled"));
page_params.realm_email_changes_disabled = true;
settings_account.update_email_change_display();
- assert($("#change_email .button").prop("disabled"));
+ assert.ok($("#change_email .button").prop("disabled"));
page_params.realm_avatar_changes_disabled = false;
page_params.server_avatar_changes_disabled = false;
settings_account.update_avatar_change_display();
- assert(!$("#user-avatar-upload-widget .image_upload_button").prop("disabled"));
- assert(!$("#user-avatar-upload-widget .image-delete-button .button").prop("disabled"));
+ assert.ok(!$("#user-avatar-upload-widget .image_upload_button").prop("disabled"));
+ assert.ok(!$("#user-avatar-upload-widget .image-delete-button .button").prop("disabled"));
page_params.realm_avatar_changes_disabled = true;
page_params.server_avatar_changes_disabled = false;
settings_account.update_avatar_change_display();
- assert($("#user-avatar-upload-widget .image_upload_button").prop("disabled"));
- assert($("#user-avatar-upload-widget .image-delete-button .button").prop("disabled"));
+ assert.ok($("#user-avatar-upload-widget .image_upload_button").prop("disabled"));
+ assert.ok($("#user-avatar-upload-widget .image-delete-button .button").prop("disabled"));
page_params.realm_avatar_changes_disabled = false;
page_params.server_avatar_changes_disabled = true;
settings_account.update_avatar_change_display();
- assert($("#user-avatar-upload-widget .image_upload_button").prop("disabled"));
- assert($("#user-avatar-upload-widget .image-delete-button .button").prop("disabled"));
+ assert.ok($("#user-avatar-upload-widget .image_upload_button").prop("disabled"));
+ assert.ok($("#user-avatar-upload-widget .image-delete-button .button").prop("disabled"));
page_params.realm_avatar_changes_disabled = true;
page_params.server_avatar_changes_disabled = true;
settings_account.update_avatar_change_display();
- assert($("#user-avatar-upload-widget .image_upload_button").prop("disabled"));
- assert($("#user-avatar-upload-widget .image-delete-button .button").prop("disabled"));
+ assert.ok($("#user-avatar-upload-widget .image_upload_button").prop("disabled"));
+ assert.ok($("#user-avatar-upload-widget .image-delete-button .button").prop("disabled"));
// If organization admin, these UI elements are never disabled.
page_params.is_admin = true;
settings_account.update_name_change_display();
- assert(!$("#full_name").prop("disabled"));
+ assert.ok(!$("#full_name").prop("disabled"));
assert.equal($(".change_name_tooltip").is(":visible"), false);
settings_account.update_email_change_display();
- assert(!$("#change_email .button").prop("disabled"));
+ assert.ok(!$("#change_email .button").prop("disabled"));
override(stream_settings_data, "get_streams_for_settings_page", () => [
{name: "some_stream", stream_id: 75},
@@ -939,11 +939,11 @@ test("misc", (override) => {
});
settings_org.notifications_stream_widget.render(42);
assert.equal(elem.text(), "#some_stream");
- assert(!elem.hasClass("text-warning"));
+ assert.ok(!elem.hasClass("text-warning"));
settings_org.notifications_stream_widget.render(undefined);
assert.equal(elem.text(), "translated: Disabled");
- assert(elem.hasClass("text-warning"));
+ assert.ok(elem.hasClass("text-warning"));
setting_name = "realm_signup_notifications_stream_id";
elem = $(`#${CSS.escape(setting_name)}_widget #${CSS.escape(setting_name)}_name`);
@@ -954,9 +954,9 @@ test("misc", (override) => {
});
settings_org.signup_notifications_stream_widget.render(75);
assert.equal(elem.text(), "#some_stream");
- assert(!elem.hasClass("text-warning"));
+ assert.ok(!elem.hasClass("text-warning"));
settings_org.signup_notifications_stream_widget.render(undefined);
assert.equal(elem.text(), "translated: Disabled");
- assert(elem.hasClass("text-warning"));
+ assert.ok(elem.hasClass("text-warning"));
});
diff --git a/frontend_tests/node_tests/settings_user_groups.js b/frontend_tests/node_tests/settings_user_groups.js
--- a/frontend_tests/node_tests/settings_user_groups.js
+++ b/frontend_tests/node_tests/settings_user_groups.js
@@ -41,7 +41,7 @@ function reset_test_setup(pill_container_stub) {
function input_pill_stub(opts) {
assert.equal(opts.container, pill_container_stub);
create_item_handler = opts.create_item_from_text;
- assert(create_item_handler);
+ assert.ok(create_item_handler);
return pills;
}
input_pill.create = input_pill_stub;
@@ -55,11 +55,11 @@ function test_ui(label, f) {
test_ui("can_edit", () => {
page_params.is_guest = false;
page_params.is_admin = true;
- assert(settings_user_groups.can_edit(1));
+ assert.ok(settings_user_groups.can_edit(1));
page_params.is_admin = false;
page_params.is_guest = true;
- assert(!settings_user_groups.can_edit(1));
+ assert.ok(!settings_user_groups.can_edit(1));
page_params.is_guest = false;
page_params.is_admin = false;
@@ -68,11 +68,11 @@ test_ui("can_edit", () => {
assert.equal(user_id, undefined);
return false;
};
- assert(!settings_user_groups.can_edit(1));
+ assert.ok(!settings_user_groups.can_edit(1));
page_params.realm_user_group_edit_policy = 2;
page_params.is_admin = true;
- assert(settings_user_groups.can_edit(1));
+ assert.ok(settings_user_groups.can_edit(1));
page_params.is_admin = false;
user_groups.is_member_of = (group_id, user_id) => {
@@ -80,7 +80,7 @@ test_ui("can_edit", () => {
assert.equal(user_id, undefined);
return true;
};
- assert(!settings_user_groups.can_edit(1));
+ assert.ok(!settings_user_groups.can_edit(1));
page_params.realm_user_group_edit_policy = 1;
page_params.is_admin = false;
@@ -89,7 +89,7 @@ test_ui("can_edit", () => {
assert.equal(user_id, undefined);
return true;
};
- assert(settings_user_groups.can_edit(1));
+ assert.ok(settings_user_groups.can_edit(1));
});
const user_group_selector = `#user-groups #${CSS.escape(1)}`;
@@ -172,7 +172,7 @@ test_ui("populate_user_groups", (override) => {
const pill_container_stub = $(`.pill-container[data-group-pills="${CSS.escape(1)}"]`);
pills.appendValidatedData = (item) => {
const id = item.user_id;
- assert(!all_pills.has(id));
+ assert.ok(!all_pills.has(id));
all_pills.set(id, item);
};
pills.items = () => Array.from(all_pills.values());
@@ -188,9 +188,9 @@ test_ui("populate_user_groups", (override) => {
let input_typeahead_called = false;
input_field_stub.typeahead = (config) => {
assert.equal(config.items, 5);
- assert(config.fixed);
- assert(config.dropup);
- assert(config.stopAdvance);
+ assert.ok(config.fixed);
+ assert.ok(config.dropup);
+ assert.ok(config.stopAdvance);
assert.equal(typeof config.source, "function");
assert.equal(typeof config.highlighter, "function");
assert.equal(typeof config.matcher, "function");
@@ -221,20 +221,20 @@ test_ui("populate_user_groups", (override) => {
/* Here the query doesn't begin with an '@' because typeahead is triggered
by the '@' sign and thus removed in the query. */
let result = config.matcher.call(fake_context, iago);
- assert(!result);
+ assert.ok(!result);
result = config.matcher.call(fake_context, alice);
- assert(result);
+ assert.ok(result);
page_params.realm_email_address_visibility =
settings_config.email_address_visibility_values.admins_only.code;
page_params.is_admin = false;
result = config.matcher.call(fake_context_for_email, bob);
- assert(!result);
+ assert.ok(!result);
page_params.is_admin = true;
result = config.matcher.call(fake_context_for_email, bob);
- assert(result);
+ assert.ok(result);
})();
(function test_sorter() {
@@ -243,7 +243,7 @@ test_ui("populate_user_groups", (override) => {
sort_recipients_typeahead_called = true;
};
config.sorter.call(fake_context, []);
- assert(sort_recipients_typeahead_called);
+ assert.ok(sort_recipients_typeahead_called);
})();
(function test_updater() {
@@ -284,9 +284,9 @@ test_ui("populate_user_groups", (override) => {
text_cleared = false;
config.updater(alice);
// update_cancel_button is called.
- assert(saved_fade_out_called);
- assert(cancel_fade_to_called);
- assert(instructions_fade_to_called);
+ assert.ok(saved_fade_out_called);
+ assert.ok(cancel_fade_to_called);
+ assert.ok(instructions_fade_to_called);
assert.equal(text_cleared, true);
})();
input_typeahead_called = true;
@@ -311,14 +311,14 @@ test_ui("populate_user_groups", (override) => {
function test_create_item(handler) {
(function test_rejection_path() {
const item = handler(iago.email, pills.items());
- assert(get_by_email_called);
+ assert.ok(get_by_email_called);
assert.equal(item, undefined);
})();
(function test_success_path() {
get_by_email_called = false;
const res = handler(bob.email, pills.items());
- assert(get_by_email_called);
+ assert.ok(get_by_email_called);
assert.equal(typeof res, "object");
assert.equal(res.user_id, bob.user_id);
assert.equal(res.display_value, bob.full_name);
@@ -335,10 +335,10 @@ test_ui("populate_user_groups", (override) => {
reset_test_setup(pill_container_stub);
settings_user_groups.set_up();
- assert(templates_render_called);
- assert(user_groups_list_append_called);
- assert(get_by_user_id_called);
- assert(input_typeahead_called);
+ assert.ok(templates_render_called);
+ assert.ok(user_groups_list_append_called);
+ assert.ok(get_by_user_id_called);
+ assert.ok(input_typeahead_called);
test_create_item(create_item_handler);
// Tests for settings_user_groups.set_up workflow.
@@ -478,7 +478,7 @@ test_ui("with_external_user", (override) => {
assert.equal(set_parents_result_called, 1);
assert.equal(set_attributes_called, 1);
assert.equal(can_edit_called, 9);
- assert(exit_button_called);
+ assert.ok(exit_button_called);
assert.equal(user_group_find_called, 2);
assert.equal(pill_container_find_called, 4);
assert.equal(turned_off["keydown/.pill"], true);
@@ -493,7 +493,7 @@ test_ui("reload", (override) => {
populate_user_groups_called = true;
});
settings_user_groups.reload();
- assert(populate_user_groups_called);
+ assert.ok(populate_user_groups_called);
assert.equal($("#user-groups").html(), "");
});
@@ -542,7 +542,7 @@ test_ui("on_events", (override) => {
opts.success();
- assert(!$("#admin-user-group-status").visible());
+ assert.ok(!$("#admin-user-group-status").visible());
assert.equal($("form.admin-user-group-form input[type='text']").val(), "");
})();
@@ -561,7 +561,7 @@ test_ui("on_events", (override) => {
};
opts.error(xhr);
- assert(!$("#admin-user-group-status").visible());
+ assert.ok(!$("#admin-user-group-status").visible());
})();
};
@@ -603,7 +603,7 @@ test_ui("on_events", (override) => {
},
};
handler(event);
- assert(default_action_for_enter_stopped);
+ assert.ok(default_action_for_enter_stopped);
})();
(function test_do_not_blur() {
@@ -635,7 +635,7 @@ test_ui("on_events", (override) => {
return [];
};
handler.call(fake_this, event);
- assert(!api_endpoint_called);
+ assert.ok(!api_endpoint_called);
}
api_endpoint_called = false;
@@ -646,7 +646,7 @@ test_ui("on_events", (override) => {
return [];
};
handler.call(fake_this, event);
- assert(!api_endpoint_called);
+ assert.ok(!api_endpoint_called);
// Cancel button triggers blur event.
let settings_user_groups_reload_called = false;
@@ -664,8 +664,8 @@ test_ui("on_events", (override) => {
return [];
};
handler.call(fake_this, event);
- assert(!api_endpoint_called);
- assert(settings_user_groups_reload_called);
+ assert.ok(!api_endpoint_called);
+ assert.ok(settings_user_groups_reload_called);
}
})();
@@ -697,23 +697,23 @@ test_ui("on_events", (override) => {
        // Cancel button removed if user group has no changes.
const fake_this = $.create("fake-#update_cancel_button");
handler_name.call(fake_this);
- assert(cancel_fade_out_called);
- assert(instructions_fade_out_called);
+ assert.ok(cancel_fade_out_called);
+ assert.ok(instructions_fade_out_called);
// Check if cancel button removed if user group error is showing.
$(user_group_selector + " .user-group-status").show();
cancel_fade_out_called = false;
instructions_fade_out_called = false;
handler_name.call(fake_this);
- assert(cancel_fade_out_called);
- assert(instructions_fade_out_called);
+ assert.ok(cancel_fade_out_called);
+ assert.ok(instructions_fade_out_called);
// Check for handler_desc to achieve 100% coverage.
cancel_fade_out_called = false;
instructions_fade_out_called = false;
handler_desc.call(fake_this);
- assert(cancel_fade_out_called);
- assert(instructions_fade_out_called);
+ assert.ok(cancel_fade_out_called);
+ assert.ok(instructions_fade_out_called);
})();
(function test_user_groups_save_group_changes_triggered() {
@@ -760,9 +760,9 @@ test_ui("on_events", (override) => {
func();
});
opts.success();
- assert(cancel_fade_out_called);
- assert(instructions_fade_out_called);
- assert(saved_fade_to_called);
+ assert.ok(cancel_fade_out_called);
+ assert.ok(instructions_fade_out_called);
+ assert.ok(saved_fade_to_called);
})();
(function test_post_error() {
const user_group_error = $(user_group_selector + " .user-group-status");
@@ -780,7 +780,7 @@ test_ui("on_events", (override) => {
};
opts.error(xhr);
- assert(user_group_error.visible());
+ assert.ok(user_group_error.visible());
})();
};
@@ -793,19 +793,19 @@ test_ui("on_events", (override) => {
api_endpoint_called = false;
handler_name.call(fake_this, event);
- assert(api_endpoint_called);
+ assert.ok(api_endpoint_called);
// Check API endpoint isn't called if name and desc haven't changed.
group_data.name = "translated: mobile";
group_data.description = "translated: All mobile members";
api_endpoint_called = false;
handler_name.call(fake_this, event);
- assert(!api_endpoint_called);
+ assert.ok(!api_endpoint_called);
// Check for handler_desc to achieve 100% coverage.
api_endpoint_called = false;
handler_desc.call(fake_this, event);
- assert(!api_endpoint_called);
+ assert.ok(!api_endpoint_called);
})();
(function test_user_groups_save_member_changes_triggered() {
@@ -846,9 +846,9 @@ test_ui("on_events", (override) => {
(function test_post_success() {
opts.success();
- assert(cancel_fade_out_called);
- assert(instructions_fade_out_called);
- assert(saved_fade_to_called);
+ assert.ok(cancel_fade_out_called);
+ assert.ok(instructions_fade_out_called);
+ assert.ok(saved_fade_to_called);
})();
};
@@ -861,6 +861,6 @@ test_ui("on_events", (override) => {
api_endpoint_called = false;
handler.call(fake_this, event);
- assert(api_endpoint_called);
+ assert.ok(api_endpoint_called);
})();
});
diff --git a/frontend_tests/node_tests/stream_data.js b/frontend_tests/node_tests/stream_data.js
--- a/frontend_tests/node_tests/stream_data.js
+++ b/frontend_tests/node_tests/stream_data.js
@@ -71,9 +71,9 @@ test("basics", () => {
};
stream_data.add_sub(denmark);
stream_data.add_sub(social);
- assert(stream_data.all_subscribed_streams_are_in_home_view());
+ assert.ok(stream_data.all_subscribed_streams_are_in_home_view());
stream_data.add_sub(test);
- assert(!stream_data.all_subscribed_streams_are_in_home_view());
+ assert.ok(!stream_data.all_subscribed_streams_are_in_home_view());
assert.equal(stream_data.get_sub("denmark"), denmark);
assert.equal(stream_data.get_sub("Social"), social);
@@ -83,10 +83,10 @@ test("basics", () => {
assert.deepEqual(stream_data.get_colors(), ["red", "yellow"]);
assert.deepEqual(stream_data.subscribed_stream_ids(), [social.stream_id, test.stream_id]);
- assert(stream_data.is_subscribed("social"));
- assert(stream_data.is_subscribed("Social"));
- assert(!stream_data.is_subscribed("Denmark"));
- assert(!stream_data.is_subscribed("Rome"));
+ assert.ok(stream_data.is_subscribed("social"));
+ assert.ok(stream_data.is_subscribed("Social"));
+ assert.ok(!stream_data.is_subscribed("Denmark"));
+ assert.ok(!stream_data.is_subscribed("Rome"));
assert.equal(stream_data.get_stream_privacy_policy(test.stream_id), "public");
assert.equal(stream_data.get_stream_privacy_policy(social.stream_id), "invite-only");
@@ -95,8 +95,8 @@ test("basics", () => {
"invite-only-public-history",
);
- assert(stream_data.get_invite_only("social"));
- assert(!stream_data.get_invite_only("unknown"));
+ assert.ok(stream_data.get_invite_only("social"));
+ assert.ok(!stream_data.get_invite_only("unknown"));
assert.equal(stream_data.get_color("social"), "red");
assert.equal(stream_data.get_color("unknown"), "#c2c2c2");
@@ -104,17 +104,17 @@ test("basics", () => {
assert.equal(stream_data.get_name("denMARK"), "Denmark");
assert.equal(stream_data.get_name("unknown Stream"), "unknown Stream");
- assert(!stream_data.is_muted(social.stream_id));
- assert(stream_data.is_muted(denmark.stream_id));
+ assert.ok(!stream_data.is_muted(social.stream_id));
+ assert.ok(stream_data.is_muted(denmark.stream_id));
assert.equal(stream_data.maybe_get_stream_name(), undefined);
assert.equal(stream_data.maybe_get_stream_name(social.stream_id), "social");
assert.equal(stream_data.maybe_get_stream_name(42), undefined);
stream_data.set_realm_default_streams([denmark]);
- assert(stream_data.is_default_stream_id(denmark.stream_id));
- assert(!stream_data.is_default_stream_id(social.stream_id));
- assert(!stream_data.is_default_stream_id(999999));
+ assert.ok(stream_data.is_default_stream_id(denmark.stream_id));
+ assert.ok(!stream_data.is_default_stream_id(social.stream_id));
+ assert.ok(!stream_data.is_default_stream_id(999999));
assert.equal(stream_data.slug_to_name("2-social"), "social");
assert.equal(stream_data.slug_to_name("2-whatever"), "social");
@@ -167,19 +167,19 @@ test("is_active", () => {
sub = {name: "pets", subscribed: false, stream_id: 111};
stream_data.add_sub(sub);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
stream_data.subscribe_myself(sub);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
- assert(contains_sub(stream_data.subscribed_subs(), sub));
- assert(!contains_sub(stream_data.unsubscribed_subs(), sub));
+ assert.ok(contains_sub(stream_data.subscribed_subs(), sub));
+ assert.ok(!contains_sub(stream_data.unsubscribed_subs(), sub));
stream_data.unsubscribe_myself(sub);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
sub.pin_to_top = true;
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
sub.pin_to_top = false;
const opts = {
@@ -189,7 +189,7 @@ test("is_active", () => {
};
stream_topic_history.add_message(opts);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
page_params.demote_inactive_streams =
settings_config.demote_inactive_streams_values.always.code;
@@ -198,26 +198,26 @@ test("is_active", () => {
sub = {name: "pets", subscribed: false, stream_id: 111};
stream_data.add_sub(sub);
- assert(!stream_data.is_active(sub));
+ assert.ok(!stream_data.is_active(sub));
sub.pin_to_top = true;
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
sub.pin_to_top = false;
stream_data.subscribe_myself(sub);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
stream_data.unsubscribe_myself(sub);
- assert(!stream_data.is_active(sub));
+ assert.ok(!stream_data.is_active(sub));
sub = {name: "lunch", subscribed: false, stream_id: 222};
stream_data.add_sub(sub);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
stream_topic_history.add_message(opts);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
page_params.demote_inactive_streams = settings_config.demote_inactive_streams_values.never.code;
stream_data.set_filter_out_inactives();
@@ -225,20 +225,20 @@ test("is_active", () => {
sub = {name: "pets", subscribed: false, stream_id: 111};
stream_data.add_sub(sub);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
stream_data.subscribe_myself(sub);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
stream_data.unsubscribe_myself(sub);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
sub.pin_to_top = true;
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
stream_topic_history.add_message(opts);
- assert(stream_data.is_active(sub));
+ assert.ok(stream_data.is_active(sub));
});
test("admin_options", () => {
@@ -266,8 +266,8 @@ test("admin_options", () => {
// non-admins can't do anything
page_params.is_admin = false;
let sub = make_sub();
- assert(!is_realm_admin(sub));
- assert(!can_change_stream_permissions(sub));
+ assert.ok(!is_realm_admin(sub));
+ assert.ok(!can_change_stream_permissions(sub));
// just a sanity check that we leave "normal" fields alone
assert.equal(sub.color, "blue");
@@ -277,22 +277,22 @@ test("admin_options", () => {
// admins can make public streams become private
sub = make_sub();
- assert(is_realm_admin(sub));
- assert(can_change_stream_permissions(sub));
+ assert.ok(is_realm_admin(sub));
+ assert.ok(can_change_stream_permissions(sub));
// admins can only make private streams become public
// if they are subscribed
sub = make_sub();
sub.invite_only = true;
sub.subscribed = false;
- assert(is_realm_admin(sub));
- assert(!can_change_stream_permissions(sub));
+ assert.ok(is_realm_admin(sub));
+ assert.ok(!can_change_stream_permissions(sub));
sub = make_sub();
sub.invite_only = true;
sub.subscribed = true;
- assert(is_realm_admin(sub));
- assert(can_change_stream_permissions(sub));
+ assert.ok(is_realm_admin(sub));
+ assert.ok(can_change_stream_permissions(sub));
});
test("stream_settings", () => {
@@ -417,14 +417,14 @@ test("delete_sub", () => {
stream_data.add_sub(canada);
- assert(stream_data.is_subscribed("Canada"));
+ assert.ok(stream_data.is_subscribed("Canada"));
assert.equal(stream_data.get_sub("Canada").stream_id, canada.stream_id);
assert.equal(sub_store.get(canada.stream_id).name, "Canada");
stream_data.delete_sub(canada.stream_id);
- assert(!stream_data.is_subscribed("Canada"));
- assert(!stream_data.get_sub("Canada"));
- assert(!sub_store.get(canada.stream_id));
+ assert.ok(!stream_data.is_subscribed("Canada"));
+ assert.ok(!stream_data.get_sub("Canada"));
+ assert.ok(!sub_store.get(canada.stream_id));
blueslip.expect("warn", "Failed to archive stream 99999");
stream_data.delete_sub(99999);
@@ -445,60 +445,60 @@ test("notifications", () => {
};
stream_data.add_sub(india);
- assert(!stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
- assert(!stream_data.receives_notifications(india.stream_id, "audible_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "audible_notifications"));
page_params.enable_stream_desktop_notifications = true;
page_params.enable_stream_audible_notifications = true;
- assert(stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
- assert(stream_data.receives_notifications(india.stream_id, "audible_notifications"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "audible_notifications"));
page_params.enable_stream_desktop_notifications = false;
page_params.enable_stream_audible_notifications = false;
- assert(!stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
- assert(!stream_data.receives_notifications(india.stream_id, "audible_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "audible_notifications"));
india.desktop_notifications = true;
india.audible_notifications = true;
- assert(stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
- assert(stream_data.receives_notifications(india.stream_id, "audible_notifications"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "audible_notifications"));
india.desktop_notifications = false;
india.audible_notifications = false;
page_params.enable_stream_desktop_notifications = true;
page_params.enable_stream_audible_notifications = true;
- assert(!stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
- assert(!stream_data.receives_notifications(india.stream_id, "audible_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "desktop_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "audible_notifications"));
page_params.wildcard_mentions_notify = true;
- assert(stream_data.receives_notifications(india.stream_id, "wildcard_mentions_notify"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "wildcard_mentions_notify"));
page_params.wildcard_mentions_notify = false;
- assert(!stream_data.receives_notifications(india.stream_id, "wildcard_mentions_notify"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "wildcard_mentions_notify"));
india.wildcard_mentions_notify = true;
- assert(stream_data.receives_notifications(india.stream_id, "wildcard_mentions_notify"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "wildcard_mentions_notify"));
page_params.wildcard_mentions_notify = true;
india.wildcard_mentions_notify = false;
- assert(!stream_data.receives_notifications(india.stream_id, "wildcard_mentions_notify"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "wildcard_mentions_notify"));
page_params.enable_stream_push_notifications = true;
- assert(stream_data.receives_notifications(india.stream_id, "push_notifications"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "push_notifications"));
page_params.enable_stream_push_notifications = false;
- assert(!stream_data.receives_notifications(india.stream_id, "push_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "push_notifications"));
india.push_notifications = true;
- assert(stream_data.receives_notifications(india.stream_id, "push_notifications"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "push_notifications"));
page_params.enable_stream_push_notifications = true;
india.push_notifications = false;
- assert(!stream_data.receives_notifications(india.stream_id, "push_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "push_notifications"));
page_params.enable_stream_email_notifications = true;
- assert(stream_data.receives_notifications(india.stream_id, "email_notifications"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "email_notifications"));
page_params.enable_stream_email_notifications = false;
- assert(!stream_data.receives_notifications(india.stream_id, "email_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "email_notifications"));
india.email_notifications = true;
- assert(stream_data.receives_notifications(india.stream_id, "email_notifications"));
+ assert.ok(stream_data.receives_notifications(india.stream_id, "email_notifications"));
page_params.enable_stream_email_notifications = true;
india.email_notifications = false;
- assert(!stream_data.receives_notifications(india.stream_id, "email_notifications"));
+ assert.ok(!stream_data.receives_notifications(india.stream_id, "email_notifications"));
const canada = {
stream_id: 103,
@@ -580,7 +580,7 @@ test("notifications", () => {
assert.deepEqual(unmatched_streams, expected_streams);
// Get line coverage on defensive code with bogus stream_id.
- assert(!stream_data.receives_notifications(999999));
+ assert.ok(!stream_data.receives_notifications(999999));
});
const tony = {
@@ -600,9 +600,9 @@ const jazy = {
test("is_muted", () => {
stream_data.add_sub(tony);
stream_data.add_sub(jazy);
- assert(!stream_data.is_stream_muted_by_name("tony"));
- assert(stream_data.is_stream_muted_by_name("jazy"));
- assert(stream_data.is_stream_muted_by_name("EEXISTS"));
+ assert.ok(!stream_data.is_stream_muted_by_name("tony"));
+ assert.ok(stream_data.is_stream_muted_by_name("jazy"));
+ assert.ok(stream_data.is_stream_muted_by_name("EEXISTS"));
});
test("is_notifications_stream_muted", () => {
@@ -610,17 +610,17 @@ test("is_notifications_stream_muted", () => {
stream_data.add_sub(jazy);
page_params.realm_notifications_stream_id = tony.stream_id;
- assert(!stream_data.is_notifications_stream_muted());
+ assert.ok(!stream_data.is_notifications_stream_muted());
page_params.realm_notifications_stream_id = jazy.stream_id;
- assert(stream_data.is_notifications_stream_muted());
+ assert.ok(stream_data.is_notifications_stream_muted());
});
test("realm_has_notifications_stream", () => {
page_params.realm_notifications_stream_id = 10;
- assert(stream_data.realm_has_notifications_stream());
+ assert.ok(stream_data.realm_has_notifications_stream());
page_params.realm_notifications_stream_id = -1;
- assert(!stream_data.realm_has_notifications_stream());
+ assert.ok(!stream_data.realm_has_notifications_stream());
});
test("remove_default_stream", () => {
@@ -634,7 +634,7 @@ test("remove_default_stream", () => {
stream_data.add_sub(remove_me);
stream_data.set_realm_default_streams([remove_me]);
stream_data.remove_default_stream(remove_me.stream_id);
- assert(!stream_data.is_default_stream_id(remove_me.stream_id));
+ assert.ok(!stream_data.is_default_stream_id(remove_me.stream_id));
});
test("canonicalized_name", () => {
@@ -663,7 +663,7 @@ test("create_sub", (override) => {
override(color_data, "pick_color", () => "#bd86e5");
const india_sub = stream_data.create_sub_from_server_data(india);
- assert(india_sub);
+ assert.ok(india_sub);
assert.equal(india_sub.color, "#bd86e5");
const new_sub = stream_data.create_sub_from_server_data(india);
// make sure sub doesn't get created twice
@@ -677,7 +677,7 @@ test("create_sub", (override) => {
);
const antarctica_sub = stream_data.create_sub_from_server_data(antarctica);
- assert(antarctica_sub);
+ assert.ok(antarctica_sub);
assert.equal(antarctica_sub.color, "#76ce90");
});
@@ -719,12 +719,12 @@ test("initialize", () => {
page_params.realm_notifications_stream_id = -1;
initialize();
- assert(!stream_data.is_filtering_inactives());
+ assert.ok(!stream_data.is_filtering_inactives());
const stream_names = new Set(stream_data.get_streams_for_admin().map((elem) => elem.name));
- assert(stream_names.has("subscriptions"));
- assert(stream_names.has("unsubscribed"));
- assert(stream_names.has("never_subscribed"));
+ assert.ok(stream_names.has("subscriptions"));
+ assert.ok(stream_names.has("unsubscribed"));
+ assert.ok(stream_names.has("never_subscribed"));
assert.equal(stream_data.get_notifications_stream(), "");
// Simulate a private stream the user isn't subscribed to
@@ -755,7 +755,7 @@ test("filter inactives", () => {
params.realm_default_streams = [];
stream_data.initialize(params);
- assert(!stream_data.is_filtering_inactives());
+ assert.ok(!stream_data.is_filtering_inactives());
_.times(30, (i) => {
const name = "random" + i.toString();
@@ -770,7 +770,7 @@ test("filter inactives", () => {
stream_data.add_sub(sub);
});
stream_data.initialize(params);
- assert(stream_data.is_filtering_inactives());
+ assert.ok(stream_data.is_filtering_inactives());
});
test("edge_cases", () => {
diff --git a/frontend_tests/node_tests/stream_edit.js b/frontend_tests/node_tests/stream_edit.js
--- a/frontend_tests/node_tests/stream_edit.js
+++ b/frontend_tests/node_tests/stream_edit.js
@@ -155,9 +155,9 @@ test_ui("subscriber_pills", (override) => {
input_field_stub.typeahead = (config) => {
assert.equal(config.items, 5);
- assert(config.fixed);
- assert(config.dropup);
- assert(config.stopAdvance);
+ assert.ok(config.fixed);
+ assert.ok(config.dropup);
+ assert.ok(config.stopAdvance);
assert.equal(typeof config.source, "function");
assert.equal(typeof config.highlighter, "function");
@@ -191,19 +191,19 @@ test_ui("subscriber_pills", (override) => {
(function test_matcher() {
let result = config.matcher.call(fake_stream_this, denmark);
- assert(result);
+ assert.ok(result);
result = config.matcher.call(fake_stream_this, sweden);
- assert(!result);
+ assert.ok(!result);
result = config.matcher.call(fake_group_this, testers);
- assert(result);
+ assert.ok(result);
result = config.matcher.call(fake_group_this, admins);
- assert(!result);
+ assert.ok(!result);
result = config.matcher.call(fake_person_this, me);
- assert(result);
+ assert.ok(result);
result = config.matcher.call(fake_person_this, jill);
- assert(!result);
+ assert.ok(!result);
})();
(function test_sorter() {
@@ -212,18 +212,18 @@ test_ui("subscriber_pills", (override) => {
sort_streams_called = true;
};
config.sorter.call(fake_stream_this);
- assert(sort_streams_called);
+ assert.ok(sort_streams_called);
let sort_recipients_called = false;
typeahead_helper.sort_recipients = function () {
sort_recipients_called = true;
};
config.sorter.call(fake_group_this, [testers]);
- assert(sort_recipients_called);
+ assert.ok(sort_recipients_called);
sort_recipients_called = false;
config.sorter.call(fake_person_this, [me]);
- assert(sort_recipients_called);
+ assert.ok(sort_recipients_called);
})();
(function test_updater() {
@@ -270,8 +270,8 @@ test_ui("subscriber_pills", (override) => {
let fake_this = $subscription_settings;
let event = {target: fake_this};
stream_row_handler.call(fake_this, event);
- assert(template_rendered);
- assert(input_typeahead_called);
+ assert.ok(template_rendered);
+ assert.ok(input_typeahead_called);
let add_subscribers_handler = $(subscriptions_table_selector).get_on_handler(
"submit",
@@ -314,13 +314,13 @@ test_ui("subscriber_pills", (override) => {
stream_pill.get_user_ids = () => [];
add_subscribers_request = false;
add_subscribers_handler(event);
- assert(!add_subscribers_request);
+ assert.ok(!add_subscribers_request);
// No request is sent if we try to subscribe ourselves
// only and are already subscribed to the stream.
override(user_pill, "get_user_ids", () => [me.user_id]);
add_subscribers_handler(event);
- assert(!add_subscribers_request);
+ assert.ok(!add_subscribers_request);
// Denmark stream pill and fred and mark user pills are created.
// But only one request for mark is sent even though a mark user
diff --git a/frontend_tests/node_tests/stream_events.js b/frontend_tests/node_tests/stream_events.js
--- a/frontend_tests/node_tests/stream_events.js
+++ b/frontend_tests/node_tests/stream_events.js
@@ -341,12 +341,12 @@ test("marked_subscribed (emails)", (override) => {
const subs_stub = make_stub();
override(subs, "update_settings_for_subscribed", subs_stub.f);
- assert(!stream_data.is_subscribed(sub.name));
+ assert.ok(!stream_data.is_subscribed(sub.name));
const user_ids = [15, 20, 25, me.user_id];
stream_events.mark_subscribed(sub, user_ids, "");
assert.deepEqual(new Set(peer_data.get_subscribers(sub.stream_id)), new Set(user_ids));
- assert(stream_data.is_subscribed(sub.name));
+ assert.ok(stream_data.is_subscribed(sub.name));
const args = subs_stub.get_args("sub");
assert.deepEqual(sub, args.sub);
@@ -355,7 +355,7 @@ test("marked_subscribed (emails)", (override) => {
test("mark_unsubscribed (update_settings_for_unsubscribed)", (override) => {
// Test unsubscribe
const sub = {...dev_help};
- assert(sub.subscribed);
+ assert.ok(sub.subscribed);
const stub = make_stub();
@@ -394,11 +394,11 @@ test("remove_deactivated_user_from_all_streams", () => {
subs.update_subscribers_ui = subs_stub.f;
// assert starting state
- assert(!stream_data.is_user_subscribed(dev_help.stream_id, george.user_id));
+ assert.ok(!stream_data.is_user_subscribed(dev_help.stream_id, george.user_id));
// verify that deactivating user should unsubscribe user from all streams
peer_data.add_subscriber(dev_help.stream_id, george.user_id);
- assert(stream_data.is_user_subscribed(dev_help.stream_id, george.user_id));
+ assert.ok(stream_data.is_user_subscribed(dev_help.stream_id, george.user_id));
stream_events.remove_deactivated_user_from_all_streams(george.user_id);
diff --git a/frontend_tests/node_tests/stream_list.js b/frontend_tests/node_tests/stream_list.js
--- a/frontend_tests/node_tests/stream_list.js
+++ b/frontend_tests/node_tests/stream_list.js
@@ -124,7 +124,7 @@ test_ui("create_sidebar_row", (override) => {
stream_list.build_stream_list();
- assert(topic_list_cleared);
+ assert.ok(topic_list_cleared);
const expected_elems = [
devel_sidebar, // pinned
@@ -154,19 +154,19 @@ test_ui("create_sidebar_row", (override) => {
assert.equal(privacy_elem.html(), "<div>privacy-html");
stream_list.set_in_home_view(stream_id, false);
- assert(social_li.hasClass("out_of_home_view"));
+ assert.ok(social_li.hasClass("out_of_home_view"));
stream_list.set_in_home_view(stream_id, true);
- assert(!social_li.hasClass("out_of_home_view"));
+ assert.ok(!social_li.hasClass("out_of_home_view"));
const row = stream_list.stream_sidebar.get_row(stream_id);
override(stream_data, "is_active", () => true);
row.update_whether_active();
- assert(!social_li.hasClass("inactive_stream"));
+ assert.ok(!social_li.hasClass("inactive_stream"));
override(stream_data, "is_active", () => false);
row.update_whether_active();
- assert(social_li.hasClass("inactive_stream"));
+ assert.ok(social_li.hasClass("inactive_stream"));
let removed;
social_li.remove = () => {
@@ -174,7 +174,7 @@ test_ui("create_sidebar_row", (override) => {
};
row.remove();
- assert(removed);
+ assert.ok(removed);
});
test_ui("pinned_streams_never_inactive", (override) => {
@@ -193,15 +193,15 @@ test_ui("pinned_streams_never_inactive", (override) => {
override(stream_data, "is_active", () => false);
stream_list.build_stream_list();
- assert(social_sidebar.hasClass("inactive_stream"));
+ assert.ok(social_sidebar.hasClass("inactive_stream"));
override(stream_data, "is_active", () => true);
row.update_whether_active();
- assert(!social_sidebar.hasClass("inactive_stream"));
+ assert.ok(!social_sidebar.hasClass("inactive_stream"));
override(stream_data, "is_active", () => false);
row.update_whether_active();
- assert(social_sidebar.hasClass("inactive_stream"));
+ assert.ok(social_sidebar.hasClass("inactive_stream"));
// pinned streams can never be made inactive
const devel_sidebar = $("<devel sidebar row>");
@@ -210,10 +210,10 @@ test_ui("pinned_streams_never_inactive", (override) => {
override(stream_data, "is_active", () => false);
stream_list.build_stream_list();
- assert(!devel_sidebar.hasClass("inactive_stream"));
+ assert.ok(!devel_sidebar.hasClass("inactive_stream"));
row.update_whether_active();
- assert(!devel_sidebar.hasClass("inactive_stream"));
+ assert.ok(!devel_sidebar.hasClass("inactive_stream"));
});
function add_row(sub) {
@@ -303,8 +303,8 @@ test_ui("zoom_in_and_zoom_out", () => {
label1.show();
label2.show();
- assert(label1.visible());
- assert(label2.visible());
+ assert.ok(label1.visible());
+ assert.ok(label2.visible());
$.create(".stream-filters-label", {
children: [elem(label1), elem(label2)],
@@ -313,7 +313,7 @@ test_ui("zoom_in_and_zoom_out", () => {
const splitter = $.create("hr stub");
splitter.show();
- assert(splitter.visible());
+ assert.ok(splitter.visible());
$.create(".stream-split", {
children: [elem(splitter)],
@@ -344,12 +344,12 @@ test_ui("zoom_in_and_zoom_out", () => {
stream_list.zoom_in_topics({stream_id: 42});
- assert(!label1.visible());
- assert(!label2.visible());
- assert(!splitter.visible());
- assert(stream_li1.visible());
- assert(!stream_li2.visible());
- assert($("#streams_list").hasClass("zoom-in"));
+ assert.ok(!label1.visible());
+ assert.ok(!label2.visible());
+ assert.ok(!splitter.visible());
+ assert.ok(stream_li1.visible());
+ assert.ok(!stream_li2.visible());
+ assert.ok($("#streams_list").hasClass("zoom-in"));
$("#stream_filters li.narrow-filter").show = () => {
stream_li1.show();
@@ -359,12 +359,12 @@ test_ui("zoom_in_and_zoom_out", () => {
stream_li1.length = 1;
stream_list.zoom_out_topics({stream_li: stream_li1});
- assert(label1.visible());
- assert(label2.visible());
- assert(splitter.visible());
- assert(stream_li1.visible());
- assert(stream_li2.visible());
- assert($("#streams_list").hasClass("zoom-out"));
+ assert.ok(label1.visible());
+ assert.ok(label2.visible());
+ assert.ok(splitter.visible());
+ assert.ok(stream_li1.visible());
+ assert.ok(stream_li2.visible());
+ assert.ok($("#streams_list").hasClass("zoom-out"));
});
test_ui("narrowing", (override) => {
@@ -376,7 +376,7 @@ test_ui("narrowing", (override) => {
topic_list.get_stream_li = noop;
override(scroll_util, "scroll_element_into_container", noop);
- assert(!$("<devel sidebar row html>").hasClass("active-filter"));
+ assert.ok(!$("<devel sidebar row html>").hasClass("active-filter"));
stream_list.set_event_handlers();
@@ -384,20 +384,20 @@ test_ui("narrowing", (override) => {
filter = new Filter([{operator: "stream", operand: "devel"}]);
stream_list.handle_narrow_activated(filter);
- assert($("<devel sidebar row html>").hasClass("active-filter"));
+ assert.ok($("<devel sidebar row html>").hasClass("active-filter"));
filter = new Filter([
{operator: "stream", operand: "cars"},
{operator: "topic", operand: "sedans"},
]);
stream_list.handle_narrow_activated(filter);
- assert(!$("ul.filters li").hasClass("active-filter"));
- assert(!$("<cars sidebar row html>").hasClass("active-filter")); // false because of topic
+ assert.ok(!$("ul.filters li").hasClass("active-filter"));
+ assert.ok(!$("<cars sidebar row html>").hasClass("active-filter")); // false because of topic
filter = new Filter([{operator: "stream", operand: "cars"}]);
stream_list.handle_narrow_activated(filter);
- assert(!$("ul.filters li").hasClass("active-filter"));
- assert($("<cars sidebar row html>").hasClass("active-filter"));
+ assert.ok(!$("ul.filters li").hasClass("active-filter"));
+ assert.ok($("<cars sidebar row html>").hasClass("active-filter"));
let removed_classes;
$("ul#stream_filters li").removeClass = (classes) => {
@@ -411,7 +411,7 @@ test_ui("narrowing", (override) => {
stream_list.handle_narrow_deactivated();
assert.equal(removed_classes, "active-filter");
- assert(topics_closed);
+ assert.ok(topics_closed);
});
test_ui("focusout_user_filter", () => {
@@ -482,9 +482,9 @@ test_ui("sort_streams", (override) => {
const denmark_sub = stream_data.get_sub("Denmark");
const stream_id = denmark_sub.stream_id;
- assert(stream_list.stream_sidebar.has_row_for(stream_id));
+ assert.ok(stream_list.stream_sidebar.has_row_for(stream_id));
stream_list.remove_sidebar_row(stream_id);
- assert(!stream_list.stream_sidebar.has_row_for(stream_id));
+ assert.ok(!stream_list.stream_sidebar.has_row_for(stream_id));
});
test_ui("separators_only_pinned_and_dormant", (override) => {
@@ -621,7 +621,7 @@ test_ui("rename_stream", (override) => {
});
stream_list.rename_stream(sub);
- assert(count_updated);
+ assert.ok(count_updated);
});
test_ui("refresh_pin", (override) => {
@@ -658,7 +658,7 @@ test_ui("refresh_pin", (override) => {
});
stream_list.refresh_pinned_or_unpinned_stream(pinned_sub);
- assert(scrolled);
+ assert.ok(scrolled);
});
test_ui("create_initial_sidebar_rows", (override) => {
diff --git a/frontend_tests/node_tests/stream_search.js b/frontend_tests/node_tests/stream_search.js
--- a/frontend_tests/node_tests/stream_search.js
+++ b/frontend_tests/node_tests/stream_search.js
@@ -74,24 +74,24 @@ run_test("basics", (override) => {
cursor_helper = make_cursor_helper();
function verify_expanded() {
- assert(!section.hasClass("notdisplayed"));
+ assert.ok(!section.hasClass("notdisplayed"));
simulate_search_expanded();
}
function verify_focused() {
- assert(stream_list.searching());
- assert(input.is_focused());
+ assert.ok(stream_list.searching());
+ assert.ok(input.is_focused());
}
function verify_blurred() {
- assert(stream_list.searching());
- assert(input.is_focused());
+ assert.ok(stream_list.searching());
+ assert.ok(input.is_focused());
}
function verify_collapsed() {
- assert(section.hasClass("notdisplayed"));
- assert(!input.is_focused());
- assert(!stream_list.searching());
+ assert.ok(section.hasClass("notdisplayed"));
+ assert.ok(!input.is_focused());
+ assert.ok(!stream_list.searching());
simulate_search_collapsed();
}
@@ -102,7 +102,7 @@ run_test("basics", (override) => {
});
f();
- assert(updated);
+ assert.ok(updated);
}
// Initiate search (so expand widget).
diff --git a/frontend_tests/node_tests/stream_topic_history.js b/frontend_tests/node_tests/stream_topic_history.js
--- a/frontend_tests/node_tests/stream_topic_history.js
+++ b/frontend_tests/node_tests/stream_topic_history.js
@@ -320,7 +320,7 @@ test("server_history_end_to_end", () => {
get_success_callback({topics});
- assert(on_success_called);
+ assert.ok(on_success_called);
const history = stream_topic_history.get_recent_topic_names(stream_id);
assert.deepEqual(history, ["topic3", "topic2", "topic1"]);
@@ -335,7 +335,7 @@ test("server_history_end_to_end", () => {
stream_topic_history_util.get_server_history(stream_id, () => {
on_success_called = true;
});
- assert(on_success_called);
+ assert.ok(on_success_called);
});
test("all_topics_in_cache", (override) => {
diff --git a/frontend_tests/node_tests/submessage.js b/frontend_tests/node_tests/submessage.js
--- a/frontend_tests/node_tests/submessage.js
+++ b/frontend_tests/node_tests/submessage.js
@@ -63,7 +63,7 @@ run_test("make_server_callback", () => {
data: {foo: 32},
});
- assert(was_posted);
+ assert.ok(was_posted);
});
run_test("handle_event", () => {
diff --git a/frontend_tests/node_tests/subs.js b/frontend_tests/node_tests/subs.js
--- a/frontend_tests/node_tests/subs.js
+++ b/frontend_tests/node_tests/subs.js
@@ -115,7 +115,7 @@ run_test("redraw_left_panel", () => {
// on our current stream, even if it doesn't match the filter.
const denmark_row = $(`.stream-row[data-stream-id='${CSS.escape(denmark_stream_id)}']`);
// sanity check it's not set to active
- assert(!denmark_row.hasClass("active"));
+ assert.ok(!denmark_row.hasClass("active"));
function test_filter(params, expected_streams) {
const stream_ids = subs.redraw_left_panel(params);
@@ -127,10 +127,10 @@ run_test("redraw_left_panel", () => {
// Search with single keyword
test_filter({input: "Po", subscribed_only: false}, [poland, pomona]);
- assert(ui_called);
+ assert.ok(ui_called);
// The denmark row is active, even though it's not displayed.
- assert(denmark_row.hasClass("active"));
+ assert.ok(denmark_row.hasClass("active"));
// Search with multiple keywords
test_filter({input: "Denmark, Pol", subscribed_only: false}, [denmark, poland]);
@@ -198,7 +198,7 @@ run_test("redraw_left_panel", () => {
};
test_filter({input: "d", subscribed_only: true}, [poland]);
- assert(!$(".stream-row-denmark").hasClass("active"));
- assert(!$(".right .settings").visible());
- assert($(".nothing-selected").visible());
+ assert.ok(!$(".stream-row-denmark").hasClass("active"));
+ assert.ok(!$(".right .settings").visible());
+ assert.ok($(".nothing-selected").visible());
});
diff --git a/frontend_tests/node_tests/support.js b/frontend_tests/node_tests/support.js
--- a/frontend_tests/node_tests/support.js
+++ b/frontend_tests/node_tests/support.js
@@ -38,7 +38,7 @@ run_test("scrub_realm", () => {
window.prompt = () => "zulip";
click_handler.call(fake_this, event);
- assert(submit_form_called);
+ assert.ok(submit_form_called);
submit_form_called = false;
window.prompt = () => "invalid-string-id";
@@ -47,8 +47,8 @@ run_test("scrub_realm", () => {
alert_called = true;
};
click_handler.call(fake_this, event);
- assert(!submit_form_called);
- assert(alert_called);
+ assert.ok(!submit_form_called);
+ assert.ok(alert_called);
assert.equal(typeof click_handler, "function");
diff --git a/frontend_tests/node_tests/top_left_corner.js b/frontend_tests/node_tests/top_left_corner.js
--- a/frontend_tests/node_tests/top_left_corner.js
+++ b/frontend_tests/node_tests/top_left_corner.js
@@ -29,10 +29,10 @@ run_test("narrowing", (override) => {
pm_expanded = true;
});
- assert(!pm_expanded);
+ assert.ok(!pm_expanded);
let filter = new Filter([{operator: "is", operand: "private"}]);
top_left_corner.handle_narrow_activated(filter);
- assert(pm_expanded);
+ assert.ok(pm_expanded);
const alice = {
email: "[email protected]",
@@ -51,51 +51,51 @@ run_test("narrowing", (override) => {
pm_expanded = false;
filter = new Filter([{operator: "pm-with", operand: "[email protected]"}]);
top_left_corner.handle_narrow_activated(filter);
- assert(pm_expanded);
+ assert.ok(pm_expanded);
pm_expanded = false;
filter = new Filter([{operator: "pm-with", operand: "[email protected],[email protected]"}]);
top_left_corner.handle_narrow_activated(filter);
- assert(pm_expanded);
+ assert.ok(pm_expanded);
pm_expanded = false;
filter = new Filter([{operator: "pm-with", operand: "[email protected]"}]);
top_left_corner.handle_narrow_activated(filter);
- assert(!pm_expanded);
+ assert.ok(!pm_expanded);
filter = new Filter([{operator: "is", operand: "mentioned"}]);
top_left_corner.handle_narrow_activated(filter);
- assert($(".top_left_mentions").hasClass("active-filter"));
+ assert.ok($(".top_left_mentions").hasClass("active-filter"));
filter = new Filter([{operator: "is", operand: "starred"}]);
top_left_corner.handle_narrow_activated(filter);
- assert($(".top_left_starred_messages").hasClass("active-filter"));
+ assert.ok($(".top_left_starred_messages").hasClass("active-filter"));
filter = new Filter([{operator: "in", operand: "home"}]);
top_left_corner.handle_narrow_activated(filter);
- assert($(".top_left_all_messages").hasClass("active-filter"));
+ assert.ok($(".top_left_all_messages").hasClass("active-filter"));
// deactivating narrow
pm_closed = false;
top_left_corner.handle_narrow_deactivated();
- assert($(".top_left_all_messages").hasClass("active-filter"));
- assert(!$(".top_left_mentions").hasClass("active-filter"));
- assert(!$(".top_left_private_messages").hasClass("active-filter"));
- assert(!$(".top_left_starred_messages").hasClass("active-filter"));
- assert(!$(".top_left_recent_topics").hasClass("active-filter"));
- assert(pm_closed);
+ assert.ok($(".top_left_all_messages").hasClass("active-filter"));
+ assert.ok(!$(".top_left_mentions").hasClass("active-filter"));
+ assert.ok(!$(".top_left_private_messages").hasClass("active-filter"));
+ assert.ok(!$(".top_left_starred_messages").hasClass("active-filter"));
+ assert.ok(!$(".top_left_recent_topics").hasClass("active-filter"));
+ assert.ok(pm_closed);
set_global("setTimeout", (f) => {
f();
});
top_left_corner.narrow_to_recent_topics();
- assert(!$(".top_left_all_messages").hasClass("active-filter"));
- assert(!$(".top_left_mentions").hasClass("active-filter"));
- assert(!$(".top_left_private_messages").hasClass("active-filter"));
- assert(!$(".top_left_starred_messages").hasClass("active-filter"));
- assert($(".top_left_recent_topics").hasClass("active-filter"));
+ assert.ok(!$(".top_left_all_messages").hasClass("active-filter"));
+ assert.ok(!$(".top_left_mentions").hasClass("active-filter"));
+ assert.ok(!$(".top_left_private_messages").hasClass("active-filter"));
+ assert.ok(!$(".top_left_starred_messages").hasClass("active-filter"));
+ assert.ok($(".top_left_recent_topics").hasClass("active-filter"));
});
run_test("update_count_in_dom", () => {
@@ -129,7 +129,7 @@ run_test("update_count_in_dom", () => {
top_left_corner.update_dom_with_unread_counts(counts);
top_left_corner.update_starred_count(0);
- assert(!$("<mentioned-count>").visible());
+ assert.ok(!$("<mentioned-count>").visible());
assert.equal($("<mentioned-count>").text(), "");
assert.equal($("<starred-count>").text(), "");
});
diff --git a/frontend_tests/node_tests/transmit.js b/frontend_tests/node_tests/transmit.js
--- a/frontend_tests/node_tests/transmit.js
+++ b/frontend_tests/node_tests/transmit.js
@@ -36,7 +36,7 @@ run_test("transmit_message_ajax", () => {
transmit.send_message(request, success);
- assert(success_func_called);
+ assert.ok(success_func_called);
channel.xhr_error_message = (msg) => {
assert.equal(msg, "Error sending message");
@@ -56,7 +56,7 @@ run_test("transmit_message_ajax", () => {
error_func_called = true;
};
transmit.send_message(request, success, error);
- assert(error_func_called);
+ assert.ok(error_func_called);
});
run_test("transmit_message_ajax_reload_pending", () => {
@@ -94,8 +94,8 @@ run_test("transmit_message_ajax_reload_pending", () => {
opts.error(xhr, "bad request");
};
transmit.send_message(request, success, error);
- assert(!error_func_called);
- assert(reload_initiated);
+ assert.ok(!error_func_called);
+ assert.ok(reload_initiated);
});
run_test("reply_message_stream", (override) => {
diff --git a/frontend_tests/node_tests/typeahead_helper.js b/frontend_tests/node_tests/typeahead_helper.js
--- a/frontend_tests/node_tests/typeahead_helper.js
+++ b/frontend_tests/node_tests/typeahead_helper.js
@@ -612,7 +612,7 @@ test("render_person when emails hidden", () => {
return "typeahead-item-stub";
});
assert.equal(th.render_person(b_user_1), "typeahead-item-stub");
- assert(rendered);
+ assert.ok(rendered);
});
test("render_person", () => {
@@ -627,7 +627,7 @@ test("render_person", () => {
return "typeahead-item-stub";
});
assert.equal(th.render_person(a_user), "typeahead-item-stub");
- assert(rendered);
+ assert.ok(rendered);
});
test("render_person special_item_text", () => {
@@ -651,7 +651,7 @@ test("render_person special_item_text", () => {
return "typeahead-item-stub";
});
assert.equal(th.render_person(special_person), "typeahead-item-stub");
- assert(rendered);
+ assert.ok(rendered);
});
test("render_stream", () => {
@@ -671,7 +671,7 @@ test("render_stream", () => {
return "typeahead-item-stub";
});
assert.equal(th.render_stream(stream), "typeahead-item-stub");
- assert(rendered);
+ assert.ok(rendered);
});
test("render_stream w/long description", () => {
@@ -692,7 +692,7 @@ test("render_stream w/long description", () => {
return "typeahead-item-stub";
});
assert.equal(th.render_stream(stream), "typeahead-item-stub");
- assert(rendered);
+ assert.ok(rendered);
});
test("render_emoji", () => {
@@ -721,7 +721,7 @@ test("render_emoji", () => {
return "typeahead-item-stub";
});
assert.equal(th.render_emoji(test_emoji), "typeahead-item-stub");
- assert(rendered);
+ assert.ok(rendered);
// Test render_emoji with normal emoji.
rendered = false;
@@ -743,7 +743,7 @@ test("render_emoji", () => {
return "typeahead-item-stub";
});
assert.equal(th.render_emoji(test_emoji), "typeahead-item-stub");
- assert(rendered);
+ assert.ok(rendered);
});
test("sort_slash_commands", () => {
diff --git a/frontend_tests/node_tests/typing_data.js b/frontend_tests/node_tests/typing_data.js
--- a/frontend_tests/node_tests/typing_data.js
+++ b/frontend_tests/node_tests/typing_data.js
@@ -39,21 +39,21 @@ test("basics", () => {
assert.deepEqual(typing_data.get_all_typists(), [7, 10, 15]);
// test basic removal
- assert(typing_data.remove_typist([15, 7], "7"));
+ assert.ok(typing_data.remove_typist([15, 7], "7"));
assert.deepEqual(typing_data.get_group_typists([7, 15]), [15]);
// test removing an id that is not there
- assert(!typing_data.remove_typist([15, 7], 7));
+ assert.ok(!typing_data.remove_typist([15, 7], 7));
assert.deepEqual(typing_data.get_group_typists([7, 15]), [15]);
assert.deepEqual(typing_data.get_all_typists(), [10, 15]);
// remove user from one group, but "15" will still be among
// "all typists"
- assert(typing_data.remove_typist(["15", 7], "15"));
+ assert.ok(typing_data.remove_typist(["15", 7], "15"));
assert.deepEqual(typing_data.get_all_typists(), [10, 15]);
// now remove from the other group
- assert(typing_data.remove_typist([5, 15, 10], 15));
+ assert.ok(typing_data.remove_typist([5, 15, 10], 15));
assert.deepEqual(typing_data.get_all_typists(), [10]);
// test duplicate ids in a groups
diff --git a/frontend_tests/node_tests/typing_status.js b/frontend_tests/node_tests/typing_status.js
--- a/frontend_tests/node_tests/typing_status.js
+++ b/frontend_tests/node_tests/typing_status.js
@@ -84,7 +84,7 @@ run_test("basics", (override) => {
stopped: false,
timer_cleared: false,
});
- assert(events.idle_callback);
+ assert.ok(events.idle_callback);
// type again 3 seconds later
worker.get_current_time = returns_time(8);
@@ -100,7 +100,7 @@ run_test("basics", (override) => {
stopped: false,
timer_cleared: true,
});
- assert(events.idle_callback);
+ assert.ok(events.idle_callback);
// type after 15 secs, so that we can notify the server
// again
@@ -162,7 +162,7 @@ run_test("basics", (override) => {
stopped: false,
timer_cleared: false,
});
- assert(events.idle_callback);
+ assert.ok(events.idle_callback);
// Explicitly stop alice.
call_handler(null);
@@ -192,7 +192,7 @@ run_test("basics", (override) => {
stopped: false,
timer_cleared: false,
});
- assert(events.idle_callback);
+ assert.ok(events.idle_callback);
// Switch to an invalid conversation.
call_handler(null);
@@ -236,7 +236,7 @@ run_test("basics", (override) => {
stopped: false,
timer_cleared: false,
});
- assert(events.idle_callback);
+ assert.ok(events.idle_callback);
// Switch to bob now.
worker.get_current_time = returns_time(171);
@@ -258,7 +258,7 @@ run_test("basics", (override) => {
stopped: true,
timer_cleared: true,
});
- assert(events.idle_callback);
+ assert.ok(events.idle_callback);
// test that we correctly detect if worker.get_recipient
// and typing_status.state.current_recipient are the same
diff --git a/frontend_tests/node_tests/unread.js b/frontend_tests/node_tests/unread.js
--- a/frontend_tests/node_tests/unread.js
+++ b/frontend_tests/node_tests/unread.js
@@ -119,9 +119,9 @@ test("changing_topics", () => {
count = unread.num_unread_for_topic(stream_id, "Lunch");
assert.equal(count, 2);
- assert(unread.topic_has_any_unread(stream_id, "lunch"));
- assert(!unread.topic_has_any_unread(wrong_stream_id, "lunch"));
- assert(!unread.topic_has_any_unread(stream_id, "NOT lunch"));
+ assert.ok(unread.topic_has_any_unread(stream_id, "lunch"));
+ assert.ok(!unread.topic_has_any_unread(wrong_stream_id, "lunch"));
+ assert.ok(!unread.topic_has_any_unread(stream_id, "NOT lunch"));
count = unread.num_unread_for_topic(stream_id, "NOT lunch");
assert.equal(count, 0);
@@ -149,13 +149,13 @@ test("changing_topics", () => {
count = unread.num_unread_for_topic(stream_id, "lunch");
assert.equal(count, 0);
- assert(!unread.topic_has_any_unread(stream_id, "lunch"));
- assert(!unread.topic_has_any_unread(wrong_stream_id, "lunch"));
+ assert.ok(!unread.topic_has_any_unread(stream_id, "lunch"));
+ assert.ok(!unread.topic_has_any_unread(wrong_stream_id, "lunch"));
count = unread.num_unread_for_topic(stream_id, "snack");
assert.equal(count, 1);
- assert(unread.topic_has_any_unread(stream_id, "snack"));
- assert(!unread.topic_has_any_unread(wrong_stream_id, "snack"));
+ assert.ok(unread.topic_has_any_unread(stream_id, "snack"));
+ assert.ok(!unread.topic_has_any_unread(wrong_stream_id, "snack"));
// Test defensive code. Trying to update a message we don't know
// about should be a no-op.
@@ -180,12 +180,12 @@ test("changing_topics", () => {
unread.process_loaded_messages([sticky_message]);
count = unread.num_unread_for_topic(stream_id, "sticky");
assert.equal(count, 1);
- assert(sticky_message.unread);
+ assert.ok(sticky_message.unread);
unread.mark_as_read(sticky_message.id);
count = unread.num_unread_for_topic(stream_id, "sticky");
assert.equal(count, 0);
- assert(!sticky_message.unread);
+ assert.ok(!sticky_message.unread);
event = {
topic: "sticky",
@@ -594,9 +594,9 @@ test("declare_bankruptcy", () => {
test("message_unread", () => {
// Test some code that might be overly defensive, for line coverage sake.
- assert(!unread.message_unread(undefined));
- assert(unread.message_unread({unread: true}));
- assert(!unread.message_unread({unread: false}));
+ assert.ok(!unread.message_unread(undefined));
+ assert.ok(unread.message_unread({unread: true}));
+ assert.ok(!unread.message_unread({unread: false}));
});
test("server_counts", () => {
diff --git a/frontend_tests/node_tests/upgrade.js b/frontend_tests/node_tests/upgrade.js
--- a/frontend_tests/node_tests/upgrade.js
+++ b/frontend_tests/node_tests/upgrade.js
@@ -227,15 +227,15 @@ run_test("autopay_form_fields", () => {
assert.equal(schedule_options[0].value, "monthly");
assert.equal(schedule_options[1].value, "annual");
- assert(document.querySelector("#autopay-error"));
- assert(document.querySelector("#autopay-loading"));
- assert(document.querySelector("#autopay"));
- assert(document.querySelector("#autopay-success"));
- assert(document.querySelector("#autopay_loading_indicator"));
+ assert.ok(document.querySelector("#autopay-error"));
+ assert.ok(document.querySelector("#autopay-loading"));
+ assert.ok(document.querySelector("#autopay"));
+ assert.ok(document.querySelector("#autopay-success"));
+ assert.ok(document.querySelector("#autopay_loading_indicator"));
- assert(document.querySelector("input[name=csrfmiddlewaretoken]"));
+ assert.ok(document.querySelector("input[name=csrfmiddlewaretoken]"));
- assert(document.querySelector("#free-trial-alert-message"));
+ assert.ok(document.querySelector("#free-trial-alert-message"));
});
run_test("invoice_form_fields", () => {
@@ -259,13 +259,13 @@ run_test("invoice_form_fields", () => {
assert.equal(schedule_options.length, 1);
assert.equal(schedule_options[0].value, "annual");
- assert(document.querySelector("#invoice-error"));
- assert(document.querySelector("#invoice-loading"));
- assert(document.querySelector("#invoice"));
- assert(document.querySelector("#invoice-success"));
- assert(document.querySelector("#invoice_loading_indicator"));
+ assert.ok(document.querySelector("#invoice-error"));
+ assert.ok(document.querySelector("#invoice-loading"));
+ assert.ok(document.querySelector("#invoice"));
+ assert.ok(document.querySelector("#invoice-success"));
+ assert.ok(document.querySelector("#invoice_loading_indicator"));
- assert(document.querySelector("input[name=csrfmiddlewaretoken]"));
+ assert.ok(document.querySelector("input[name=csrfmiddlewaretoken]"));
- assert(document.querySelector("#free-trial-alert-message"));
+ assert.ok(document.querySelector("#free-trial-alert-message"));
});
diff --git a/frontend_tests/node_tests/upload.js b/frontend_tests/node_tests/upload.js
--- a/frontend_tests/node_tests/upload.js
+++ b/frontend_tests/node_tests/upload.js
@@ -49,11 +49,11 @@ test("feature_check", (override) => {
const upload_button = $.create("upload-button-stub");
upload_button.addClass("notdisplayed");
upload.feature_check(upload_button);
- assert(upload_button.hasClass("notdisplayed"));
+ assert.ok(upload_button.hasClass("notdisplayed"));
override(window, "XMLHttpRequest", () => ({upload: true}));
upload.feature_check(upload_button);
- assert(!upload_button.hasClass("notdisplayed"));
+ assert.ok(!upload_button.hasClass("notdisplayed"));
});
test("make_upload_absolute", () => {
@@ -197,9 +197,9 @@ test("show_error_message", () => {
upload.show_error_message({mode: "compose"}, "Error message");
assert.equal($("#compose-send-button").prop("disabled"), false);
- assert($("#compose-send-status").hasClass("alert-error"));
+ assert.ok($("#compose-send-status").hasClass("alert-error"));
assert.equal($("#compose-send-status").hasClass("alert-info"), false);
- assert($("#compose-send-status").visible());
+ assert.ok($("#compose-send-status").visible());
assert.equal($("#compose-error-msg").text(), "Error message");
upload.show_error_message({mode: "compose"});
@@ -236,7 +236,7 @@ test("upload_files", (override) => {
const config = {mode: "compose"};
$("#compose-send-button").prop("disabled", false);
upload.upload_files(uppy, config, []);
- assert(!$("#compose-send-button").prop("disabled"));
+ assert.ok(!$("#compose-send-button").prop("disabled"));
page_params.max_file_upload_size_mib = 0;
let show_error_message_called = false;
@@ -249,7 +249,7 @@ test("upload_files", (override) => {
);
});
upload.upload_files(uppy, config, files);
- assert(show_error_message_called);
+ assert.ok(show_error_message_called);
page_params.max_file_upload_size_mib = 25;
let on_click_close_button_callback;
@@ -275,14 +275,14 @@ test("upload_files", (override) => {
$("#compose-send-status").removeClass("alert-info").hide();
$("#compose .undo_markdown_preview").show();
upload.upload_files(uppy, config, files);
- assert($("#compose-send-button").prop("disabled"));
- assert($("#compose-send-status").hasClass("alert-info"));
- assert($("#compose-send-status").visible());
+ assert.ok($("#compose-send-button").prop("disabled"));
+ assert.ok($("#compose-send-status").hasClass("alert-info"));
+ assert.ok($("#compose-send-status").visible());
assert.equal($("<p>").text(), "translated: Uploading…");
- assert(compose_ui_insert_syntax_and_focus_called);
- assert(compose_ui_autosize_textarea_called);
- assert(markdown_preview_hide_button_clicked);
- assert(uppy_add_file_called);
+ assert.ok(compose_ui_insert_syntax_and_focus_called);
+ assert.ok(compose_ui_autosize_textarea_called);
+ assert.ok(markdown_preview_hide_button_clicked);
+ assert.ok(uppy_add_file_called);
files = [
{
@@ -322,17 +322,17 @@ test("upload_files", (override) => {
assert.equal(textarea, $("#compose-textarea"));
});
on_click_close_button_callback();
- assert(uppy_cancel_all_called);
- assert(hide_upload_status_called);
- assert(compose_ui_autosize_textarea_called);
- assert(compose_ui_replace_syntax_called);
+ assert.ok(uppy_cancel_all_called);
+ assert.ok(hide_upload_status_called);
+ assert.ok(compose_ui_autosize_textarea_called);
+ assert.ok(compose_ui_replace_syntax_called);
hide_upload_status_called = false;
compose_ui_replace_syntax_called = false;
$("#compose-textarea").val("user modified text");
on_click_close_button_callback();
- assert(hide_upload_status_called);
- assert(compose_ui_autosize_textarea_called);
- assert(compose_ui_replace_syntax_called);
+ assert.ok(hide_upload_status_called);
+ assert.ok(compose_ui_autosize_textarea_called);
+ assert.ok(compose_ui_replace_syntax_called);
assert.equal($("#compose-textarea").val(), "user modified text");
});
@@ -348,7 +348,7 @@ test("uppy_config", () => {
assert.equal(config.autoProceed, true);
assert.equal(config.restrictions.maxFileSize, 25 * 1024 * 1024);
assert.equal(Object.keys(config.locale.strings).length, 2);
- assert("exceedsSize" in config.locale.strings);
+ assert.ok("exceedsSize" in config.locale.strings);
return {
setMeta: (params) => {
@@ -364,7 +364,7 @@ test("uppy_config", () => {
assert.equal(params.fieldName, "file");
assert.equal(params.limit, 5);
assert.equal(Object.keys(params.locale.strings).length, 1);
- assert("timedOut" in params.locale.strings);
+ assert.ok("timedOut" in params.locale.strings);
} else if (func_name === "ProgressBar") {
uppy_used_progressbar = true;
assert.equal(params.target, "#compose-send-status");
@@ -403,7 +403,7 @@ test("file_input", (override) => {
upload_files_called = true;
});
change_handler(event);
- assert(upload_files_called);
+ assert.ok(upload_files_called);
});
test("file_drop", (override) => {
@@ -472,8 +472,8 @@ test("copy_paste", (override) => {
});
paste_handler(event);
- assert(get_as_file_called);
- assert(upload_files_called);
+ assert.ok(get_as_file_called);
+ assert.ok(upload_files_called);
upload_files_called = false;
event = {
@@ -543,9 +543,9 @@ test("uppy_events", (override) => {
compose_ui_autosize_textarea_called = true;
});
on_upload_success_callback(file, response);
- assert(compose_actions_start_called);
- assert(compose_ui_replace_syntax_called);
- assert(compose_ui_autosize_textarea_called);
+ assert.ok(compose_actions_start_called);
+ assert.ok(compose_ui_replace_syntax_called);
+ assert.ok(compose_ui_autosize_textarea_called);
response = {
body: {
@@ -584,13 +584,13 @@ test("uppy_events", (override) => {
},
];
on_complete_callback();
- assert(hide_upload_status_called);
+ assert.ok(hide_upload_status_called);
assert.equal(files.length, 0);
hide_upload_status_called = false;
$("#compose-send-status").addClass("alert-error");
on_complete_callback();
- assert(!hide_upload_status_called);
+ assert.ok(!hide_upload_status_called);
$("#compose-send-status").removeClass("alert-error");
hide_upload_status_called = false;
@@ -609,7 +609,7 @@ test("uppy_events", (override) => {
},
];
on_complete_callback();
- assert(!hide_upload_status_called);
+ assert.ok(!hide_upload_status_called);
assert.equal(files.length, 1);
state = {
@@ -628,8 +628,8 @@ test("uppy_events", (override) => {
assert.equal(message, "Some error message");
});
on_info_visible_callback();
- assert(uppy_cancel_all_called);
- assert(show_error_message_called);
+ assert.ok(uppy_cancel_all_called);
+ assert.ok(show_error_message_called);
override(compose_ui, "replace_syntax", (old_syntax, new_syntax, textarea) => {
compose_ui_replace_syntax_called = true;
assert.equal(old_syntax, "[translated: Uploading copenhagen.png…]()");
@@ -637,11 +637,11 @@ test("uppy_events", (override) => {
assert.equal(textarea, $("#compose-textarea"));
});
on_restriction_failed_callback(file, null, null);
- assert(compose_ui_replace_syntax_called);
+ assert.ok(compose_ui_replace_syntax_called);
compose_ui_replace_syntax_called = false;
$("#compose-textarea").val("user modified text");
on_restriction_failed_callback(file, null, null);
- assert(compose_ui_replace_syntax_called);
+ assert.ok(compose_ui_replace_syntax_called);
assert.equal($("#compose-textarea").val(), "user modified text");
state = {
@@ -673,9 +673,9 @@ test("uppy_events", (override) => {
};
uppy_cancel_all_called = false;
on_upload_error_callback(file, null, response);
- assert(uppy_cancel_all_called);
- assert(show_error_message_called);
- assert(compose_ui_replace_syntax_called);
+ assert.ok(uppy_cancel_all_called);
+ assert.ok(show_error_message_called);
+ assert.ok(compose_ui_replace_syntax_called);
compose_ui_replace_syntax_called = false;
override(upload, "show_error_message", (config, message) => {
@@ -685,15 +685,15 @@ test("uppy_events", (override) => {
});
uppy_cancel_all_called = false;
on_upload_error_callback(file, null, null);
- assert(uppy_cancel_all_called);
- assert(show_error_message_called);
- assert(compose_ui_replace_syntax_called);
+ assert.ok(uppy_cancel_all_called);
+ assert.ok(show_error_message_called);
+ assert.ok(compose_ui_replace_syntax_called);
show_error_message_called = false;
$("#compose-textarea").val("user modified text");
uppy_cancel_all_called = false;
on_upload_error_callback(file, null);
- assert(uppy_cancel_all_called);
- assert(show_error_message_called);
- assert(compose_ui_replace_syntax_called);
+ assert.ok(uppy_cancel_all_called);
+ assert.ok(show_error_message_called);
+ assert.ok(compose_ui_replace_syntax_called);
assert.equal($("#compose-textarea").val(), "user modified text");
});
diff --git a/frontend_tests/node_tests/user_events.js b/frontend_tests/node_tests/user_events.js
--- a/frontend_tests/node_tests/user_events.js
+++ b/frontend_tests/node_tests/user_events.js
@@ -83,14 +83,14 @@ run_test("updates", () => {
role: settings_config.user_role_values.guest.code,
});
person = people.get_by_email(isaac.email);
- assert(person.is_guest);
+ assert.ok(person.is_guest);
assert.equal(person.role, settings_config.user_role_values.guest.code);
user_events.update_person({
user_id: isaac.user_id,
role: settings_config.user_role_values.member.code,
});
person = people.get_by_email(isaac.email);
- assert(!person.is_guest);
+ assert.ok(!person.is_guest);
assert.equal(person.role, settings_config.user_role_values.member.code);
user_events.update_person({
@@ -121,23 +121,23 @@ run_test("updates", () => {
user_events.update_person({user_id: me.user_id, is_billing_admin: true});
person = people.get_by_email(me.email);
- assert(person.is_billing_admin);
+ assert.ok(person.is_billing_admin);
assert.equal(person.role, settings_config.user_role_values.member.code);
- assert(page_params.is_billing_admin);
+ assert.ok(page_params.is_billing_admin);
user_events.update_person({user_id: me.user_id, is_billing_admin: false});
person = people.get_by_email(me.email);
assert.equal(person.user_id, me.user_id);
- assert(!person.is_billing_admin);
+ assert.ok(!person.is_billing_admin);
assert.equal(person.role, settings_config.user_role_values.member.code);
- assert(!page_params.is_billing_admin);
+ assert.ok(!page_params.is_billing_admin);
user_events.update_person({user_id: isaac.user_id, is_billing_admin: false});
person = people.get_by_email(isaac.email);
assert.equal(person.user_id, isaac.user_id);
- assert(!person.is_billing_admin);
+ assert.ok(!person.is_billing_admin);
assert.equal(person.role, settings_config.user_role_values.owner.code);
- assert(!page_params.is_billing_admin);
+ assert.ok(!page_params.is_billing_admin);
let user_id;
let full_name;
@@ -157,7 +157,7 @@ run_test("updates", () => {
user_id: me.user_id,
role: settings_config.user_role_values.member.code,
});
- assert(!page_params.is_admin);
+ assert.ok(!page_params.is_admin);
user_events.update_person({user_id: me.user_id, full_name: "Me V2"});
assert.equal(people.my_full_name(), "Me V2");
@@ -201,11 +201,11 @@ run_test("updates", () => {
user_events.update_person({user_id: me.user_id, timezone: "UTC"});
person = people.get_by_email(me.email);
- assert(person.timezone);
+ assert.ok(person.timezone);
blueslip.expect("error", "Got update_person event for unexpected user 29");
blueslip.expect("error", "Unknown user_id in get_by_user_id: 29");
- assert(!user_events.update_person({user_id: 29, full_name: "Sir Isaac Newton"}));
+ assert.ok(!user_events.update_person({user_id: 29, full_name: "Sir Isaac Newton"}));
me.profile_data = {};
user_events.update_person({
@@ -223,7 +223,7 @@ run_test("updates", () => {
};
user_events.update_person({user_id: me.user_id, delivery_email: "[email protected]"});
- assert(updated);
+ assert.ok(updated);
const test_bot = {
email: "[email protected]",
diff --git a/frontend_tests/node_tests/user_groups.js b/frontend_tests/node_tests/user_groups.js
--- a/frontend_tests/node_tests/user_groups.js
+++ b/frontend_tests/node_tests/user_groups.js
@@ -80,8 +80,8 @@ run_test("user_groups", () => {
const groups_of_users_nomatch = user_groups.get_user_groups_of_user(user_id_not_in_any_group);
assert.equal(groups_of_users_nomatch.length, 0);
- assert(!user_groups.is_member_of(admins.id, 4));
- assert(user_groups.is_member_of(admins.id, 3));
+ assert.ok(!user_groups.is_member_of(admins.id, 4));
+ assert.ok(user_groups.is_member_of(admins.id, 3));
user_groups.add_members(all.id, [5, 4]);
assert.deepEqual(user_groups.get_user_group_from_id(all.id).members, new Set([1, 2, 3, 5, 4]));
@@ -89,12 +89,12 @@ run_test("user_groups", () => {
user_groups.remove_members(all.id, [1, 4]);
assert.deepEqual(user_groups.get_user_group_from_id(all.id).members, new Set([2, 3, 5]));
- assert(user_groups.is_user_group(admins));
+ assert.ok(user_groups.is_user_group(admins));
const object = {
name: "core",
id: 3,
};
- assert(!user_groups.is_user_group(object));
+ assert.ok(!user_groups.is_user_group(object));
user_groups.init();
assert.equal(user_groups.get_realm_user_groups().length, 0);
diff --git a/frontend_tests/node_tests/user_pill.js b/frontend_tests/node_tests/user_pill.js
--- a/frontend_tests/node_tests/user_pill.js
+++ b/frontend_tests/node_tests/user_pill.js
@@ -96,8 +96,8 @@ test("append", () => {
pill_widget,
});
- assert(appended);
- assert(cleared);
+ assert.ok(appended);
+ assert.ok(cleared);
});
test("get_items", () => {
diff --git a/frontend_tests/node_tests/user_search.js b/frontend_tests/node_tests/user_search.js
--- a/frontend_tests/node_tests/user_search.js
+++ b/frontend_tests/node_tests/user_search.js
@@ -87,11 +87,11 @@ test("clear_search", (override) => {
override(resize, "resize_sidebars", () => {});
$(".user-list-filter").val("somevalue");
- assert(!$("#user_search_section").hasClass("notdisplayed"));
+ assert.ok(!$("#user_search_section").hasClass("notdisplayed"));
$("#clear_search_people_button").trigger("click");
assert.equal($(".user-list-filter").val(), "");
$("#clear_search_people_button").trigger("click");
- assert($("#user_search_section").hasClass("notdisplayed"));
+ assert.ok($("#user_search_section").hasClass("notdisplayed"));
});
test("escape_search", (override) => {
@@ -104,7 +104,7 @@ test("escape_search", (override) => {
activity.escape_search();
assert.equal($(".user-list-filter").val(), "");
activity.escape_search();
- assert($("#user_search_section").hasClass("notdisplayed"));
+ assert.ok($("#user_search_section").hasClass("notdisplayed"));
});
test("blur search right", (override) => {
@@ -205,12 +205,12 @@ test("click on user header to toggle display", (override) => {
page_params.realm_presence_disabled = true;
- assert(!$("#user_search_section").hasClass("notdisplayed"));
+ assert.ok(!$("#user_search_section").hasClass("notdisplayed"));
user_filter.val("bla");
$("#userlist-header").trigger("click");
- assert($("#user_search_section").hasClass("notdisplayed"));
+ assert.ok($("#user_search_section").hasClass("notdisplayed"));
assert.equal(user_filter.val(), "");
$(".user-list-filter").closest = (selector) => {
diff --git a/frontend_tests/node_tests/user_status.js b/frontend_tests/node_tests/user_status.js
--- a/frontend_tests/node_tests/user_status.js
+++ b/frontend_tests/node_tests/user_status.js
@@ -22,14 +22,14 @@ function initialize() {
run_test("basics", () => {
initialize();
- assert(user_status.is_away(2));
- assert(!user_status.is_away(99));
+ assert.ok(user_status.is_away(2));
+ assert.ok(!user_status.is_away(99));
- assert(!user_status.is_away(4));
+ assert.ok(!user_status.is_away(4));
user_status.set_away(4);
- assert(user_status.is_away(4));
+ assert.ok(user_status.is_away(4));
user_status.revoke_away(4);
- assert(!user_status.is_away(4));
+ assert.ok(!user_status.is_away(4));
assert.equal(user_status.get_status_text(1), "in a meeting");
@@ -76,7 +76,7 @@ run_test("server", () => {
});
success();
- assert(called);
+ assert.ok(called);
});
run_test("defensive checks", () => {
diff --git a/frontend_tests/node_tests/util.js b/frontend_tests/node_tests/util.js
--- a/frontend_tests/node_tests/util.js
+++ b/frontend_tests/node_tests/util.js
@@ -41,9 +41,9 @@ run_test("extract_pm_recipients", () => {
run_test("is_pm_recipient", () => {
const message = {to_user_ids: "31,32,33"};
- assert(util.is_pm_recipient(31, message));
- assert(util.is_pm_recipient(32, message));
- assert(!util.is_pm_recipient(34, message));
+ assert.ok(util.is_pm_recipient(31, message));
+ assert.ok(util.is_pm_recipient(32, message));
+ assert.ok(!util.is_pm_recipient(34, message));
});
run_test("lower_bound", () => {
@@ -67,43 +67,45 @@ run_test("lower_bound", () => {
});
run_test("same_recipient", () => {
- assert(
+ assert.ok(
util.same_recipient(
{type: "stream", stream_id: 101, topic: "Bar"},
{type: "stream", stream_id: 101, topic: "bar"},
),
);
- assert(
+ assert.ok(
!util.same_recipient(
{type: "stream", stream_id: 101, topic: "Bar"},
{type: "stream", stream_id: 102, topic: "whatever"},
),
);
- assert(
+ assert.ok(
util.same_recipient(
{type: "private", to_user_ids: "101,102"},
{type: "private", to_user_ids: "101,102"},
),
);
- assert(
+ assert.ok(
!util.same_recipient(
{type: "private", to_user_ids: "101,102"},
{type: "private", to_user_ids: "103"},
),
);
- assert(!util.same_recipient({type: "stream", stream_id: 101, topic: "Bar"}, {type: "private"}));
+ assert.ok(
+ !util.same_recipient({type: "stream", stream_id: 101, topic: "Bar"}, {type: "private"}),
+ );
- assert(!util.same_recipient({type: "private", to_user_ids: undefined}, {type: "private"}));
+ assert.ok(!util.same_recipient({type: "private", to_user_ids: undefined}, {type: "private"}));
- assert(!util.same_recipient({type: "unknown type"}, {type: "unknown type"}));
+ assert.ok(!util.same_recipient({type: "unknown type"}, {type: "unknown type"}));
- assert(!util.same_recipient(undefined, {type: "private"}));
+ assert.ok(!util.same_recipient(undefined, {type: "private"}));
- assert(!util.same_recipient(undefined, undefined));
+ assert.ok(!util.same_recipient(undefined, undefined));
});
run_test("robust_uri_decode", () => {
@@ -148,18 +150,18 @@ run_test("get_edit_event_prev_topic", () => {
run_test("is_mobile", () => {
window.navigator = {userAgent: "Android"};
- assert(util.is_mobile());
+ assert.ok(util.is_mobile());
window.navigator = {userAgent: "Not mobile"};
- assert(!util.is_mobile());
+ assert.ok(!util.is_mobile());
});
run_test("array_compare", () => {
- assert(util.array_compare([], []));
- assert(util.array_compare([1, 2, 3], [1, 2, 3]));
- assert(!util.array_compare([1, 2], [1, 2, 3]));
- assert(!util.array_compare([1, 2, 3], [1, 2]));
- assert(!util.array_compare([1, 2, 3, 4], [1, 2, 3, 5]));
+ assert.ok(util.array_compare([], []));
+ assert.ok(util.array_compare([1, 2, 3], [1, 2, 3]));
+ assert.ok(!util.array_compare([1, 2], [1, 2, 3]));
+ assert.ok(!util.array_compare([1, 2, 3], [1, 2]));
+ assert.ok(!util.array_compare([1, 2, 3, 4], [1, 2, 3, 5]));
});
run_test("normalize_recipients", () => {
@@ -175,8 +177,8 @@ run_test("random_int", () => {
_.times(500, () => {
const val = util.random_int(min, max);
- assert(min <= val);
- assert(val <= max);
+ assert.ok(min <= val);
+ assert.ok(val <= max);
assert.equal(val, Math.floor(val));
});
});
@@ -232,27 +234,27 @@ run_test("all_and_everyone_mentions_regexp", () => {
let i;
for (i = 0; i < messages_with_all_mentions.length; i += 1) {
- assert(util.find_wildcard_mentions(messages_with_all_mentions[i]));
+ assert.ok(util.find_wildcard_mentions(messages_with_all_mentions[i]));
}
for (i = 0; i < messages_with_everyone_mentions.length; i += 1) {
- assert(util.find_wildcard_mentions(messages_with_everyone_mentions[i]));
+ assert.ok(util.find_wildcard_mentions(messages_with_everyone_mentions[i]));
}
for (i = 0; i < messages_with_stream_mentions.length; i += 1) {
- assert(util.find_wildcard_mentions(messages_with_stream_mentions[i]));
+ assert.ok(util.find_wildcard_mentions(messages_with_stream_mentions[i]));
}
for (i = 0; i < messages_without_all_mentions.length; i += 1) {
- assert(!util.find_wildcard_mentions(messages_without_everyone_mentions[i]));
+ assert.ok(!util.find_wildcard_mentions(messages_without_everyone_mentions[i]));
}
for (i = 0; i < messages_without_everyone_mentions.length; i += 1) {
- assert(!util.find_wildcard_mentions(messages_without_everyone_mentions[i]));
+ assert.ok(!util.find_wildcard_mentions(messages_without_everyone_mentions[i]));
}
for (i = 0; i < messages_without_stream_mentions.length; i += 1) {
- assert(!util.find_wildcard_mentions(messages_without_stream_mentions[i]));
+ assert.ok(!util.find_wildcard_mentions(messages_without_stream_mentions[i]));
}
});
diff --git a/frontend_tests/node_tests/vdom.js b/frontend_tests/node_tests/vdom.js
--- a/frontend_tests/node_tests/vdom.js
+++ b/frontend_tests/node_tests/vdom.js
@@ -97,8 +97,8 @@ run_test("attribute updates", () => {
vdom.update(replace_content, find, new_ul, ul);
- assert(updated);
- assert(removed);
+ assert.ok(updated);
+ assert.ok(removed);
});
function make_child(i, name) {
diff --git a/frontend_tests/node_tests/watchdog.js b/frontend_tests/node_tests/watchdog.js
--- a/frontend_tests/node_tests/watchdog.js
+++ b/frontend_tests/node_tests/watchdog.js
@@ -62,10 +62,10 @@ run_test("basics", () => {
run_test("suspect_offline", () => {
watchdog.set_suspect_offline(true);
- assert(watchdog.suspects_user_is_offline());
+ assert.ok(watchdog.suspects_user_is_offline());
watchdog.set_suspect_offline(false);
- assert(!watchdog.suspects_user_is_offline());
+ assert.ok(!watchdog.suspects_user_is_offline());
});
MockDate.reset();
diff --git a/frontend_tests/node_tests/widgetize.js b/frontend_tests/node_tests/widgetize.js
--- a/frontend_tests/node_tests/widgetize.js
+++ b/frontend_tests/node_tests/widgetize.js
@@ -45,7 +45,7 @@ const fake_poll_widget = {
activate(data) {
is_widget_activated = true;
widget_elem = data.elem;
- assert(widget_elem.hasClass("widget-content"));
+ assert.ok(widget_elem.hasClass("widget-content"));
widget_elem.handle_events = (e) => {
is_event_handled = true;
assert.notDeepStrictEqual(e, events);
@@ -105,19 +105,19 @@ test("activate", (override) => {
message_content.append = (elem) => {
is_widget_elem_inserted = true;
assert.equal(elem, widget_elem);
- assert(elem.hasClass("widget-content"));
+ assert.ok(elem.hasClass("widget-content"));
};
is_widget_elem_inserted = false;
is_widget_activated = false;
is_event_handled = false;
- assert(!widgetize.widget_contents.has(opts.message.id));
+ assert.ok(!widgetize.widget_contents.has(opts.message.id));
widgetize.activate(opts);
- assert(is_widget_elem_inserted);
- assert(is_widget_activated);
- assert(is_event_handled);
+ assert.ok(is_widget_elem_inserted);
+ assert.ok(is_widget_activated);
+ assert.ok(is_event_handled);
assert.equal(widgetize.widget_contents.get(opts.message.id), widget_elem);
is_widget_elem_inserted = false;
@@ -126,9 +126,9 @@ test("activate", (override) => {
widgetize.activate(opts);
- assert(is_widget_elem_inserted);
- assert(!is_widget_activated);
- assert(!is_event_handled);
+ assert.ok(is_widget_elem_inserted);
+ assert.ok(!is_widget_activated);
+ assert.ok(!is_event_handled);
narrow_active = true;
is_widget_elem_inserted = false;
@@ -137,9 +137,9 @@ test("activate", (override) => {
widgetize.activate(opts);
- assert(!is_widget_elem_inserted);
- assert(!is_widget_activated);
- assert(!is_event_handled);
+ assert.ok(!is_widget_elem_inserted);
+ assert.ok(!is_widget_activated);
+ assert.ok(!is_event_handled);
blueslip.expect("warn", "unknown widget_type");
narrow_active = false;
@@ -149,17 +149,17 @@ test("activate", (override) => {
opts.widget_type = "invalid_widget";
widgetize.activate(opts);
- assert(!is_widget_elem_inserted);
- assert(!is_widget_activated);
- assert(!is_event_handled);
+ assert.ok(!is_widget_elem_inserted);
+ assert.ok(!is_widget_activated);
+ assert.ok(!is_event_handled);
assert.equal(blueslip.get_test_logs("warn")[0].more_info, "invalid_widget");
opts.widget_type = "tictactoe";
widgetize.activate(opts);
- assert(!is_widget_elem_inserted);
- assert(!is_widget_activated);
- assert(!is_event_handled);
+ assert.ok(!is_widget_elem_inserted);
+ assert.ok(!is_widget_activated);
+ assert.ok(!is_event_handled);
/* Testing widgetize.handle_events */
const post_activate_event = {
@@ -176,12 +176,12 @@ test("activate", (override) => {
};
is_event_handled = false;
widgetize.handle_event(post_activate_event);
- assert(is_event_handled);
+ assert.ok(is_event_handled);
is_event_handled = false;
post_activate_event.message_id = 1000;
widgetize.handle_event(post_activate_event);
- assert(!is_event_handled);
+ assert.ok(!is_event_handled);
/* Test narrow change message update */
override(message_lists.current, "get", (idx) => {
diff --git a/frontend_tests/node_tests/zjquery.js b/frontend_tests/node_tests/zjquery.js
--- a/frontend_tests/node_tests/zjquery.js
+++ b/frontend_tests/node_tests/zjquery.js
@@ -37,11 +37,11 @@ run_test("basics", () => {
}
// Before we call show_my_form, we can assert that my-form is hidden:
- assert(!$("#my-form").visible());
+ assert.ok(!$("#my-form").visible());
// Then calling show_my_form() should make it visible.
show_my_form();
- assert($("#my-form").visible());
+ assert.ok($("#my-form").visible());
// Next, look at how several functions correctly simulate setting
// and getting for you.
@@ -108,7 +108,7 @@ run_test("finding_related_objects", () => {
elem.set_parents_result(".folder", my_parents);
elem.parents(".folder").addClass("active");
- assert(my_parents.hasClass("active"));
+ assert.ok(my_parents.hasClass("active"));
});
run_test("clicks", () => {
@@ -128,8 +128,8 @@ run_test("clicks", () => {
// Setting up the click handlers doesn't change state right away.
set_up_click_handlers();
- assert(!state.clicked);
- assert(!state.keydown);
+ assert.ok(!state.clicked);
+ assert.ok(!state.keydown);
// But we can simulate clicks.
$("#widget1").trigger("click");
@@ -192,8 +192,8 @@ run_test("create", () => {
const obj2 = $.create("the collection of rows in the table");
obj1.show();
- assert(obj1.visible());
+ assert.ok(obj1.visible());
obj2.addClass(".striped");
- assert(obj2.hasClass(".striped"));
+ assert.ok(obj2.hasClass(".striped"));
});
diff --git a/frontend_tests/puppeteer_lib/common.ts b/frontend_tests/puppeteer_lib/common.ts
--- a/frontend_tests/puppeteer_lib/common.ts
+++ b/frontend_tests/puppeteer_lib/common.ts
@@ -261,7 +261,7 @@ class CommonUtils {
// Wait for a email input in login page so we know login
// page is loaded. Then check that we are at the login url.
await page.waitForSelector('input[name="username"]');
- assert(page.url().includes("/login/"));
+ assert.ok(page.url().includes("/login/"));
}
async ensure_enter_does_not_send(page: Page): Promise<void> {
diff --git a/frontend_tests/puppeteer_tests/mention.ts b/frontend_tests/puppeteer_tests/mention.ts
--- a/frontend_tests/puppeteer_tests/mention.ts
+++ b/frontend_tests/puppeteer_tests/mention.ts
@@ -26,7 +26,7 @@ async function test_mention(page: Page): Promise<void> {
zulip_test.set_wildcard_mention_large_stream_threshold(5);
return zulip_test.wildcard_mention_large_stream_threshold;
});
- assert(stream_size > threshold);
+ assert.ok(stream_size > threshold);
await page.click("#compose-send-button");
await page.waitForXPath(
diff --git a/frontend_tests/puppeteer_tests/navigation.ts b/frontend_tests/puppeteer_tests/navigation.ts
--- a/frontend_tests/puppeteer_tests/navigation.ts
+++ b/frontend_tests/puppeteer_tests/navigation.ts
@@ -71,7 +71,7 @@ async function test_reload_hash(page: Page): Promise<void> {
await page.waitForSelector("#zfilt", {visible: true});
const page_load_time = await page.evaluate(() => zulip_test.page_params.page_load_time);
- assert(page_load_time > initial_page_load_time, "Page not reloaded.");
+ assert.ok(page_load_time > initial_page_load_time, "Page not reloaded.");
const hash = await page.evaluate(() => window.location.hash);
assert.strictEqual(hash, initial_hash, "Hash not preserved.");
diff --git a/frontend_tests/puppeteer_tests/realm-creation.ts b/frontend_tests/puppeteer_tests/realm-creation.ts
--- a/frontend_tests/puppeteer_tests/realm-creation.ts
+++ b/frontend_tests/puppeteer_tests/realm-creation.ts
@@ -21,7 +21,7 @@ async function realm_creation_tests(page: Page): Promise<void> {
]);
    // Make sure confirmation email is sent.
- assert(page.url().includes("/accounts/new/send_confirm/" + email));
+ assert.ok(page.url().includes("/accounts/new/send_confirm/" + email));
// Special endpoint enabled only during tests for extracting confirmation key
await page.goto("http://" + host + "/confirmation_key/");
diff --git a/frontend_tests/puppeteer_tests/settings.ts b/frontend_tests/puppeteer_tests/settings.ts
--- a/frontend_tests/puppeteer_tests/settings.ts
+++ b/frontend_tests/puppeteer_tests/settings.ts
@@ -29,7 +29,10 @@ async function open_settings(page: Page): Promise<void> {
await page.waitForSelector("#settings_content .account-settings-form", {visible: true});
const page_url = await common.page_url_with_fragment(page);
- assert(page_url.includes("/#settings/"), `Page url: ${page_url} does not contain /#settings/`);
+ assert.ok(
+ page_url.includes("/#settings/"),
+ `Page url: ${page_url} does not contain /#settings/`,
+ );
}
async function test_change_full_name(page: Page): Promise<void> {
diff --git a/frontend_tests/puppeteer_tests/subscriptions.ts b/frontend_tests/puppeteer_tests/subscriptions.ts
--- a/frontend_tests/puppeteer_tests/subscriptions.ts
+++ b/frontend_tests/puppeteer_tests/subscriptions.ts
@@ -45,7 +45,7 @@ async function open_streams_modal(page: Page): Promise<void> {
await page.waitForSelector("#subscription_overlay.new-style", {visible: true});
const url = await common.page_url_with_fragment(page);
- assert(url.includes("#streams/all"));
+ assert.ok(url.includes("#streams/all"));
}
async function test_subscription_button_verona_stream(page: Page): Promise<void> {
@@ -239,7 +239,7 @@ async function test_streams_search_feature(page: Page): Promise<void> {
),
"Verona",
);
- assert(
+ assert.ok(
!(await common.get_text_from_selector(page, hidden_streams_selector)).includes("Verona"),
"#Verona is hidden",
);
@@ -249,11 +249,11 @@ async function test_streams_search_feature(page: Page): Promise<void> {
await common.get_text_from_selector(page, ".stream-row:not(.notdisplayed) .stream-name"),
"Puppeteer",
);
- assert(
+ assert.ok(
(await common.get_text_from_selector(page, hidden_streams_selector)).includes("Verona"),
"#Verona is not hidden",
);
- assert(
+ assert.ok(
!(await common.get_text_from_selector(page, hidden_streams_selector)).includes("Puppeteer"),
"Puppeteer is hidden after searching.",
);
diff --git a/frontend_tests/zjsunit/zjquery.js b/frontend_tests/zjsunit/zjquery.js
--- a/frontend_tests/zjsunit/zjquery.js
+++ b/frontend_tests/zjsunit/zjquery.js
@@ -305,7 +305,7 @@ function make_new_elem(selector, opts) {
},
parents(parents_selector) {
const result = parents_result.get(parents_selector);
- assert(
+ assert.ok(
result,
"You need to call set_parents_result for " + parents_selector + " in " + selector,
);
@@ -385,7 +385,7 @@ function make_new_elem(selector, opts) {
return text;
},
toggle(show) {
- assert([true, false].includes(show));
+ assert.ok([true, false].includes(show));
shown = show;
return self;
},
@@ -549,7 +549,7 @@ function make_zjquery() {
};
zjquery.create = function (name, opts) {
- assert(!elems.has(name), "You already created an object with this name!!");
+ assert.ok(!elems.has(name), "You already created an object with this name!!");
const elem = new_elem(name, opts);
elems.set(name, elem);
| node tests: Add a lint rule to make sure that assert is not used incorrectly
We had a lot of places in the codebase where `assert` was used accidentally instead of `assert.equal`.
See https://github.com/zulip/zulip/pull/18684 for example.
We need to add a lint rule to make sure that this does not happen again.
| Hello @zulip/server-testing members, this issue was labeled with the "area: testing-infrastructure" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim | 2021-06-10T07:09:27 |
zulip/zulip | 18,821 | zulip__zulip-18821 | [
"18770"
] | 1b517924598bcb0035a2c1fb64901ca98ecfd22d | diff --git a/analytics/management/commands/stream_stats.py b/analytics/management/commands/stream_stats.py
deleted file mode 100644
--- a/analytics/management/commands/stream_stats.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from argparse import ArgumentParser
-from typing import Any
-
-from django.core.management.base import BaseCommand, CommandError
-from django.db.models import Q
-
-from zerver.models import Message, Realm, Recipient, Stream, Subscription, get_realm
-
-
-class Command(BaseCommand):
- help = "Generate statistics on the streams for a realm."
-
- def add_arguments(self, parser: ArgumentParser) -> None:
- parser.add_argument(
- "realms", metavar="<realm>", nargs="*", help="realm to generate statistics for"
- )
-
- def handle(self, *args: Any, **options: str) -> None:
- if options["realms"]:
- try:
- realms = [get_realm(string_id) for string_id in options["realms"]]
- except Realm.DoesNotExist as e:
- raise CommandError(e)
- else:
- realms = Realm.objects.all()
-
- for realm in realms:
- streams = Stream.objects.filter(realm=realm).exclude(Q(name__istartswith="tutorial-"))
- # private stream count
- private_count = 0
- # public stream count
- public_count = 0
- for stream in streams:
- if stream.invite_only:
- private_count += 1
- else:
- public_count += 1
- print("------------")
- print(realm.string_id, end=" ")
- print("{:>10} {} public streams and".format("(", public_count), end=" ")
- print(f"{private_count} private streams )")
- print("------------")
- print("{:>25} {:>15} {:>10} {:>12}".format("stream", "subscribers", "messages", "type"))
-
- for stream in streams:
- if stream.invite_only:
- stream_type = "private"
- else:
- stream_type = "public"
- print(f"{stream.name:>25}", end=" ")
- recipient = Recipient.objects.filter(type=Recipient.STREAM, type_id=stream.id)
- print(
- "{:10}".format(
- len(Subscription.objects.filter(recipient=recipient, active=True))
- ),
- end=" ",
- )
- num_messages = len(Message.objects.filter(recipient=recipient))
- print(f"{num_messages:12}", end=" ")
- print(f"{stream_type:>15}")
- print("")
diff --git a/manage.py b/manage.py
--- a/manage.py
+++ b/manage.py
@@ -2,6 +2,7 @@
import configparser
import os
import sys
+from typing import Dict, List, Optional
if sys.version_info <= (3, 0):
print("Error: Zulip is a Python 3 project, and cannot be run with Python 2.")
@@ -14,8 +15,112 @@
setup_path()
+from collections import defaultdict
+
+from django.core.management import ManagementUtility, get_commands
+from django.core.management.color import color_style
+
from scripts.lib.zulip_tools import assert_not_running_as_root
+
+def get_filtered_commands() -> Dict[str, str]:
+ """Because Zulip uses management commands in production, `manage.py
+ help` is a form of documentation for users. Here we exclude from
+ that documentation built-in commands that are not constructive for
+ end users or even Zulip developers to run.
+
+ Ideally, we'd do further customization to display management
+ commands with more organization in the help text, and also hide
+ development-focused management commands in production.
+ """
+ all_commands = get_commands()
+ documented_commands = dict()
+ documented_apps = [
+ # "auth" removed because its commands are not applicable to Zulip.
+ # "contenttypes" removed because we don't use that subsystem, and
+ # even if we did.
+ "django.core",
+ "analytics",
+ # "otp_static" removed because it's a 2FA internals detail.
+ # "sessions" removed because it's just a cron job with a misleading
+ # name, since all it does is delete expired sessions.
+ # "social_django" removed for similar reasons to sessions.
+ # "staticfiles" removed because its commands are only usefully run when
+ # wrapped by Zulip tooling.
+ # "two_factor" removed because it's a 2FA internals detail.
+ "zerver",
+ "zilencer",
+ ]
+ documented_command_subsets = {
+ "django.core": {
+ "dbshell",
+ "makemigrations",
+ "migrate",
+ "shell",
+ "showmigrations",
+ },
+ }
+ for command, app in all_commands.items():
+ if app not in documented_apps:
+ continue
+ if app in documented_command_subsets:
+ if command not in documented_command_subsets[app]:
+ continue
+
+ documented_commands[command] = app
+ return documented_commands
+
+
+class FilteredManagementUtility(ManagementUtility):
+ """Replaces the main_help_text function of ManagementUtility with one
+ that calls our get_filtered_commands(), rather than the default
+ get_commands() function.
+
    All other changes are just code style differences to pass the Zulip linter.
+ """
+
+ def main_help_text(self, commands_only: bool = False) -> str:
+ """Return the script's main help text, as a string."""
+ if commands_only:
+ usage = sorted(get_filtered_commands())
+ else:
+ usage = [
+ "",
+ f"Type '{self.prog_name} help <subcommand>' for help on a specific subcommand.",
+ "",
+ "Available subcommands:",
+ ]
+ commands_dict = defaultdict(lambda: [])
+ for name, app in get_filtered_commands().items():
+ if app == "django.core":
+ app = "django"
+ else:
+ app = app.rpartition(".")[-1]
+ commands_dict[app].append(name)
+ style = color_style()
+ for app in sorted(commands_dict):
+ usage.append("")
+ usage.append(style.NOTICE(f"[{app}]"))
+ for name in sorted(commands_dict[app]):
+ usage.append(f" {name}")
+ # Output an extra note if settings are not properly configured
+ if self.settings_exception is not None:
+ usage.append(
+ style.NOTICE(
+ "Note that only Django core commands are listed "
+ f"as settings are not properly configured (error: {self.settings_exception})."
+ )
+ )
+
+ return "\n".join(usage)
+
+
+def execute_from_command_line(argv: Optional[List[str]] = None) -> None:
+ """Run a FilteredManagementUtility."""
+ utility = FilteredManagementUtility(argv)
+ utility.execute()
+
+
if __name__ == "__main__":
assert_not_running_as_root()
@@ -38,7 +143,6 @@
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "zproject.settings")
from django.conf import settings
- from django.core.management import execute_from_command_line
from django.core.management.base import CommandError
from scripts.lib.zulip_tools import log_management_command
| Disable various Django internal management commands that sysadmins should never run
If you run just `./manage.py help`, it prints out all the supported management commands for a Zulip server. This ends up being pretty awkward, in that it lists various built-in Django management commands that sound relevant but are neither used by Zulip nor things that are a good idea to try to run (usually they just give confusing errors).
The main `./manage.py` script in the Zulip repository does some setup and then calls this:
```
def execute_from_command_line(argv=None):
"""Run a ManagementUtility."""
utility = ManagementUtility(argv)
utility.execute()
```
So I think we implement do this by subclassing `ManagementUtility` to override the `main_help_text` function with a variant that effectively filters the `get_commands` function to remove some commands.
A first version of this should remove everything in the `[django]` block except these 5, probably most simply implemented as an allowlist:
```
[django]
dbshell
makemigrations
migrate
shell
showmigrations
```
But we will probably want to extend that once implemented.
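A minimal, standalone sketch of that allowlist filtering (the helper name and the example command registry are illustrative; the real implementation would wrap Django's `get_commands()`, which returns a mapping of command name to app label):

```python
# Keep commands from Zulip's own apps, but restrict "django.core"
# to the five-command allowlist proposed above.
DOCUMENTED_DJANGO_COMMANDS = {
    "dbshell",
    "makemigrations",
    "migrate",
    "shell",
    "showmigrations",
}

def get_filtered_commands(all_commands):
    """all_commands maps command name -> app label, like Django's get_commands()."""
    return {
        name: app
        for name, app in all_commands.items()
        if app != "django.core" or name in DOCUMENTED_DJANGO_COMMANDS
    }

# Illustrative registry, not Django's real output:
registry = {
    "flush": "django.core",       # hidden: not on the allowlist
    "migrate": "django.core",     # kept
    "send_test_email": "zerver",  # kept: Zulip's own app
}
print(sorted(get_filtered_commands(registry)))  # ['migrate', 'send_test_email']
```

The subclassed `ManagementUtility.main_help_text` would then render help from this filtered mapping instead of the full registry.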
| Hello @zulip/server-dependencies, @zulip/server-production members, this issue was labeled with the "area: dependencies", "area: production" labels, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim | 2021-06-13T13:04:56 |
|
zulip/zulip | 18,822 | zulip__zulip-18822 | [
"18796"
] | bcc89c80a29166b487620cb84728a9324f66b227 | diff --git a/zerver/views/home.py b/zerver/views/home.py
--- a/zerver/views/home.py
+++ b/zerver/views/home.py
@@ -205,11 +205,7 @@ def home_real(request: HttpRequest) -> HttpResponse:
"user_profile": user_profile,
"page_params": page_params,
"csp_nonce": csp_nonce,
- "is_owner": user_permission_info.is_realm_owner,
- "is_admin": user_permission_info.is_realm_admin,
- "is_guest": user_permission_info.is_guest,
"color_scheme": user_permission_info.color_scheme,
- "max_file_upload_size_mib": settings.MAX_FILE_UPLOAD_SIZE,
},
)
patch_cache_control(response, no_cache=True, no_store=True, must_revalidate=True)
| templates: Migrate settings_overlay.html to be a handlebars template
This issue is a part of #18792. We'd like to migrate settings_overlay.html to be a handlebars template `static/templates/settings_overlay.hbs`. This will require a bit of care with translation tags and also with the many variables that will need to be access from `page_params` when rendering the template. (Some may need to be moved from the rendering `context` in `zerver/views/home.py` to the extra `page_params` fields in `zerver/lib/home.py`, though I think the vast majority are already available in `page_params`)
@sahil839 do you want to claim this one as our settings maintainer?
| Hello @zulip/server-refactoring members, this issue was labeled with the "area: refactoring" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2021-06-13T18:23:43 |
|
zulip/zulip | 18,885 | zulip__zulip-18885 | [
"18860"
] | ab380b122b4217cb52e67a78796b607f365c9688 | diff --git a/zerver/views/realm_emoji.py b/zerver/views/realm_emoji.py
--- a/zerver/views/realm_emoji.py
+++ b/zerver/views/realm_emoji.py
@@ -4,7 +4,7 @@
from zerver.decorator import require_member_or_admin
from zerver.lib.actions import check_add_realm_emoji, do_remove_realm_emoji
-from zerver.lib.emoji import check_emoji_admin, check_valid_emoji_name
+from zerver.lib.emoji import check_emoji_admin, check_valid_emoji_name, name_to_codepoint
from zerver.lib.request import REQ, JsonableError, has_request_variables
from zerver.lib.response import json_success
from zerver.models import RealmEmoji, UserProfile
@@ -23,6 +23,7 @@ def upload_emoji(
request: HttpRequest, user_profile: UserProfile, emoji_name: str = REQ(path_only=True)
) -> HttpResponse:
emoji_name = emoji_name.strip().replace(" ", "_")
+ valid_built_in_emoji = name_to_codepoint.keys()
check_valid_emoji_name(emoji_name)
check_emoji_admin(user_profile)
if RealmEmoji.objects.filter(
@@ -31,6 +32,9 @@ def upload_emoji(
raise JsonableError(_("A custom emoji with this name already exists."))
if len(request.FILES) != 1:
raise JsonableError(_("You must upload exactly one file."))
+ if emoji_name in valid_built_in_emoji:
+ if not user_profile.is_realm_admin:
+ raise JsonableError(_("Only administrators can override built-in emoji."))
emoji_file = list(request.FILES.values())[0]
if (settings.MAX_EMOJI_FILE_SIZE_MIB * 1024 * 1024) < emoji_file.size:
raise JsonableError(
| diff --git a/zerver/tests/test_realm_emoji.py b/zerver/tests/test_realm_emoji.py
--- a/zerver/tests/test_realm_emoji.py
+++ b/zerver/tests/test_realm_emoji.py
@@ -79,6 +79,28 @@ def test_upload(self) -> None:
author = UserProfile.objects.get(id=test_emoji["author_id"])
self.assertEqual(author.email, email)
+ def test_override_built_in_emoji_by_admin(self) -> None:
+ # Test that only administrators can override built-in emoji.
+ self.login("othello")
+ with get_test_image_file("img.png") as fp1:
+ emoji_data = {"f1": fp1}
+ result = self.client_post("/json/realm/emoji/laughing", info=emoji_data)
+ self.assert_json_error(
+ result,
+ "Only administrators can override built-in emoji.",
+ )
+
+ user = self.example_user("iago")
+ email = user.email
+ self.login_user(user)
+ with get_test_image_file("img.png") as fp1:
+ emoji_data = {"f1": fp1}
+ result = self.client_post("/json/realm/emoji/smile", info=emoji_data)
+ self.assert_json_success(result)
+ self.assertEqual(200, result.status_code)
+ realm_emoji = RealmEmoji.objects.get(name="smile")
+ self.assertEqual(realm_emoji.author.email, email)
+
def test_realm_emoji_repr(self) -> None:
realm_emoji = RealmEmoji.objects.get(name="green_tick")
file_name = str(realm_emoji.id) + ".png"
| Permissions and warning for custom emoji overriding unicode emoji
Only administrators/owners should be able to override unicode emoji
1. If an administrator attempts to override a unicode emoji with a custom emoji, they should get a warning. #16937 attempts to fix this, but it is currently not working in production.
We should also shorten the warning message and avoid referring to "unicode" to avoid confusing non-technical users:
>**Override built-in emoji?**
> Uploading a custom emoji with the name **<name>** will override the built-in **<name>** emoji. Continue?
2. If a non-administrator attempts to override an emoji, show an error in the same style as the error for overriding custom emoji (screenshot below). Text: "Failed: An emoji with this name already exists. Only administrators can override built-in emoji."
Error for overriding custom emoji:
<img width="531" alt="Screen Shot 2021-06-15 at 2 30 38 PM" src="https://user-images.githubusercontent.com/2090066/122126418-915e9880-cde6-11eb-86f6-0a4338478739.png">
Related issue: #18269
[Related CZO thread](https://chat.zulip.org/#narrow/stream/2-general/topic/ok.20emoji)
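A rough sketch of the server-side rule described in point 2 (the function name and the tiny emoji set are stand-ins; the real check would consult Zulip's full Unicode name-to-codepoint map and the requesting user's role):

```python
BUILT_IN_EMOJI_NAMES = {"smile", "laughing", "heart"}  # stand-in set

class JsonableError(Exception):
    pass

def check_can_override_emoji(emoji_name, is_realm_admin):
    # Non-admins may still upload custom emoji, just not ones that
    # shadow a built-in emoji name.
    if emoji_name in BUILT_IN_EMOJI_NAMES and not is_realm_admin:
        raise JsonableError(
            "Failed: An emoji with this name already exists. "
            "Only administrators can override built-in emoji."
        )

check_can_override_emoji("smile", is_realm_admin=True)   # allowed
check_can_override_emoji("party_parrot", is_realm_admin=False)  # allowed
try:
    check_can_override_emoji("smile", is_realm_admin=False)
except JsonableError as e:
    print(e)  # prints the error text shown to the user
```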
| Hello @zulip/server-emoji, @zulip/server-settings members, this issue was labeled with the "area: emoji", "area: settings (admin/org)" labels, so you may want to check it out!
<!-- areaLabelAddition -->
@aryanshridhar FYI
Thanks alya!
@zulipbot claim | 2021-06-16T19:17:11 |
zulip/zulip | 18,920 | zulip__zulip-18920 | [
"18915"
] | 043b0c6ef302b43aecd40f6c102f990332681aac | diff --git a/zerver/lib/upload.py b/zerver/lib/upload.py
--- a/zerver/lib/upload.py
+++ b/zerver/lib/upload.py
@@ -386,11 +386,11 @@ def __init__(self) -> None:
self.avatar_bucket = get_bucket(settings.S3_AVATAR_BUCKET, self.session)
self.uploads_bucket = get_bucket(settings.S3_AUTH_UPLOADS_BUCKET, self.session)
- def get_public_upload_url(
- self,
- key: str,
- ) -> str:
- # Return the public URL for a key in the S3 Avatar bucket.
+ self._boto_client = None
+ self.public_upload_url_pattern = self.construct_public_upload_url_pattern()
+
+ def construct_public_upload_url_pattern(self) -> str:
+ # Return the pattern for public URL for a key in the S3 Avatar bucket.
# For Amazon S3 itself, this will return the following:
# f"https://{self.avatar_bucket.name}.{network_location}/{key}"
#
@@ -399,20 +399,49 @@ def get_public_upload_url(
# different URL format. Configuring no signature and providing
# no access key makes `generate_presigned_url` just return the
# normal public URL for a key.
- config = Config(signature_version=botocore.UNSIGNED)
- return self.session.client(
- "s3",
- region_name=settings.S3_REGION,
- endpoint_url=settings.S3_ENDPOINT_URL,
- config=config,
- ).generate_presigned_url(
+ #
+ # It unfortunately takes 2ms per query to call
+ # generate_presigned_url, even with our cached boto
+ # client. Since we need to potentially compute hundreds of
+ # avatar URLs in single `GET /messages` request, we instead
+ # back-compute the URL pattern here.
+
+ DUMMY_KEY = "dummy_key_ignored"
+ foo_url = self.get_boto_client().generate_presigned_url(
ClientMethod="get_object",
Params={
"Bucket": self.avatar_bucket.name,
- "Key": key,
+ "Key": DUMMY_KEY,
},
ExpiresIn=0,
)
+ parsed = urllib.parse.urlparse(foo_url)
+ base_path = os.path.dirname(parsed.path)
+
+ url_pattern = urllib.parse.urlunparse(
+ parsed._replace(path=os.path.join(base_path, "{key}"))
+ )
+ return url_pattern
+
+ def get_public_upload_url(
+ self,
+ key: str,
+ ) -> str:
+ return self.public_upload_url_pattern.format(key=key)
+
+ def get_boto_client(self) -> botocore.client.BaseClient:
+ """
+ Creating the client takes a long time so we need to cache it.
+ """
+ if self._boto_client is None:
+ config = Config(signature_version=botocore.UNSIGNED)
+ self._boto_client = self.session.client(
+ "s3",
+ region_name=settings.S3_REGION,
+ endpoint_url=settings.S3_ENDPOINT_URL,
+ config=config,
+ )
+ return self._boto_client
def delete_file_from_s3(self, path_id: str, bucket: ServiceResource) -> bool:
key = bucket.Object(path_id)
| Performance issue fetching messages with S3 backend enabled
This is a bug introduced in Zulip 4.0: getting URLs for avatar images is very expensive due to this helper function:
```
In [10]: t = time.time()
...: for i in range(1000): x = upload_backend.get_public_upload_url("foo")
...: print (time.time() - t)
4.389735460281372
```
This is a regression introduced in https://github.com/zulip/zulip/commit/5a4aecfc40f6c57324a703b74b0b74dc4df3e652
@ryanreh99 @mateuszmandera FYI.
The right fix is to only call the `boto` library function once to get the URL prefix, and then do the rest with URL manipulations.
Once this is fixed, I'll release Zulip 4.4 with the fix, since this is a pretty serious issue.
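A self-contained sketch of that fix direction (the hard-coded sample URL stands in for a single `generate_presigned_url` call made once at backend construction; per-key URLs are then plain string formatting with no boto3 involvement):

```python
import os
import urllib.parse

# Derive the public-URL pattern once from a sample URL...
sample_url = "https://avatars.example-bucket.s3.amazonaws.com/dummy_key_ignored"
parsed = urllib.parse.urlparse(sample_url)
pattern = urllib.parse.urlunparse(
    parsed._replace(path=os.path.join(os.path.dirname(parsed.path), "{key}"))
)

# ...then each per-key lookup is a cheap format() call.
def get_public_upload_url(key):
    return pattern.format(key=key)

print(get_public_upload_url("foo"))
# https://avatars.example-bucket.s3.amazonaws.com/foo
```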
| Hello @zulip/server-misc members, this issue was labeled with the "area: uploads" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2021-06-19T10:04:10 |
|
zulip/zulip | 19,012 | zulip__zulip-19012 | [
"18713"
] | 2ac5ba0bf8bb7df03ae986820619f3adf1d1150a | diff --git a/zerver/lib/push_notifications.py b/zerver/lib/push_notifications.py
--- a/zerver/lib/push_notifications.py
+++ b/zerver/lib/push_notifications.py
@@ -14,6 +14,7 @@
from django.db.models import F
from django.utils.timezone import now as timezone_now
from django.utils.translation import gettext as _
+from django.utils.translation import override as override_language
from zerver.decorator import statsd_increment
from zerver.lib.avatar import absolute_avatar_url
@@ -629,7 +630,13 @@ def process(elem: lxml.html.HtmlElement) -> str:
return plain_text
if settings.PUSH_NOTIFICATION_REDACT_CONTENT:
- return "***REDACTED***"
+ return (
+ "*"
+ + _(
+ "This organization has disabled including message content in mobile push notifications"
+ )
+ + "*"
+ )
elem = lxml.html.fromstring(rendered_content)
plain_text = process(elem)
@@ -744,17 +751,18 @@ def get_message_payload_apns(user_profile: UserProfile, message: Message) -> Dic
)
assert message.rendered_content is not None
- content, _ = truncate_content(get_mobile_push_content(message.rendered_content))
- apns_data = {
- "alert": {
- "title": get_apns_alert_title(message),
- "subtitle": get_apns_alert_subtitle(message),
- "body": content,
- },
- "sound": "default",
- "badge": get_apns_badge_count(user_profile),
- "custom": {"zulip": zulip_data},
- }
+ with override_language(user_profile.default_language):
+ content, _ = truncate_content(get_mobile_push_content(message.rendered_content))
+ apns_data = {
+ "alert": {
+ "title": get_apns_alert_title(message),
+ "subtitle": get_apns_alert_subtitle(message),
+ "body": content,
+ },
+ "sound": "default",
+ "badge": get_apns_badge_count(user_profile),
+ "custom": {"zulip": zulip_data},
+ }
return apns_data
@@ -765,17 +773,18 @@ def get_message_payload_gcm(
"""A `message` payload + options, for Android via GCM/FCM."""
data = get_message_payload(user_profile, message)
assert message.rendered_content is not None
- content, truncated = truncate_content(get_mobile_push_content(message.rendered_content))
- data.update(
- event="message",
- alert=get_gcm_alert(message),
- zulip_message_id=message.id, # message_id is reserved for CCS
- time=datetime_to_timestamp(message.date_sent),
- content=content,
- content_truncated=truncated,
- sender_full_name=message.sender.full_name,
- sender_avatar_url=absolute_avatar_url(message.sender),
- )
+ with override_language(user_profile.default_language):
+ content, truncated = truncate_content(get_mobile_push_content(message.rendered_content))
+ data.update(
+ event="message",
+ alert=get_gcm_alert(message),
+ zulip_message_id=message.id, # message_id is reserved for CCS
+ time=datetime_to_timestamp(message.date_sent),
+ content=content,
+ content_truncated=truncated,
+ sender_full_name=message.sender.full_name,
+ sender_avatar_url=absolute_avatar_url(message.sender),
+ )
gcm_options = {"priority": "high"}
return data, gcm_options
| diff --git a/zerver/tests/test_push_notifications.py b/zerver/tests/test_push_notifications.py
--- a/zerver/tests/test_push_notifications.py
+++ b/zerver/tests/test_push_notifications.py
@@ -1661,7 +1661,7 @@ def test_get_message_payload_apns_redacted_content(self) -> None:
"alert": {
"title": "Cordelia, Lear's daughter, King Hamlet, Othello, the Moor of Venice",
"subtitle": "King Hamlet:",
- "body": "***REDACTED***",
+ "body": "*This organization has disabled including message content in mobile push notifications*",
},
"sound": "default",
"badge": 0,
@@ -1807,7 +1807,7 @@ def test_get_message_payload_gcm_redacted_content(self) -> None:
"alert": "New stream message from King Hamlet in Denmark",
"zulip_message_id": message.id,
"time": datetime_to_timestamp(message.date_sent),
- "content": "***REDACTED***",
+ "content": "*This organization has disabled including message content in mobile push notifications*",
"content_truncated": False,
"server": settings.EXTERNAL_HOST,
"realm_id": hamlet.realm.id,
| internationalization or configuration of ***REDACTED*** message in push notifications
For now, the redacted message in push notifications does not honour internationalization, and it is also not configurable.
Apparently it's a hard-coded string in
```
# cat zerver/lib/push_notifications.py |grep -C3 REDACTED
return plain_text
if settings.PUSH_NOTIFICATION_REDACT_CONTENT:
return "***REDACTED***"
elem = lxml.html.fromstring(rendered_content)
plain_text = process(elem)
```
Please make that string at least configurable via the /etc/zulip config files, so non-English-speaking users get a better clue what it really means.
It would also be good to explain why the content is being redacted at all; I would like to replace that message with something like "Inhalt aus Datenschutzgründen nicht übermittelt" (content not transmitted for reasons of data protection).
I'm not a native English speaker, but I know English to some degree, and even I needed to consult a dictionary to get the proper meaning of this word. dict.cc says "redacted" is a "euphemism for: censored", and nobody likes censorship....
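The direction of the eventual fix can be modeled with a tiny, dependency-free lookup (the real implementation tags the string with Django's gettext and wraps payload construction in a per-user language override; the German wording below is the reporter's own suggestion, not an official catalog entry):

```python
NOTICE = (
    "This organization has disabled including message content "
    "in mobile push notifications"
)

# Toy translation catalog keyed by language code.
CATALOGS = {
    "de": {NOTICE: "Inhalt aus Datenschutzgründen nicht übermittelt"},
}

def redacted_content(user_language):
    # Fall back to the English source string when no catalog matches,
    # mirroring gettext's default behavior.
    translated = CATALOGS.get(user_language, {}).get(NOTICE, NOTICE)
    return f"*{translated}*"

print(redacted_content("de"))  # *Inhalt aus Datenschutzgründen nicht übermittelt*
print(redacted_content("en"))  # falls back to the English source string
```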
| Hello @zulip/server-i18n members, this issue was labeled with the "area: i18n" label, so you may want to check it out!
<!-- areaLabelAddition -->
Yeah, this should just be tagged for translation. I'd also consider just changing the string; I'm not sure "redacted" is what I'd choose. | 2021-06-25T19:07:48 |
zulip/zulip | 19,023 | zulip__zulip-19023 | [
"18778"
] | 5e824a6d6da5cb40b8d987f1df19f5f80139fbda | diff --git a/zerver/views/home.py b/zerver/views/home.py
--- a/zerver/views/home.py
+++ b/zerver/views/home.py
@@ -1,6 +1,6 @@
import logging
import secrets
-from typing import Any, Dict, List, Optional, Tuple
+from typing import List, Optional, Tuple
from django.conf import settings
from django.http import HttpRequest, HttpResponse, HttpResponseRedirect
@@ -99,17 +99,6 @@ def update_last_reminder(user_profile: Optional[UserProfile]) -> None:
user_profile.save(update_fields=["last_reminder"])
-def compute_navbar_logo_url(page_params: Dict[str, Any]) -> str:
- if (
- page_params["color_scheme"] == 2
- and page_params["realm_night_logo_source"] != Realm.LOGO_DEFAULT
- ):
- navbar_logo_url = page_params["realm_night_logo_url"]
- else:
- navbar_logo_url = page_params["realm_logo_url"]
- return navbar_logo_url
-
-
def home(request: HttpRequest) -> HttpResponse:
if not settings.ROOT_DOMAIN_LANDING_PAGE:
return home_real(request)
@@ -209,8 +198,6 @@ def home_real(request: HttpRequest) -> HttpResponse:
user_permission_info = get_user_permission_info(user_profile)
- navbar_logo_url = compute_navbar_logo_url(page_params)
-
response = render(
request,
"zerver/app/index.html",
@@ -225,7 +212,6 @@ def home_real(request: HttpRequest) -> HttpResponse:
"is_admin": user_permission_info.is_realm_admin,
"is_guest": user_permission_info.is_guest,
"color_scheme": user_permission_info.color_scheme,
- "navbar_logo_url": navbar_logo_url,
"embedded": narrow_stream is not None,
"max_file_upload_size_mib": settings.MAX_FILE_UPLOAD_SIZE,
},
| diff --git a/zerver/tests/test_home.py b/zerver/tests/test_home.py
--- a/zerver/tests/test_home.py
+++ b/zerver/tests/test_home.py
@@ -11,14 +11,8 @@
from django.utils.timezone import now as timezone_now
from corporate.models import Customer, CustomerPlan
-from zerver.lib.actions import (
- change_user_is_active,
- do_change_logo_source,
- do_change_plan_type,
- do_create_user,
-)
+from zerver.lib.actions import change_user_is_active, do_change_plan_type, do_create_user
from zerver.lib.compatibility import LAST_SERVER_UPGRADE_TIME, is_outdated_server
-from zerver.lib.events import add_realm_logo_fields
from zerver.lib.home import (
get_billing_info,
get_furthest_read_time,
@@ -39,7 +33,6 @@
get_system_bot,
get_user,
)
-from zerver.views.home import compute_navbar_logo_url
from zerver.worker.queue_processors import UserActivityWorker
logger_string = "zulip.soft_deactivation"
@@ -860,73 +853,6 @@ def test_desktop_home(self) -> None:
path = urllib.parse.urlparse(result["Location"]).path
self.assertEqual(path, "/")
- def test_compute_navbar_logo_url(self) -> None:
- user_profile = self.example_user("hamlet")
-
- page_params = {"color_scheme": user_profile.COLOR_SCHEME_NIGHT}
- add_realm_logo_fields(page_params, user_profile.realm)
- self.assertEqual(
- compute_navbar_logo_url(page_params), "/static/images/logo/zulip-org-logo.svg?version=0"
- )
-
- page_params = {"color_scheme": user_profile.COLOR_SCHEME_LIGHT}
- add_realm_logo_fields(page_params, user_profile.realm)
- self.assertEqual(
- compute_navbar_logo_url(page_params), "/static/images/logo/zulip-org-logo.svg?version=0"
- )
-
- do_change_logo_source(
- user_profile.realm, Realm.LOGO_UPLOADED, night=False, acting_user=user_profile
- )
- page_params = {"color_scheme": user_profile.COLOR_SCHEME_NIGHT}
- add_realm_logo_fields(page_params, user_profile.realm)
- self.assertEqual(
- compute_navbar_logo_url(page_params),
- f"/user_avatars/{user_profile.realm_id}/realm/logo.png?version=2",
- )
-
- page_params = {"color_scheme": user_profile.COLOR_SCHEME_LIGHT}
- add_realm_logo_fields(page_params, user_profile.realm)
- self.assertEqual(
- compute_navbar_logo_url(page_params),
- f"/user_avatars/{user_profile.realm_id}/realm/logo.png?version=2",
- )
-
- do_change_logo_source(
- user_profile.realm, Realm.LOGO_UPLOADED, night=True, acting_user=user_profile
- )
- page_params = {"color_scheme": user_profile.COLOR_SCHEME_NIGHT}
- add_realm_logo_fields(page_params, user_profile.realm)
- self.assertEqual(
- compute_navbar_logo_url(page_params),
- f"/user_avatars/{user_profile.realm_id}/realm/night_logo.png?version=2",
- )
-
- page_params = {"color_scheme": user_profile.COLOR_SCHEME_LIGHT}
- add_realm_logo_fields(page_params, user_profile.realm)
- self.assertEqual(
- compute_navbar_logo_url(page_params),
- f"/user_avatars/{user_profile.realm_id}/realm/logo.png?version=2",
- )
-
- # This configuration isn't super supported in the UI and is a
- # weird choice, but we have a test for it anyway.
- do_change_logo_source(
- user_profile.realm, Realm.LOGO_DEFAULT, night=False, acting_user=user_profile
- )
- page_params = {"color_scheme": user_profile.COLOR_SCHEME_NIGHT}
- add_realm_logo_fields(page_params, user_profile.realm)
- self.assertEqual(
- compute_navbar_logo_url(page_params),
- f"/user_avatars/{user_profile.realm_id}/realm/night_logo.png?version=2",
- )
-
- page_params = {"color_scheme": user_profile.COLOR_SCHEME_LIGHT}
- add_realm_logo_fields(page_params, user_profile.realm)
- self.assertEqual(
- compute_navbar_logo_url(page_params), "/static/images/logo/zulip-org-logo.svg?version=0"
- )
-
@override_settings(SERVER_UPGRADE_NAG_DEADLINE_DAYS=365)
def test_is_outdated_server(self) -> None:
# Check when server_upgrade_nag_deadline > last_server_upgrade_time
| Wrong Organization Logo in Automatic color scheme
I have two separate organization logos configured, one for day mode and one for night mode. When I select the "automatic" color scheme in my settings, the day-mode organization logo is always shown on page load, although the rest of the page is in night mode. When I go to settings and explicitly select day mode or night mode, the logo changes to the correct one. When I change the color scheme back to "automatic", the correct logo is also shown, until the page is reloaded with CMD+R (in a browser or the macOS app) or the macOS app is restarted.
This bug occurs in the macOS app as well as in all tested browsers: Safari, Brave (Chromium), Firefox, and Google Chrome. All browsers are updated to their latest stable release versions.
| Hello @zulip/server-settings members, this issue was labeled with the "area: settings (admin/org)", "area: settings UI" labels, so you may want to check it out!
<!-- areaLabelAddition -->
@Lumrenion thanks for the report!
@sahil839 can you reproduce this? It seems like a very plausible bug, and probably has an easy fix.
Yes I can reproduce. The bug here is that we pass the URL directly from the backend, considering only day and night mode; since the browser's color scheme can only be checked in the frontend, I think the correct fix is to decide the correct logo URL in the frontend.
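Since the browser preference is only visible client-side, the decision reduces to a small pure function. Here is a minimal sketch (all constants and `page_params` field names are illustrative assumptions, not Zulip's actual schema; in a real browser, `prefers_dark` would come from `window.matchMedia("(prefers-color-scheme: dark)").matches`):

```python
# Illustrative sketch only: the constants and dict keys below are
# assumptions for this example, not Zulip's real page_params schema.
COLOR_SCHEME_AUTOMATIC = 1
COLOR_SCHEME_NIGHT = 2
COLOR_SCHEME_LIGHT = 3


def compute_navbar_logo_url(page_params, prefers_dark):
    """Resolve the "automatic" scheme against the browser preference, then pick a logo."""
    dark_active = page_params["color_scheme"] == COLOR_SCHEME_NIGHT or (
        page_params["color_scheme"] == COLOR_SCHEME_AUTOMATIC and prefers_dark
    )
    # Fall back to the day logo when no separate night logo was uploaded.
    if dark_active and page_params["night_logo_uploaded"]:
        return page_params["realm_night_logo_url"]
    return page_params["realm_logo_url"]
```

With this shape, "automatic" differs from the explicit modes only in how `dark_active` is derived, so a page reload can no longer reset the logo to the day variant.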
@zulipbot claim | 2021-06-27T11:31:59 |
zulip/zulip | 19,038 | zulip__zulip-19038 | [
"18848"
] | 5f74e78beeac0a101c7998a669b2fb93cacfb759 | diff --git a/zerver/lib/markdown/help_relative_links.py b/zerver/lib/markdown/help_relative_links.py
--- a/zerver/lib/markdown/help_relative_links.py
+++ b/zerver/lib/markdown/help_relative_links.py
@@ -17,7 +17,7 @@
# name is what the item is called in the gear menu: `Select **name**.`
# link is used for relative links: `Select [name](link).`
"manage-streams": ["Manage streams", "/#streams/subscribed"],
- "settings": ["Settings", "/#settings/your-account"],
+ "settings": ["Settings", "/#settings/profile"],
"manage-organization": ["Manage organization", "/#organization/organization-profile"],
"integrations": ["Integrations", "/integrations"],
"stats": ["Usage statistics", "/stats"],
diff --git a/zerver/lib/markdown/help_settings_links.py b/zerver/lib/markdown/help_settings_links.py
--- a/zerver/lib/markdown/help_settings_links.py
+++ b/zerver/lib/markdown/help_settings_links.py
@@ -16,7 +16,8 @@
# breadcrumb to that setting to the name of its setting type, the setting
# name as it appears in the user interface, and a relative link that can
# be used to get to that setting
- "your-account": ["Settings", "Your account", "/#settings/your-account"],
+ "profile": ["Settings", "Profile", "/#settings/profile"],
+ "account-and-privacy": ["Settings", "Account & privacy", "/#settings/account-and-privacy"],
"display-settings": ["Settings", "Display settings", "/#settings/display-settings"],
"notifications": ["Settings", "Notifications", "/#settings/notifications"],
"your-bots": ["Settings", "Your bots", "/#settings/your-bots"],
| diff --git a/frontend_tests/node_tests/browser_history.js b/frontend_tests/node_tests/browser_history.js
--- a/frontend_tests/node_tests/browser_history.js
+++ b/frontend_tests/node_tests/browser_history.js
@@ -22,7 +22,7 @@ function test(label, f) {
}
test("basics", () => {
- const hash1 = "#settings/your-account";
+ const hash1 = "#settings/profile";
const hash2 = "#narrow/is/private";
browser_history.go_to_location(hash1);
assert.equal(location.hash, hash1);
diff --git a/frontend_tests/node_tests/hash_util.js b/frontend_tests/node_tests/hash_util.js
--- a/frontend_tests/node_tests/hash_util.js
+++ b/frontend_tests/node_tests/hash_util.js
@@ -86,26 +86,26 @@ run_test("test_get_hash_category", () => {
assert.deepEqual(hash_util.get_hash_category("#drafts"), "drafts");
assert.deepEqual(hash_util.get_hash_category("invites"), "invites");
- location.hash = "#settings/your-account";
+ location.hash = "#settings/profile";
assert.deepEqual(hash_util.get_current_hash_category(), "settings");
});
run_test("test_get_hash_section", () => {
assert.equal(hash_util.get_hash_section("streams/subscribed"), "subscribed");
- assert.equal(hash_util.get_hash_section("#settings/your-account"), "your-account");
+ assert.equal(hash_util.get_hash_section("#settings/profile"), "profile");
assert.equal(hash_util.get_hash_section("settings/10/general/"), "10");
assert.equal(hash_util.get_hash_section("#drafts"), "");
assert.equal(hash_util.get_hash_section(""), "");
- location.hash = "#settings/your-account";
- assert.deepEqual(hash_util.get_current_hash_section(), "your-account");
+ location.hash = "#settings/profile";
+ assert.deepEqual(hash_util.get_current_hash_section(), "profile");
});
run_test("build_reload_url", () => {
- location.hash = "#settings/your-account";
- assert.equal(hash_util.build_reload_url(), "+oldhash=settings%2Fyour-account");
+ location.hash = "#settings/profile";
+ assert.equal(hash_util.build_reload_url(), "+oldhash=settings%2Fprofile");
location.hash = "#test";
assert.equal(hash_util.build_reload_url(), "+oldhash=test");
diff --git a/frontend_tests/puppeteer_tests/settings.ts b/frontend_tests/puppeteer_tests/settings.ts
--- a/frontend_tests/puppeteer_tests/settings.ts
+++ b/frontend_tests/puppeteer_tests/settings.ts
@@ -27,7 +27,7 @@ async function open_settings(page: Page): Promise<void> {
await page.waitForSelector(settings_selector, {visible: true});
await page.click(settings_selector);
- await page.waitForSelector("#settings_content .account-settings-form", {visible: true});
+ await page.waitForSelector("#settings_content .profile-settings-form", {visible: true});
const page_url = await common.page_url_with_fragment(page);
assert.ok(
page_url.includes("/#settings/"),
@@ -36,23 +36,18 @@ async function open_settings(page: Page): Promise<void> {
}
async function test_change_full_name(page: Page): Promise<void> {
- await page.click("#change_full_name");
-
- const change_full_name_button_selector = "#change_full_name_button";
- await page.waitForSelector(change_full_name_button_selector, {visible: true});
+ await page.click("#full_name");
const full_name_input_selector = 'input[name="full_name"]';
- await page.$eval(full_name_input_selector, (el) => {
- (el as HTMLInputElement).value = "";
- });
- await page.waitForFunction(() => $(":focus").attr("id") === "change_full_name_modal");
- await page.type(full_name_input_selector, "New name");
- await page.click(change_full_name_button_selector);
- await page.waitForFunction(() => $("#change_full_name").text().trim() === "New name");
- await common.wait_for_modal_to_close(page);
+ await common.clear_and_type(page, full_name_input_selector, "New name");
+
+ await page.click("#settings_content .profile-settings-form");
+ await page.waitForSelector(".full-name-change-form .alert-success", {visible: true});
+ await page.waitForFunction(() => $("#full_name").val() === "New name");
}
async function test_change_password(page: Page): Promise<void> {
+ await page.click('[data-section="account-and-privacy"]');
await page.click("#change_password");
const change_password_button_selector = "#change_password_button";
diff --git a/zerver/tests/test_middleware.py b/zerver/tests/test_middleware.py
--- a/zerver/tests/test_middleware.py
+++ b/zerver/tests/test_middleware.py
@@ -98,7 +98,7 @@ def test_admonition_and_link(self) -> None:
)
def test_settings_tab(self) -> None:
- # deactivate-your-account starts with {settings_tab|your-account}
+ # deactivate-your-account starts with {settings_tab|account-and-privacy}
self.check_title_and_description(
"/help/deactivate-your-account",
"Deactivate your account (Zulip Help Center)",
| Add "Privacy and security" section to personal Settings menu
Some personal settings are hard to find right now, and some settings pages have too many different kinds of settings. We should make the settings easier to navigate by splitting "Your account" into two sections:
1. **Profile** (1st on the list). We can try removing all the section headers and see if it's OK or confusing.
Settings (in order): Full name, Profile picture, "Deactivate account" button, everything currently under **Your account > Profile** (custom fields).
I'm not entirely sure about the "Deactivate account" button placement; we can play with it.
2. **Privacy and security** (2nd on the list)
Settings (in order):
a. **User settings**: Email, password, role
b. **Presence** (currently under **Notifications**)
c. **API key**
| Hello @zulip/server-settings members, this issue was labeled with the "area: settings (user)" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim | 2021-06-28T14:22:00 |
zulip/zulip | 19,155 | zulip__zulip-19155 | [
"19141"
] | 5872af340ff69a904cc30efaf4115e83e6afd97c | diff --git a/zerver/views/muting.py b/zerver/views/muting.py
--- a/zerver/views/muting.py
+++ b/zerver/views/muting.py
@@ -94,7 +94,9 @@ def mute_user(request: HttpRequest, user_profile: UserProfile, muted_user_id: in
if user_profile.id == muted_user_id:
raise JsonableError(_("Cannot mute self"))
- muted_user = access_user_by_id(user_profile, muted_user_id, allow_bots=False, for_admin=False)
+ muted_user = access_user_by_id(
+ user_profile, muted_user_id, allow_bots=False, allow_deactivated=True, for_admin=False
+ )
date_muted = timezone_now()
if get_mute_object(user_profile, muted_user) is not None:
@@ -107,7 +109,9 @@ def mute_user(request: HttpRequest, user_profile: UserProfile, muted_user_id: in
def unmute_user(
request: HttpRequest, user_profile: UserProfile, muted_user_id: int
) -> HttpResponse:
- muted_user = access_user_by_id(user_profile, muted_user_id, allow_bots=False, for_admin=False)
+ muted_user = access_user_by_id(
+ user_profile, muted_user_id, allow_bots=False, allow_deactivated=True, for_admin=False
+ )
mute_object = get_mute_object(user_profile, muted_user)
if mute_object is None:
| diff --git a/zerver/tests/test_muting_users.py b/zerver/tests/test_muting_users.py
--- a/zerver/tests/test_muting_users.py
+++ b/zerver/tests/test_muting_users.py
@@ -3,6 +3,7 @@
import orjson
+from zerver.lib.actions import do_deactivate_user
from zerver.lib.cache import cache_get, get_muting_users_cache_key
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.timestamp import datetime_to_timestamp
@@ -76,12 +77,15 @@ def test_add_muted_user_mute_twice(self) -> None:
result = self.api_post(hamlet, url)
self.assert_json_error(result, "User already muted")
- def test_add_muted_user_valid_data(self) -> None:
+ def _test_add_muted_user_valid_data(self, deactivate_user: bool = False) -> None:
hamlet = self.example_user("hamlet")
self.login_user(hamlet)
cordelia = self.example_user("cordelia")
mute_time = datetime(2021, 1, 1, tzinfo=timezone.utc)
+ if deactivate_user:
+ do_deactivate_user(cordelia, acting_user=None)
+
with mock.patch("zerver.views.muting.timezone_now", return_value=mute_time):
url = "/api/v1/users/me/muted_users/{}".format(cordelia.id)
result = self.api_post(hamlet, url)
@@ -112,6 +116,12 @@ def test_add_muted_user_valid_data(self) -> None:
),
)
+ def test_add_muted_user_valid_data(self) -> None:
+ self._test_add_muted_user_valid_data()
+
+ def test_add_muted_user_deactivated_user(self) -> None:
+ self._test_add_muted_user_valid_data(deactivate_user=True)
+
def test_remove_muted_user_unmute_before_muting(self) -> None:
hamlet = self.example_user("hamlet")
self.login_user(hamlet)
@@ -121,12 +131,15 @@ def test_remove_muted_user_unmute_before_muting(self) -> None:
result = self.api_delete(hamlet, url)
self.assert_json_error(result, "User is not muted")
- def test_remove_muted_user_valid_data(self) -> None:
+ def _test_remove_muted_user_valid_data(self, deactivate_user: bool = False) -> None:
hamlet = self.example_user("hamlet")
self.login_user(hamlet)
cordelia = self.example_user("cordelia")
mute_time = datetime(2021, 1, 1, tzinfo=timezone.utc)
+ if deactivate_user:
+ do_deactivate_user(cordelia, acting_user=None)
+
with mock.patch("zerver.views.muting.timezone_now", return_value=mute_time):
url = "/api/v1/users/me/muted_users/{}".format(cordelia.id)
result = self.api_post(hamlet, url)
@@ -163,6 +176,12 @@ def test_remove_muted_user_valid_data(self) -> None:
),
)
+ def test_remove_muted_user_valid_data(self) -> None:
+ self._test_remove_muted_user_valid_data()
+
+ def test_remove_muted_user_deactivated_user(self) -> None:
+ self._test_remove_muted_user_valid_data(deactivate_user=True)
+
def test_get_muting_users(self) -> None:
hamlet = self.example_user("hamlet")
self.login_user(hamlet)
| Make muting work for deactivated users
Even after a user is deactivated, it may be helpful to hide that user's messages, avatar, etc. We should therefore make it possible to mute deactivated users.
This will also fix a bug where trying to mute a deactivated user silently fails.
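The shape of the fix in the patch above is a single lookup helper whose keyword flags decide which user states the caller may reach. A minimal self-contained sketch (a simplified stand-in; Zulip's real `access_user_by_id` takes different arguments and performs more checks):

```python
class JsonableError(Exception):
    """Stand-in for Zulip's user-facing API error."""


def access_user_by_id(users, user_id, *, allow_bots=False, allow_deactivated=False):
    """Look up a target user, enforcing which account states the caller may reach."""
    user = users.get(user_id)
    if user is None or (user["is_bot"] and not allow_bots):
        raise JsonableError("No such user")
    if not user["is_active"] and not allow_deactivated:
        raise JsonableError("User is deactivated")
    return user
```

The muting endpoints then pass `allow_deactivated=True` so that a mute can still be added or removed after the target account is deactivated.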
| 2021-07-07T17:17:54 |
|
zulip/zulip | 19,392 | zulip__zulip-19392 | [
"15331"
] | cb998f7147842a38c33ee0418255948e4d1a7821 | diff --git a/zilencer/management/commands/populate_db.py b/zilencer/management/commands/populate_db.py
--- a/zilencer/management/commands/populate_db.py
+++ b/zilencer/management/commands/populate_db.py
@@ -118,10 +118,6 @@ def clear_database() -> None:
Session.objects.all().delete()
-# Suppress spammy output from the push notifications logger
-push_notifications_logger.disabled = True
-
-
def subscribe_users_to_streams(realm: Realm, stream_dict: Dict[str, Dict[str, Any]]) -> None:
subscriptions_to_add = []
event_time = timezone_now()
@@ -273,6 +269,9 @@ def add_arguments(self, parser: CommandParser) -> None:
)
def handle(self, **options: Any) -> None:
+ # Suppress spammy output from the push notifications logger
+ push_notifications_logger.disabled = True
+
if options["percent_huddles"] + options["percent_personals"] > 100:
self.stderr.write("Error! More than 100% of messages allocated.\n")
return
@@ -899,6 +898,8 @@ def handle(self, **options: Any) -> None:
mark_all_messages_as_read()
self.stdout.write("Successfully populated test database.\n")
+ push_notifications_logger.disabled = False
+
def mark_all_messages_as_read() -> None:
"""
| diff --git a/zerver/lib/test_classes.py b/zerver/lib/test_classes.py
--- a/zerver/lib/test_classes.py
+++ b/zerver/lib/test_classes.py
@@ -1104,7 +1104,9 @@ def simulated_markdown_failure(self) -> Iterator[None]:
"""
with self.settings(ERROR_BOT=None), mock.patch(
"zerver.lib.markdown.timeout", side_effect=subprocess.CalledProcessError(1, [])
- ), mock.patch("zerver.lib.markdown.markdown_logger"):
+ ), self.assertLogs(
+ level="ERROR"
+ ): # For markdown_logger.exception
yield
def create_default_device(
diff --git a/zerver/tests/test_decorators.py b/zerver/tests/test_decorators.py
--- a/zerver/tests/test_decorators.py
+++ b/zerver/tests/test_decorators.py
@@ -475,11 +475,11 @@ def my_webhook_raises_exception(request: HttpRequest, user_profile: UserProfile)
request.body = b"{}"
request.content_type = "text/plain"
- with mock.patch("zerver.decorator.webhook_logger.exception") as mock_exception:
+ with self.assertLogs("zulip.zerver.webhooks") as logger:
with self.assertRaisesRegex(Exception, "raised by webhook function"):
my_webhook_raises_exception(request)
- mock_exception.assert_called_with("raised by webhook function", stack_info=True)
+ self.assertIn("raised by webhook function", logger.output[0])
def test_authenticated_rest_api_view_logging_unsupported_event(self) -> None:
@authenticated_rest_api_view(webhook_client_name="ClientName")
diff --git a/zerver/tests/test_middleware.py b/zerver/tests/test_middleware.py
--- a/zerver/tests/test_middleware.py
+++ b/zerver/tests/test_middleware.py
@@ -35,9 +35,11 @@ def test_is_slow_query(self) -> None:
def test_slow_query_log(self) -> None:
self.log_data["time_started"] = time.time() - self.SLOW_QUERY_TIME
- with patch("zerver.middleware.slow_query_logger") as mock_slow_query_logger, patch(
- "zerver.middleware.logger"
- ) as mock_normal_logger:
+ with self.assertLogs(
+ "zulip.slow_queries", level="INFO"
+ ) as slow_query_logger, self.assertLogs(
+ "zulip.requests", level="INFO"
+ ) as middleware_normal_logger:
write_log_line(
self.log_data,
@@ -47,12 +49,11 @@ def test_slow_query_log(self) -> None:
requestor_for_logs="unknown",
client_name="?",
)
- mock_slow_query_logger.info.assert_called_once()
- mock_normal_logger.info.assert_called_once()
+ self.assert_length(middleware_normal_logger.output, 1)
+ self.assert_length(slow_query_logger.output, 1)
- logged_line = mock_slow_query_logger.info.call_args_list[0][0][0]
self.assertRegex(
- logged_line,
+ slow_query_logger.output[0],
r"123\.456\.789\.012 GET 200 10\.\ds .* \(unknown via \?\)",
)
diff --git a/zerver/tests/test_push_notifications.py b/zerver/tests/test_push_notifications.py
--- a/zerver/tests/test_push_notifications.py
+++ b/zerver/tests/test_push_notifications.py
@@ -8,7 +8,6 @@
from contextlib import contextmanager
from typing import Any, Dict, Iterator, List, Optional, Tuple
from unittest import mock, skipUnless
-from unittest.mock import call
from urllib import parse
import orjson
@@ -825,11 +824,9 @@ def test_end_to_end(self) -> None:
}
with mock.patch(
"zerver.lib.push_notifications.gcm_client"
- ) as mock_gcm, self.mock_apns() as mock_apns, mock.patch(
- "zerver.lib.push_notifications.logger.info"
- ) as mock_info, mock.patch(
- "zerver.lib.push_notifications.logger.warning"
- ):
+ ) as mock_gcm, self.mock_apns() as mock_apns, self.assertLogs(
+ "zerver.lib.push_notifications", level="INFO"
+ ) as logger:
apns_devices = [
(b64_to_hex(device.token), device.ios_app_id, device.token)
for device in RemotePushDeviceToken.objects.filter(kind=PushDeviceToken.APNS)
@@ -845,14 +842,15 @@ def test_end_to_end(self) -> None:
mock_apns.send_notification.return_value.set_result(result)
handle_push_notification(self.user_profile.id, missed_message)
for _, _, token in apns_devices:
- mock_info.assert_any_call(
- "APNs: Success sending for user %d to device %s", self.user_profile.id, token
+ self.assertIn(
+ "INFO:zerver.lib.push_notifications:"
+ f"APNs: Success sending for user {self.user_profile.id} to device {token}",
+ logger.output,
)
for _, _, token in gcm_devices:
- mock_info.assert_any_call(
- "GCM: Sent %s as %s",
- token,
- message.id,
+ self.assertIn(
+ "INFO:zerver.lib.push_notifications:" f"GCM: Sent {token} as {message.id}",
+ logger.output,
)
@override_settings(PUSH_NOTIFICATION_BOUNCER_URL="https://push.zulip.org.example.com")
@@ -874,11 +872,9 @@ def test_unregistered_client(self) -> None:
}
with mock.patch(
"zerver.lib.push_notifications.gcm_client"
- ) as mock_gcm, self.mock_apns() as mock_apns, mock.patch(
- "zerver.lib.push_notifications.logger.info"
- ) as mock_info, mock.patch(
- "zerver.lib.push_notifications.logger.warning"
- ):
+ ) as mock_gcm, self.mock_apns() as mock_apns, self.assertLogs(
+ "zerver.lib.push_notifications", level="INFO"
+ ) as logger:
apns_devices = [
(b64_to_hex(device.token), device.ios_app_id, device.token)
for device in RemotePushDeviceToken.objects.filter(kind=PushDeviceToken.APNS)
@@ -895,10 +891,10 @@ def test_unregistered_client(self) -> None:
mock_apns.send_notification.return_value.set_result(result)
handle_push_notification(self.user_profile.id, missed_message)
for _, _, token in apns_devices:
- mock_info.assert_any_call(
- "APNs: Removing invalid/expired token %s (%s)",
- token,
- "Unregistered",
+ self.assertIn(
+ "INFO:zerver.lib.push_notifications:"
+ f"APNs: Removing invalid/expired token {token} (Unregistered)",
+ logger.output,
)
self.assertEqual(
RemotePushDeviceToken.objects.filter(kind=PushDeviceToken.APNS).count(), 0
@@ -1217,15 +1213,15 @@ def test_user_message_does_not_exist(self) -> None:
sender = self.example_user("iago")
message_id = self.send_stream_message(sender, "public_stream", "test")
missed_message = {"message_id": message_id}
- with mock.patch("zerver.lib.push_notifications.logger.error") as mock_logger, mock.patch(
+ with self.assertLogs("zerver.lib.push_notifications", level="ERROR") as logger, mock.patch(
"zerver.lib.push_notifications.push_notifications_enabled", return_value=True
) as mock_push_notifications:
handle_push_notification(self.user_profile.id, missed_message)
- mock_logger.assert_called_with(
- "Could not find UserMessage with message_id %s and user_id %s",
- message_id,
- self.user_profile.id,
- exc_info=True,
+ self.assertEqual(
+ "ERROR:zerver.lib.push_notifications:"
+ f"Could not find UserMessage with message_id {message_id} and user_id {self.user_profile.id}"
+ "\nNoneType: None", # This is an effect of using `exc_info=True` in the actual logger.
+ logger.output[0],
)
mock_push_notifications.assert_called_once()
@@ -1338,49 +1334,53 @@ def test_get_apns_context(self) -> None:
def test_not_configured(self) -> None:
self.setup_apns_tokens()
- with mock.patch("zerver.lib.push_notifications.get_apns_context") as mock_get, mock.patch(
- "zerver.lib.push_notifications.logger"
- ) as mock_logging:
+ with mock.patch(
+ "zerver.lib.push_notifications.get_apns_context"
+ ) as mock_get, self.assertLogs("zerver.lib.push_notifications", level="DEBUG") as logger:
mock_get.return_value = None
self.send()
- mock_logging.debug.assert_called_once_with(
+ notification_drop_log = (
+ "DEBUG:zerver.lib.push_notifications:"
"APNs: Dropping a notification because nothing configured. "
"Set PUSH_NOTIFICATION_BOUNCER_URL (or APNS_CERT_FILE)."
)
- mock_logging.warning.assert_not_called()
+
from zerver.lib.push_notifications import initialize_push_notifications
initialize_push_notifications()
- mock_logging.warning.assert_called_once_with(
+ mobile_notifications_not_configured_log = (
+ "WARNING:zerver.lib.push_notifications:"
"Mobile push notifications are not configured.\n "
"See https://zulip.readthedocs.io/en/latest/production/mobile-push-notifications.html"
)
+ self.assertEqual(
+ [notification_drop_log, mobile_notifications_not_configured_log], logger.output
+ )
+
def test_success(self) -> None:
self.setup_apns_tokens()
- with self.mock_apns() as mock_apns, mock.patch(
- "zerver.lib.push_notifications.logger"
- ) as mock_logging:
+ with self.mock_apns() as mock_apns, self.assertLogs(
+ "zerver.lib.push_notifications", level="INFO"
+ ) as logger:
result = mock.Mock()
result.is_successful = True
mock_apns.send_notification.return_value = asyncio.Future()
mock_apns.send_notification.return_value.set_result(result)
self.send()
- mock_logging.warning.assert_not_called()
for device in self.devices():
- mock_logging.info.assert_any_call(
- "APNs: Success sending for user %d to device %s",
- self.user_profile.id,
- device.token,
+ self.assertIn(
+ f"INFO:zerver.lib.push_notifications:APNs: Success sending for user {self.user_profile.id} to device {device.token}",
+ logger.output,
)
def test_http_retry(self) -> None:
import aioapns
self.setup_apns_tokens()
- with self.mock_apns() as mock_apns, mock.patch(
- "zerver.lib.push_notifications.logger"
- ) as mock_logging:
+ with self.mock_apns() as mock_apns, self.assertLogs(
+ "zerver.lib.push_notifications", level="INFO"
+ ) as logger:
exception: asyncio.Future[object] = asyncio.Future()
exception.set_exception(aioapns.exceptions.ConnectionError())
result = mock.Mock()
@@ -1391,26 +1391,23 @@ def test_http_retry(self) -> None:
[exception], itertools.repeat(future)
)
self.send()
- mock_logging.warning.assert_called_once_with(
- "APNs: ConnectionError sending for user %d to device %s: %s",
- self.user_profile.id,
- self.devices()[0].token,
- "ConnectionError",
+ self.assertIn(
+ f"WARNING:zerver.lib.push_notifications:APNs: ConnectionError sending for user {self.user_profile.id} to device {self.devices()[0].token}: ConnectionError",
+ logger.output,
)
for device in self.devices():
- mock_logging.info.assert_any_call(
- "APNs: Success sending for user %d to device %s",
- self.user_profile.id,
- device.token,
+ self.assertIn(
+ f"INFO:zerver.lib.push_notifications:APNs: Success sending for user {self.user_profile.id} to device {device.token}",
+ logger.output,
)
def test_http_retry_closed(self) -> None:
import aioapns
self.setup_apns_tokens()
- with self.mock_apns() as mock_apns, mock.patch(
- "zerver.lib.push_notifications.logger"
- ) as mock_logging:
+ with self.mock_apns() as mock_apns, self.assertLogs(
+ "zerver.lib.push_notifications", level="INFO"
+ ) as logger:
exception: asyncio.Future[object] = asyncio.Future()
exception.set_exception(aioapns.exceptions.ConnectionClosed())
result = mock.Mock()
@@ -1421,39 +1418,40 @@ def test_http_retry_closed(self) -> None:
[exception], itertools.repeat(future)
)
self.send()
- mock_logging.warning.assert_called_once_with(
- "APNs: ConnectionClosed sending for user %d to device %s: %s",
- self.user_profile.id,
- self.devices()[0].token,
- "ConnectionClosed",
+ self.assertIn(
+ f"WARNING:zerver.lib.push_notifications:APNs: ConnectionClosed sending for user {self.user_profile.id} to device {self.devices()[0].token}: ConnectionClosed",
+ logger.output,
)
for device in self.devices():
- mock_logging.info.assert_any_call(
- "APNs: Success sending for user %d to device %s",
- self.user_profile.id,
- device.token,
+ self.assertIn(
+ f"INFO:zerver.lib.push_notifications:APNs: Success sending for user {self.user_profile.id} to device {device.token}",
+ logger.output,
)
def test_http_retry_eventually_fails(self) -> None:
import aioapns
self.setup_apns_tokens()
- with self.mock_apns() as mock_apns, mock.patch(
- "zerver.lib.push_notifications.logger"
- ) as mock_logging:
+ with self.mock_apns() as mock_apns, self.assertLogs(
+ "zerver.lib.push_notifications", level="INFO"
+ ) as logger:
exception: asyncio.Future[object] = asyncio.Future()
exception.set_exception(aioapns.exceptions.ConnectionError())
mock_apns.send_notification.side_effect = iter([exception] * 5)
self.send(devices=self.devices()[0:1])
- self.assertEqual(mock_logging.warning.call_count, 5)
- mock_logging.warning.assert_called_with(
- "APNs: Failed to send for user %d to device %s: %s",
- self.user_profile.id,
- self.devices()[0].token,
- "HTTP error, retries exhausted",
+ self.assert_length(
+ [log_record for log_record in logger.records if log_record.levelname == "WARNING"],
+ 5,
+ )
+ self.assertIn(
+ f"WARNING:zerver.lib.push_notifications:APNs: Failed to send for user {self.user_profile.id} to device {self.devices()[0].token}: HTTP error, retries exhausted",
+ logger.output,
+ )
+ self.assert_length(
+ [log_record for log_record in logger.records if log_record.levelname == "INFO"],
+ 1,
)
- self.assertEqual(mock_logging.info.call_count, 1)
def test_modernize_apns_payload(self) -> None:
payload = {
@@ -2159,57 +2157,55 @@ def get_gcm_data(self, **kwargs: Any) -> Dict[str, Any]:
data.update(kwargs)
return data
- @mock.patch("zerver.lib.push_notifications.logger.debug")
- def test_gcm_is_none(self, mock_debug: mock.MagicMock, mock_gcm: mock.MagicMock) -> None:
+ def test_gcm_is_none(self, mock_gcm: mock.MagicMock) -> None:
mock_gcm.__bool__.return_value = False
- send_android_push_notification_to_user(self.user_profile, {}, {})
- mock_debug.assert_called_with(
- "Skipping sending a GCM push notification since PUSH_NOTIFICATION_BOUNCER_URL "
- "and ANDROID_GCM_API_KEY are both unset"
- )
+ with self.assertLogs("zerver.lib.push_notifications", level="DEBUG") as logger:
+ send_android_push_notification_to_user(self.user_profile, {}, {})
+ self.assertEqual(
+ "DEBUG:zerver.lib.push_notifications:"
+ "Skipping sending a GCM push notification since PUSH_NOTIFICATION_BOUNCER_URL "
+ "and ANDROID_GCM_API_KEY are both unset",
+ logger.output[0],
+ )
- @mock.patch("zerver.lib.push_notifications.logger.warning")
- def test_json_request_raises_ioerror(
- self, mock_warn: mock.MagicMock, mock_gcm: mock.MagicMock
- ) -> None:
+ def test_json_request_raises_ioerror(self, mock_gcm: mock.MagicMock) -> None:
mock_gcm.json_request.side_effect = IOError("error")
- send_android_push_notification_to_user(self.user_profile, {}, {})
- mock_warn.assert_called_with("Error while pushing to GCM", exc_info=True)
+ with self.assertLogs("zerver.lib.push_notifications", level="WARNING") as logger:
+ send_android_push_notification_to_user(self.user_profile, {}, {})
+ self.assertIn(
+ "WARNING:zerver.lib.push_notifications:Error while pushing to GCM\nTraceback ",
+ logger.output[0],
+ )
@mock.patch("zerver.lib.push_notifications.logger.warning")
- @mock.patch("zerver.lib.push_notifications.logger.info")
- def test_success(
- self, mock_info: mock.MagicMock, mock_warning: mock.MagicMock, mock_gcm: mock.MagicMock
- ) -> None:
+ def test_success(self, mock_warning: mock.MagicMock, mock_gcm: mock.MagicMock) -> None:
res = {}
res["success"] = {token: ind for ind, token in enumerate(self.gcm_tokens)}
mock_gcm.json_request.return_value = res
data = self.get_gcm_data()
- send_android_push_notification_to_user(self.user_profile, data, {})
- self.assertEqual(mock_info.call_count, 2)
- c1 = call("GCM: Sent %s as %s", "1111", 0)
- c2 = call("GCM: Sent %s as %s", "2222", 1)
- mock_info.assert_has_calls([c1, c2], any_order=True)
+ with self.assertLogs("zerver.lib.push_notifications", level="INFO") as logger:
+ send_android_push_notification_to_user(self.user_profile, data, {})
+ self.assert_length(logger.output, 2)
+ log_msg1 = f"INFO:zerver.lib.push_notifications:GCM: Sent {1111} as {0}"
+ log_msg2 = f"INFO:zerver.lib.push_notifications:GCM: Sent {2222} as {1}"
+ self.assertEqual([log_msg1, log_msg2], logger.output)
mock_warning.assert_not_called()
- @mock.patch("zerver.lib.push_notifications.logger.warning")
- def test_canonical_equal(self, mock_warning: mock.MagicMock, mock_gcm: mock.MagicMock) -> None:
+ def test_canonical_equal(self, mock_gcm: mock.MagicMock) -> None:
res = {}
res["canonical"] = {1: 1}
mock_gcm.json_request.return_value = res
data = self.get_gcm_data()
- send_android_push_notification_to_user(self.user_profile, data, {})
- mock_warning.assert_called_once_with(
- "GCM: Got canonical ref but it already matches our ID %s!",
- 1,
+ with self.assertLogs("zerver.lib.push_notifications", level="WARNING") as logger:
+ send_android_push_notification_to_user(self.user_profile, data, {})
+ self.assertEqual(
+ f"WARNING:zerver.lib.push_notifications:GCM: Got canonical ref but it already matches our ID {1}!",
+ logger.output[0],
)
- @mock.patch("zerver.lib.push_notifications.logger.warning")
- def test_canonical_pushdevice_not_present(
- self, mock_warning: mock.MagicMock, mock_gcm: mock.MagicMock
- ) -> None:
+ def test_canonical_pushdevice_not_present(self, mock_gcm: mock.MagicMock) -> None:
res = {}
t1 = hex_to_b64("1111")
t2 = hex_to_b64("3333")
@@ -2224,17 +2220,15 @@ def get_count(hex_token: str) -> int:
self.assertEqual(get_count("3333"), 0)
data = self.get_gcm_data()
- send_android_push_notification_to_user(self.user_profile, data, {})
- msg = "GCM: Got canonical ref %s replacing %s but new ID not registered! Updating."
- mock_warning.assert_called_once_with(msg, t2, t1)
+ with self.assertLogs("zerver.lib.push_notifications", level="WARNING") as logger:
+ send_android_push_notification_to_user(self.user_profile, data, {})
+ msg = f"WARNING:zerver.lib.push_notifications:GCM: Got canonical ref {t2} replacing {t1} but new ID not registered! Updating."
+ self.assertEqual(msg, logger.output[0])
self.assertEqual(get_count("1111"), 0)
self.assertEqual(get_count("3333"), 1)
- @mock.patch("zerver.lib.push_notifications.logger.info")
- def test_canonical_pushdevice_different(
- self, mock_info: mock.MagicMock, mock_gcm: mock.MagicMock
- ) -> None:
+ def test_canonical_pushdevice_different(self, mock_gcm: mock.MagicMock) -> None:
res = {}
old_token = hex_to_b64("1111")
new_token = hex_to_b64("2222")
@@ -2249,18 +2243,17 @@ def get_count(hex_token: str) -> int:
self.assertEqual(get_count("2222"), 1)
data = self.get_gcm_data()
- send_android_push_notification_to_user(self.user_profile, data, {})
- mock_info.assert_called_once_with(
- "GCM: Got canonical ref %s, dropping %s",
- new_token,
- old_token,
- )
+ with self.assertLogs("zerver.lib.push_notifications", level="INFO") as logger:
+ send_android_push_notification_to_user(self.user_profile, data, {})
+ self.assertEqual(
+ f"INFO:zerver.lib.push_notifications:GCM: Got canonical ref {new_token}, dropping {old_token}",
+ logger.output[0],
+ )
self.assertEqual(get_count("1111"), 0)
self.assertEqual(get_count("2222"), 1)
- @mock.patch("zerver.lib.push_notifications.logger.info")
- def test_not_registered(self, mock_info: mock.MagicMock, mock_gcm: mock.MagicMock) -> None:
+ def test_not_registered(self, mock_gcm: mock.MagicMock) -> None:
res = {}
token = hex_to_b64("1111")
res["errors"] = {"NotRegistered": [token]}
@@ -2273,21 +2266,24 @@ def get_count(hex_token: str) -> int:
self.assertEqual(get_count("1111"), 1)
data = self.get_gcm_data()
- send_android_push_notification_to_user(self.user_profile, data, {})
- mock_info.assert_called_once_with("GCM: Removing %s", token)
+ with self.assertLogs("zerver.lib.push_notifications", level="INFO") as logger:
+ send_android_push_notification_to_user(self.user_profile, data, {})
+ self.assertEqual(
+ f"INFO:zerver.lib.push_notifications:GCM: Removing {token}", logger.output[0]
+ )
self.assertEqual(get_count("1111"), 0)
- @mock.patch("zerver.lib.push_notifications.logger.warning")
- def test_failure(self, mock_warn: mock.MagicMock, mock_gcm: mock.MagicMock) -> None:
+ def test_failure(self, mock_gcm: mock.MagicMock) -> None:
res = {}
token = hex_to_b64("1111")
res["errors"] = {"Failed": [token]}
mock_gcm.json_request.return_value = res
data = self.get_gcm_data()
- send_android_push_notification_to_user(self.user_profile, data, {})
- c1 = call("GCM: Delivery to %s failed: %s", token, "Failed")
- mock_warn.assert_has_calls([c1], any_order=True)
+ with self.assertLogs("zerver.lib.push_notifications", level="WARNING") as logger:
+ send_android_push_notification_to_user(self.user_profile, data, {})
+ msg = f"WARNING:zerver.lib.push_notifications:GCM: Delivery to {token} failed: Failed"
+ self.assertEqual(msg, logger.output[0])
class TestClearOnRead(ZulipTestCase):
| Migrate all backend tests to use `assertLogs`.
We currently use `mock.patch` approach to verify log outputs. `assertLogs` is a better option than `mock.patch`.
For info on `assertLogs`, see https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertLogs
Seeing how `assertLogs` is used in `test_auth_backends.py`, added in [this commit](https://github.com/zulip/zulip/commit/d30f11888a009dea59711d24696e140d905488c6), might be helpful.
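For concreteness, here is a minimal, self-contained sketch (not Zulip code; the logger name and message are invented) of the migration this issue asks for, i.e. the same assertion written with `assertLogs` instead of `mock.patch`:

```python
import logging
import unittest

logger = logging.getLogger("demo.module")

def do_work() -> None:
    # Application code whose logging the test wants to verify.
    logger.warning("Something odd happened: %s", "nonsense")

class DemoTest(unittest.TestCase):
    def test_warning_is_logged(self) -> None:
        # assertLogs captures records emitted by the named logger (and its
        # children) at WARNING level or above, and fails the test if no
        # matching record is emitted at all.
        with self.assertLogs("demo.module", level="WARNING") as captured:
            do_work()
        self.assertEqual(
            captured.output,
            ["WARNING:demo.module:Something odd happened: nonsense"],
        )

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DemoTest)
)
```

Unlike a `mock.patch` on the logging call, this also fails when nothing is logged at all, which catches tests that silently stop exercising the logging path.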
| Hello @zulip/server-testing members, this issue was labeled with the "area: testing-infrastructure" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim
Welcome to Zulip, @agupta01! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
Hello @palashcode, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
You can reclaim this issue or claim any other issue by commenting `@zulipbot claim` on that issue.
Thanks for your contributions, and hope to see you again soon!
@zulipbot abandon
@zulipbot claim
Welcome to Zulip, @palashcode! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
@zulipbot claim
@chdinesh1089 I am using your commit https://github.com/zulip/zulip/commit/d30f11888a009dea59711d24696e140d905488c6 as a reference to replace `mock.patch` with `assertLogs`. I made some changes, shown in the code below, and tested. The test is failing with "AssertionError: no logs of level WARNING or higher triggered on logging.warning". Can you please guide me on what I am doing wrong?
```python
# with mock.patch("logging.warning") as mock_warn:
with self.assertLogs("logging.warning", level='WARNING') as m:
result = self.get_log_into_subdomain(data, force_token='nonsense')
# mock_warn.assert_called_once_with("log_into_subdomain: Malformed token given: %s", "nonsense")
self.assertEqual(m.output, [self.logger_output("log_into_subdomain: Malformed token given: nonsense", 'warning')])
self.assertEqual(result.status_code, 400)
```
`logging.warning` uses root logger, so we do not need to add `"logging.warning"` in `assertLogs` as it defaults to root logger as stated [here](https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertLogs).
So I think it'll work if we remove `"logging.warning"` and make it `with self.assertLogs(level='WARNING') as m:`.
> "AssertionError: no logs of level WARNING or higher triggered on logging.warning".
(From this, we can see that `assertLogs` is looking for a logging call with a logger named `logging.warning`, but our actual call is to the root logger)
This could help with any confusion you might probably have https://stackoverflow.com/questions/11225846/what-is-the-difference-between-logging-info-and-logging-getlogger-info
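A runnable sketch of the distinction described above (the message is invented; the point is only the logger targeting): a module-level `logging.warning(...)` call emits on the root logger, so `assertLogs` must be called with no logger name, not with a logger literally named `"logging.warning"`:

```python
import logging
import unittest

def do_work() -> None:
    # Module-level logging.warning() emits on the *root* logger.
    logging.warning("log_into_subdomain: Malformed token given: %s", "nonsense")

class RootLoggerTest(unittest.TestCase):
    def test_captures_root_logger(self) -> None:
        # No logger name passed: assertLogs defaults to the root logger,
        # which is exactly where logging.warning() sends its records.
        with self.assertLogs(level="WARNING") as m:
            do_work()
        self.assertEqual(
            m.output,
            ["WARNING:root:log_into_subdomain: Malformed token given: nonsense"],
        )

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(RootLoggerTest)
)
```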
@chdinesh1089 Thanks. Created a PR: https://github.com/zulip/zulip/pull/15708. Please have a look.
@zulipbot claim
Hello @palashcode, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
@zulipbot claim
**ERROR:** You have already claimed this issue.
Following #15708, we're about 180/250 converted:
```
tabbott@coset:~/zulip$ git grep mock[.]patch | grep log | wc
69 292 7866
tabbott@coset:~/zulip$ git grep assertLogs | wc
180 992 19059
```
@timabbott are all `mock.patch` migrated to `assertLogs`? If not I would like to work on it.
@shubham00jain are you working on it? I saw that you have assigned yourself to another issue.
@timabbott from your previous comment I think there is still work to be done on this issue. I have gone through some of the previous commits referencing this issue, and I think I could work on it.
So I wanted to ask: can I start working on it?
Yeah, go for it!
@zulipbot claim
Hello @m-e-l-u-h-a-n, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
Hello, I would like to work on this feature! Can I claim this?
@zulipbot claim
Hello @noabenefraim, it looks like someone has already claimed this issue! Since we believe multiple assignments to the same issue may cause some confusion, we encourage you to search for other unclaimed issues to work on. However, you can always reclaim this issue if no one is working on it.
We look forward to your valuable contributions!
@zulipbot claim
@zulipbot abandon
@zulipbot claim
@zulipbot claim
Hi @noabenefraim, I am still working on this issue. I had made some changes, which you can see in PR #16619, and after getting some more guidance from [this comment](https://github.com/zulip/zulip/pull/16619#issuecomment-719064732), I have made more changes to other tests; only 2-3 files remain, for which I will create a PR soon, so I don't think there would be much left to work on. If you want to work on it, I can make a pull request with the changes I have made so far so you can continue with the remaining part; otherwise that may create confusion about the state of the work. I was not able to focus on it because I am having exams these days, but they will be over by tomorrow :smiley:
@m-e-l-u-h-a-n if you're not currently working on this issue, can I continue from where you've left off?
@zulipbot claim
Hello @cmunaco, it looks like you've currently claimed 1 issue in this repository. We encourage new contributors to focus their efforts on at most 1 issue at a time, so please complete your work on your other claimed issues before trying to claim this issue again.
We look forward to your valuable contributions!
@zulipbot claim
@zulipbot claim
@chdinesh1089 I think this issue should be closed; can you look into this?
```
git grep mock[.]patch | grep log | wc
0 0 0
git grep assertLogs | wc
271 1478 26981
```
Hi @gvarun1,
You can use `git grep -n patch | grep '\(logger\|logging\)'`. It will output a lot of lines but I think out of those only:
* `zerver/tests/test_push_notifications.py`
* `zerver/webhooks/github/tests.py`
require migration, although if you need to discuss the others you can ask here or on [chat.zulip.org](https://chat.zulip.org).
You can use #16619 and #16818 for reference.
@m-e-l-u-h-a-n thanks a lot for the references, I will do my best to get my first PR
Yeah, I think @m-e-l-u-h-a-n is right. All the best with your first PR @gvarun1 !
Hello @gvarun1, you claimed this issue to work on it, but this issue and any referenced pull requests haven't been updated for 10 days. Are you still working on this issue?
If so, please update this issue by leaving a comment on this issue to let me know that you're still working on it. Otherwise, I'll automatically remove you from this issue in 4 days.
If you've decided to work on something else, simply comment `@zulipbot abandon` so that someone else can claim it and continue from where you left off.
Thank you for your valuable contributions to Zulip!
<!-- inactiveWarning -->
@zulipbot abandon
Due to some unexpected emergency, I stopped working
@zulipbot claim
@chdinesh1089 Do you need `mock.patch` to be replaced with `assertLogs` in all the test files?
@chdinesh1089 Can you help me out here? I just need a start.
@zulipbot abandon
@manavdesai27 sorry about the delay. Yes, we want to replace logging mocks(which are done with `mock.patch`) with `assertLogs`. There are only a few as @m-e-l-u-h-a-n [said above](https://github.com/zulip/zulip/issues/15331#issuecomment-862088818).
It would be super nice to finish off those last few and close this out, not least because this issue has a bunch of cruft on it (100+ comments, etc.) that makes it not super great for new folks to work on. @chdinesh1089 @m-e-l-u-h-a-n would either of you be up for finishing this?
I’ll do it. | 2021-07-26T18:01:52 |
zulip/zulip | 19,415 | zulip__zulip-19415 | [
"18677"
] | c16d0414797f380c14859fa63454befed113a4d4 | diff --git a/zerver/models.py b/zerver/models.py
--- a/zerver/models.py
+++ b/zerver/models.py
@@ -1716,6 +1716,15 @@ def has_billing_access(self) -> bool:
def is_realm_owner(self) -> bool:
return self.role == UserProfile.ROLE_REALM_OWNER
+ @is_realm_owner.setter
+ def is_realm_owner(self, value: bool) -> None:
+ if value:
+ self.role = UserProfile.ROLE_REALM_OWNER
+ elif self.role == UserProfile.ROLE_REALM_OWNER:
+ # We need to be careful to not accidentally change
+ # ROLE_GUEST to ROLE_MEMBER here.
+ self.role = UserProfile.ROLE_MEMBER
+
@property
def is_guest(self) -> bool:
return self.role == UserProfile.ROLE_GUEST
@@ -1733,6 +1742,15 @@ def is_guest(self, value: bool) -> None:
def is_moderator(self) -> bool:
return self.role == UserProfile.ROLE_MODERATOR
+ @is_moderator.setter
+ def is_moderator(self, value: bool) -> None:
+ if value:
+ self.role = UserProfile.ROLE_MODERATOR
+ elif self.role == UserProfile.ROLE_MODERATOR:
+ # We need to be careful to not accidentally change
+ # ROLE_GUEST to ROLE_MEMBER here.
+ self.role = UserProfile.ROLE_MEMBER
+
@property
def is_incoming_webhook(self) -> bool:
return self.bot_type == UserProfile.INCOMING_WEBHOOK_BOT
| diff --git a/zerver/tests/test_users.py b/zerver/tests/test_users.py
--- a/zerver/tests/test_users.py
+++ b/zerver/tests/test_users.py
@@ -95,6 +95,14 @@ def test_role_setters(self) -> None:
self.assertEqual(user_profile.is_guest, False)
self.assertEqual(user_profile.role, UserProfile.ROLE_REALM_ADMINISTRATOR)
+ user_profile.is_realm_owner = False
+ self.assertEqual(user_profile.is_realm_owner, False)
+ self.assertEqual(user_profile.role, UserProfile.ROLE_REALM_ADMINISTRATOR)
+
+ user_profile.is_moderator = False
+ self.assertEqual(user_profile.is_moderator, False)
+ self.assertEqual(user_profile.role, UserProfile.ROLE_REALM_ADMINISTRATOR)
+
user_profile.is_realm_admin = False
self.assertEqual(user_profile.is_realm_admin, False)
self.assertEqual(user_profile.role, UserProfile.ROLE_MEMBER)
@@ -111,6 +119,22 @@ def test_role_setters(self) -> None:
self.assertEqual(user_profile.is_guest, False)
self.assertEqual(user_profile.role, UserProfile.ROLE_MEMBER)
+ user_profile.is_realm_owner = True
+ self.assertEqual(user_profile.is_realm_owner, True)
+ self.assertEqual(user_profile.role, UserProfile.ROLE_REALM_OWNER)
+
+ user_profile.is_realm_owner = False
+ self.assertEqual(user_profile.is_realm_owner, False)
+ self.assertEqual(user_profile.role, UserProfile.ROLE_MEMBER)
+
+ user_profile.is_moderator = True
+ self.assertEqual(user_profile.is_moderator, True)
+ self.assertEqual(user_profile.role, UserProfile.ROLE_MODERATOR)
+
+ user_profile.is_moderator = False
+ self.assertEqual(user_profile.is_moderator, False)
+ self.assertEqual(user_profile.role, UserProfile.ROLE_MEMBER)
+
def test_get_admin_users(self) -> None:
user_profile = self.example_user("hamlet")
do_change_user_role(user_profile, UserProfile.ROLE_MEMBER, acting_user=None)
| Problem with AUTH_LDAP_USER_FLAGS_BY_GROUP on Zulip >= 4.0
Updating Zulip from 3.4 to 4.2 broke my LDAP sync. Upon closer investigation I found the cause to be the extended set of roles:
because I was using
```
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
"is_realm_admin": "cn=someGroup,ou=filterGroups,dc=mydomain,dc=com",
}
```
I got the message
```
Exception: Ldap sync would have deactivated all owners of realm . This is most likely due to a misconfiguration of LDAP settings. Rolling back...
Use the --force option if the mass deactivation is intended.
```
Commenting out the above config block was not enough, I had to manually promote at least one user to OWNER for the sync to work again because there wasn't any owner.
Then I tried adjusting my above config snippet from "is_realm_admin" to "is_realm_owner"; that did not work and completely broke the LDAP sync right at the start (`AttributeError: can't set attribute`).
How can I get back the feature of having an LDAP group whose members are automatically assigned the highest possible role?
The [relevant docs](https://zulip.readthedocs.io/en/stable/production/authentication-methods.html#other-fields) are rather spartan.
| Hello @zulip/server-authentication members, this issue was labeled with the "area: authentication" label, so you may want to check it out!
<!-- areaLabelAddition -->
Thanks for the report @debalance! I can confirm the regression.
I think this needs a code change to add an `is_realm_owner` setter similar to this function:
```
@is_realm_admin.setter
def is_realm_admin(self, value: bool) -> None:
if value:
self.role = UserProfile.ROLE_REALM_ADMINISTRATOR
elif self.role == UserProfile.ROLE_REALM_ADMINISTRATOR:
# We need to be careful to not accidentally change
# ROLE_GUEST to ROLE_MEMBER here.
self.role = UserProfile.ROLE_MEMBER
```
i.e. this:
```
diff --git a/zerver/models.py b/zerver/models.py
index 46e73f6590..103e861461 100644
--- a/zerver/models.py
+++ b/zerver/models.py
@@ -1498,6 +1498,15 @@ class UserProfile(AbstractBaseUser, PermissionsMixin):
# ROLE_GUEST to ROLE_MEMBER here.
self.role = UserProfile.ROLE_MEMBER
+ @is_realm_owner.setter
+ def is_realm_owner(self, value: bool) -> None:
+ if value:
+ self.role = UserProfile.ROLE_REALM_OWNER
+ elif self.role == UserProfile.ROLE_REALM_OWNER:
+ # We need to be careful to not accidentally change
+        # ROLE_GUEST to ROLE_MEMBER here.
+ self.role = UserProfile.ROLE_MEMBER
+
@property
def has_billing_access(self) -> bool:
return self.is_realm_owner or self.is_billing_admin
```
@debalance you're welcome to apply that patch to `zerver/models.py` in your installation and use `is_realm_owner`, though it's at present totally untested.
@mateuszmandera FYI; can you do a proper PR for this with tests and documentation, which should presumably cover `is_guest` as well? And I guess we might want to do the similar work to make `is_realm_moderator` supported as well.
Thx @timabbott , I tested your patch - it seems to be incomplete (or I missed a required step after changing the file):
```
zulip@zulip:~/deployments/current/zerver$ /home/zulip/deployments/current/manage.py sync_ldap_user_data
Traceback (most recent call last):
File "/home/zulip/deployments/current/manage.py", line 52, in <module>
execute_from_command_line(sys.argv)
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
django.setup()
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/apps/registry.py", line 114, in populate
app_config.import_models()
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/apps/config.py", line 301, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/zulip/deployments/2021-06-02-22-13-46/confirmation/models.py", line 20, in <module>
from zerver.models import EmailChangeStatus, MultiuseInvite, PreregistrationUser, Realm, UserProfile
File "/home/zulip/deployments/2021-06-02-22-13-46/zerver/models.py", line 1092, in <module>
class UserProfile(AbstractBaseUser, PermissionsMixin):
File "/home/zulip/deployments/2021-06-02-22-13-46/zerver/models.py", line 1498, in UserProfile
@is_realm_owner.setter
NameError: name 'is_realm_owner' is not defined
```
```
zulip@zulip:~/deployments/current$ /home/zulip/deployments/current/scripts/restart-server
Traceback (most recent call last):
File "./manage.py", line 52, in <module>
execute_from_command_line(sys.argv)
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
django.setup()
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/apps/registry.py", line 114, in populate
app_config.import_models()
File "/home/zulip/deployments/2021-06-02-22-13-46/zulip-py3-venv/lib/python3.7/site-packages/django/apps/config.py", line 301, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/zulip/deployments/2021-06-02-22-13-46/confirmation/models.py", line 20, in <module>
from zerver.models import EmailChangeStatus, MultiuseInvite, PreregistrationUser, Realm, UserProfile
File "/home/zulip/deployments/2021-06-02-22-13-46/zerver/models.py", line 1092, in <module>
class UserProfile(AbstractBaseUser, PermissionsMixin):
File "/home/zulip/deployments/2021-06-02-22-13-46/zerver/models.py", line 1498, in UserProfile
@is_realm_owner.setter
NameError: name 'is_realm_owner' is not defined
Traceback (most recent call last):
File "/home/zulip/deployments/current/scripts/restart-server", line 51, in <module>
["./manage.py", "send_stats", "incr", "events.server_restart", str(int(time.time()))]
File "/usr/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['./manage.py', 'send_stats', 'incr', 'events.server_restart', '1622715924']' returned non-zero exit status 1.
```
I am not sure, but I think the lines should be added after the code shown below to fix the `NameError`:
@property
def is_realm_owner(self) -> bool:
return self.role == UserProfile.ROLE_REALM_OWNER
```
@debalance can you try this once?
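The `NameError` in the tracebacks above comes from decorator evaluation order: `@is_realm_owner.setter` looks up an existing `is_realm_owner` name in the class body, so the `@property` getter must be defined first. A minimal standalone illustration (toy class, not Zulip's model):

```python
class Demo:
    def __init__(self) -> None:
        self._role = "member"

    # The getter must come first: it binds the name `is_owner` to a
    # property object in the class namespace...
    @property
    def is_owner(self) -> bool:
        return self._role == "owner"

    # ...which `@is_owner.setter` then looks up. Placing the setter
    # before the @property block raises NameError at class-definition
    # time, exactly as in the tracebacks above.
    @is_owner.setter
    def is_owner(self, value: bool) -> None:
        self._role = "owner" if value else "member"

d = Demo()
d.is_owner = True
```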
@sahil839 You're right, now it's working! | 2021-07-28T16:26:05 |
zulip/zulip | 19,454 | zulip__zulip-19454 | [
"19205"
] | 9968fb5081dbf21d6a904f9669f5c8a4a228b719 | diff --git a/zerver/lib/markdown/__init__.py b/zerver/lib/markdown/__init__.py
--- a/zerver/lib/markdown/__init__.py
+++ b/zerver/lib/markdown/__init__.py
@@ -1369,7 +1369,13 @@ def handleMatch(self, match: Match[str]) -> Optional[Element]:
time_input_string = match.group("time")
timestamp = None
try:
- timestamp = dateutil.parser.parse(time_input_string, tzinfos=common_timezones)
+ # Check if the time string is of the new more-readable format.
+ if "|UTC" in time_input_string:
+ # Remove and replace the non-standard characters with the ISO ones.
+ standard_format_time = time_input_string.replace("|", "T", 1).replace("|UTC", "")
+ timestamp = dateutil.parser.parse(standard_format_time, tzinfos=common_timezones)
+ else:
+ timestamp = dateutil.parser.parse(time_input_string, tzinfos=common_timezones)
except ValueError:
try:
timestamp = datetime.datetime.fromtimestamp(float(time_input_string))
| diff --git a/zerver/tests/fixtures/markdown_test_cases.json b/zerver/tests/fixtures/markdown_test_cases.json
--- a/zerver/tests/fixtures/markdown_test_cases.json
+++ b/zerver/tests/fixtures/markdown_test_cases.json
@@ -790,6 +790,19 @@
"expected_output": "<p>Let's meet at <time datetime=\"2017-06-05T22:30:00Z\">1496701800</time>.</p>",
"text_content": "Let's meet at 1496701800."
},
+ {
+ "name": "timestamp_new_format",
+ "input": "<time:2021-08-02|14:03:00|UTC+05:30>",
+ "expected_output": "<p><time datetime=\"2021-08-02T08:33:00Z\">2021-08-02|14:03:00|UTC+05:30</time></p>",
+ "text_content": "2021-08-02|14:03:00|UTC+05:30"
+ },
+ {
+ "name": "timestamp_without_utc",
+ "input": "<time:2021-08-02|14:03:00|+05:30>",
+ "expected_output": "<p><span class=\"timestamp-error\">Invalid time format: 2021-08-02|14:03:00|+05:30</span></p>",
+ "marked_expected_output": "<p><span>2021-08-02|14:03:00|+05:30</span></p>",
+ "text_content": "Invalid time format: 2021-08-02|14:03:00|+05:30"
+ },
{
"name": "tex_inline",
"input": "$$1 \\oplus 0 = 1$$",
| Improve formatting inserted by <time> widget
At present, the `<time>` widget inserts the following formatting into the compose box, which is difficult to read at a glance:
`<time:2021-07-14T00:14:00-07:00>`
Instead, we should have the `<time>` widget insert the following formatting:
`<time:2021-07-14|00:14:00|UTC-07:00>`
Our parser should accept both the old and the new format.
See also [CZO discussion thread](https://chat.zulip.org/#narrow/stream/6-frontend/topic/.3Ctime.3E.20formatting.20in.20compose.20box).
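Both formats can be normalized before parsing. A stdlib-only sketch of the idea (Zulip's actual parser uses `dateutil` with a timezone-name table; this version handles only numeric UTC offsets):

```python
from datetime import datetime

def parse_time_syntax(time_input: str) -> datetime:
    if "|UTC" in time_input:
        # New readable format, e.g. 2021-07-14|00:14:00|UTC-07:00.
        # Swap the first "|" for "T" and drop the "|UTC" marker to
        # recover the ISO 8601 form 2021-07-14T00:14:00-07:00.
        time_input = time_input.replace("|", "T", 1).replace("|UTC", "")
    return datetime.fromisoformat(time_input)

old_style = parse_time_syntax("2021-07-14T00:14:00-07:00")
new_style = parse_time_syntax("2021-07-14|00:14:00|UTC-07:00")
```

Both calls yield the same aware `datetime`, so a parser accepting either format can share one code path after normalization.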
| @zulipbot claim | 2021-08-01T16:58:23 |
zulip/zulip | 19,476 | zulip__zulip-19476 | [
"18319"
] | 1965584eeccfd79ee6de37fd07bf712a93831e95 | diff --git a/zerver/lib/integrations.py b/zerver/lib/integrations.py
--- a/zerver/lib/integrations.py
+++ b/zerver/lib/integrations.py
@@ -423,6 +423,9 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
stream_name="opbeat",
function="zerver.webhooks.opbeat.view.api_opbeat_webhook",
),
+ WebhookIntegration(
+ "opencollective", ["communication"], display_name="Open Collective Incoming Webhook"
+ ),
WebhookIntegration("opsgenie", ["meta-integration", "monitoring"]),
WebhookIntegration("pagerduty", ["monitoring"], display_name="PagerDuty"),
WebhookIntegration("papertrail", ["monitoring"]),
@@ -761,6 +764,7 @@ def __init__(self, name: str, *args: Any, **kwargs: Any) -> None:
ScreenshotConfig("incident_closed.json", "003.png"),
],
"opbeat": [ScreenshotConfig("error_reopen.json")],
+ "opencollective": [ScreenshotConfig("one_time_donation.json")],
"opsgenie": [ScreenshotConfig("addrecipient.json", image_name="000.png")],
"pagerduty": [ScreenshotConfig("trigger_v2.json")],
"papertrail": [ScreenshotConfig("short_post.json", payload_as_query_param=True)],
diff --git a/zerver/webhooks/opencollective/__init__.py b/zerver/webhooks/opencollective/__init__.py
new file mode 100644
diff --git a/zerver/webhooks/opencollective/view.py b/zerver/webhooks/opencollective/view.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/opencollective/view.py
@@ -0,0 +1,47 @@
+from typing import Any, Dict
+
+from django.http import HttpRequest, HttpResponse
+
+from zerver.decorator import webhook_view
+from zerver.lib.request import REQ, has_request_variables
+from zerver.lib.response import json_success
+from zerver.lib.webhooks.common import check_send_webhook_message
+from zerver.models import UserProfile
+
+MEMBER_NAME_TEMPLATE = "{name}"
+AMOUNT_TEMPLATE = "{amount}"
+
+
+@webhook_view("OpenCollective")
+@has_request_variables
+def api_opencollective_webhook(
+ request: HttpRequest,
+ user_profile: UserProfile,
+ payload: Dict[str, Any] = REQ(argument_type="body"),
+) -> HttpResponse:
+
+ name = get_name(payload)
+ amount = get_amount(payload)
+
+ # construct the body of the message
+ body = ""
+
+ if name == "Incognito": # Incognito donation
+ body = f"An **Incognito** member donated **{amount}**! :tada:"
+ else: # non - Incognito donation
+ body = f"@_**{name}** donated **{amount}**! :tada:"
+
+ topic = "New Member"
+
+ # send the message
+ check_send_webhook_message(request, user_profile, topic, body)
+
+ return json_success()
+
+
+def get_name(payload: Dict[str, Any]) -> str:
+ return MEMBER_NAME_TEMPLATE.format(name=payload["data"]["member"]["memberCollective"]["name"])
+
+
+def get_amount(payload: Dict[str, Any]) -> str:
+ return AMOUNT_TEMPLATE.format(amount=payload["data"]["order"]["formattedAmount"])
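The branch logic in the view can be exercised on its own. A small sketch using the payload keys read by the handler above (the sample payload itself is invented, shaped like Open Collective's webhook fixtures):

```python
from typing import Any, Dict

def format_donation_message(payload: Dict[str, Any]) -> str:
    # Same fields the webhook view reads from the Open Collective payload.
    name = payload["data"]["member"]["memberCollective"]["name"]
    amount = payload["data"]["order"]["formattedAmount"]
    if name == "Incognito":
        # Anonymous donors show up under the literal name "Incognito".
        return f"An **Incognito** member donated **{amount}**! :tada:"
    return f"@_**{name}** donated **{amount}**! :tada:"

sample = {
    "data": {
        "member": {"memberCollective": {"name": "Incognito"}},
        "order": {"formattedAmount": "$1.00"},
    }
}
message = format_donation_message(sample)
```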
| diff --git a/zerver/webhooks/opencollective/tests.py b/zerver/webhooks/opencollective/tests.py
new file mode 100644
--- /dev/null
+++ b/zerver/webhooks/opencollective/tests.py
@@ -0,0 +1,31 @@
+from zerver.lib.test_classes import WebhookTestCase
+
+
+class OpenCollectiveHookTests(WebhookTestCase):
+ STREAM_NAME = "test"
+ URL_TEMPLATE = "/api/v1/external/opencollective?&api_key={api_key}&stream={stream}"
+ WEBHOOK_DIR_NAME = "opencollective"
+
+ # Note: Include a test function per each distinct message condition your integration supports
+ def test_one_time_donation(self) -> None: # test one time donation
+ expected_topic = "New Member"
+ expected_message = "@_**Λευτέρης Κυριαζάνος** donated **€1.00**! :tada:"
+
+ self.check_webhook(
+ "one_time_donation",
+ expected_topic,
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
+
+ def test_one_time_incognito_donation(self) -> None: # test one time incognito donation
+ expected_topic = "New Member"
+ expected_message = "An **Incognito** member donated **$1.00**! :tada:"
+
+        # use fixture named one_time_incognito_donation
+ self.check_webhook(
+ "one_time_incognito_donation",
+ expected_topic,
+ expected_message,
+ content_type="application/x-www-form-urlencoded",
+ )
| Create incoming webhook integration for Open Collective
Create an [incoming webhook](https://zulip.com/api/incoming-webhooks-overview) integration for Open Collective. The main use case is getting notifications when new financial contributors sign up.
Open Collective integrations documentation: https://docs.opencollective.com/help/collectives/integrations
Now that [Zulip has an Open Collective page](https://opencollective.com/zulip), it would be great to be able to monitor new sign-ups!
| Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
<!-- areaLabelAddition -->
Hello @alya, you claimed this issue to work on it, but this issue and any referenced pull requests haven't been updated for 10 days. Are you still working on this issue?
If so, please update this issue by leaving a comment on this issue to let me know that you're still working on it. Otherwise, I'll automatically remove you from this issue in 4 days.
If you've decided to work on something else, simply comment `@zulipbot abandon` so that someone else can claim it and continue from where you left off.
Thank you for your valuable contributions to Zulip!
<!-- inactiveWarning -->
I made a simple JSON integration that pulls sample data. @adambirds handing it over to you. :)
Mind if I take a stab at this? I am fine if I am not the assignee and my work becomes redundant, I just see it as a good task for me to make my first contribution.
@zulipbot claim
Welcome to Zulip, @snailgirl2612! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
@alya
I'm trying to create a fixture for this.
How can I access the JSON integration that you've made?
@zulipbot claim
Welcome to Zulip, @thanasisath31! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
* Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
* [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
I'm not planning on working on this further, so happy to have someone else pick it up! CC @adambirds who may have been looking into it as well.
Here are sample payloads from the json integration.
A one-time $1 donation I made to get sample data triggered both of these:
```
{
"createdAt": "2021-04-30T19:04:52.962Z",
"id": 1121190,
"type": "collective.transaction.created",
"CollectiveId": 30312,
"data": {
"transaction": {
"amount": 100,
"currency": "USD",
"formattedAmount": "$1.00",
"formattedAmountWithInterval": "$1.00"
},
"fromCollective": {
"id": 251827,
"type": "USER",
"slug": "alya-abbott",
"name": "Alya Abbott",
"twitterHandle": null,
"githubHandle": null,
"image": "https://opencollective-production.s3.us-west-1.amazonaws.com/ba943750-9722-11eb-8df5-4deb989f8775.jpeg"
}
}
}
{
"createdAt": "2021-04-30T19:04:52.991Z",
"id": 1121191,
"type": "collective.member.created",
"CollectiveId": 30312,
"data": {
"member": {
"role": "BACKER",
"description": null,
"since": "2021-04-30T19:04:52.976Z",
"memberCollective": {
"id": 251827,
"type": "USER",
"slug": "alya-abbott",
"name": "Alya Abbott",
"company": null,
"website": null,
"twitterHandle": null,
"githubHandle": null,
"description": null,
"previewImage": "https://res.cloudinary.com/opencollective/image/fetch/c_thumb,g_face,h_48,r_max,w_48,bo_3px_solid_white/c_thumb,h_48,r_max,w_48,bo_2px_solid_rgb:66C71A/e_trim/f_jpg/https%3A%2F%2Fopencollective-production.s3.us-west-1.amazonaws.com%2Fba943750-9722-11eb-8df5-4deb989f8775.jpeg"
}
},
"order": {
"id": 168627,
"totalAmount": 100,
"currency": "USD",
"description": "Financial contribution to Zulip",
"interval": null,
"createdAt": "2021-04-30T19:04:47.348Z",
"quantity": 1,
"formattedAmount": "$1.00",
"formattedAmountWithInterval": "$1.00"
}
}
}
```
An incognito monthly donation I created looks like this (also 2 events):
```
{
"createdAt": "2021-04-30T19:20:04.346Z",
"id": 1121199,
"type": "collective.member.created",
"CollectiveId": 30312,
"data": {
"member": {
"role": "BACKER",
"description": null,
"since": "2021-04-30T19:20:04.314Z",
"memberCollective": {
"type": "USER",
"name": "Incognito",
"previewImage": null
}
},
"order": {
"id": 168629,
"totalAmount": 100,
"currency": "USD",
"description": "Monthly financial contribution to Zulip",
"interval": "month",
"createdAt": "2021-04-30T19:19:58.647Z",
"quantity": 1,
"formattedAmount": "$1.00",
"formattedAmountWithInterval": "$1.00 / month"
}
}
}
{
"createdAt": "2021-04-30T19:20:04.294Z",
"id": 1121198,
"type": "collective.transaction.created",
"CollectiveId": 30312,
"data": {
"transaction": {
"amount": 100,
"currency": "USD",
"formattedAmount": "$1.00",
"formattedAmountWithInterval": "$1.00"
},
"fromCollective": {
"id": 262738,
"type": "USER",
"slug": "incognito-b4a9278e",
"name": "Incognito",
"twitterHandle": null,
"githubHandle": null
}
}
}
```
@zulipbot unclaim
@zulipbot claim
@zulipbot claim
@zulipbot unclaim
@zulipbot claim
@zulipbot claim
@timabbott @alya
Hello, I have made an integration using only one fixture, which was generated by another Open Collective account I made in order to get sample fixtures.
When I made a one-dollar one-time donation with the Open Collective webhook trigger set to new members, I got this response (basically the same as event 2 that Alya got by making the same type of donation):
```
{
"createdAt": "2021-06-21T17:25:24.344Z",
"id": 1242511,
"type": "collective.member.created",
"CollectiveId": 281290,
"data": {
"member": {
"role": "BACKER",
"description": null,
"since": "2021-06-21T17:25:24.332Z",
"memberCollective": {
"id": 280138,
"type": "USER",
"slug": "leyteris-kyriazanos",
"name": "Λευτέρης Κυριαζάνος",
"company": null,
"website": null,
"twitterHandle": null,
"githubHandle": null,
"description": null,
"previewImage": null
}
},
"order": {
"id": 183773,
"totalAmount": 100,
"currency": "EUR",
"description": "Financial contribution to Test-Webhooks",
"interval": null,
"createdAt": "2021-06-21T17:25:17.968Z",
"quantity": 1,
"formattedAmount": "€1.00",
"formattedAmountWithInterval": "€1.00"
}
}
}
```
I have made a simple integration that looks like this.

Problem is that I don't have all the possible fixtures that Open Collective can generate when a new member registers. For example, I don't have the fixture created when someone becomes a backer or sponsor, and I can't seem to find templates anywhere. Any advice on this?
I think you should submit an integration that ignores all events other than the ones we have fixtures for above. At least 90% of the value is in supporting just that event type. It's unclear to me whether it's worth supporting other event types at all, but we can do that audit once we have a basic integration merged.
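For concreteness, that fixture-filtering approach can be sketched as a plain formatting function. This is only an illustration — the function name is made up, it skips Zulip's webhook view plumbing entirely, and the output strings are copied from the expected messages in the tests above:

```python
from typing import Any, Dict, Optional


def format_new_member_message(payload: Dict[str, Any]) -> Optional[str]:
    """Build notification text for collective.member.created events.

    Returns None for every other event type, so the caller can silently
    ignore payloads the integration has no fixture for.
    """
    if payload.get("type") != "collective.member.created":
        return None
    member = payload["data"]["member"]["memberCollective"]
    amount = "**{}**".format(payload["data"]["order"]["formattedAmount"])
    if member.get("name") == "Incognito":
        # Incognito donors have no public profile to @-mention.
        return f"An **Incognito** member donated {amount}! :tada:"
    return f"@_**{member['name']}** donated {amount}! :tada:"
```

Wiring this into an actual webhook view would then just mean passing the payload through this function and sending a message only when it returns something.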
@timabbott Ok thank you, that's clear!
Are the below considered 4 different fixtures? When I made a donation, only one event was triggered (not 2).
Can the template vary for the same type of donation, or can both events fire at the same time? The two triggered events
confused me, to be honest.
> I'm not planning on working on this further, so happy to have someone else pick it up! CC @adambirds who may have been looking into it as well.
>
> Here are sample payloads from the json integration.
I suspect the difference is that you get both events for a new person, but if an existing person does a new donation, you only get the `transaction.created` event. I would consider just implementing this:
* `collective.transaction.created` should send a notification about the new transaction including the financial details. (aka line 2 of your screenshot, but with the name mentioned).
(I'm not sure there's anything the Zulip integration needs to do with "new member").
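As a hedged sketch (hypothetical function; field paths taken from the `collective.transaction.created` payloads earlier in this thread), that suggestion would look something like:

```python
from typing import Any, Dict, Optional


def format_transaction_message(payload: Dict[str, Any]) -> Optional[str]:
    """Notify with the donor's name and financial details on each new
    transaction; return None for any other event type."""
    if payload.get("type") != "collective.transaction.created":
        return None
    name = payload["data"]["fromCollective"]["name"]
    amount = payload["data"]["transaction"]["formattedAmount"]
    return f"**{name}** donated **{amount}**! :tada:"
```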
@timabbott Open Collective has multiple options as to when to trigger the webhook.

I suppose then I should change "New Member" to "New Transaction" and get a basic fixture for that.
| 2021-08-03T11:00:45 |
zulip/zulip | 19,502 | zulip__zulip-19502 | [
"19468"
] | ea6929457c8845dfc6d6931e0bcde87e74a29e28 | diff --git a/zerver/models.py b/zerver/models.py
--- a/zerver/models.py
+++ b/zerver/models.py
@@ -29,7 +29,7 @@
from django.db import models, transaction
from django.db.models import CASCADE, Manager, Q, Sum
from django.db.models.query import QuerySet
-from django.db.models.signals import post_delete, post_save
+from django.db.models.signals import post_delete, post_save, pre_delete
from django.utils.functional import Promise
from django.utils.timezone import now as timezone_now
from django.utils.translation import gettext as _
@@ -849,14 +849,27 @@ def presence_disabled(self) -> bool:
return self.is_zephyr_mirror_realm
-def realm_post_delete_handler(*, instance: Realm, **kwargs: object) -> None:
+post_save.connect(flush_realm, sender=Realm)
+
+
+# We register realm cache flushing in a duplicate way to be run both
+# pre_delete and post_delete on purpose:
+# 1. pre_delete is needed because flush_realm wants to flush the UserProfile caches,
+# and UserProfile objects are deleted via on_delete=CASCADE before the post_delete handler
+# is called, which results in the `flush_realm` logic not having access to the details
+# for the deleted users if called at that time.
+# 2. post_delete is run as a precaution to reduce the risk of races where items might be
+# added to the cache after the pre_delete handler but before the save.
+# Note that it does not eliminate this risk, not least because it only flushes
+# the realm cache, and not the user caches, for the reasons explained above.
+def realm_pre_and_post_delete_handler(*, instance: Realm, **kwargs: object) -> None:
# This would be better as a functools.partial, but for some reason
# Django doesn't call it even when it's registered as a post_delete handler.
flush_realm(instance=instance, from_deletion=True)
-post_save.connect(flush_realm, sender=Realm)
-post_delete.connect(realm_post_delete_handler, sender=Realm)
+pre_delete.connect(realm_pre_and_post_delete_handler, sender=Realm)
+post_delete.connect(realm_pre_and_post_delete_handler, sender=Realm)
def get_realm(string_id: str) -> Realm:
| diff --git a/zerver/tests/test_realm.py b/zerver/tests/test_realm.py
--- a/zerver/tests/test_realm.py
+++ b/zerver/tests/test_realm.py
@@ -186,6 +186,14 @@ def test_do_deactivate_realm_clears_user_realm_cache(self) -> None:
user = get_user_profile_by_id(hamlet_id)
self.assertTrue(user.realm.deactivated)
+ def test_do_change_realm_delete_clears_user_realm_cache(self) -> None:
+ hamlet_id = self.example_user("hamlet").id
+ get_user_profile_by_id(hamlet_id)
+ realm = get_realm("zulip")
+ realm.delete()
+ with self.assertRaises(UserProfile.DoesNotExist):
+ get_user_profile_by_id(hamlet_id)
+
def test_do_change_realm_subdomain_clears_user_realm_cache(self) -> None:
"""The main complicated thing about changing realm subdomains is
updating the cache, and we start by populating the cache for
| `delete_realm` deletes users before flushing, causing `delete_user_profile_caches` to do nothing
`manage.py delete_realm` calls `realm.delete()`, which deletes all of the UserProfile objects. As a post_delete handler, we call `flush_realm`:
https://github.com/zulip/zulip/blob/6a3e98d14bab04fc06ce6b9c9ccfbd69b6030a83/zerver/models.py#L852-L859
But by then, there are no UserProfile objects, which means that `delete_user_profile_caches` doesn't find get users to delete the caches of:
https://github.com/zulip/zulip/blob/fae92f2e3f6e59aac4d5a2edb4657db00194bc11/zerver/lib/cache.py#L667-L668
We should make `manage.py delete_realm` first loop through the users in the realm and delete them using `do_delete_user` or something.
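The ordering bug is easy to reproduce with a toy simulation (pure Python with made-up names — none of this is the real Django/Zulip code):

```python
class ToyRealmStore:
    """Mimics ON DELETE CASCADE: deleting a realm deletes its users first."""

    def __init__(self) -> None:
        self.users_by_realm = {"zulip": ["hamlet", "iago"]}
        self.user_cache = {"hamlet": "profile", "iago": "profile"}

    def flush_realm(self, realm: str) -> None:
        # Like delete_user_profile_caches: it can only flush cache entries
        # for user rows that still exist.
        for user in self.users_by_realm.get(realm, []):
            self.user_cache.pop(user, None)

    def delete_realm(self, realm: str, flush: str = "post") -> None:
        if flush == "pre":
            self.flush_realm(realm)  # pre_delete: users still present
        self.users_by_realm.pop(realm)  # CASCADE wipes the user rows
        if flush == "post":
            self.flush_realm(realm)  # post_delete: finds no users, flushes nothing
```

With `flush="post"` both cache entries survive as stale garbage; with `flush="pre"` the cache ends up empty, since the user rows still exist at flush time — which is why a `pre_delete` handler (or deleting users explicitly first) is needed.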
| Hello @zulip/server-misc members, this issue was labeled with the "area: export/import" label, so you may want to check it out!
<!-- areaLabelAddition --> | 2021-08-04T15:02:58 |
zulip/zulip | 19,513 | zulip__zulip-19513 | [
"19287"
] | 7f0381d4c7f17706e7840ca54c171db940f0d885 | diff --git a/zerver/views/registration.py b/zerver/views/registration.py
--- a/zerver/views/registration.py
+++ b/zerver/views/registration.py
@@ -711,6 +711,17 @@ def find_account(request: HttpRequest) -> HttpResponse:
form = FindMyTeamForm(request.POST)
if form.is_valid():
emails = form.cleaned_data["emails"]
+ for i in range(len(emails)):
+ try:
+ rate_limit_request_by_ip(request, domain="find_account_by_ip")
+ except RateLimited as e:
+ assert e.secs_to_freedom is not None
+ return render(
+ request,
+ "zerver/rate_limit_exceeded.html",
+ context={"retry_after": int(e.secs_to_freedom)},
+ status=429,
+ )
# Django doesn't support __iexact__in lookup with EmailField, so we have
# to use Qs to get around that without needing to do multiple queries.
diff --git a/zproject/computed_settings.py b/zproject/computed_settings.py
--- a/zproject/computed_settings.py
+++ b/zproject/computed_settings.py
@@ -387,6 +387,9 @@ def get_dirs(self) -> List[PosixPath]:
"create_realm_by_ip": [
(1800, 5),
],
+ "find_account_by_ip": [
+ (3600, 10),
+ ],
"password_reset_form_by_email": [
(3600, 2), # 2 reset emails per hour
(86400, 5), # 5 per day
| diff --git a/zerver/tests/test_external.py b/zerver/tests/test_external.py
--- a/zerver/tests/test_external.py
+++ b/zerver/tests/test_external.py
@@ -1,5 +1,6 @@
import time
-from typing import Callable, Optional
+from contextlib import contextmanager
+from typing import Callable, Iterator, Optional
from unittest import mock, skipUnless
import DNS
@@ -67,6 +68,17 @@ def test_notmailinglist(self) -> None:
email_is_not_mit_mailing_list("[email protected]")
+@contextmanager
+def rate_limit_rule(range_seconds: int, num_requests: int, domain: str) -> Iterator[None]:
+ add_ratelimit_rule(range_seconds, num_requests, domain=domain)
+ try:
+ yield
+ finally:
+ # We need this in a finally block to ensure the test cleans up after itself
+ # even in case of failure, to avoid polluting the rules state.
+ remove_ratelimit_rule(range_seconds, num_requests, domain=domain)
+
+
class RateLimitTests(ZulipTestCase):
def setUp(self) -> None:
super().setUp()
@@ -155,6 +167,8 @@ def default_assert_func(result: HttpResponse) -> None:
for i in range(6):
with mock.patch("time.time", return_value=(start_time + i * 0.1)):
result = request_func()
+ if i < 5:
+ self.assertNotEqual(result.status_code, 429)
assert_func(result)
@@ -172,35 +186,51 @@ def test_hit_ratelimits_as_user(self) -> None:
self.do_test_hit_ratelimits(lambda: self.send_api_message(user, "some stuff"))
+ @rate_limit_rule(1, 5, domain="api_by_ip")
def test_hit_ratelimits_as_ip(self) -> None:
- add_ratelimit_rule(1, 5, domain="api_by_ip")
- try:
- RateLimitedIPAddr("127.0.0.1").clear_history()
- self.do_test_hit_ratelimits(self.send_unauthed_api_request)
- finally:
- # We need this in a finally block to ensure the test cleans up after itself
- # even in case of failure, to avoid polluting the rules state.
- remove_ratelimit_rule(1, 5, domain="api_by_ip")
+ RateLimitedIPAddr("127.0.0.1").clear_history()
+ self.do_test_hit_ratelimits(self.send_unauthed_api_request)
+ @rate_limit_rule(1, 5, domain="create_realm_by_ip")
def test_create_realm_rate_limiting(self) -> None:
def assert_func(result: HttpResponse) -> None:
self.assertEqual(result.status_code, 429)
self.assert_in_response("Rate limit exceeded.", result)
with self.settings(OPEN_REALM_CREATION=True):
- add_ratelimit_rule(1, 5, domain="create_realm_by_ip")
- try:
- RateLimitedIPAddr("127.0.0.1").clear_history()
- self.do_test_hit_ratelimits(
- lambda: self.client_post("/new/", {"email": "[email protected]"}),
- assert_func=assert_func,
- )
- finally:
- remove_ratelimit_rule(1, 5, domain="create_realm_by_ip")
+ RateLimitedIPAddr("127.0.0.1", domain="create_realm_by_ip").clear_history()
+ self.do_test_hit_ratelimits(
+ lambda: self.client_post("/new/", {"email": "[email protected]"}),
+ assert_func=assert_func,
+ )
+
+ def test_find_account_rate_limiting(self) -> None:
+ def assert_func(result: HttpResponse) -> None:
+ self.assertEqual(result.status_code, 429)
+ self.assert_in_response("Rate limit exceeded.", result)
+
+ with rate_limit_rule(1, 5, domain="find_account_by_ip"):
+ RateLimitedIPAddr("127.0.0.1", domain="find_account_by_ip").clear_history()
+ self.do_test_hit_ratelimits(
+ lambda: self.client_post("/accounts/find/", {"emails": "[email protected]"}),
+ assert_func=assert_func,
+ )
+
+ # Now test whether submitting multiple emails is handled correctly.
+ # The limit is set to 10 per second, so 5 requests with 2 emails
+ # submitted in each should be allowed.
+ with rate_limit_rule(1, 10, domain="find_account_by_ip"):
+ RateLimitedIPAddr("127.0.0.1", domain="find_account_by_ip").clear_history()
+ self.do_test_hit_ratelimits(
+ lambda: self.client_post(
+ "/accounts/find/", {"emails": "[email protected],[email protected]"}
+ ),
+ assert_func=assert_func,
+ )
@skipUnless(settings.ZILENCER_ENABLED, "requires zilencer")
+ @rate_limit_rule(1, 5, domain="api_by_remote_server")
def test_hit_ratelimits_as_remote_server(self) -> None:
- add_ratelimit_rule(1, 5, domain="api_by_remote_server")
server_uuid = "1234-abcd"
server = RemoteZulipServer(
uuid=server_uuid,
@@ -228,7 +258,6 @@ def test_hit_ratelimits_as_remote_server(self) -> None:
)
finally:
self.DEFAULT_SUBDOMAIN = original_default_subdomain
- remove_ratelimit_rule(1, 5, domain="api_by_remote_server")
def test_hit_ratelimiterlockingexception(self) -> None:
user = self.example_user("cordelia")
diff --git a/zproject/test_extra_settings.py b/zproject/test_extra_settings.py
--- a/zproject/test_extra_settings.py
+++ b/zproject/test_extra_settings.py
@@ -265,6 +265,7 @@ def set_loglevel(logger_name: str, level: str) -> None:
"api_by_remote_server": [],
"authenticate_by_username": [],
"create_realm_by_ip": [],
+ "find_account_by_ip": [],
"password_reset_form_by_email": [],
}
| Rate-limit by IP any publicly-accessible email-sending endpoints
#16190 added IP-based rate-limiting for un-authed requests in the `rate_limit` view decorator, which is effectively just REST endpoints. However, there are non-REST unauthenticated endpoints which also bear protecting. Specifically, the `/accounts/find/` and `/new/` endpoints automatically send emails to user-supplied addresses, and are not currently rate-limited.
We should add an additional `authenticate_by_ip` rate limiting rule at 5/30min (matching the `authenticate_by_username` rule), and enforce that ratelimit for the above two endpoints.
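For intuition, a rule like 5 requests per 30 minutes behaves like a sliding-window counter keyed on the client IP. Here is a toy in-memory sketch — the class and method names are invented, and Zulip's actual rate limiter backend is not shown:

```python
import time
from collections import defaultdict
from typing import DefaultDict, List, Optional


class SlidingWindowLimiter:
    """Allow at most num_requests per range_seconds for each key."""

    def __init__(self, range_seconds: int, num_requests: int) -> None:
        self.range_seconds = range_seconds
        self.num_requests = num_requests
        self.history: DefaultDict[str, List[float]] = defaultdict(list)

    def attempt(self, key: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        # Keep only hits that are still inside the window.
        hits = [t for t in self.history[key] if t > now - self.range_seconds]
        self.history[key] = hits
        if len(hits) >= self.num_requests:
            return False  # caller should respond with HTTP 429 / Retry-After
        hits.append(now)
        return True
```

A 5/30min rule would then be `SlidingWindowLimiter(1800, 5)` checked once per request (or, for an endpoint accepting multiple emails, once per submitted email).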
| Hello @mateuszmandera, you claimed this issue to work on it, but this issue and any referenced pull requests haven't been updated for 10 days. Are you still working on this issue?
If so, please update this issue by leaving a comment on this issue to let me know that you're still working on it. Otherwise, I'll automatically remove you from this issue in 4 days.
If you've decided to work on something else, simply comment `@zulipbot abandon` so that someone else can claim it and continue from where you left off.
Thank you for your valuable contributions to Zulip!
<!-- inactiveWarning -->
I think this is half-solved, with #19319 addressing one of the endpoints. We still need to cover `/accounts/find/` as well, then we can resolve this. | 2021-08-05T10:11:28 |
zulip/zulip | 19,550 | zulip__zulip-19550 | [
"10607"
] | 9bed17e0abb907f1def902e70eb71382423f396a | diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -48,4 +48,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = "159.1"
+PROVISION_VERSION = "159.2"
| dark mode: Date picker styling doesn't fit dark mode

Related to #10545 in the aspect that the date picker needs better styling (could possibly be combined with this one as well).
| Hmm, yeah. We get the date picker from (I think) the flatpickr library.
It looks like flatpickr has its own "dark" [theme](https://flatpickr.js.org/themes/), but it doesn't seem like there's an easy way to programmatically change it since each theme comes with its own stylesheet
Hmm, I think we definitely want to use that. There's two possibly approaches that might work:
(1) Use fancy SCSS to have something like
```
.night-mode {
include `flatpickr/dist/themes/dark.css`;
}
```
(If there's a valid SCSS way to do that) so that our toolchain can take care of making this conditional on the night-mode class. This would be cleanest if it actually works.
(2) Modify the JS logic in `static/js/night_mode.js` to toggle this stylesheet as well as the main thing it does with `create_stylesheet`.
The SCSS approach didn't work, so I did the JS approach you suggested:
Dark mode:

Light mode (added green colors to fix #10545):

Does this look fine? @rishig would appreciate your feedback as well!
It fixes the dark mode problem visually, so is definitely an improvement.
I think the overall styling is not very good relative to other products' calendars nowadays, but that doesn't need to block this PR.
@synicalsyntax did we end up fixing this through a new version of #10613?
Hello @zulip/server-settings members, this issue was labeled with the "area: settings UI" label, so you may want to check it out!
<!-- areaLabelAddition -->
I just fixed this having been misfiled in our issue tracker.
@timabbott No idea if there was a followup to #10613 | 2021-08-10T09:17:21 |
|
zulip/zulip | 19,703 | zulip__zulip-19703 | [
"19695"
] | 270a082ecb52836e90889443d76e4cc5b1ba30eb | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -20,7 +20,7 @@
# sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
-from version import ZULIP_VERSION
+from version import LATEST_RELEASE_VERSION, ZULIP_VERSION
# -- General configuration ------------------------------------------------
@@ -36,6 +36,7 @@
myst_enable_extensions = [
"colon_fence",
+ "substitution",
]
# Add any paths that contain templates here, relative to this directory.
@@ -105,6 +106,10 @@
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
+myst_substitutions = {
+ "LATEST_RELEASE_VERSION": LATEST_RELEASE_VERSION,
+}
+
# -- Options for HTML output ----------------------------------------------
| Indicate latest Zulip server version on ReadTheDocs installation and upgrade pages
At present, the Zulip installation and upgrade docs don't tell you what version of Zulip you'll be installing. While the installation/upgrade script doesn't require the user to know this, it would provide helpful context for someone who wants to understand what they are about to install.
We should add a line to each of the following pages that indicates the version of Zulip server that will be installed. This information should be updated automatically when a new version is released.
Pages to change:
* https://zulip.readthedocs.io/en/stable/production/upgrade-or-modify.html
* https://zulip.readthedocs.io/en/stable/production/install.html
| Hello @zulip/server-production members, this issue was labeled with the "area: documentation (production)" label, so you may want to check it out!
<!-- areaLabelAddition -->
| 2021-09-09T22:18:31 |
|
zulip/zulip | 19,719 | zulip__zulip-19719 | [
"19695"
] | 4ce37176dbb52d7e0ad3eeb1ef7c72d44e42409f | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -20,7 +20,7 @@
# sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
-from version import ZULIP_VERSION
+from version import LATEST_RELEASE_VERSION, ZULIP_VERSION
# -- General configuration ------------------------------------------------
@@ -36,6 +36,7 @@
myst_enable_extensions = [
"colon_fence",
+ "substitution",
]
# Add any paths that contain templates here, relative to this directory.
@@ -105,6 +106,10 @@
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
+myst_substitutions = {
+ "LATEST_RELEASE_VERSION": LATEST_RELEASE_VERSION,
+}
+
# -- Options for HTML output ----------------------------------------------
| Indicate latest Zulip server version on ReadTheDocs installation and upgrade pages
At present, the Zulip installation and upgrade docs don't tell you what version of Zulip you'll be installing. While the installation/upgrade script doesn't require the user to know this, it would provide helpful context for someone who wants to understand what they are about to install.
We should add a line to each of the following pages that indicates the version of Zulip server that will be installed. This information should be updated automatically when a new version is released.
Pages to change:
* https://zulip.readthedocs.io/en/stable/production/upgrade-or-modify.html
* https://zulip.readthedocs.io/en/stable/production/install.html
| 2021-09-11T00:10:39 |
||
zulip/zulip | 19,744 | zulip__zulip-19744 | [
"19682"
] | 67fdbbe5fdc0d4f0f3e07378d2f327f46bde5d33 | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -4991,6 +4991,36 @@ def do_change_stream_message_retention_days(
send_event(stream.realm, event, can_access_stream_user_ids(stream))
+def set_realm_permissions_based_on_org_type(realm: Realm) -> None:
+ """This function implements overrides for the default configuration
+ for new organizations when the administrator selected specific
+ organization types.
+
+ This substantially simplifies our /help/ advice for folks setting
+ up new organizations of these types.
+ """
+
+ # Custom configuration for educational organizations. The present
+ # defaults are designed for a single class, not a department or
+ # larger institution, since those are more common.
+ if (
+ realm.org_type == Realm.ORG_TYPES["education_nonprofit"]["id"]
+ or realm.org_type == Realm.ORG_TYPES["education"]["id"]
+ ):
+ # Limit email address visibility and user creation to administrators.
+ realm.email_address_visibility = Realm.EMAIL_ADDRESS_VISIBILITY_ADMINS
+ realm.invite_to_realm_policy = Realm.POLICY_ADMINS_ONLY
+ # Restrict public stream creation to staff, but allow private
+ # streams (useful for study groups, etc.).
+ realm.create_public_stream_policy = Realm.POLICY_ADMINS_ONLY
+ # Don't allow members (students) to manage user groups or
+ # stream subscriptions.
+ realm.user_group_edit_policy = Realm.POLICY_MODERATORS_ONLY
+ realm.invite_to_stream_policy = Realm.POLICY_MODERATORS_ONLY
+ # Allow moderators (TAs?) to move topics between streams.
+ realm.move_messages_between_streams_policy = Realm.POLICY_MODERATORS_ONLY
+
+
def do_create_realm(
string_id: str,
name: str,
@@ -5038,6 +5068,8 @@ def do_create_realm(
realm.demo_organization_scheduled_deletion_date = (
realm.date_created + datetime.timedelta(days=settings.DEMO_ORG_DEADLINE_DAYS)
)
+
+ set_realm_permissions_based_on_org_type(realm)
realm.save()
RealmAuditLog.objects.create(
| diff --git a/zerver/tests/test_realm.py b/zerver/tests/test_realm.py
--- a/zerver/tests/test_realm.py
+++ b/zerver/tests/test_realm.py
@@ -64,6 +64,34 @@ def test_realm_creation_on_social_auth_subdomain_disallowed(self) -> None:
with self.assertRaises(AssertionError):
do_create_realm("zulipauth", "Test Realm")
+ def test_permission_for_education_non_profit_organization(self) -> None:
+ realm = do_create_realm(
+ "test_education_non_profit",
+ "education_org_name",
+ org_type=Realm.ORG_TYPES["education_nonprofit"]["id"],
+ )
+
+ self.assertEqual(realm.create_public_stream_policy, Realm.POLICY_ADMINS_ONLY)
+ self.assertEqual(realm.create_private_stream_policy, Realm.POLICY_MEMBERS_ONLY)
+ self.assertEqual(realm.invite_to_realm_policy, Realm.POLICY_ADMINS_ONLY)
+ self.assertEqual(realm.move_messages_between_streams_policy, Realm.POLICY_MODERATORS_ONLY)
+ self.assertEqual(realm.user_group_edit_policy, Realm.POLICY_MODERATORS_ONLY)
+ self.assertEqual(realm.invite_to_stream_policy, Realm.POLICY_MODERATORS_ONLY)
+
+ def test_permission_for_education_for_profit_organization(self) -> None:
+ realm = do_create_realm(
+ "test_education_for_profit",
+ "education_org_name",
+ org_type=Realm.ORG_TYPES["education"]["id"],
+ )
+
+ self.assertEqual(realm.create_public_stream_policy, Realm.POLICY_ADMINS_ONLY)
+ self.assertEqual(realm.create_private_stream_policy, Realm.POLICY_MEMBERS_ONLY)
+ self.assertEqual(realm.invite_to_realm_policy, Realm.POLICY_ADMINS_ONLY)
+ self.assertEqual(realm.move_messages_between_streams_policy, Realm.POLICY_MODERATORS_ONLY)
+ self.assertEqual(realm.user_group_edit_policy, Realm.POLICY_MODERATORS_ONLY)
+ self.assertEqual(realm.invite_to_stream_policy, Realm.POLICY_MODERATORS_ONLY)
+
def test_do_set_realm_name_caching(self) -> None:
"""The main complicated thing about setting realm names is fighting the
cache, and we start by populating the cache for Hamlet, and we end
| Customize permissions for new organizations by organization type
At present, all new organizations have the same default settings. These settings may not be appropriate for all types of organizations. Since we now collect organization type at sign-up for Zulip Cloud organizations, we should make use of that information to set better defaults.
In particular, education organizations likely wish to restrict many administrative permissions to course staff. The new defaults should be the following.
Org types:
* Education (for-profit)
* Education (non-profit)
Default permissions different from current default:
- Who can invite new users. (New default: Admins)
- Who can access user email addresses. (New default: Admins only)
- Who can create streams. (New default: Admins for public streams, members for private streams)
- Who can add users to streams. (New default: Admins and moderators)
- Who can move messages between streams. (New default: Admins and moderators)
- Who can create and manage user groups. (New default: Admins and moderators)
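One way to realize the defaults listed above is a single overrides mapping applied at realm creation. A minimal sketch, where the numeric policy constants and the set of org-type keys are illustrative placeholders rather than the real values on Zulip's `Realm` model:

```python
# Illustrative policy constants; the real ones live on the Realm model.
POLICY_MEMBERS_ONLY = 1
POLICY_ADMINS_ONLY = 2
POLICY_MODERATORS_ONLY = 4

EDUCATION_ORG_TYPES = {"education", "education_nonprofit"}

EDUCATION_PERMISSION_OVERRIDES = {
    "invite_to_realm_policy": POLICY_ADMINS_ONLY,
    "create_public_stream_policy": POLICY_ADMINS_ONLY,
    "create_private_stream_policy": POLICY_MEMBERS_ONLY,
    "invite_to_stream_policy": POLICY_MODERATORS_ONLY,
    "move_messages_between_streams_policy": POLICY_MODERATORS_ONLY,
    "user_group_edit_policy": POLICY_MODERATORS_ONLY,
}

def set_realm_permissions_based_on_org_type(realm, org_type):
    # Only education organizations get overrides; everything else keeps
    # the model-level defaults.
    if org_type in EDUCATION_ORG_TYPES:
        for field, policy in EDUCATION_PERMISSION_OVERRIDES.items():
            setattr(realm, field, policy)
```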
| Hello @zulip/server-settings members, this issue was labeled with the "area: settings (admin/org)" label, so you may want to check it out!
<!-- areaLabelAddition -->
@alya I would like to work on this issue. I just had a couple of doubts regarding it:
- do the changes for this have to be made in the backend model, where the defaults are set?
- do `Admins` and `Admins only` mean the same thing?
Can I work on this?
I believe @eeshangarg has started work on this. @eeshangarg is that right? If you are not too far along, perhaps you could guide @madrix01 on this.
> do `Admins` and `Admins only` mean the same thing?
Yes; for whatever reason, we use one or the other for different settings.
@alya I haven't started yet. @madrix01 Thanks for offering to work on this! Please feel free to reach out on czo if you have any questions!
I would recommend starting by tracing the code that runs when you go through the `localhost:9991/new` flow. Going through the code for how a realm is created should help in understanding how everything comes together. A couple of starting points:
- Look at `do_create_realm` in `zerver/lib/actions.py`.
- `Realm` in `zerver/models.py` should have "policy types" such as `POLICY_ADMINS_ONLY` that represent permissions. The defaults are set in the `Realm` model itself.
- My guess is we could set custom permissions on realm creation based on the org type in `do_create_realm` or one of the functions it calls.
Thanks!
@eeshangarg thanks for helping me on this issue !!
I went through the `do_create_realm` method and also through `Realm` in `models.py`. I think the approach you gave is plausible.
One doubt I had: when I create a new realm, I get [this error](https://chat.zulip.org/#narrow/stream/49-development-help/topic/unable.20to.20create.20new.20realm.20in.20dev.20env/near/1254875).
Also, how do I check the policies for a newly created realm?
@zulipbot claim | 2021-09-13T23:00:45 |
zulip/zulip | 19,762 | zulip__zulip-19762 | [
"19659"
] | a44e7a1a606a7a1bcc43e678e8a69a421f0810e3 | diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -48,4 +48,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = "161.1"
+PROVISION_VERSION = "162.0"
diff --git a/zerver/views/registration.py b/zerver/views/registration.py
--- a/zerver/views/registration.py
+++ b/zerver/views/registration.py
@@ -1,6 +1,6 @@
import logging
import urllib
-from typing import Dict, List, Optional
+from typing import Any, Dict, List, Optional
from urllib.parse import urlencode
from django.conf import settings
@@ -716,8 +716,6 @@ def accounts_home_from_multiuse_invite(request: HttpRequest, confirmation_key: s
def find_account(
request: HttpRequest, raw_emails: Optional[str] = REQ("emails", default=None)
) -> HttpResponse:
- from zerver.context_processors import common_context
-
url = reverse("find_account")
emails: List[str] = []
@@ -743,17 +741,32 @@ def find_account(
for email in emails:
emails_q |= Q(delivery_email__iexact=email)
- for user in UserProfile.objects.filter(
+ user_profiles = UserProfile.objects.filter(
emails_q, is_active=True, is_bot=False, realm__deactivated=False
- ):
- context = common_context(user)
- context.update(
- email=user.delivery_email,
- )
+ )
+
+ # We organize the data in preparation for sending exactly
+ # one outgoing email per provided email address, with each
+ # email listing all of the accounts that email address has
+ # with the current Zulip server.
+ context: Dict[str, Dict[str, Any]] = {}
+ for user in user_profiles:
+ key = user.delivery_email.lower()
+ context.setdefault(key, {})
+ context[key].setdefault("realms", [])
+ context[key]["realms"].append(user.realm)
+ context[key]["external_host"] = settings.EXTERNAL_HOST
+ # This value will end up being the last user ID among
+ # matching accounts; since it's only used for minor
+ # details like language, that arbitrary choice is OK.
+ context[key]["to_user_id"] = user.id
+
+ for delivery_email, realm_context in context.items():
+ realm_context["email"] = delivery_email
send_email(
"zerver/emails/find_team",
- to_user_ids=[user.id],
- context=context,
+ to_user_ids=[realm_context["to_user_id"]],
+ context=realm_context,
from_address=FromAddress.SUPPORT,
request=request,
)
| diff --git a/zerver/tests/test_signup.py b/zerver/tests/test_signup.py
--- a/zerver/tests/test_signup.py
+++ b/zerver/tests/test_signup.py
@@ -5440,8 +5440,13 @@ def test_result(self) -> None:
self.assertIn("[email protected]", content)
from django.core.mail import outbox
- # 3 = 1 + 2 -- Cordelia gets an email each for the "zulip" and "lear" realms.
- self.assert_length(outbox, 3)
+ self.assert_length(outbox, 2)
+ iago_message = outbox[1]
+ cordelia_message = outbox[0]
+ self.assertIn("Zulip Dev", iago_message.body)
+ self.assertNotIn("Lear & Co", iago_message.body)
+ self.assertIn("Zulip Dev", cordelia_message.body)
+ self.assertIn("Lear & Co", cordelia_message.body)
def test_find_team_ignore_invalid_email(self) -> None:
result = self.client_post(
| Send one find_account notification per email address, not per organization
When using the "find my account" feature, we send one email per organization that matches the email address:
https://github.com/zulip/zulip/blob/191b1ac2be46fe4219c170ab318aa10c714cb49a/zerver/views/registration.py#L746-L759
This can lead to a deluge of emails. We should adjust the template to contain all of the organizations, listed in a single email.
| Hello @zulip/server-development members, this issue was labeled with the "area: emails" label, so you may want to check it out!
<!-- areaLabelAddition -->
The main work here is figuring out what we want the wording/structure of the email to be in the event that there are multiple organizations. @alya FYI for a proposal on that.
Maybe something like:
-----
Hi Iago,
Your email address [email protected] is associated with the following Zulip accounts:
- Zulip Dev: Log in at http://localhost:9991.
- …
If you have trouble logging in, please contact Zulip support by replying to this email.
Thanks for using Zulip!
---
Thoughts?
I suppose it's possible that the user had different names for different accounts, right? I think it's nice to use "Hi X" rather than a generic "Hi", so we could use some kind of heuristic (e.g. most commonly used name > name in most recently logged in org, or something). What do you guys think?
@alya I feel like it is fairly possible that the user had different usernames for different accounts. Implementing heuristics for that may get a little complicated, though, so a generic "hi" seems a lot easier. But if we were to go with a heuristic, the most commonly used name makes the most sense to me.
How about this, to avoid the "hi" (which I find to be odd without a name):
------
Thanks for your request! Your email address [email protected] is associated with the following Zulip accounts:
Zulip Dev: Log in at http://localhost:9991.
…
If you have trouble logging in, please contact Zulip support by replying to this email.
@alya Ooh I like that better! I think I'll go with that! :) | 2021-09-15T13:30:44 |
zulip/zulip | 19,771 | zulip__zulip-19771 | [
"17456"
] | 310b8736afe5d7ee176203218cf3824d0612d740 | diff --git a/zproject/backends.py b/zproject/backends.py
--- a/zproject/backends.py
+++ b/zproject/backends.py
@@ -477,6 +477,23 @@ def check_ldap_config() -> None:
# Email search needs to be configured in this case.
assert settings.AUTH_LDAP_USERNAME_ATTR and settings.AUTH_LDAP_REVERSE_EMAIL_SEARCH
+ # These two are alternative approaches to deactivating users based on an ldap attribute
+ # and thus don't make sense to have enabled together.
+ assert not (
+ settings.AUTH_LDAP_USER_ATTR_MAP.get("userAccountControl")
+ and settings.AUTH_LDAP_USER_ATTR_MAP.get("deactivated")
+ )
+
+
+def ldap_should_sync_active_status() -> bool:
+ if "userAccountControl" in settings.AUTH_LDAP_USER_ATTR_MAP:
+ return True
+
+ if "deactivated" in settings.AUTH_LDAP_USER_ATTR_MAP:
+ return True
+
+ return False
+
def find_ldap_users_by_email(email: str) -> List[_LDAPUser]:
"""
@@ -715,15 +732,30 @@ def sync_avatar_from_ldap(self, user: UserProfile, ldap_user: _LDAPUser) -> None
else:
logging.warning("Could not parse %s field for user %s", avatar_attr_name, user.id)
- def is_account_control_disabled_user(self, ldap_user: _LDAPUser) -> bool:
- """Implements the userAccountControl check for whether a user has been
- disabled in an Active Directory server being integrated with
- Zulip via LDAP."""
- account_control_value = ldap_user.attrs[
- settings.AUTH_LDAP_USER_ATTR_MAP["userAccountControl"]
- ][0]
- ldap_disabled = bool(int(account_control_value) & LDAP_USER_ACCOUNT_CONTROL_DISABLED_MASK)
- return ldap_disabled
+ def is_user_disabled_in_ldap(self, ldap_user: _LDAPUser) -> bool:
+ """Implements checks for whether a user has been
+ disabled in the LDAP server being integrated with
+ Zulip."""
+ if "userAccountControl" in settings.AUTH_LDAP_USER_ATTR_MAP:
+ account_control_value = ldap_user.attrs[
+ settings.AUTH_LDAP_USER_ATTR_MAP["userAccountControl"]
+ ][0]
+ return bool(int(account_control_value) & LDAP_USER_ACCOUNT_CONTROL_DISABLED_MASK)
+
+ assert "deactivated" in settings.AUTH_LDAP_USER_ATTR_MAP
+ attr_value = ldap_user.attrs[settings.AUTH_LDAP_USER_ATTR_MAP["deactivated"]][0]
+
+ # In the LDAP specification, a Boolean attribute should be
+ # *exactly* either "TRUE" or "FALSE". However,
+ # https://www.freeipa.org/page/V4/User_Life-Cycle_Management suggests
+ # that FreeIPA at least documents using Yes/No for booleans.
+ true_values = ["TRUE", "YES"]
+ false_values = ["FALSE", "NO"]
+ attr_value_upper = attr_value.upper()
+ assert (
+ attr_value_upper in true_values or attr_value_upper in false_values
+ ), f"Invalid value '{attr_value}' in the LDAP attribute mapped to deactivated"
+ return attr_value_upper in true_values
def is_account_realm_access_forbidden(self, ldap_user: _LDAPUser, realm: Realm) -> bool:
# org_membership takes priority over AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL.
@@ -881,8 +913,8 @@ def get_or_build_user(self, username: str, ldap_user: _LDAPUser) -> Tuple[UserPr
if self.is_account_realm_access_forbidden(ldap_user, self._realm):
raise ZulipLDAPException("User not allowed to access realm")
- if "userAccountControl" in settings.AUTH_LDAP_USER_ATTR_MAP: # nocoverage
- ldap_disabled = self.is_account_control_disabled_user(ldap_user)
+ if ldap_should_sync_active_status(): # nocoverage
+ ldap_disabled = self.is_user_disabled_in_ldap(ldap_user)
if ldap_disabled:
# Treat disabled users as deactivated in Zulip.
return_data["inactive_user"] = True
@@ -1012,8 +1044,8 @@ def get_or_build_user(
user = get_user_by_delivery_email(username, ldap_user.realm)
built = False
# Synchronise the UserProfile with its LDAP attributes:
- if "userAccountControl" in settings.AUTH_LDAP_USER_ATTR_MAP:
- user_disabled_in_ldap = self.is_account_control_disabled_user(ldap_user)
+ if ldap_should_sync_active_status():
+ user_disabled_in_ldap = self.is_user_disabled_in_ldap(ldap_user)
if user_disabled_in_ldap:
if user.is_active:
ldap_logger.info(
diff --git a/zproject/prod_settings_template.py b/zproject/prod_settings_template.py
--- a/zproject/prod_settings_template.py
+++ b/zproject/prod_settings_template.py
@@ -239,6 +239,9 @@
## who are disabled in LDAP/Active Directory (and reactivate users who are not).
## See docs for usage details and precise semantics.
# "userAccountControl": "userAccountControl",
+ ## Alternatively, you can map "deactivated" to a boolean attribute
+ ## that is "TRUE" for deactivated users and "FALSE" otherwise.
+ # "deactivated": "nsAccountLock",
## Restrict access to organizations using an LDAP attribute.
## See https://zulip.readthedocs.io/en/latest/production/authentication-methods.html#restricting-ldap-user-access-to-specific-organizations
# "org_membership": "department",
| diff --git a/zerver/tests/test_auth_backends.py b/zerver/tests/test_auth_backends.py
--- a/zerver/tests/test_auth_backends.py
+++ b/zerver/tests/test_auth_backends.py
@@ -5877,7 +5877,7 @@ def test_too_short_name(self) -> None:
["WARNING:django_auth_ldap:Name too short! while authenticating hamlet"],
)
- def test_deactivate_user(self) -> None:
+ def test_deactivate_user_with_useraccountcontrol_attr(self) -> None:
self.change_ldap_user_attr("hamlet", "userAccountControl", "2")
with self.settings(
@@ -5893,6 +5893,50 @@ def test_deactivate_user(self) -> None:
],
)
+ def test_deactivate_reactivate_user_with_deactivated_attr(self) -> None:
+ self.change_ldap_user_attr("hamlet", "someCustomAttr", "TRUE")
+
+ with self.settings(
+ AUTH_LDAP_USER_ATTR_MAP={"full_name": "cn", "deactivated": "someCustomAttr"}
+ ), self.assertLogs("zulip.ldap") as info_logs:
+ self.perform_ldap_sync(self.example_user("hamlet"))
+ hamlet = self.example_user("hamlet")
+ self.assertFalse(hamlet.is_active)
+ self.assertEqual(
+ info_logs.output,
+ [
+ "INFO:zulip.ldap:Deactivating user [email protected] because they are disabled in LDAP."
+ ],
+ )
+
+ self.change_ldap_user_attr("hamlet", "someCustomAttr", "FALSE")
+ with self.settings(
+ AUTH_LDAP_USER_ATTR_MAP={"full_name": "cn", "deactivated": "someCustomAttr"}
+ ), self.assertLogs("zulip.ldap") as info_logs:
+ self.perform_ldap_sync(self.example_user("hamlet"))
+ hamlet.refresh_from_db()
+ self.assertTrue(hamlet.is_active)
+ self.assertEqual(
+ info_logs.output,
+ [
+ "INFO:zulip.ldap:Reactivating user [email protected] because they are not disabled in LDAP."
+ ],
+ )
+
+ self.change_ldap_user_attr("hamlet", "someCustomAttr", "YESSS")
+ with self.settings(
+ AUTH_LDAP_USER_ATTR_MAP={"full_name": "cn", "deactivated": "someCustomAttr"}
+ ), self.assertLogs("django_auth_ldap") as ldap_logs, self.assertRaises(AssertionError):
+ self.perform_ldap_sync(self.example_user("hamlet"))
+ hamlet.refresh_from_db()
+ self.assertTrue(hamlet.is_active)
+ self.assertEqual(
+ ldap_logs.output,
+ [
+ "WARNING:django_auth_ldap:Invalid value 'YESSS' in the LDAP attribute mapped to deactivated while authenticating hamlet"
+ ],
+ )
+
@mock.patch("zproject.backends.ZulipLDAPAuthBackendBase.sync_full_name_from_ldap")
def test_dont_sync_disabled_ldap_user(self, fake_sync: mock.MagicMock) -> None:
self.change_ldap_user_attr("hamlet", "userAccountControl", "2")
| LDAP synchronisation of disabled account not working on FreeIPA
Hi,
I am using zulip in combination with FreeIPA (LDAP-based authentication server). Unfortunately, there doesn't seem to be support to automatically detect disabled accounts and to automatically disable them in zulip as well. I only found something for Active Directory.
The corresponding field in FreeIPA is called `nsAccountLock`
I think this feature would be extremely useful.
| @andreas-bulling have you read https://zulip.readthedocs.io/en/latest/production/authentication-methods.html#synchronizing-data? There's a section on automatically disabling accounts. I'm not sure whether we need code changes or just configuration to support `nsaccountlock`; do you have documentation on its format?
Yes, as I said - I found the instructions for AD but the flag/variable is different for FreeIPA. It's called `nsAccountLock` and seems to be either true/yes or false/no.
See here for more information:
https://auth.docs.cern.ch/freeipa/freeipa-attribute-tables/
https://docs.fedoraproject.org/en-US/Fedora/18/html/FreeIPA_Guide/about-sync-schema.html
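A value parser along these lines could handle the attribute; the accepted spellings beyond TRUE/FALSE are an assumption based on the FreeIPA documentation mentioning Yes/No:

```python
def ldap_bool(raw: str) -> bool:
    """Parse an LDAP boolean attribute such as nsAccountLock.

    The LDAP spec uses exactly "TRUE"/"FALSE", but FreeIPA docs also
    mention Yes/No, so both spellings are accepted; anything else is
    rejected loudly rather than silently treated as active/inactive.
    """
    normalized = raw.strip().upper()
    if normalized in ("TRUE", "YES"):
        return True
    if normalized in ("FALSE", "NO"):
        return False
    raise ValueError(f"unexpected LDAP boolean value: {raw!r}")
```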
Sorry for the slow reply -- I think it makes sense to support this parameter.
@mateuszmandera would you be up for doing this one? I think it should be pretty simple, since it's just a boolean for whether to deactivate the account, but we can configure it using this scheme (basically whatever you put as `deactivated` in the LHS will be mapped to marking the account as deactivated; `nsAccountLock` seems to have that True/False meaning)
```
# "deactivated": "nsAccountLock",
# "userAccountControl": "userAccountControl",
```
Great, looking forward to this feature - this will make my (admin) life a lot easier. Let me know if I can help testing it but the change indeed seems rather trivial. | 2021-09-16T18:50:15 |
zulip/zulip | 19,784 | zulip__zulip-19784 | [
"19588"
] | 743712c26764c77266c8392bcf859b1e7a5fb1c2 | diff --git a/zerver/lib/outgoing_webhook.py b/zerver/lib/outgoing_webhook.py
--- a/zerver/lib/outgoing_webhook.py
+++ b/zerver/lib/outgoing_webhook.py
@@ -20,9 +20,9 @@
from zerver.models import (
GENERIC_INTERFACE,
SLACK_INTERFACE,
+ Realm,
Service,
UserProfile,
- email_to_domain,
get_client,
get_user_profile_by_id,
)
@@ -40,7 +40,9 @@ def __init__(self, token: str, user_profile: UserProfile, service_name: str) ->
)
@abc.abstractmethod
- def make_request(self, base_url: str, event: Dict[str, Any]) -> Optional[Response]:
+ def make_request(
+ self, base_url: str, event: Dict[str, Any], realm: Realm
+ ) -> Optional[Response]:
raise NotImplementedError
@abc.abstractmethod
@@ -49,7 +51,9 @@ def process_success(self, response_json: Dict[str, Any]) -> Optional[Dict[str, A
class GenericOutgoingWebhookService(OutgoingWebhookServiceInterface):
- def make_request(self, base_url: str, event: Dict[str, Any]) -> Optional[Response]:
+ def make_request(
+ self, base_url: str, event: Dict[str, Any], realm: Realm
+ ) -> Optional[Response]:
"""
We send a simple version of the message to outgoing
webhooks, since most of them really only need
@@ -98,20 +102,38 @@ def process_success(self, response_json: Dict[str, Any]) -> Optional[Dict[str, A
class SlackOutgoingWebhookService(OutgoingWebhookServiceInterface):
- def make_request(self, base_url: str, event: Dict[str, Any]) -> Optional[Response]:
+ def make_request(
+ self, base_url: str, event: Dict[str, Any], realm: Realm
+ ) -> Optional[Response]:
if event["message"]["type"] == "private":
failure_message = "Slack outgoing webhooks don't support private messages."
fail_with_message(event, failure_message)
return None
+ # https://api.slack.com/legacy/custom-integrations/outgoing-webhooks#legacy-info__post-data
+ # documents the Slack outgoing webhook format:
+ #
+ # token=XXXXXXXXXXXXXXXXXX
+ # team_id=T0001
+ # team_domain=example
+ # channel_id=C2147483705
+ # channel_name=test
+ # thread_ts=1504640714.003543
+ # timestamp=1504640775.000005
+ # user_id=U2147483697
+ # user_name=Steve
+ # text=googlebot: What is the air-speed velocity of an unladen swallow?
+ # trigger_word=googlebot:
+
request_data = [
("token", self.token),
- ("team_id", event["message"]["sender_realm_str"]),
- ("team_domain", email_to_domain(event["message"]["sender_email"])),
- ("channel_id", event["message"]["stream_id"]),
+ ("team_id", f"T{realm.id}"),
+ ("team_domain", realm.host),
+ ("channel_id", f"C{event['message']['stream_id']}"),
("channel_name", event["message"]["display_recipient"]),
+ ("thread_ts", event["message"]["timestamp"]),
("timestamp", event["message"]["timestamp"]),
- ("user_id", event["message"]["sender_id"]),
+ ("user_id", f"U{event['message']['sender_id']}"),
("user_name", event["message"]["sender_full_name"]),
("text", event["command"]),
("trigger_word", event["trigger"]),
@@ -326,11 +348,12 @@ def do_rest_call(
"""Returns response of call if no exception occurs."""
try:
start_time = perf_counter()
+ bot_profile = service_handler.user_profile
response = service_handler.make_request(
base_url,
event,
+ bot_profile.realm,
)
- bot_profile = service_handler.user_profile
logging.info(
"Outgoing webhook request from %s@%s took %f seconds",
bot_profile.id,
| diff --git a/zerver/tests/test_outgoing_webhook_interfaces.py b/zerver/tests/test_outgoing_webhook_interfaces.py
--- a/zerver/tests/test_outgoing_webhook_interfaces.py
+++ b/zerver/tests/test_outgoing_webhook_interfaces.py
@@ -107,6 +107,7 @@ def test_make_request(self) -> None:
self.handler.make_request(
test_url,
event,
+ othello.realm,
)
session.post.assert_called_once()
self.assertEqual(session.post.call_args[0], (test_url,))
@@ -150,6 +151,7 @@ def test_process_success(self) -> None:
class TestSlackOutgoingWebhookService(ZulipTestCase):
def setUp(self) -> None:
super().setUp()
+ self.bot_user = get_user("[email protected]", get_realm("zulip"))
self.stream_message_event = {
"command": "@**test**",
"user_profile_id": 12,
@@ -187,7 +189,9 @@ def setUp(self) -> None:
}
service_class = get_service_interface_class(SLACK_INTERFACE)
- self.handler = service_class(token="abcdef", user_profile=None, service_name="test-service")
+ self.handler = service_class(
+ token="abcdef", user_profile=self.bot_user, service_name="test-service"
+ )
def test_make_request_stream_message(self) -> None:
test_url = "https://example.com/example"
@@ -195,22 +199,24 @@ def test_make_request_stream_message(self) -> None:
self.handler.make_request(
test_url,
self.stream_message_event,
+ self.bot_user.realm,
)
session.post.assert_called_once()
self.assertEqual(session.post.call_args[0], (test_url,))
request_data = session.post.call_args[1]["data"]
self.assertEqual(request_data[0][1], "abcdef") # token
- self.assertEqual(request_data[1][1], "zulip") # team_id
- self.assertEqual(request_data[2][1], "zulip.com") # team_domain
- self.assertEqual(request_data[3][1], "123") # channel_id
+ self.assertEqual(request_data[1][1], "T2") # team_id
+ self.assertEqual(request_data[2][1], "zulip.testserver") # team_domain
+ self.assertEqual(request_data[3][1], "C123") # channel_id
self.assertEqual(request_data[4][1], "integrations") # channel_name
- self.assertEqual(request_data[5][1], 123456) # timestamp
- self.assertEqual(request_data[6][1], 21) # user_id
- self.assertEqual(request_data[7][1], "Sample User") # user_name
- self.assertEqual(request_data[8][1], "@**test**") # text
- self.assertEqual(request_data[9][1], "mention") # trigger_word
- self.assertEqual(request_data[10][1], 12) # user_profile_id
+ self.assertEqual(request_data[5][1], 123456) # thread_id
+ self.assertEqual(request_data[6][1], 123456) # timestamp
+ self.assertEqual(request_data[7][1], "U21") # user_id
+ self.assertEqual(request_data[8][1], "Sample User") # user_name
+ self.assertEqual(request_data[9][1], "@**test**") # text
+ self.assertEqual(request_data[10][1], "mention") # trigger_word
+ self.assertEqual(request_data[11][1], 12) # user_profile_id
@mock.patch("zerver.lib.outgoing_webhook.fail_with_message")
def test_make_request_private_message(self, mock_fail_with_message: mock.Mock) -> None:
@@ -219,6 +225,7 @@ def test_make_request_private_message(self, mock_fail_with_message: mock.Mock) -
response = self.handler.make_request(
test_url,
self.private_message_event,
+ self.bot_user.realm,
)
session.post.assert_not_called()
self.assertIsNone(response)
| Don't send empty `team_id` in Slack-format outgoing webhooks, when `string_id` empty
Discussed here: https://chat.zulip.org/#narrow/stream/127-integrations/topic/gitlab.20slash.20commands/near/1247658
On a single-realm server, the Slack-compatible outgoing webhook sends an empty `team_id`.
GitLab's [Slack slash commands](https://docs.gitlab.com/ee/user/project/integrations/slack_slash_commands.html), for example, rely on the `team_id` field being non-empty.
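One way to sidestep the empty string is to derive Slack-style ids from numeric database ids, which are always present. A minimal sketch of that idea (field names follow Slack's outgoing-webhook payload):

```python
def slack_team_fields(realm_id: int, realm_host: str) -> dict:
    # Slack-style ids carry single-letter prefixes (T=team, C=channel,
    # U=user).  Building team_id from the numeric realm id keeps it
    # non-empty even on a single-realm server whose string_id is "".
    return {
        "team_id": f"T{realm_id}",
        "team_domain": realm_host,
    }
```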
| Hello @zulip/server-integrations members, this issue was labeled with the "area: integrations" label, so you may want to check it out!
<!-- areaLabelAddition -->
| 2021-09-17T19:44:49 |
zulip/zulip | 19,818 | zulip__zulip-19818 | [
"19810"
] | b43852953b74bbe3fb87acc533814cdbf4efe4f8 | diff --git a/zerver/lib/markdown/preprocessor_priorities.py b/zerver/lib/markdown/preprocessor_priorities.py
--- a/zerver/lib/markdown/preprocessor_priorities.py
+++ b/zerver/lib/markdown/preprocessor_priorities.py
@@ -1,6 +1,7 @@
# Note that in the Markdown preprocessor registry, the highest
# numeric value is considered the highest priority, so the dict
# below is ordered from highest-to-lowest priority.
+# Priorities for the built-in preprocessors are commented out.
PREPROCESSOR_PRIORITES = {
"generate_parameter_description": 535,
"generate_response_description": 531,
@@ -10,9 +11,12 @@
"generate_return_values": 510,
"generate_api_arguments": 505,
"include": 500,
+ # "include_wrapper": 500,
"help_relative_links": 475,
"setting": 450,
+ # "normalize_whitespace": 30,
"fenced_code_block": 25,
+ # "html_block": 20,
"tabbed_sections": -500,
"nested_code_blocks": -500,
"emoticon_translations": -505,
| markdown: Document built-in preprocessor priorities.
As a follow-up to #19783, it would be good to document the priorities assigned to the built-in preprocessors that the Python-Markdown library has. A couple of notes:
- This involves a bit of grunt work; the quickest way to do it is to loop over and print `md_engine.preprocessors._priorities` in `zerver/lib/templates.py`.
- Note that in `templates.py`, there are different cases where different sets of preprocessors are added, so one has to do the additional work to figure out which preprocessors are running in which of those cases and then document all the priorities that are for built-in preprocessors.
- The file to put these priorities in is: `zerver/lib/markdown/preprocessor_priorities.py`.
Thanks!
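For intuition, here is a toy registry that mimics only the ordering semantics being documented (the highest numeric priority runs first). It is not Python-Markdown's actual `Registry` class, whose internal attribute names may differ across versions:

```python
class ToyRegistry:
    """Mimics the priority ordering of a Markdown preprocessor registry."""

    def __init__(self):
        self._priority = []  # (name, priority) pairs

    def register(self, name, priority):
        self._priority.append((name, priority))

    def run_order(self):
        # Highest numeric priority runs first.
        return [name for name, _ in sorted(self._priority, key=lambda item: -item[1])]


registry = ToyRegistry()
for name, priority in [
    ("html_block", 20),
    ("normalize_whitespace", 30),
    ("fenced_code_block", 25),
    ("include", 500),
]:
    registry.register(name, priority)
```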
| Hello @zulip/server-markdown members, this issue was labeled with the "area: markdown" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim
Welcome to Zulip, @kevjn! We just sent you an invite to collaborate on this repository at https://github.com/zulip/zulip/invitations. Please accept this invite in order to claim this issue and begin a fun, rewarding experience contributing to Zulip!
Here's some tips to get you off to a good start:
- Join me on the [Zulip developers' server](https://chat.zulip.org), to get help, chat about this issue, and meet the other developers.
- [Unwatch this repository](https://help.github.com/articles/unwatching-repositories/), so that you don't get 100 emails a day.
As you work on this issue, you'll also want to refer to the [Zulip code contribution guide](https://zulip.readthedocs.io/en/latest/contributing/index.html), as well as the rest of the developer documentation on that site.
See you on the other side (that is, the pull request side)!
| 2021-09-23T09:44:24 |
|
zulip/zulip | 19,828 | zulip__zulip-19828 | [
"19730"
] | 10c47b5d6c3300e9e54c9b8dd5147498439b569b | diff --git a/zerver/lib/actions.py b/zerver/lib/actions.py
--- a/zerver/lib/actions.py
+++ b/zerver/lib/actions.py
@@ -5885,15 +5885,14 @@ def maybe_send_resolve_topic_notifications(
if old_topic.lstrip(RESOLVED_TOPIC_PREFIX) != new_topic.lstrip(RESOLVED_TOPIC_PREFIX):
return
- if new_topic.startswith(RESOLVED_TOPIC_PREFIX) and not old_topic.startswith(
+ topic_resolved: bool = new_topic.startswith(RESOLVED_TOPIC_PREFIX) and not old_topic.startswith(
RESOLVED_TOPIC_PREFIX
- ):
- notification_string = _("{user} has marked this topic as resolved.")
- elif old_topic.startswith(RESOLVED_TOPIC_PREFIX) and not new_topic.startswith(
+ )
+ topic_unresolved: bool = old_topic.startswith(
RESOLVED_TOPIC_PREFIX
- ):
- notification_string = _("{user} has marked this topic as unresolved.")
- else:
+ ) and not new_topic.startswith(RESOLVED_TOPIC_PREFIX)
+
+ if not topic_resolved and not topic_unresolved:
# If there's some other weird topic that does not toggle the
# state of "topic starts with RESOLVED_TOPIC_PREFIX", we do
# nothing. Any other logic could result in cases where we send
@@ -5912,6 +5911,11 @@ def maybe_send_resolve_topic_notifications(
sender = get_system_bot(settings.NOTIFICATION_BOT, user_profile.realm_id)
user_mention = f"@_**{user_profile.full_name}|{user_profile.id}**"
with override_language(stream.realm.default_language):
+ if topic_resolved:
+ notification_string = _("{user} has marked this topic as resolved.")
+ elif topic_unresolved:
+ notification_string = _("{user} has marked this topic as unresolved.")
+
internal_send_stream_message(
sender,
stream,
| Resolve topic message should use server's native language
When a user has set their language to one that doesn't match the language in which people communicate, Notification Bot will post its message in the user's language instead of the language used for communication. In the case where I noticed it, Notification Bot posted the French message "`@<censored>` a marqué ce sujet comme résolu." on an English server. I think the message posted should either be in a language set globally for the server or be translated for every user.
Agreed; I feel like this came up in the last week for another reason as well. This should be addressed by using `with override_language(realm.default_language)`, like we do for other messages sent by Notification Bot. `maybe_send_resolve_topic_notifications` in `zerver/lib/actions.py` is the location of the code in question.
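A toy illustration of picking the realm's default language rather than the acting user's; a plain dict stands in for Django's translation catalogs here:

```python
NOTICE_TRANSLATIONS = {
    "en": "{user} has marked this topic as resolved.",
    "fr": "{user} a marqué ce sujet comme résolu.",
}

def resolve_topic_notice(realm_default_language: str, user_mention: str) -> str:
    # Use the realm's default language so every reader sees the same
    # translation; fall back to English for unknown languages.
    template = NOTICE_TRANSLATIONS.get(realm_default_language, NOTICE_TRANSLATIONS["en"])
    return template.format(user=user_mention)
```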
Hello @zulip/server-message-view members, this issue was labeled with the "area: message-editing" label, so you may want to check it out!
<!-- areaLabelAddition -->
| 2021-09-23T22:24:43 |
|
zulip/zulip | 19,832 | zulip__zulip-19832 | [
"9006"
] | 660ccccf681c1036584c96dbe5068acacb5ff82f | diff --git a/zerver/data_import/slack.py b/zerver/data_import/slack.py
--- a/zerver/data_import/slack.py
+++ b/zerver/data_import/slack.py
@@ -1,3 +1,4 @@
+import datetime
import logging
import os
import posixpath
@@ -703,6 +704,7 @@ def convert_slack_workspace_messages(
zerver_realmemoji: List[ZerverFieldsT],
domain_name: str,
output_dir: str,
+ convert_slack_threads: bool,
chunk_size: int = MESSAGE_BATCH_CHUNK_SIZE,
) -> Tuple[List[ZerverFieldsT], List[ZerverFieldsT], List[ZerverFieldsT]]:
"""
@@ -764,6 +766,7 @@ def convert_slack_workspace_messages(
dm_members,
domain_name,
long_term_idle,
+ convert_slack_threads,
)
message_json = dict(zerver_message=zerver_message, zerver_usermessage=zerver_usermessage)
@@ -844,6 +847,7 @@ def channel_message_to_zerver_message(
dm_members: DMMembersT,
domain_name: str,
long_term_idle: Set[int],
+ convert_slack_threads: bool,
) -> Tuple[
List[ZerverFieldsT],
List[ZerverFieldsT],
@@ -867,6 +871,8 @@ def channel_message_to_zerver_message(
total_user_messages = 0
total_skipped_user_messages = 0
+ thread_counter: Dict[str, int] = defaultdict(int)
+ thread_map: Dict[str, str] = {}
for message in all_messages:
slack_user_id = get_message_sending_user(message)
if not slack_user_id:
@@ -955,7 +961,25 @@ def channel_message_to_zerver_message(
has_attachment = file_info["has_attachment"]
has_image = file_info["has_image"]
+ # Slack's unthreaded messages go into a single topic, while
+ # threads each generate a unique topic labeled by the date and
+ # a counter among topics on that day.
topic_name = "imported from Slack"
+ if convert_slack_threads and "thread_ts" in message:
+ thread_ts = datetime.datetime.fromtimestamp(
+ float(message["thread_ts"]), tz=datetime.timezone.utc
+ )
+ thread_ts_str = thread_ts.strftime(r"%Y/%m/%d %H:%M:%S")
+ # The topic name is "2015-08-18 Slack thread 2", where the counter at the end is to disambiguate
+ # threads with the same date.
+ if thread_ts_str in thread_map:
+ topic_name = thread_map[thread_ts_str]
+ else:
+ thread_date = thread_ts.strftime(r"%Y-%m-%d")
+ thread_counter[thread_date] += 1
+ count = thread_counter[thread_date]
+ topic_name = f"{thread_date} Slack thread {count}"
+ thread_map[thread_ts_str] = topic_name
zulip_message = build_message(
topic_name=topic_name,
@@ -1311,7 +1335,13 @@ def fetch_team_icons(
return records
-def do_convert_data(original_path: str, output_dir: str, token: str, threads: int = 6) -> None:
+def do_convert_data(
+ original_path: str,
+ output_dir: str,
+ token: str,
+ threads: int = 6,
+ convert_slack_threads: bool = False,
+) -> None:
# Subdomain is set by the user while running the import command
realm_subdomain = ""
realm_id = 0
@@ -1380,6 +1410,7 @@ def do_convert_data(original_path: str, output_dir: str, token: str, threads: in
realm["zerver_realmemoji"],
domain_name,
output_dir,
+ convert_slack_threads,
)
# Move zerver_reactions to realm.json file
diff --git a/zerver/management/commands/convert_slack_data.py b/zerver/management/commands/convert_slack_data.py
--- a/zerver/management/commands/convert_slack_data.py
+++ b/zerver/management/commands/convert_slack_data.py
@@ -34,6 +34,12 @@ def add_arguments(self, parser: CommandParser) -> None:
help="Threads to use in exporting UserMessage objects in parallel",
)
+ parser.add_argument(
+ "--no-convert-slack-threads",
+ action="store_true",
+ help="If specified, do not convert Slack threads to separate Zulip topics",
+ )
+
parser.formatter_class = argparse.RawTextHelpFormatter
def handle(self, *args: Any, **options: Any) -> None:
@@ -56,4 +62,11 @@ def handle(self, *args: Any, **options: Any) -> None:
raise CommandError(f"Slack data directory not found: '{path}'")
print("Converting data ...")
- do_convert_data(path, output_dir, token, threads=num_threads)
+ convert_slack_threads = not options["no_convert_slack_threads"]
+ do_convert_data(
+ path,
+ output_dir,
+ token,
+ threads=num_threads,
+ convert_slack_threads=convert_slack_threads,
+ )
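The management-command change above exposes the new behavior as a negative flag, with thread conversion on by default. A minimal `argparse` sketch of the same pattern — standalone, outside Django's `CommandParser`, with illustrative positional arguments — looks like this:

```python
import argparse

parser = argparse.ArgumentParser(description="Convert Slack export data (sketch)")
parser.add_argument("slack_data_path")
parser.add_argument("output_dir")
parser.add_argument(
    "--no-convert-slack-threads",
    action="store_true",
    help="If specified, do not convert Slack threads to separate Zulip topics",
)

# Parse a fixed argv so the sketch is self-contained.
args = parser.parse_args(["export.zip", "out"])

# argparse stores the negative flag (dashes become underscores); flip it to
# get the positive option the conversion code expects.
convert_slack_threads = not args.no_convert_slack_threads
print(convert_slack_threads)  # True
```

Passing `--no-convert-slack-threads` flips `convert_slack_threads` to `False`, matching the `handle()` logic in the patch.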
| diff --git a/zerver/tests/test_slack_importer.py b/zerver/tests/test_slack_importer.py
--- a/zerver/tests/test_slack_importer.py
+++ b/zerver/tests/test_slack_importer.py
@@ -930,6 +930,7 @@ def test_channel_message_to_zerver_message(self, mock_build_usermessage: mock.Mo
dm_members,
"domain",
set(),
+ convert_slack_threads=False,
)
# functioning already tested in helper function
self.assertEqual(zerver_usermessage, [])
@@ -992,6 +993,119 @@ def test_channel_message_to_zerver_message(self, mock_build_usermessage: mock.Mo
self.assertEqual(zerver_message[7]["sender"], 43)
self.assertEqual(zerver_message[8]["sender"], 5)
+ @mock.patch("zerver.data_import.slack.build_usermessages", return_value=(2, 4))
+ def test_channel_message_to_zerver_message_with_threads(
+ self, mock_build_usermessage: mock.Mock
+ ) -> None:
+ user_data = [
+ {"id": "U066MTL5U", "name": "john doe", "deleted": False, "real_name": "John"},
+ {"id": "U061A5N1G", "name": "jane doe", "deleted": False, "real_name": "Jane"},
+ {"id": "U061A1R2R", "name": "jon", "deleted": False, "real_name": "Jon"},
+ ]
+
+ slack_user_id_to_zulip_user_id = {"U066MTL5U": 5, "U061A5N1G": 24, "U061A1R2R": 43}
+
+ all_messages: List[Dict[str, Any]] = [
+ {
+ "text": "<@U066MTL5U> has joined the channel",
+ "subtype": "channel_join",
+ "user": "U066MTL5U",
+ "ts": "1434139102.000002",
+ "channel_name": "random",
+ },
+ {
+ "text": "<@U061A5N1G>: hey!",
+ "user": "U061A1R2R",
+ "ts": "1437868294.000006",
+ "has_image": True,
+ "channel_name": "random",
+ },
+ {
+ "text": "random",
+ "user": "U061A5N1G",
+ "ts": "1439868294.000006",
+ # Thread!
+ "thread_ts": "1434139102.000002",
+ "channel_name": "random",
+ },
+ {
+ "text": "random",
+ "user": "U061A5N1G",
+ "ts": "1439868294.000007",
+ "thread_ts": "1434139102.000002",
+ "channel_name": "random",
+ },
+ {
+ "text": "random",
+ "user": "U061A5N1G",
+ "ts": "1439868294.000008",
+ # A different Thread!
+ "thread_ts": "1439868294.000008",
+ "channel_name": "random",
+ },
+ {
+ "text": "random",
+ "user": "U061A5N1G",
+ "ts": "1439868295.000008",
+ # Another different Thread!
+ "thread_ts": "1439868295.000008",
+ "channel_name": "random",
+ },
+ ]
+
+ slack_recipient_name_to_zulip_recipient_id = {
+ "random": 2,
+ "general": 1,
+ }
+ dm_members: DMMembersT = {}
+
+ zerver_usermessage: List[Dict[str, Any]] = []
+ subscriber_map: Dict[int, Set[int]] = {}
+ added_channels: Dict[str, Tuple[str, int]] = {"random": ("c5", 1), "general": ("c6", 2)}
+
+ (
+ zerver_message,
+ zerver_usermessage,
+ attachment,
+ uploads,
+ reaction,
+ ) = channel_message_to_zerver_message(
+ 1,
+ user_data,
+ slack_user_id_to_zulip_user_id,
+ slack_recipient_name_to_zulip_recipient_id,
+ all_messages,
+ [],
+ subscriber_map,
+ added_channels,
+ dm_members,
+ "domain",
+ set(),
+ convert_slack_threads=True,
+ )
+ # functioning already tested in helper function
+ self.assertEqual(zerver_usermessage, [])
+ # subtype: channel_join is filtered
+ self.assert_length(zerver_message, 5)
+
+ self.assertEqual(uploads, [])
+ self.assertEqual(attachment, [])
+
+ # Message conversion already tested in tests.test_slack_message_conversion
+ self.assertEqual(zerver_message[0]["content"], "@**Jane**: hey!")
+ self.assertEqual(zerver_message[0]["has_link"], False)
+ self.assertEqual(zerver_message[1]["content"], "random")
+ self.assertEqual(zerver_message[1][EXPORT_TOPIC_NAME], "2015-06-12 Slack thread 1")
+ self.assertEqual(zerver_message[2][EXPORT_TOPIC_NAME], "2015-06-12 Slack thread 1")
+ # A new thread with a different date from 2015-06-12, starts the counter from 1.
+ self.assertEqual(zerver_message[3][EXPORT_TOPIC_NAME], "2015-08-18 Slack thread 1")
+ # A new thread with a different timestamp, but the same date as 2015-08-18, starts the
+ # counter from 2.
+ self.assertEqual(zerver_message[4][EXPORT_TOPIC_NAME], "2015-08-18 Slack thread 2")
+ self.assertEqual(
+ zerver_message[1]["recipient"], slack_recipient_name_to_zulip_recipient_id["random"]
+ )
+
@mock.patch("zerver.data_import.slack.channel_message_to_zerver_message")
@mock.patch("zerver.data_import.slack.get_messages_iterator")
def test_convert_slack_workspace_messages(
@@ -1045,6 +1159,7 @@ def fake_get_messages_iter(
[],
"domain",
output_dir=output_dir,
+ convert_slack_threads=False,
chunk_size=1,
)
| Improve how we import Slack threads into Zulip
There's no coherent way to import the Slack "threads" feature into Zulip, but I think we could do better as follows:
* When importing a message that's in a Slack thread replying to message X, we should add a line at the top of the imported message of the form "Slack thread reply to [Tim Abbott's message](link_to_tims_message)"
This requires some pain to implement (since we need the message ID for Tim's message, we'll need to do an attachment-style rewriting pass to map the message ID). But it'd make the history a lot more readable.
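The scheme the eventual patch settles on — one Zulip topic per Slack thread, named by the thread's start date plus a per-day counter — can be sketched as a small standalone helper. This is a simplified illustration, not the importer's actual code: `thread_topic_namer` is an invented name, and the real implementation keys its map on a formatted timestamp string rather than the raw `thread_ts`.

```python
import datetime
from collections import defaultdict
from typing import Callable, Dict


def thread_topic_namer() -> Callable[[str], str]:
    """Map Slack thread_ts values to stable, human-readable topic names.

    Threads started on the same UTC day share a date prefix and are
    disambiguated with a per-day counter, e.g. "2015-08-18 Slack thread 2".
    """
    per_day_counter: Dict[str, int] = defaultdict(int)
    thread_to_topic: Dict[str, str] = {}

    def topic_for(thread_ts: str) -> str:
        # All replies in a thread carry the same thread_ts, so they
        # deterministically land in the same topic.
        if thread_ts in thread_to_topic:
            return thread_to_topic[thread_ts]
        day = datetime.datetime.fromtimestamp(
            float(thread_ts), tz=datetime.timezone.utc
        ).strftime("%Y-%m-%d")
        per_day_counter[day] += 1
        topic = f"{day} Slack thread {per_day_counter[day]}"
        thread_to_topic[thread_ts] = topic
        return topic

    return topic_for


topic_for = thread_topic_namer()
print(topic_for("1434139102.000002"))  # 2015-06-12 Slack thread 1
print(topic_for("1434139102.000002"))  # same thread -> same topic
print(topic_for("1439868294.000008"))  # 2015-08-18 Slack thread 1
print(topic_for("1439868295.000008"))  # 2015-08-18 Slack thread 2
```

Replies carrying the same `thread_ts` reuse their topic, while distinct threads started on the same day get successive counters — the behavior the test cases below assert.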
| Documentation needs to be added to say that this is not yet implemented.
@zulipbot claim
Hello @rheaparekh, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.
You can reclaim this issue or claim any other issue by commenting `@zulipbot claim` on that issue.
Thanks for your contributions, and hope to see you again soon! | 2021-09-24T07:04:08 |