An invalid draft is a topic draft whose title or body is too short. The client
does not save these, but it still asks the user whether they want to save the
draft, and even if the answer is 'yes', the draft is discarded. This commit
skips the Save button for drafts that are too small to be saved.
The current behaviour was producing random test failures which were consistently reproducible using `seed=32037592518471299633729129648744282271`.
The cause of this error is a previous test not passing any topicId or categoryId, resulting in a cache key of "undefined-undefined" that can collide with the key produced by an earlier test. Resetting the cache between tests seems like the most straightforward and future-proof solution.
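For illustration, a minimal sketch of the kind of module-level cache and reset helper this is about (names are made up, not the actual Discourse code):

```javascript
// Minimal sketch (not the actual Discourse code): a module-level cache keyed
// by `${topicId}-${categoryId}`. A test that passes no ids produces the key
// "undefined-undefined", which can leak a stale value into the next test, so a
// hypothetical resetCache() helper is called from the test hooks.
let cache = {};

function computeSomething(topicId, categoryId) {
  // stand-in for the real, more expensive computation being cached
  return { topicId, categoryId, computedAt: Date.now() };
}

export function cachedCompute(topicId, categoryId) {
  const key = `${topicId}-${categoryId}`;
  if (!(key in cache)) {
    cache[key] = computeSomething(topicId, categoryId);
  }
  return cache[key];
}

export function resetCache() {
  cache = {};
}

// In the QUnit module:
// hooks.beforeEach(() => resetCache());
```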
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on, only the User Card Background for the user profile actually uses the `uppy-image-uploader` component.
A new `ExternalUploadStub` model and database table is introduced in this pull request. It is used to keep track of uploads that the direct-to-S3 code places in a temporary location in S3; these stubs are eventually deleted either a) when the direct upload is completed or b) after a certain period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the client side (e.g. the SHA1 checksum described below). It also creates an `ExternalUploadStub` that stores the details of the temp object key and the file being uploaded.
Once the client has this URL, uppy uploads the file directly to S3 using the presigned URL. Once the upload is complete we move on to the next stage.
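As a rough illustration of the client-side flow (the endpoint path, payload fields, and plugin wiring here are assumptions based on the description above, not the exact code in this PR), uppy's AwsS3 plugin can be pointed at the presigned-PUT endpoint like this:

```javascript
import Uppy from "@uppy/core";
import AwsS3 from "@uppy/aws-s3";

const uppy = new Uppy({ autoProceed: true });

uppy.use(AwsS3, {
  getUploadParameters(file) {
    // ask the server for a presigned PUT URL pointing at a temp/ key
    // (endpoint path and payload fields are assumptions, for illustration only)
    return fetch("/uploads/generate-presigned-put", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ file_name: file.name, file_size: file.size }),
    })
      .then((response) => response.json())
      .then((data) => ({
        method: "PUT",
        url: data.url, // presigned URL; uppy PUTs the file straight to S3
        headers: { "Content-Type": file.type },
      }));
  },
});
```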
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`), we do not download it or generate a SHA1 checksum for it. Instead we create the `Upload` record via `UploadCreator`, simply copy the object to its final destination on S3, and then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by Ruby. The browser's SHA1 checksum is stored as metadata on the object in S3 and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception: to avoid having to re-upload the file, if there are no changes to it in `UploadCreator` (such as resizing), we follow the same copy + delete-temp path that we use for files that are too large.
3. Finally, we return the serialized upload record back to the client.
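Continuing the sketch above, the client-side completion step could look roughly like this (the route path and parameter names are assumptions):

```javascript
// Continuing the sketch: once uppy reports the S3 upload finished, ask the
// server to run the completion step (ExternalUploadManager) for the stub.
// Route path and parameter names are assumptions, for illustration only.
uppy.on("upload-success", (file) => {
  fetch("/uploads/complete-external-upload", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      unique_identifier: file.meta.uniqueIdentifier, // identifies the ExternalUploadStub
    }),
  })
    .then((response) => response.json())
    .then((upload) => {
      // `upload` is the serialized Upload record returned in step 3
      console.log("upload complete", upload.url);
    });
});
```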
Several errors that can occur along the way are also handled by `UploadsController`.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jQuery file uploader errors.
This PR contains only tests. These tests come from my old PR with refactoring of future-date-input-selector. That PR was closed because we had some changes in our plans about our time-pickers and, additionally, those tests were flaky.
The tests in this PR aren't flaky, since they use fake moments of time in the future. They just document the current behaviour of future-date-input-selector.
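For example, a test can pin "now" to a fixed moment so assertions about future dates never drift between runs. A sketch using sinon fake timers (not the exact test code from this PR):

```javascript
import sinon from "sinon";
import { module, test } from "qunit";

module("future-date-input-selector", function (hooks) {
  hooks.beforeEach(function () {
    // freeze "now" at a known moment so computed future dates are deterministic
    this.clock = sinon.useFakeTimers(new Date("2021-06-07T08:00:00Z").getTime());
  });

  hooks.afterEach(function () {
    this.clock.restore();
  });

  test("derives tomorrow from the frozen clock", function (assert) {
    const tomorrow = new Date(Date.now() + 24 * 60 * 60 * 1000);
    assert.strictEqual(tomorrow.toISOString().slice(0, 10), "2021-06-08");
  });
});
```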
Shows the last 6 seen users as filtering suggestions when typing @ in quick search. (Previously, user suggestions required a character after the @.)
This also adds a default limit of 6 to the user search query; previously the backend returned 20 results, but a maximum of 6 was shown anyway.
Configuring staged users to watch categories and tags is a way to sign
them up to get many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt in to email notifications by logging in to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. The default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* FIX: Clear stale status of reloaded reviewables
Navigating away from and back to the reviewables page reloaded Reviewable
records, but did not clear the "stale" attribute.
* FEATURE: Show user who last acted on reviewable
When a user acts on a reviewable, all other clients are notified, but only a
generic "reviewable was resolved by someone" notice was shown instead of the
action buttons. There is no need to keep the username of the acting user
secret.
Replaces the autocomplete overlay for categories and usernames on the search input and adds suggestions as items in the search results instead. Also adds the same behaviour for @mentions as well as special `in: status: order:` keywords. See PR for more details.
Currently when bulk-awarding a badge that can be granted multiple times, users in the CSV file are granted the badge only once, no matter how many times they're listed in the file, and only if they don't have the badge already.
This PR adds a new option to the Badge Bulk Award feature so that it's possible to grant users a badge even if they already have it, and to grant it as many times as they appear in the CSV file.
This PR adds the first use of Uppy in our codebase, hidden behind an `enable_experimental_image_uploader` site setting. When the setting is enabled, only the user card background uploader will use the new `uppy-image-uploader` component added in this PR.
I've introduced an `UppyUpload` mixin that has feature parity with the existing `Upload` mixin, and improves on it slightly to better handle multiple/single file distinctions and validations. For now, this just supports the XHRUpload plugin for uppy, which keeps our existing POST to `/uploads.json`.
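A minimal sketch of the idea behind the mixin (not the actual mixin code; the form field name is an assumption):

```javascript
import Uppy from "@uppy/core";
import XHRUpload from "@uppy/xhr-upload";

const uppy = new Uppy({
  autoProceed: true,
  restrictions: { maxNumberOfFiles: 1 }, // single-file uploader, e.g. the card background
});

uppy.use(XHRUpload, {
  endpoint: "/uploads.json", // same endpoint the existing Upload mixin POSTs to
  fieldName: "file",         // assumed form field name
});

uppy.on("upload-success", (file, response) => {
  // response.body is the serialized upload returned by the server
  console.log("uploaded", response.body.url);
});
```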
User flair used to be derived from the user's primary group. This PR separates
the two, adds a new flair group ID field to the user model, and lets users
select their flair in their user preferences.
A fairly complex algorithm was used to achieve consensus between the server
and client lists of notifications. This commit uses a different and simpler
approach that ignores order, but updates the read status of existing
notifications and removes stale ones.
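A rough sketch of what such an order-insensitive sync could look like (function and field names are made up, not the actual implementation):

```javascript
// Rough sketch (names are made up): keep client notifications the server still
// knows about, copy over their read status, and append anything new from the
// server; ordering is ignored entirely.
function syncNotifications(clientList, serverList) {
  const serverById = new Map(serverList.map((n) => [n.id, n]));

  // drop stale notifications and update read status on the ones that remain
  const synced = clientList
    .filter((n) => serverById.has(n.id))
    .map((n) => ({ ...n, read: serverById.get(n.id).read }));

  // add notifications the client hasn't seen yet
  const clientIds = new Set(clientList.map((n) => n.id));
  for (const notification of serverList) {
    if (!clientIds.has(notification.id)) {
      synced.push(notification);
    }
  }

  return synced;
}
```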
This also moves all the "top topics by period" routes to a query string param.
/top/monthly => /top?period=monthly
/c/:slug/:id/l/top/monthly => /c/:slug/:id/l/top?period=monthly
/tag/:slug/l/top/daily => /tag/:slug/l/top?period=daily (new)
This PR changes the topic timer options into a more logical order
depending on whether the topic is open or closed.
Also, we are now hiding the "Schedule Publishing" option
if the topic is not a private message or in a private category.
It does not make sense to schedule publishing to a different
category for a public topic.
Multiselect data can be saved, but when all values are removed, the data is not cleared.
The ajax function strips empty arrays from request data. In that case, we should change `[]` to `null`,
because we need those empty values to actually clear the stored data.
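A minimal sketch of the fix (names are illustrative, not the actual code):

```javascript
// Minimal sketch (names are illustrative): jQuery's $.ajax drops empty-array
// params from the serialized request, so the server never learns the field was
// cleared. Sending null keeps the key in the payload.
function normalizeMultiselect(values) {
  return values && values.length > 0 ? values : null;
}

// e.g. when the user removes every selected value:
const selectedTags = [];
const payload = { tags: normalizeMultiselect(selectedTags) }; // { tags: null }
console.log(payload);
```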
The error was:
```
↪ Unit | Model | topic::recover [✔]
↪ Unit | Utility | emoji::emojiUnescape [✔]
↪ Unit | Utility | pretty-text::quoting a quote [✔]
↪ Unit | Utility | click-track::routes to internal urlsUnhandled request in test environment: /forum/t/1234/recover (PUT)
Error: Unhandled request in test environment: /forum/t/1234/recover (PUT)
at Pretender.server.unhandledRequest (discourse/tests/setup-tests:173:15)
at Pretender.handleRequest (pretender:400:14)
at FakeRequest.send (pretender:169:21)
at Object.send (jquery:10100:10)
at Function.ajax (jquery:9683:15)
at performAjax (discourse/app/lib/ajax:174:19)
at eval (discourse/app/lib/ajax:183:11)
at invokeCallback (ember:63104:17)
at publish (ember:63087:9)
at eval (ember:57463:16)
[✘]
```
* DEV: Don't duplicate a function
We recently changed (https://github.com/discourse/discourse/pull/13407) the behaviour of the topic-level bookmark button. That PR made the button open the edit bookmark modal when there is only one bookmark on the topic, instead of just removing that bookmark as it did before.
This PR fixes the following problems that weren't taken into account in the previous PR:
1. Everything should work fine even on very big topics, where a bookmarked post may be unloaded from the post stream. I've added code that loads the post we need and makes everything work as expected.
2. When at least one bookmark on the topic has a reminder, we should always show the clock icon on the topic-level bookmark button.
3. We should show correct tooltips for the topic-level bookmark button.
We want to submit the flag modal on pressing CTRL + ENTER and CMD + ENTER.
Here's how our modals work:
Every modal can be dismissed by pressing ESC. This behaviour can be disabled for a specific modal if we need to.
Every modal can be submitted by pressing ENTER if the cursor wasn't on a text area or a form at the moment of pressing.
Now, the flag modal is actually one big form, and pressing ENTER doesn't submit it. I've added submitting by CTRL+ENTER, but at first it interfered with the basic modal submitting by ENTER. It's a pretty tricky thing to fix because we use the keyup event for submitting by ENTER, and we need to use the keydown event for submitting with modifiers (because submitting by CMD+ENTER on Macs doesn't work with keyup).
Eventually, I fixed the problem by adding the ability to disable the default submit on ENTER (in the same way as we can already disable dismissing on ESC). Then I disabled the default submit for the flag form and implemented submitting by CTRL+ENTER and CMD+ENTER. This way everything is simple and robust. I did it only for the flag modal, but it'll be easy and safe to add the same behaviour to other modals.
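A sketch of the modifier-key handling (not the exact Discourse code; `modalElement` and `submitFlagModal` are hypothetical stand-ins):

```javascript
// Sketch only (not the exact Discourse code): listen on keydown rather than
// keyup because CMD+ENTER on macOS isn't reliably delivered on keyup.
// `modalElement` and `submitFlagModal` are hypothetical stand-ins for the
// flag modal's element and its submit action.
function bindFlagModalShortcut(modalElement, submitFlagModal) {
  modalElement.addEventListener("keydown", (event) => {
    if (event.key === "Enter" && (event.ctrlKey || event.metaKey)) {
      event.preventDefault();
      submitFlagModal();
    }
  });
}
```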
Add Members could also invite new users via email, but that was a
lesser-known fact. Splitting the previous modal into two more accessible
modals should make this feature more discoverable.
Effectively reverts 3ddc33b07c
Makes the failure states testable; see the uncommented test.
I don't think we're re-catching these errors anyway?
_update:_
We did in a single instance in discourse-code-review but it wasn't really intentional and I fixed it in https://github.com/discourse/discourse-code-review/pull/73
* pretender wasn't catching the request because it ran after this test finished
* restore wasn't needed, we do `sinon.restore()` after each test
The error was:
```
↪ Unit | Model | user::resolvedTimezone [✔]
↪ Unit | Utility | url::routeTo with prefixUnhandled request in test environment: /forum/u/chuck.json (PUT)
Error: Unhandled request in test environment: /forum/u/chuck.json (PUT)
at Pretender.server.unhandledRequest (discourse/tests/setup-tests:173:15)
at Pretender.handleRequest (pretender:400:14)
at FakeRequest.send (pretender:169:21)
at Object.send (jquery:10100:10)
at Function.ajax (jquery:9683:15)
at performAjax (discourse/app/lib/ajax:174:19)
at eval (discourse/app/lib/ajax:183:11)
at invokeCallback (ember:63104:17)
at publish (ember:63087:9)
at eval (ember:57463:16)
[✘]
```
A minimal reproduction:
`http://localhost:3001/qunit?seed=3&testId=da76996b&testId=e52a53e7`