PresenceChannel aims to be a generic system for allowing the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is boolean - each user is either present or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once.
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses Redis Lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, it will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel (a sketch of this appears after the example below).
- The PresenceChannel JS object is an ember object. The `users` and `count` properties can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use.
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";
export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}}
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
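If the channel state has already been serialized into one of your existing endpoints, a variant of the same component can pass that state straight to `subscribe()` and skip the extra GET request. A minimal sketch, assuming the serialized state is exposed to the component as `initialState`:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";

export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    // `this.initialState` is an assumed property standing in for channel
    // state serialized by one of your existing endpoints; passing it here
    // avoids the automatic GET request for the current state.
    this.presenceChannel.subscribe(this.initialState);
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```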
This mixin needs to be shared between the composer and composer-like
user interfaces. This commit makes it so the events and the underlying
data model are configurable by the component extending the ComposerUploadUppy
mixin.
Also removes two MessageBus unsubscribe calls which were unnecessary.
The user-topic-list template is also used in other places where we want to improve blank page syndrome, so this PR is preparation for those changes as well.
- uses tagName=""
- removes the `user` property, which is not being used
- extracts utility functions
- better wording for boolean properties
- initializes all properties
- uses @action
- uses optional chaining
- other minor changes
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
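Roughly, the subscription side of this layout looks like the sketch below; the channel names are illustrative assumptions, not the exact ones used by the implementation.
```javascript
// Illustrative sketch only: channel names are assumptions.
function subscribeToPrivateMessageTracking(messageBus, currentUser, onState) {
  // One channel per user, so the backlog only contains that user's messages...
  messageBus.subscribe(
    `/private-message-topic-tracking-state/user/${currentUser.id}`,
    onState
  );

  // ...and one channel per group the user belongs to.
  (currentUser.groups || []).forEach((group) => {
    messageBus.subscribe(
      `/private-message-topic-tracking-state/group/${group.id}`,
      onState
    );
  });
}
```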
Major changes included:
- better support for screen readers
- trapping focus in modals
- better tabbing order in composer
- alerts on no content found/number of items found
- better autofocus in modals
- mini-tag-chooser is now a multi-select component
- each multi-select-component will now display selection on one row
When a theme's default color scheme is not marked as user selectable, we were outputting the numeric ID in the UI. This outputs "Theme default" instead.
Adds uppy upload functionality behind an
enable_experimental_composer_uploader site setting (default false,
and hidden).
When enabled, this site setting will make the composer-editor-uppy
component be used within composer.hbs, which in turn points to
a ComposerUploadUppy mixin which overrides the relevant
functions from ComposerUpload. This uppy uploader has parity
with all the features of the jQuery file uploader in the original
composer-editor, including:
- progress tracking
- error handling
- number of files validation
- pasting files
- dragging and dropping files
- updating upload placeholders
- upload markdown resolvers
- processing actions (the only one we have so far is the media optimization
  worker by falco, this works)
- cancelling uploads
For now all uploads still go via the /uploads.json endpoint; direct
S3 support will be added later.
Also included in this PR are some changes to the media optimization
service, to support uppy's different file data structures, and also
to make the promise tracking and resolving more robust. Currently
it uses the file name to track promises; we can switch to something
more unique later if needed.
Does not include custom upload handlers; that will come
in a later PR, as it is a tricky problem to handle.
Also, this new functionality will not be used in encrypted PMs because
encrypted PM uploads rely on custom upload handlers.
The invite acceptance page is an alternative signup flow, so it makes sense to include the new 'link' functionality there as well.
Followup to 7dc8f8b794
When a user signs up via an external auth method, a new link is added to the signup modal which allows them to connect an existing Discourse account. This will only happen if:
- There is at least 1 other auth method available
and
- The current auth method permits users to disconnect/reconnect their accounts themselves
In the group interaction UI, if the default_notification_level for
a group was set to 0 (muted), it incorrectly showed as Watching in
the UI because the Ember or() helper, which uses JS truthiness,
considered 0 to be a falsy value and always showed 3 (watching) instead.
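A minimal illustration of the truthiness problem (not the actual template code):
```javascript
const MUTED = 0;
const WATCHING = 3;

// An or()-style fallback treats 0 as falsy, so a muted default showed as watching:
const buggyLevel = MUTED || WATCHING; // => 3 (watching), wrong

// Falling back only when the value is actually missing gives the right answer:
const fixedLevel = MUTED ?? WATCHING; // => 0 (muted), correct

console.log(buggyLevel, fixedLevel);
```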
When declaring your widget you can now add an option like: `services: ['cool']`
And your widget instances will automatically get a `this.cool` property
which will resolve to the service. This saves having to look it up
yourself.
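A rough sketch of what this looks like in a widget; the widget name and the `currentStatus` property on the `cool` service are hypothetical.
```javascript
import { createWidget } from "discourse/widgets/widget";

export default createWidget("cool-indicator", {
  // Declaring the service here gives widget instances a `this.cool`
  // property that resolves to the registered `cool` service.
  services: ["cool"],

  html() {
    // No manual container lookup needed.
    return this.cool.currentStatus;
  },
});
```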
Currently when a user clicks on an edit notification, we use `appEvents` to
notify the topics controller that it should open up the history modal for the
edited post and the appEvents callback opens up the history modal in the next
Ember runloop (by scheduling an `afterRender` callback).
There are 2 problems with this implementation:
1) the callbacks are fired/executed too early and if the post has never been
loaded from the server (i.e. not in cache), we will not get a history modal
because the method that shows the modal `return`s if it can't find the post:
016efeadf6/app/assets/javascripts/discourse/app/controllers/topic.js (L145-L152)
2) when clicking an edit notification from a non-topic page, you're redirected
to the topic page that contains the edited post and you'll see the history
modal briefly and it'll be closed immediately. The reason for this is that
we attempt to show the history modal before the route transition finishes
completely, and we have cleanup code in `initializers/page-tracking.js` that's
called after every transition and it does several things one of which is
closing any open modals.
The fix in this commit defers showing the history modal until posts are loaded
(whether fresh or cached). It works by storing some bits of information (topic
id, post number, revision number) whenever the user clicks on an edit
notification, and when the user is redirected to the topic (or scrolled to the
edited post if they're already in the topic), the post stream model checks if
we have stored information of an edit notification and requests the history
modal to be shown by the topics controller.
An invalid draft is a draft of a topic whose title or body is too short.
The client does not save these, but it will ask the user if they want
to save it. Even if the answer is 'yes', the draft is discarded. This
commit skips the Save button for such small drafts.
The current behaviour was producing random test failures which were consistently reproducible using `seed=32037592518471299633729129648744282271`
The cause of this error is a previous test not passing any topicId or categoryId, resulting in the cache key "undefined-undefined", which could collide with the key from a previous test. Resetting the cache between tests seems the most straightforward and future-proof solution.
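A hedged sketch of what resetting the cache between tests means here; the cache, the key format, and the `resetCache` helper are illustrative only.
```javascript
import { module, test } from "qunit";

// Stand-in for a module-level cache keyed by `${topicId}-${categoryId}`.
let cache = new Map();

function resetCache() {
  cache = new Map();
}

module("cached lookup", function (hooks) {
  hooks.beforeEach(function () {
    // Reset between tests so a stale key like "undefined-undefined" from a
    // previous test cannot leak into the next one.
    resetCache();
  });

  test("starts with an empty cache", function (assert) {
    assert.strictEqual(cache.size, 0);
  });
});
```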
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on, only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file directly to S3 using the presigned URL. Once the upload is complete we go to the next stage.
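A hedged client-side sketch of this first stage; the URL and payload shapes are assumptions, and the real implementation wires this into uppy rather than using raw fetch calls.
```javascript
async function startDirectS3Upload(file, sha1Checksum) {
  // 1. Ask the server for a presigned PUT URL. This also creates the
  //    ExternalUploadStub that tracks the temp object key.
  const presign = await fetch("/uploads/generate-presigned-put.json", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      file_name: file.name,
      file_size: file.size,
      metadata: { "sha1-checksum": sha1Checksum },
    }),
  }).then((response) => response.json());

  // 2. Upload the file directly to the temp location in S3.
  await fetch(presign.url, { method: "PUT", body: file });

  // The identifier is what the completion step (next section) uses.
  return presign.unique_identifier;
}
```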
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download it or generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3, then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
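And a matching sketch of the completion call; again, the URL and payload shapes are assumptions based on the description above.
```javascript
async function completeExternalUpload(uniqueIdentifier) {
  const response = await fetch("/uploads/complete-external-upload.json", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ unique_identifier: uniqueIdentifier }),
  });

  if (!response.ok) {
    // Errors raised during completion are handled by UploadsController;
    // surface them to the uploader UI on the client.
    throw new Error(`Completing the external upload failed: ${response.status}`);
  }

  // The serialized Upload record, returned once the file has reached its
  // final destination.
  return response.json();
}
```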
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
This PR contains only tests. These tests are from my old PR with refactoring of future-date-input-selector. That PR was closed because we had some changes in our plans about our time-pickers, and additionally those tests were flaky.
Tests in this PR aren't flaky, since they use fake time moments in the future. The tests just document the current behaviour of future-date-input-selector.
Will show the last 6 seen users as filtering suggestions when typing @ in quick search. (Previously the user suggestion required a character after the @.)
This also adds a default limit of 6 to the user search query, previously the backend was returning 20 results but a maximum of 6 results was being shown anyway.
Configuring staged users to watch categories and tags is a way to sign
them up to get many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. Default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>