When sending emails out via group SMTP, if we
are sending them to non-staged users we want
to mask those emails with BCC, just so we don't
expose them to anyone we shouldn't. Staged users
are ones that have likely only interacted with
support via email, and will likely include other
people who were CC'd on the original email to the
group.
Co-authored-by: Martin Brennan <martin@discourse.org>
Active Record's `to_sql` method seems to return an empty string instead
of the expected SQL query when called on a query involving an
unpersisted model instance.
This replaces the admin `user` used in the specs with a persisted instance.
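A minimal sketch of the symptom; the model names are hypothetical and the `to_sql` behaviour is as reported above:
```
# Sketch with hypothetical models: queries built from an unpersisted
# record were observed to produce "" from #to_sql.
admin = Fabricate.build(:admin)   # unpersisted; admin.id is nil
admin.posts.to_sql                # reported to come back as ""

admin = Fabricate(:admin)         # persisted instance, as the specs now use
admin.posts.to_sql                # returns the expected SELECT statement
```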
Backports the following commits:
* 40e8912395
* bbcb69461f
These fixed an error that users were seeing when
trying to claim invites multiple times, along with
a subsequent follow-up fix.
The upgrade of node in our discourse_test docker image has caused these to start failing. Ember-cli assets are default-disabled on the stable branch, so there is little need to run these tests.
- Ensure it works with prefixed S3 buckets
- Perform a sanity check that all current assets are present on S3 before starting deletion
- Remove the lifecycle rule configuration and delete expired assets immediately. This task should be run post-deploy anyway, so adding a 10-day window is not required
This task is supposed to skip uploading if the asset is already present in S3. However, when a bucket 'folder path' was configured, this logic was broken and so the assets would be re-uploaded every time.
This commit fixes that logic to include the bucket 'folder path' in the check.
This commit adds some protections in InviteRedeemer to ensure that email
can never be nil, which could cause issues with inviting the invited
person to private topics since there was an incorrect inner join.
If the email is nil and the invite is scoped to an email, we just use
that invite.email unconditionally. If a redeeming_user (an existing
user) is passed in when redeeming an invite, we use their email to
override the passed in email. Otherwise we just use the passed in
email. We now raise an error after all this if the email is still nil.
This commit also adds some tests to catch the private topic fix, and
some general improvements and comments around the invite code.
This commit also includes a migration to delete TopicAllowedUser records
for users who were mistakenly added to topics as part of the invite
redemption process.
Before this commit, we did not have guardian checks in place to determine if a
topic's title associated with a user badge should be displayed or not.
This means that the topic title of topics with restricted access
could be leaked to anon and users without access if certain conditions
are met. While we will not specify the conditions required, we have internally
assessed that the odds of meeting such conditions are low.
With this commit, we will now apply a guardian check to ensure that the
current user is able to see a topic before the topic's title is included
in the serialized object of a `UserBadge`.
Before this commit, there was no way for us to efficiently check which
topics in an array a user can see. Therefore, this commit
introduces the `TopicGuardian#can_see_topic_ids` method which accepts an
array of `Topic#id`s and filters out the ids which the user is not
allowed to see. The `TopicGuardian#can_see_topic_ids` method is meant to
maintain feature parity with `TopicGuardian#can_see_topic?` at all
times so a consistency check has been added in our tests to ensure that
`TopicGuardian#can_see_topic_ids` returns the same result as
`TopicGuardian#can_see_topic?`. In the near future, the plan is for us
to switch to `TopicGuardian#can_see_topic_ids` completely but I'm not
doing that in this commit as we have to be careful with the performance
impact of such a change.
This method is currently not being used in the current commit but will
be relied on in a subsequent commit.
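A sketch of the intended call shape based on the description above; the keyword argument name is an assumption:
```
# Sketch: filter a batch of topic ids down to those the user may see.
visible_ids = guardian.can_see_topic_ids(topic_ids: topics.map(&:id))
topics.select { |t| visible_ids.include?(t.id) }
```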
We are already caching any DB_HOST and REDIS_HOST (and their
accompanying replicas), so we should also cache the resolved addresses for
the MessageBus specific Redis. This is a noop if no MB redis is defined
in config. A side effect is that the MB will also support SRV lookup and
priorities, following the same convention as the other cached services.
The port argument was added to redis_healthcheck so that the script
supports a setup where Redis is running on a non-default port.
Did some minor refactoring to improve readability when filtering out the
CRITICAL_HOST_ENV_VARS. The `select` block was a bit confusing, so the
sequence was made easier to follow.
We were coercing an environment variable to an int in a few places, so
the `env_as_int` method was introduced to do that coercion in one place and
for convenience purposes default to a value if provided.
See /t/68301/30.
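A minimal sketch of the helper, assuming this shape:
```
# Sketch (assumed shape): coerce an env var to an integer, falling back
# to a default when the variable is unset or blank.
def env_as_int(name, default = nil)
  value = ENV[name]
  return default if value.nil? || value.strip.empty?
  value.to_i
end
```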
There are situations where a container running Discourse may want to
cache the critical DNS services without running the cache_critical_dns
service, for example running migrations prior to running a full bore
application container.
Add a `--once` argument for the cache_critical_dns script that will
only execute the main loop once, and return the status code for the
script to use when exiting. 0 indicates no errors occurred during SRV
resolution, and 1 indicates a failure during the SRV lookup.
Nothing is reported to prometheus in run_once mode. Generally this
mode of operation would be a part of a unix pipeline, in which the exit
status is a more meaningful and immediate signal than a prometheus metric.
The reporting has been moved into its own method that can be called
only when the script is running as a service.
See /t/69597.
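A sketch of how the run-once flow could look; the method names here are hypothetical:
```
# Sketch (hypothetical names): run the loop once and exit with a
# meaningful status instead of reporting to prometheus.
if options[:once]
  errors = run_dns_cache_loop_once  # hypothetical main-loop body
  exit(errors.zero? ? 0 : 1)        # 0: no SRV errors, 1: lookup failure
end
```
A deploy pipeline can then gate later steps on the exit status, e.g. run migrations only if the script exits 0.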
Describes the behaviour and configuration of the cache_critical_dns
script, mainly cribbed from commit messages. Tries to make this program
a bit less of an enigma.
The `PG::Connection#ping` method is only reliable for checking if the
given host is accepting connections, and not if the authentication
details are valid.
This extends the healthcheck to confirm that the auth details are
able to both create a connection and execute queries against the
database.
We expect the empty query to return an empty result set, so we can
assert on that. If a failure occurs for any reason, the healthcheck will
return false.
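A minimal sketch of such a check using the `pg` gem; the connection parameters are assumptions:
```
require "pg"

# Sketch: verify connectivity AND auth by executing a query, since
# PG::Connection.ping alone does not validate credentials.
def postgres_healthy?(host:, user:, password:, dbname:)
  conn = PG::Connection.new(
    host: host, user: user, password: password,
    dbname: dbname, connect_timeout: 2,
  )
  conn.exec("").to_a.empty?  # the empty query yields an empty result set
rescue PG::Error
  false
ensure
  conn&.close
end
```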
An SRV RR contains a priority value for each of the SRV targets that
are present, ranging from 0 to 65535. When caching SRV records we may want to
filter out any targets above or below a particular threshold.
This change adds support for specifying a lower and/or upper bound on
target priorities for any SRV RRs. Any targets returned when resolving
the SRV RR whose priority does not fall between the lower and upper
thresholds are ignored.
For example: Let's say we are running two Redis servers, a primary and
cold server as a backup (but not a replica). Both servers would pass health
checks, but clearly the primary should be preferred over the backup
server. In this case, we could configure our SRV RR with the primary
target as priority 1 and backup target as priority 10. The
`DISCOURSE_REDIS_HOST_SRV_LE` could then be set to 1 and the target with
priority 10 would be ignored.
See /t/66045.
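A sketch of the bounds filtering using Ruby's resolver; `srv_name` and the `_GE` variable are assumptions based on the lower/upper bound description above:
```
require "resolv"

# Sketch: resolve SRV targets, then drop any outside the configured bounds.
le = ENV["DISCOURSE_REDIS_HOST_SRV_LE"]&.to_i  # upper bound (less than or equal)
ge = ENV["DISCOURSE_REDIS_HOST_SRV_GE"]&.to_i  # lower bound (assumed counterpart)

targets = Resolv::DNS.open do |dns|
  dns.getresources(srv_name, Resolv::DNS::Resource::IN::SRV)
end
targets.select! { |t| t.priority <= le } if le
targets.select! { |t| t.priority >= ge } if ge
```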
This removes the option to override the sleep time between caching of
DNS records. The override was invalid because `''.to_i` is 0 in Ruby,
causing a tight loop calling the `run` method.
For Redis connections that operate over TLS, we need to ensure that we
are setting the correct arguments for the Redis client. We can utilise
the existing environment variable `DISCOURSE_REDIS_USE_SSL` to toggle
this behaviour.
No SSL verification is performed for two reasons:
- the Discourse application will perform a verification against any FQDN
as specified for the Redis host
- the healthcheck is run against the _resolved_ IP address for the Redis
hostname, and any SSL verification will always fail against a direct
IP address
If no SSL arguments are provided, the IP address is never cached against
the hostname as no healthy address is ever found in the HealthyCache.
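A sketch of the client arguments described, using the `redis` gem; `resolved_ip` and `port` are assumed inputs:
```
require "openssl"
require "redis"

# Sketch: healthcheck against the resolved IP, with TLS toggled by env.
# Verification is disabled because the check targets an IP, not the FQDN.
ssl_options =
  if ENV["DISCOURSE_REDIS_USE_SSL"] == "true"
    { ssl: true, ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } }
  else
    {}
  end

redis = Redis.new(host: resolved_ip, port: port, timeout: 2, **ssl_options)
healthy = redis.ping == "PONG"
```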
Modify the cache_critical_dns script for SRV RR awareness. The new
behaviour is only enabled when one or more of the following environment
variables are present (and only for a host where the `DISCOURSE_*_HOST_SRV`
variable is present):
- `DISCOURSE_DB_HOST_SRV`
- `DISCOURSE_DB_REPLICA_HOST_SRV`
- `DISCOURSE_REDIS_HOST_SRV`
- `DISCOURSE_REDIS_REPLICA_HOST_SRV`
Some minor refactoring changes to the original script's behaviour:
- add Name and SRVName classes for storing resolved addresses for a hostname
- pass DNS client into main run loop instead of creating inside the loop
- ensure all times are UTC
- add environment override for system hosts file path and time between DNS
checks mainly for testing purposes
The environment variable for `BUNDLE_GEMFILE` is set to enable Ruby to
load gems that are installed and vendored via the project's Gemfile.
This script is usually not run from the project directory as it is
configured as a system service (see
71ba9fb7b5/templates/cache-dns.template.yml (L19))
and therefore cannot load gems like `pg` or `redis` from the default
load paths. Setting this environment variable configures bundler to look
in the correct project directory during its setup phase.
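A sketch of the described setup at the top of the script; the Gemfile path is an assumption:
```
# Sketch: point bundler at the app's Gemfile even when the script is
# started outside the project directory (path is an assumption).
ENV["BUNDLE_GEMFILE"] ||= "/var/www/discourse/Gemfile"
require "bundler/setup"  # `pg` and `redis` now resolve from the app bundle
```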
When a `DISCOURSE_*_HOST_SRV` environment variable is present, the
decision for which target to cache is as follows:
- resolve the SRV targets for the provided hostname
- lookup the addresses for all of the resolved SRV targets via the
A and AAAA RRs for the target's hostname
- perform a protocol-aware healthcheck (PostgreSQL or Redis pings)
- pick the newest target that passes the healthcheck
From there, the resolved address for the SRV target is cached against
the hostname as specified by the original form of the environment
variable.
For example: The hostname specified by the `DISCOURSE_DB_HOST` record
is `database.example.com`, and the `DISCOURSE_DB_HOST_SRV` record is
`database._postgresql._tcp.sd.example.com`. An SRV RR lookup will return
zero or more targets. Each of the targets will be queried for A and AAAA
RRs. For each of the addresses returned, the newest address that passes
a protocol-aware healthcheck will be cached. This address is cached so
that if any newer address for the SRV target appears we can perform a
health check and prefer the newer address if the check passes.
All resolved SRV targets are cached for a minimum of 30 minutes in memory
so that we can prefer newer hosts over older hosts when more than one target
is returned. Any host in the cache that hasn't been seen for more than 30
minutes is purged.
See /t/61485.
Building does not persist the object in the database which is
unrealistic since we're mostly dealing with persisted objects in
production.
In theory, this will result in our test suite taking longer to run since we
now have to write to the database. However, I don't expect the increase
to be significant, and it is no different from adding new
tests which fabricate more objects.
* SECURITY: moderator shouldn't be able to import a theme via API.
* DEV: apply `AdminConstraint` for all the "themes" routes.
Co-authored-by: Vinoth Kannan <svkn.87@gmail.com>
Adds limits to location and website fields at model and DB level to
match the bio_raw field limits. A limit cannot be added at the DB level
for bio_raw because it is a postgres text field.
The migration here uses version `6.1` instead of `7.0` since `stable`
is not on that version of rails yet, otherwise this is the same as `beta`
apart from also removing the new tests which caused too many conflicts.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
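A sketch of the kind of change described above; the commit does not state the exact limit, so the value below is a placeholder:
```
# Sketch (placeholder limit, column names assumed): enforce the cap at
# both the model and DB layers, mirroring the bio_raw model-level limit.
class UserProfile < ActiveRecord::Base
  validates :location, :website, length: { maximum: 3000 }
end

class LimitUserProfileFields < ActiveRecord::Migration[6.1]
  def change
    change_column :user_profiles, :location, :string, limit: 3000
    change_column :user_profiles, :website, :string, limit: 3000
  end
end
```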
Logging out failed when the current user was cached by an instance of `Auth::DefaultCurrentUserProvider` and `#log_off_user` was called on a different instance of that class.
Co-authored-by: Sam <sam.saffron@gmail.com>
This happened when a middleware accessed the `currentUser` before a controller had a chance to populate the `action_dispatch.request.path_parameters` env variable. In that case Discourse would always cache `nil` as `currentUser`.
In certain situations, a logged in user can redeem an invite with an email that
either doesn't match the invite's email or does not adhere to the email domain
restriction of an invite link. The impact of this flaw is aggravated
when the invite has been configured to add the user that accepts the
invite into restricted groups.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
When a site has `SiteSetting.invite_only` enabled, we create a
`ReviewableUser` record when activating a user if the user is not
approved. Therefore, we need to approve the user when redeeming an
invite.
There are some uncertainties surrounding why a `ReviewableRecord` is
created for a user in an invites only site but this commit does not seek
to address that.
Follow-up to 7c4e2d33fa
`run-qunit.js` does not expect QUnit tests to start automatically but
our wizard QUnit setup did not respect the `qunit_disable_auto_start`
URL param. Hence, tests would start running automatically and when a
subsequent `QUnit.start()` function call was made, we ended up getting a
`QUnit.start cannot be called inside a test context.` error.
This error can be consistently reproduced in the `discourse:discourse_test` container but not in
the local development environment. I do not know why and did not feel
like it is important at this point in time to know why.
This security fix affects sites which have `SiteSetting.must_approve_users`
enabled. There are intentional and unintentional cases where invited
users can be auto approved and are deemed to have skipped the staff approval process.
Instead of trying to reason about when auto-approval should happen, we have decided that
enabling the `must_approve_users` setting going forward will just mean that all new users
must be explicitly approved by a staff user in the review queue. The only case where users are auto
approved is when the `auto_approve_email_domains` site setting is used.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
(Stable backport of 7ed899f)
There are a couple of layers of caching for theme JavaScript in Discourse:
The first layer is the `javascript_caches` table in the database. When a theme
with JavaScript files is installed, Discourse stores each one of the JavaScript
files in the `theme_fields` table, and then concatenates the files, compiles
them, computes a SHA1 digest of the compiled JavaScript and stores the results
along with the SHA1 digest in the `javascript_caches` table.
Now when a request comes in, we need to render `<script>` tags for the
activated theme(s) of the site. To do this, we retrieve the `javascript_caches`
records of the activated themes and generate a `<script>` tag for each record.
The `src` attribute of these tags is a path to the `/theme-javascripts/:digest`
route which simply responds with the compiled JavaScript that has the requested
digest.
The second layer is a distributed cache whose purpose is to make rendering
`<script>` a lot more efficient. Without this cache, we'd have to query the
`javascript_caches` table to retrieve the SHA1 digests for every single
request. So we use this cache to store the `<script>` tags themselves so that
we only have to retrieve the `javascript_caches` records of the activated
themes for the first request and future requests simply get the cached
`<script>` tags.
What this commit does is ensure that the SHA1 digest in the
`javascript_caches` table stays the same across compilations by adding an order
by id clause to the query that loads the `theme_fields` records. Currently, we
specify no order when retrieving the `theme_fields` records so the order in
which they're retrieved can change across compilations and therefore cause the
SHA1 to change even though the individual records have not changed at all.
An inconsistent SHA1 digest across compilations can cause the database cache
and the distributed cache to have different digests and that causes the
JavaScript to fail to load (and if the theme heavily customizes the site, it
gives the impression that the site is broken) until the cache is cleared.
This can happen in busy sites when 2 concurrent requests recompile the
JavaScript files of a theme at the same time (this can happen when deploying a
new Discourse version) and request A updates the database cache after request B
did, and request B updates the distributed cache after request A did.
Internal ticket: t60783.
Co-authored-by: David Taylor <david@taylorhq.com>
Co-authored-by: Osama Sayegh <asooomaasoooma90@gmail.com>
The values in Discourse dropdown menus only come from admin-defined strings, not unsanitised end-user input, so this lack of escaping was not exploitable.
All current browsers treat the HTML document (not the body element) as
the scrollable document element. Hence in all current browsers,
`document.body.scrollTop` returns 0. This commit removes all usage of
this property, because it is effectively 0.
Co-authored-by: David Taylor <david@taylorhq.com>
After this commit, category group permissions can only be seen by users
that are allowed to manage a category. In the past, we inadvertently
included a category's group permissions settings in `CategoriesController#show`
and `CategoriesController#find_by_slug` endpoints for normal users when
those settings are only a concern to users that can manage a category.
The permissions for the 'everyone' group were not serialized because
the list of groups a user can view did not include it. This bug was
introduced in commit dfaf9831f7.
Since 3fd7b31a2a some tests
were failing with this error:
> Error: Unhandled request in test environment: /c/feature/find_by_slug.json
> (GET) at http://localhost:7357/assets/test-helpers.js
This commit fixes the issue by adding the missing pretender. Also
noticed while fixing this that the parameter for the translation
was incorrect -- it was `group` instead of `groupNames`, so that
is fixed here too, along with moving the onShow functions into
@afterRender decorated private functions. There is no need for the
appevent listeners.
Our group fabrication creates groups with name "my_group_#{n}" where n
is the sequence number of the group being created. However, this can
cause the test to be flaky if and when a group with name `my_group_10`
is created as it will be ordered before
`my_group_9`. This commit makes the group names deterministic to
eliminate any flakiness.
This reverts commit 558bc6b746.
In certain instances when viewing a category, the name of a group with
restricted visibility may be revealed to users who do not have the
required permission.
When bundler is loaded, it sets the `RUBYOPT` environment variable to set
itself up. However, this was causing weird errors like the following when we
tried to install custom plugin gems into a specific directory.
```
/home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/source/git.rb:214:in `rescue in load_spec_files': https://github.com/discourse/mail.git is not yet checked out. Run `bundle install` first. (Bundler::GitError)
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/source/git.rb:210:in `load_spec_files'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/source/path.rb:107:in `local_specs'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/source/git.rb:178:in `specs'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/lazy_specification.rb:88:in `__materialize__'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/spec_set.rb:75:in `block in materialize'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/spec_set.rb:72:in `map!'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/spec_set.rb:72:in `materialize'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/definition.rb:468:in `materialize'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/definition.rb:190:in `specs'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/definition.rb:238:in `specs_for'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/runtime.rb:18:in `setup'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler.rb:151:in `setup'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/setup.rb:20:in `block in <top (required)>'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/ui/shell.rb:136:in `with_level'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/ui/shell.rb:88:in `silence'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/gems/2.7.0/gems/bundler-2.3.5/lib/bundler/setup.rb:20:in `<top (required)>'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/site_ruby/2.7.0/rubygems/core_ext/kernel_require.rb:85:in `require'
from /home/tgxworld/.asdf/installs/ruby/2.7.5/lib/ruby/site_ruby/2.7.0/rubygems/core_ext/kernel_require.rb:85:in `require'
```
We have 3 branches which we care about: main, beta and stable.
However, each of these branches has different compatibilities with plugins,
and we want to respect that.
Themes often cache `nil` values in a DistributedCache. This bug meant that we were re-calculating some values on every request, AND triggering message-bus publishing on every request.
This fix should provide a significant performance improvement for busy sites.
The accept HTML attribute is not fully supported on iOS yet and can contain
only MIME types. This changes the input to allow all files; the
extension check is performed later in JavaScript.
* FEATURE: RS512, RS384 and RS256 COSE algorithms
These algorithms are not implemented by cose-ruby, but used in the web
authentication API and were marked as supported.
* FEATURE: Use all algorithms supported by cose-ruby
Previously only a subset of the algorithms were allowed.
When changing to uppy for file uploads we forgot to add
these conditions to the paste event from 9c96511ec4
Basically, if you are pasting more than just a file (e.g. text,
html, rtf), then we should not handle the file and upload it, and
instead just paste in the text. Handling the file caused issues with spreadsheet
tools, which copy a text representation and also an image
representation of cells to the user's clipboard.
This also moves the paste event for composer-upload-uppy to the
element found by the `editorClass` property, so it shares the paste
event with d-editor (via TextareaTextManipulation), which makes testing
this possible as the ember paste bindings are not picked up unless both
paste events are on the same element.
In an earlier PR, we decided that we only want to block a domain if
the blocked domain in the SiteSetting is the final destination (/t/59305). That
PR used `FinalDestination#get`. `resolve`, however, is used in several places
but blocks domains along the redirect chain when certain options are provided.
This commit changes the default options for `resolve` to not do that. Existing
users of `FinalDestination#resolve` are
- `Oneboxer#external_onebox`
- our onebox helper `fetch_html_doc`, which is used in amazon, standard embed
and youtube
- these folks already go through `Oneboxer#external_onebox` which already
blocks correctly
Combines 68fe6903f7 and
7b7e707fa2.
We no longer use jQuery UI for anything since getting
rid of jQuery file uploader in 667a8a6,
so we can safely remove these now.
Also removes the blueimp-file-upload and jquery.iframe-transport
dependencies that were formerly used by jQuery file uploader
This is a workaround for a behavior change in Chromium v97.
The following text was sent to the blink-dev mailing list:
> This change broke a SingleSignOn login on the FOSS software Discourse. We have a flow like:
>
> 1. User visits forum.siteA.com, click login
> 2. Gets redirected to idp.siteB.com
> 3. Fills login details
> 4. Gets redirected to forum.siteA.com/session/sso_login?parameters
> 5. Gets redirected to forum.siteA.com/homepage
>
> On step 4, the response includes a `set-cookie` header, with proper `HttpOnly; SameSite=Lax; Secure` attributes set. But if there is an active service worker, the login will fail as that cookie will be rejected by Chromium due to SameSite rules now.
>
> t=2971 [st=258] COOKIE_INCLUSION_STATUS
> --> domain = "forum.siteA.com"
> --> name = "_t"
> --> operation = "store"
> --> path = "/"
> --> status = "EXCLUDE_SAMESITE_LAX, DO_NOT_WARN"
>
> The service worker is a vanilla WorkboxJS service worker that intercepts all GETs with the "Network First" strategy.
>
> Disabling the service worker or using Firefox results in a successful login. There is no warning in either DevTools network tab nor the console that the cookie was rejected.
>
> Chrome 96: login works
> Chrome 97: login does not work
> Chrome 98: login does not work
>
> Is this expected behavior? Even if the request `GET forum.siteA.com` was initiated because of a redirect from a different domain, is it expected that Chrome will silently drop same site cookies from forum.siteA.com?
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
The `plugin:pull_compatible_all` task is intended to take incompatible plugins and downgrade them to an earlier version. Problem is, when running the rake task in development/production environments, the plugins have already been activated. If an incompatible plugin raises an error in `plugin.rb` then the rake task will be unable to start.
This commit centralises our LOAD_PLUGINS detection, adds support for LOAD_PLUGINS=0 in dev/prod, and adds a warning to `plugin:pull_compatible_all` if it's run with plugins enabled.
MessageBus::Diagnostics allows anyone with access to carry out certain
operations that may result in a denial of service. The impact of this is
greater on multisite clusters.
Under some conditions, these varied responses could lead to cache poisoning, hence the 'security' label.
For the stable branch, we are disabling the use of Ember CLI against production sites. A new implementation has been added to the tests-passed/beta branches
When rendering the markdown code blocks we replace the
offending characters in the output string with spans highlighting a textual
representation of the character, along with a title attribute with
information about why the character was highlighted.
The list of characters stripped by this fix, which are the bidirectional
characters considered relevant, are:
U+202A
U+202B
U+202C
U+202D
U+202E
U+2066
U+2067
U+2068
U+2069
This only affects multisite Discourse instances (where multiple forums are served from a single application server). The vast majority of self-hosted Discourse forums do not fall into this category.
On affected instances, this vulnerability could allow encrypted session cookies to be re-used between sites served by the same application instance.
This will signal to intermediary proxies and/or misconfigured CDNs not to
cache those error responses.
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
An upstream validation bug in the aws-sdk-sns library could enable RCE under certain circumstances. This commit updates the upstream gem, and adds additional validation to provide defense-in-depth.
Prior to this fix, whisper posters in personal messages were revealed in
the topic's participants list even though non-staff users are unable to
see the whisper.
Previously auto focus would only work on modals that include buttons or
inputs.
To avoid a situation where information modals such as keyboard shortcuts
do not get focus, simply focus on the close button as a fallback.
Previously we had no role set for various topic links, nor did we have any
headers.
This teaches screen readers that topic links in topic lists are to be treated
as H2. We opted for this less radical change because a change of the element
type would probably result in many broken themes.
Confirmed on NVDA you can very quickly breeze through topic lists now. Minor
edge case is pinned topics which can be a bit annoying due to multiple links.
NVDA does not detect HTML5 articles as regions. This explicitly sets a
region with an aria-label denoting post numbers making it much easier to
know where you are in a topic.
Note that role "article", which is more semantically correct, is not respected by
NVDA's d/D shortcut, hence the much more generic "region" role.
Previously, certain images could cause convert / identify to run for unreasonable
amounts of time.
This adds a maximum amount of time these commands can run before forcing
them to stop.
There are a few issues which require us to do this:
- We install the latest version of bundler on every rebuild. Therefore we're running 2.2.15 everywhere, even for 'stable' clusters
- Bundler has changed how gem platforms are managed. That meant that on the stable branch we were building libv8 from source via the 'ruby' package, rather than using the precompiled x86_64-linux binary
- Building the libv8 from source is currently failing
Together, these things mean that builds of `stable` are currently failing. Each of the above issues should likely be fixed, but this commit provides the quickest route to get things working again. Note that despite the Gemfile.lock update, no gem versions have changed.
The regular expression to detect private IP addresses did not always detect them successfully.
Changed to use Ruby's built-in IPAddr.new(ip_address).private? method instead,
which does the same thing but covers all cases.
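For reference, the built-in method in action:
```
require "ipaddr"

IPAddr.new("10.0.0.1").private?      # => true
IPAddr.new("192.168.1.10").private?  # => true
IPAddr.new("8.8.8.8").private?       # => false
```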
NewPostManager’s `post_needs_approval_in_its_category` method should allow category group moderators to create topics / reply to topics where they have appropriate permissions
(i.e. if a user has permission to moderate a post, any posts made by them shouldn’t be sent to moderation).
If a list of email addresses is pasted into a group’s Add Members form
that has one or more email addresses of users who already belong to the
group and all other email addresses are for users who do not yet exist
on the forum then no invites were being sent. This commit ensures that
we send invites to new users.
This moves all the rate limiting for user second factor (based on `params[:second_factor_token]` existing) into one place, which rate limits by IP and also by username if a user is found.
b8c676e7 added the 'forever' option to the UI, and this is correctly stored in the database. However, we had a hard-coded limit of 4 months in the cleanup job. This commit removes the limit, so ignores can last forever.
On forums with a large amount of posts when a user had a bookmark in the topic, PostgreSQL was using an inefficient query plan to fetch the first post of the topic. When running this ActiveRecord query:
```
topic.posts.with_deleted.where(post_number: 1).first
```
The following query plan was produced:
```
Limit (cost=0.43..583.49 rows=1 width=891) (actual time=3850.515..3850.515 rows=1 loops=1)
-> Index Scan using posts_pkey on posts (cost=0.43..391231.51 rows=671 width=891) (actual time=3850.514..3850.514
rows=1 loops=1)
Filter: ((topic_id = 160918) AND (post_number = 1))
Rows Removed by Filter: 2274520
Planning time: 0.200 ms
Execution time: 3850.559 ms
(6 rows)
```
The issue here is the combination of ORDER BY and LIMIT causing the inefficient Index Scan using posts_pkey on posts to be used. When we correct the AR call to this:
```
topic.posts.with_deleted.find_by(post_number: 1)
```
We end up with a query that still has a LIMIT but no ORDER BY, which in turn creates a much more efficient query plan:
```
Limit (cost=0.43..1.44 rows=1 width=891) (actual time=0.033..0.034 rows=1 loops=1)
-> Index Scan using index_posts_on_topic_id_and_post_number on posts (cost=0.43..678.82 rows=671 width=891) (actua
l time=0.033..0.033 rows=1 loops=1)
Index Cond: ((topic_id = 160918) AND (post_number = 1))
Planning time: 0.167 ms
Execution time: 0.072 ms
(5 rows)
```
This query plan uses the correct index, `Index Scan using index_posts_on_topic_id_and_post_number on posts`. Note that this is only a problem on forums with a larger amount of posts; tiny forums would not notice the difference. On large forums a query for a topic that takes 1s without a bookmark can take 8-30 seconds, and even end up with 502 errors from nginx.
See https://meta.discourse.org/t/email-address-change-confirmation-email-not-sent-but-every-other-notification-emails-are/165358
In short: with disable emails set to non-staff, email address change confirmation emails (those sent to the new address) are not sent for staff or admin members.
This was happening because we were looking up the staff user with the to_address of the email, but the to address was the new email address because we are sending a confirm email change email, and thus the user could not be found. We didn't need to do this because we are passing the user into the Email::Sender class anyway.
You can now create a file in your plugin/theme in the `api-initializers`
directory which has a simpler template than previous initializers.
Example:
```
// api-initializers/my-plugin.js
import { apiInitializer } from "discourse/lib/api";
export default apiInitializer("0.8", api => {
console.log("hello world from api initializer!");
});
```
On the topic view route we query for reviewables of each post in the stream,
using a query that filters on two unindexed columns. This results in a Parallel Seq Scan
over all rows, which can take quite some time (~20ms was seen) on forums with lots of flags.
After the index is added, the PostgreSQL planner opts for a simple Index Scan and runs in under 1ms.
Before:
```
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize GroupAggregate (cost=11401.08..11404.87 rows=20 width=28) (actual time=19.209..19.209 rows=1 loops=1)
Group Key: r.target_id
-> Gather Merge (cost=11401.08..11404.41 rows=26 width=28) (actual time=19.202..20.419 rows=1 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial GroupAggregate (cost=10401.06..10401.38 rows=13 width=28) (actual time=16.958..16.958 rows=0 loops=3)
Group Key: r.target_id
-> Sort (cost=10401.06..10401.09 rows=13 width=16) (actual time=16.956..16.956 rows=0 loops=3)
Sort Key: r.target_id
Sort Method: quicksort Memory: 25kB
Worker 0: Sort Method: quicksort Memory: 25kB
Worker 1: Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=0.42..10400.82 rows=13 width=16) (actual time=15.894..16.938 rows=0 loops=3)
-> Parallel Seq Scan on reviewables r (cost=0.00..10302.47 rows=8 width=12) (actual time=15.882..16.927 rows=0 loops=3)
Filter: (((target_type)::text = 'Post'::text) AND (target_id = ANY ('{7565483,7565563,7565566,7565567,7565568,7565569,7565579,7565580,7565583,7565586,7565588,7565589,7565601,7565602,7565603,7565613,7565620,7565623,7565624,7565626}'::integer[])))
Rows Removed by Filter: 49183
-> Index Scan using index_reviewable_scores_on_reviewable_id on reviewable_scores s (cost=0.42..12.27 rows=2 width=8) (actual time=0.029..0.030 rows=1 loops=1)
Index Cond: (reviewable_id = r.id)
Planning Time: 0.318 ms
Execution Time: 20.470 ms
```
After:
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GroupAggregate (cost=0.84..342.54 rows=20 width=28) (actual time=0.038..0.038 rows=1 loops=1)
Group Key: r.target_id
-> Nested Loop (cost=0.84..341.95 rows=31 width=16) (actual time=0.020..0.033 rows=1 loops=1)
-> Index Scan using index_reviewables_on_target_id on reviewables r (cost=0.42..96.07 rows=20 width=12) (actual time=0.013..0.026 rows=1 loops=1)
Index Cond: (target_id = ANY ('{7565483,7565563,7565566,7565567,7565568,7565569,7565579,7565580,7565583,7565586,7565588,7565589,7565601,7565602,7565603,7565613,7565620,7565623,7565624,7565626}'::integer[]))
-> Index Scan using index_reviewable_scores_on_reviewable_id on reviewable_scores s (cost=0.42..12.27 rows=2 width=8) (actual time=0.005..0.005 rows=1 loops=1)
Index Cond: (reviewable_id = r.id)
Planning Time: 0.253 ms
Execution Time: 0.067 ms
```
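A sketch of the migration implied by the index name in the plan above (`index_reviewables_on_target_id`); the actual definition may differ:
```
# Sketch: add the missing index so the planner can use an Index Scan
# instead of a Parallel Seq Scan over all reviewables.
class AddIndexReviewablesOnTargetId < ActiveRecord::Migration[6.0]
  def change
    add_index :reviewables, :target_id
  end
end
```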
Adds handling for fetching git commits when they do not exist locally, e.g. with
clone --depth 1. Commits can only be fetched via git fetch --depth 1 {remote} {ref},
and the ref needs to be a full, non-ambiguous reference.
Useful if you want to, say, have your unicorn listen on a Unix domain
socket, rather than a TCP port, or you want to be able to bind to a
single address other than 127.0.0.1.
This pushes v8 from Chrome 73 (March 2019) -> 84 (July 14 2020)
Not expecting any user facing changes, but it is super nice to be on latest
v8 :confetti:
* strip out the href and xlink:href attributes from use element that
are _not_ anchors in svgs which can be used for XSS
* adding the content-disposition: attachment ensures that
uploaded SVGs cannot be opened and executed using the XSS exploit.
svgs embedded using an img tag do not suffer from the same exploit
Adds a new rake task `plugin:checkout_compatible_all` and
`plugin:checkout_compatible[plugin-name]` that check out compatible plugin
versions.
Supports a .discourse-compatibility file in the root of plugins and themes that
list out a plugin's compatibility with certain discourse versions:
eg: .discourse-compatibility
```
2.5.0.beta6: some-git-hash
2.4.4.beta4: some-git-tag
2.2.0: git-reference
```
This ensures older Discourse installs are able to find and install older
versions of plugins without intervention, through the manifest only.
It iterates through the versions in descending order. If the current Discourse
version matches an item in the manifest, it checks out the listed plugin target.
If the Discourse version is greater than an item in the manifest, it checks out
the next highest version listed in the manifest.
If no versions match, it makes no change.
The previous fix (f43c0a5d85) wasn't working for images that were already uploaded.
The "metadata" (eg. 'for_*' and 'secure' attributes) were not added to existing uploads.
Also used 'Upload.get_from_url' in the admin/site_setting controller to properly retrieve
an upload from its URL.
Fixed the Upload::URL_REGEX to use the \h (hexadecimal) for the SHA
Follow-up-to: f43c0a5d85
When uploading an image as a site setting, we need to return the "raw" URL, otherwise
when saving the site setting, the upload won't be looked up properly.
Follow-up-to: f11363d446
Autocomplete resolving to [] was causing it to stop working.
Instead we have a special const (SKIP) which ensures it will
continue to be evaluated and only this instance is skipped.
In 91c89df6, I fixed the onebox to support local topics with a slug-less URL.
This commit fixes all the other spots (search, topic links and user badges) where we look up for a local topic.
Follow-up-to: 91c89df6
In French, the help trigger has a raw content of "afficher l'aide" which is then cooked into "afficher l’aide" (note the different quote character).
Since we were checking the raw content of the trigger against the cooked version of the post, this trigger never worked in French.
This changes it so that we cook the trigger before checking it against the cooked version of the post.
DEV: new 'discobot_username' method that is used everywhere instead of 'discobot_user.username' / 'discobot_user.username_lower'
There is a feature in search where we take over from the tokenizer
in postgres and attempt to inject more words into search.
So for example: sam.i.am will inject the words i and am.
This is not ideal because there are many edge cases and this can
cause extreme index bloat.
This is an opening move commit to make it configurable; over the
next few weeks we will evaluate and decide if we disable this by
default or simply remove it.
The logic of adding additional search results does not seem to be
needed anymore.
It appears to be a relic of an old implementation.
This saves an entire search query for every search made.
* FIX: Correct version comparison logic when comparing stable to beta
For example, version 1.3.0 should be considered higher than 1.3.0.beta3. So `Discourse.has_needed_version?('1.3.0', '1.3.0.beta3')` should return true
* Switch to use Gem::Version to compare versions
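The comparison in question, shown directly with Gem::Version:
```
require "rubygems"

# Prerelease versions sort before the corresponding release:
Gem::Version.new("1.3.0") > Gem::Version.new("1.3.0.beta3")  # => true
```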
This fix ensures that if a staged user is linked to or quoted they won't
be emailed about it.
A staged user could email into a category, and another user could quote
them inside of a completely different category and we don't want a
staged user to receive an email for this.
Bug report:
https://meta.discourse.org/t/-/145202/9
Additionally correctly handle cookie path for authentication_data
There were two bugs that exposed an interesting case where two discourse
instances hosted across two subfolder installs in the same domain
with oauth may clash and cause strange redirection on first login:
Log in to example.com/forum1. authentication_data cookie is set with path /
On the first redirection, the current authentication_data cookie is not unset.
Log in to example.com/forum2. In this case, the authentication_data cookie
is already set from forum1 - the initial page load will incorrectly redirect
the user to the redirect URL from the already-stored cookie, to /forum1.
This removes this issue by:
* Setting the cookie for the correct path, and not having it on root
* Correctly removing the cookie on first login
Pop up a confirmation box when there is input. This prevents accidental closing
of the dialog boxes due to clicking outside.
This adds a development hook on modals in the form of a `beforeClose`
function. Modal windows can abort the close if the function returns false.
Additionally fixing a few issues with loop and state on the modal popups:
Escape key with bootbox is keyup.
Updating modal to close on keyup as well so escape key is working.
Fixes an issue where pressing esc will loop immediately back to the modal by:
keydown -> bootbox -> keyup -> acts as "cancel", restores modal
Needs a next call to reopenModal otherwise, keyup is handled again by the modal.
Fixes an issue where pressing esc will loop immediately back to the confirm:
esc keyup will be handled and bubble immediately back to the modal.
Additionally, only handle key events when the #discourse-modal is visible.
This resolves issues where escape or enter events were being handled by
a hidden modal window.
Meta report: https://meta.discourse.org/t/short-url-secure-uploads-s3/144224
* if the show_short route is hit for an upload that is
secure, we redirect to the secure presigned URL. however
this was not taking into account multisite so the db name
was left off the path which broke the presigned URL
* we now use the correct url_for method if we know the
upload (like in the show_short case) which takes into
account multisite
Due to unicorn env object recycling, request.ip could point at the wrong
IP address by the time the defer block is called. This usually would happen
under load.
This also avoids keeping the entire request object as referenced by the
closure.
Meta report: https://meta.discourse.org/t/excessive-requests-to-uploads-lookup-urls-leading-to-429-response/143119
* The data-orig-src attribute was not being removed from cooked
video and audio so the composer was infinitely trying to get the
URLs for them, which would never resolve to anything
* Also the code that retrieved the short URL was unscoped, and was
getting everything on the page. If running from the composer we
now scope to the preview window
* Also fixed a minor issue where the element href for the video
and audio tags was not being set when the short URL was found
In some cases the CTE caused pathologically bad query plans.
This optimises it so the query runs by itself and is cached for the lifetime
of the topic query object.
This lightweight caching is done because the topic query will often
execute two queries (one for pinned and one for non-pinned topics).
The server and client used two different formats for preload keys. The
server was using 'topic_list_c/SLUG/l/latest', but the client was using
'topic_list_c/SLUG/ID/l/latest'.
This commit is an addition to 374534f00e.
Anonymous users could query the invite json and see counts and
summaries which is not allowed in the UX of Discourse.
This commit has those endpoints return a 403 unless the user is
allowed to invite.
Note this commit also fixes an issue where the edit post action was trying to focus the edit textarea, but was using jQuery functions on a DOM node.
scrollTo is not available on IE11 but that shouldn't cause much trouble.
MaxMind now requires an account with a license key to download files.
Discourse admins can register for such an account at:
https://www.maxmind.com/en/geolite2/signup
License key generation is available in the profile section.
Once registered you can set the license key using `DISCOURSE_MAXMIND_LICENSE_KEY`
This amends it so we unconditionally skip MaxMind DB downloads if no license key exists.
This is a bottom-up rewrite of the Discourse cache to support faster performance
and a limited surface area.
ActiveSupport::Cache::Store accepts many options we do not use, this partial
implementation only picks the bits out that we do use and want to support.
Additionally params are named which avoids typos such as "expires_at" vs "expires_in"
This also moves a few spots in Discourse to use Discourse.cache over setex
Performance of setex and Discourse.cache.write is similar.
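A sketch of the intended usage with named params; the cache key and block are illustrative:
```
# Sketch: named params avoid typos such as "expires_at" vs "expires_in".
Discourse.cache.write("stats", stats, expires_in: 5.minutes)
Discourse.cache.read("stats")
Discourse.cache.fetch("stats", expires_in: 5.minutes) { compute_stats }  # compute_stats is hypothetical
```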
To eliminate a DDOS attack vector, we're taking the following measures:
- The endpoint will be rate-limited to 3 requests every 60 seconds (per user).
- A 24-hour max-age cache header is sent with the response.
- The route will be hijacked to generate the certificate in the background.
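A sketch of how these measures could combine in a controller action; `RateLimiter` and `hijack` are existing Discourse helpers, while the action and certificate-generation names here are assumptions:
```
# Sketch (action and generate_certificate are hypothetical):
def certificate
  RateLimiter.new(current_user, "certificate", 3, 1.minute).performed!
  expires_in 24.hours
  hijack { render plain: generate_certificate(current_user) }
end
```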
We expect mini profiler only to show up on accounts that are flagged as
developer accounts.
Unfortunately there was a bypass on any controllers that mix in ApplicationHelper
Per new lifecycle https://developers.google.com/web/updates/2018/07/page-lifecycle-api
On Android and latest Chrome when an app transitions from "frozen" to
active the new "resume" event fires with no accompanying "visibilitychange"
event.
This means that often background tabs may be stuck thinking that discourse
has no focus when, indeed, it has.
This leads to cases where no posts are marked read anymore.
This corrects an XSS in ?pp=help.
Also removes the jQuery dependency from rack-mini-profiler and restricts
memory sensitive profiling methods development only.
* FIX: User should get notified when a post is deleted
* FEATURE: Notify posters when restoring flagged posts
* Fix typo
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* Improve tests
When activating a user via an external provider, this would cause the "this account is not activated" message to show on the first attempt, even though the account had been activated correctly.
This is a very long-standing bug we had: if a plugin attempted to amend a
serializer, core was not "correcting" the situation for all descendant classes.
This often only showed up in production because production eager loads serializers
prior to plugins amending them.
This is a critical fix for various plugins
This adds a 1 minute rate limit to all JS error reporting per IP. Previously
we would only use the global rate limit.
This also introduces DISCOURSE_ENABLE_JS_ERROR_REPORTING, if it is set to
false then no JS error reporting will be allowed on the site.
All posts created by the user are counted unless they are deleted,
belong to a PM sent between a non-human user and the user or belong
to a PM created by the user which doesn't have any other recipients.
It also makes the guardian prevent self-deletes when SSO is enabled.
Add the Array.from polyfill for IE11. This is required to support the transpiled ES6 spread syntax generated by babel: https://babeljs.io/docs/en/caveats/
This is a low severity security fix because it requires a logged in
admin user to update a site setting via the API directly to an invalid
value.
The fix adds validation for the affected site settings, as well as a
secondary fix to prevent injection in the event that bad data somehow
already exists.
There is a security hole in lodash with prototype pollution. It's not
clear if Discourse is affected but to be on the safe side we will
upgrade right away.
Note that the front end Discourse does not appear to use `defaultsDeep`
in our custom build and should be protected.
Note this is very low severity as the group needs to be created with a
default title that contains HTML, and group creation is restricted to
staff members right now.
A bug where input focus is displaced on modals was fixed in iOS 11.3 update. This hack was causing problems on topic page since hiding main-outlet results in lost read position after opening and closing a modal.
WS-2019-0064: Versions of handlebars prior to 4.0.14 are vulnerable to Prototype Pollution. Templates may alter an Object's prototype, thus allowing an attacker to execute arbitrary code on the server.
This is to address: https://www.npmjs.com/advisories/755
It is a low priority fix, as Discourse does not allow end users to input
raw handlebars templates.
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
Co-authored-by: David Taylor <david@taylorhq.com>
This gives more control over the request. In particular we can easily
lookup DNS dynamically, instead of only upon NGINX startup.
Previously, NGINX was looking up the IP for the letter avatar service and
caching the CDN IP address. This caused issues if the CDN changed IP, in
which case letter avatars would be broken until a container restarted.
NGINX config has been updated to add caching. This change will require
a container rebuild.
The proxy will now function in development environments, so the patch
for `letter_avatar_proxy` has been removed.
Generally we should never be touching AR objects in migrations, this is
super risky as we may end up with invalid schema cache.
This code from 2013 did it unconditionally. This change amends it so:
1. We only load up schema if we have no choice
2. We flush the cache before and after
This makes this migration far less risky.
Previously, we would initialize an ImageOptim object each time we resize.
This object init is mega expensive (170ms on a VERY fast machine):
```
[1] pry(main)> Benchmark.measure { FileHelper.image_optim }
=> #<Benchmark::Tms:0x00007f55440c1de0
@cstime=0.055742,
@cutime=0.141031,
@label="",
@real=0.17165619300794788,
@stime=0.0002750000000000252,
@total=0.19890400000000008,
@utime=0.0018560000000000798>
```
This happens because during init it hunts for all the right binaries and sets
up internals.
We now memoize this object to avoid a huge amount of pointless work.
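A sketch of the memoization, using the module and method named in the benchmark above:
```
module FileHelper
  # Build the expensive ImageOptim instance once per process and
  # reuse it for every resize (options omitted in this sketch).
  def self.image_optim
    @image_optim ||= ImageOptim.new
  end
end
```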
Historically due to https://meta.discourse.org/t/why-is-discourse-so-slow-on-android/8823
we decreased page sizes of both home page and topic page on android by half.
This was done on the server side and, as a side effect, caused page sizes
to mismatch between Android and non-Android clients.
Unfortunately, about a year ago Googlebot started pretending it is Android,
which caused Google to start indexing pages as Android would see them. So
it saw double the number of pages in the index compared to what exists on desktop.
This in turn caused double the amount of indexing work and a large amount
of broken links on long topics.
This fix removes all special behavior which is no longer needed due to
other performance work in Discourse including raw handlebars on home page
and virtual dom on topic pages.
I tested that we do not need this on a Blu Advance 5.0; it has a 1.3 GHz MediaTek MT6580
and retails for around $50 USD.
If we decide long term that we want any hacks like this we will shift them
to the client side. It can just hold data in memory without rendering.
This ensures that the hostname rails uses for various helpers always matches
the Discourse hostname
# Conflicts:
# config/application.rb
# spec/requests/application_controller_spec.rb
This release contains security fixes to the underlying rack library
used by Discourse.
Impact is not too high as we do not use request.scheme in our templates
If we detect Redis is in readonly mode we cannot correctly acquire a mutex,
so we raise an exception to notify the caller.
When getting optimized images, avoid the distributed mutex unless
for some reason it is the first call and we need to generate a thumbnail.
When Redis is readonly, no thumbnails will be generated.
Checking `plugin.enabled?` while initializing plugins causes issues in two ways:
- An application restart is required for changes to take effect. A load-balanced multi-server environment could behave very weirdly if containers restart at different times.
- In a multisite environment, it takes the `enabled?` setting from the default site. Changes on that site affect all other sites in the cluster.
Instead, `plugin.enabled?` should be checked at runtime, in the context of a request. This commit removes `plugin.enabled?` from many `instance.rb` methods.
I have added a working `plugin.enabled?` implementation for methods that actually affect security/functionality:
- `post_custom_fields_whitelist`
- `whitelist_staff_user_custom_field`
- `add_permitted_post_create_param`
Despite `navigator.share` being defined the call was failing with this error:
```
sharing DOMException: Internal error: could not connect to Web Share interface.
```
Somewhere there was a regression and a user couldn't remove their own
title. If they selected '(none)' in the UI it would say it was saved,
but it would not actually be updated in the db.
* Since we can no longer restore into a different schema,
we will move tables in the public schema into the backup schema
first before restoring the dump file which goes into the public
schema. The downside to this approach is that we will increase
the downtime experienced during the restore process. Downtime
would equal the duration of restoring the dump file.
* This exposes the token in the Sidekiq dashboard which can be
viewed by an admin and defeats the purpose of using a token
in the download backup email link.
* Email and username are both allowed to be used for logging in.
Therefore, it is easier to just store the user's id rather than
to store the username and email in the session.
In this branch (stable) we can't run the sanitizer because the bundle is not
loaded. The long badge description is not sanitized, but it
has to be created by an admin so it's extremely low risk.
In the beta / tests-passed branches the text is sanitized.
- Increase size of email column to varchar(513)
- Give error message on signup when email is too large
Overall impact: Low, allows signups from blocked domains. Main risk is increased spam.
I can't believe they just pulled the old gem and broke people deploying
our site to production. I get it, your name changed, but don't break
other people's apps with no deprecations.
This security fix needs SSO to be configured, and the user has to go
through the entire auth process before being redirected to the wrong host so
it is probably lower priority for most installs.
```
# The email must be present in some form since many of the methods
# for processing + redemption rely on it. If it's still nil after
# these checks then we have hit an edge case and should not proceed!
def ensure_email_is_present!(email)
  if email.blank?
    Rails.logger.warn(
      "email param was blank in InviteRedeemer for invite ID #{invite.id}. The `redeeming_user` was #{self.redeeming_user.present? ? "(ID: #{self.redeeming_user.id})" : "not"} present.",
    )
  end
end
```