Manual:Job queue/it
In 2006 (MediaWiki 1.6) the job queue was introduced to perform long-running tasks asynchronously. The job queue is designed to hold many short tasks using batch processing.
Set up
By default, jobs are run at the end of a web request. It is recommended that you instead schedule jobs to run completely in the background, via the command line. To disable the default behaviour, set $wgJobRunRate to 0.
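For example, in LocalSettings.php:

$wgJobRunRate = 0;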
Run runJobs.php as the same user as the web server runs as, to ensure that filesystem permissions are correctly handled if jobs touch uploaded files.
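For example, a minimal sketch of such an invocation (the www-data user and the /var/www/wiki path are assumptions; adjust both to your setup):

sudo -u www-data php /var/www/wiki/maintenance/runJobs.php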
Cron
You could use cron to run the jobs every hour. Add the following to your crontab file:
0 * * * * /usr/bin/php /var/www/wiki/maintenance/runJobs.php --maxtime=3600 > /var/log/runJobs.log 2>&1
Using cron makes it easy to get started, but can make email notifications and cascading template updates feel slow (they may wait for up to an hour). Consider using one of the approaches below to set up a continuous job runner instead.
Continuous service
If you have shell access and the ability to create init scripts, you can create a simple service to run jobs as they become available, and throttle them to prevent the job runner from monopolising the CPU resources of the server:
Create a bash script, for example at /usr/local/bin/mwjobrunner:
Create script
#!/bin/bash
# Put the MediaWiki installation path on the line below
MW_INSTALL_PATH="/home/www/www.mywikisite.example/mediawiki"
RUN_JOBS="$MW_INSTALL_PATH/maintenance/runJobs.php --maxtime=3600"
echo Starting job service...
# Wait a minute after the server starts up to give other processes time to get started
sleep 60
echo Started.
while true; do
# Job types that need to be run ASAP no matter how many of them are in the queue
# Those jobs should be very "cheap" to run
php $RUN_JOBS --type="enotifNotify"
# Everything else, limit the number of jobs on each batch
# The --wait parameter will pause the execution here until new jobs are added,
# to avoid running the loop without anything to do
php $RUN_JOBS --wait --maxjobs=20
# Wait some seconds to let the CPU do other things, like handling web requests, etc
echo Waiting for 10 seconds...
sleep 10
done
Depending on how fast the server is and the load it handles, you can adapt the number of jobs to run in each cycle and the number of seconds to wait between cycles.
Make the script executable (chmod 755).
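For example, assuming the path used above:

sudo chmod 755 /usr/local/bin/mwjobrunner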
Create service
If using systemd, create a new service unit by creating the file /etc/systemd/system/mw-jobqueue.service. Change the User parameter to the user that runs PHP on your web server:
[Unit]
Description=MediaWiki Job runner
[Service]
ExecStart=/usr/local/bin/mwjobrunner
Nice=10
ProtectSystem=full
User=php-fpm
OOMScoreAdjust=200
StandardOutput=journal
[Install]
WantedBy=multi-user.target
Enable it and start it with these commands:
sudo systemctl enable mw-jobqueue
sudo systemctl start mw-jobqueue
sudo systemctl status mw-jobqueue
Job execution on page requests
By default, at the end of each web request, one job is taken from the job queue and executed. This behaviour is controlled by the $wgJobRunRate configuration variable. Setting this variable to 1 will run a job on each request. Setting it to 0 will disable the execution of jobs during web requests completely, so that you can instead run runJobs.php manually or periodically from the command line.
MediaWiki version: ≥ 1.23
When enabled, jobs will be executed by opening a socket and making an internal HTTP request to an unlisted special page: Special:RunJobs. See also the asynchronous section.
Performance issue
If the performance burden of running jobs on every web request is too great but you are unable to run jobs from the command line, you can reduce $wgJobRunRate to a number between 1 and 0. This means a job will execute on average every 1 / $wgJobRunRate requests. For example:

$wgJobRunRate = 0.01;

With this value, one job runs on average every 100 requests.
Manual usage
There is also a way to empty the job queue manually, for example after changing a template that's present on many pages. Simply run the maintenance/runJobs.php maintenance script. For example:

/path-to-my-wiki/maintenance$ php ./runJobs.php
Abandoned jobs
A job can fail for various reasons. To understand why, you have to inspect the related log file. In any case, if a job fails three times (that is, after the system has made that many attempts), the job is considered "abandoned" and is not executed again.
Relevant source code:
https://doc.wikimedia.org/mediawiki-core/master/php/JobQueue_8php_source.html#l00085
An abandoned job is:
- no longer executed by runJobs.php
- not counted by Manual:ShowJobs.php
- not automatically removed from the database (see the cleanup sketch below)
- still included in the count shown by Special:Statistics
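Because abandoned jobs are not removed automatically, you may eventually want to delete them by hand. A minimal cleanup sketch, assuming MySQL, no table prefix, a database named my_wiki, and the default limit of three attempts (back up the database first; the schema may vary between versions):

# Delete jobs that have reached the retry limit and are considered abandoned
mysql my_wiki -e "DELETE FROM job WHERE job_attempts >= 3;"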
History
Asynchronous
The configuration variable $wgRunJobsAsync has been added to allow forcing synchronous job execution, in scenarios where making an internal HTTP request for job execution is not wanted.
When running jobs asynchronously, MediaWiki will open an internal HTTP connection for handling the execution of jobs, and will return the contents of the page to the client immediately, without waiting for the job to complete. Otherwise, the job will be executed in the same process and the client will have to wait until the job completes. When the job does not run asynchronously, a fatal error during job execution will propagate to the client, aborting the page load.
Note that even if $wgRunJobsAsync is set to true, if PHP can't open a socket to make the internal HTTP request, it will fall back to synchronous job execution. However, there are a variety of situations where this internal request may fail without falling back to synchronous execution, so jobs won't be run at all. Starting with MediaWiki 1.28.1 and 1.27.2, $wgRunJobsAsync defaults to false.
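If you prefer to avoid the internal HTTP request on an affected version, you can set the value explicitly in LocalSettings.php:

$wgRunJobsAsync = false;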
Deferred updates
The deferred updates mechanism allows the execution of code to be scheduled for the end of the request, after all content has been sent to the browser. This is similar to queuing a job, except that it runs immediately rather than up to several minutes or hours in the future.
DeferredUpdates was introduced in MediaWiki 1.23 and received major changes during MediaWiki 1.27 and 1.28.
The goal of this mechanism is to speed up web responses by doing less work during the request, as well as to prioritise work that would previously have been a job, so that it runs as soon as possible after the end of the response.
A deferrable update can implement EnqueueableDataUpdate in order to be queueable as a Job as well. This is used by RefreshSecondaryDataUpdate in core, for example, which means that if the update fails for any reason, MediaWiki will fall back to queuing it as a job and try again later, so as to fulfil the contract in question.
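As an illustration, a minimal sketch of scheduling a deferred update from extension code on a modern MediaWiki (the callback body and the 'myextension' log group are hypothetical):

// Run this callback after the response has been sent to the client.
DeferredUpdates::addCallableUpdate( function () {
	// Hypothetical work: write a debug log entry once the page is flushed.
	wfDebugLog( 'myextension', 'Deferred work ran after the response' );
} );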
Changes introduced in MediaWiki 1.22
In MediaWiki 1.22, the job queue execution on each page request was changed (Gerrit change 59797) so that, instead of executing the job inside the same PHP process that's rendering the page, a new PHP CLI command is spawned to execute runJobs.php in the background. This only works if $wgPhpCli is set to an actual path and safe mode is off; otherwise, the old method is used.
This new execution method could cause some problems:
- If $wgPhpCli is set to an incompatible version of PHP (e.g. an outdated version), jobs may fail to run (fixed in 1.23).
- PHP open_basedir restrictions are in effect, and $wgPhpCli is disallowed (task T62208, fixed in 1.23).
- Performance: even if the job queue is empty, the new PHP process is started anyway (task T62210, fixed in 1.23).
- Sometimes the spawned PHP process causes the server, or just the CLI process, to hang due to the stdout and stderr descriptors not being properly redirected (task T60719, fixed in 1.22).
- It does not work for shared code (wiki farms), because it doesn't pass the additional parameters required by runJobs.php to identify the wiki that's running the job (task T62698, fixed in 1.23).
- Normal shell limits like $wgMaxShellMemory, $wgMaxShellTime, and $wgMaxShellFileSize are enforced on the runJobs.php process that's being executed in the background.
There's no way to revert to the old on-request job queue handling, besides setting $wgPhpCli to false, for example, which may cause other problems (task T63387). The execution can be disabled completely by setting $wgJobRunRate = 0, but then jobs will no longer run on page requests, and you must explicitly run runJobs.php periodically to process pending jobs.
Changes introduced in MediaWiki 1.23
In MediaWiki 1.23, the 1.22 execution method is abandoned, and jobs are triggered by MediaWiki making an HTTP connection to itself.
It was first designed as an API entry point (Gerrit change 113038) but later changed to be the unlisted special page Special:RunJobs (Gerrit change 118336).
While it solves various bugs introduced in 1.22, it still requires loading a lot of PHP classes in memory on a new process to execute a job, and also makes a new HTTP request that the server must handle.
Changes introduced in MediaWiki 1.27
In MediaWiki 1.25 and MediaWiki 1.26, use of $wgRunJobsAsync would sometimes cause jobs not to be run if the wiki has a custom $wgServerName configuration. This was fixed in MediaWiki 1.27 (task T107290).
Changes introduced in MediaWiki 1.28
Between MediaWiki 1.23 and MediaWiki 1.27, use of $wgRunJobsAsync would cause jobs not to be run if the request was for a server name or protocol that did not match the currently configured one (e.g. when supporting both HTTP and HTTPS, or when MediaWiki is behind a reverse proxy that redirects to HTTPS). This was fixed in MediaWiki 1.28 (task T68485).
Changes introduced in MediaWiki 1.29
In MediaWiki 1.27.0 to 1.27.3 and 1.28.0 to 1.28.2, when $wgJobRunRate is set to a value greater than 0, an error like the one below may appear in error logs, or on the page:
PHP Notice: JobQueueGroup::__destruct: 1 buffered job(s) never inserted
As a result of this error, certain updates may fail in some cases, like category members not being updated on category pages, or recent changes displaying edits of deleted pages, even if you manually run runJobs.php to clear the job queue. It has been reported as a bug (task T100085) and was solved in 1.27.4 and 1.28.3.
Job examples
Updating links tables when a template changes
When a template changes, MediaWiki adds a job to the job queue for each article transcluding that template. Each job is a command to read an article, expand any templates, and update the links tables accordingly. Without this, the host articles would remain outdated until either their parser cache expired or a user edited the article.
HTML cache invalidation
A wider class of operations can result in invalidation of the HTML cache for a large number of pages:
- Changing an image (all the thumbnails have to be re-rendered, and their sizes recalculated)
- Deleting a page (all the links to it from other pages need to change from blue to red)
- Creating or undeleting a page (like above, but from red to blue)
- Changing a template (all the pages that transclude the template need updating)
Except for template changes, these operations do not invalidate the links tables, but they do invalidate the HTML cache of all pages linking to that page or using that image. Invalidating the cache of a page is a short operation; it only requires updating a single database field and sending a multicast packet to clear the caches. But if there are more than about 1000 to do, it takes a long time. By default, one job is added per 300 operations (see $wgUpdateRowsPerJob).
Note, however, that even if purging the cache of a page is a short operation, reparsing a complex page that is not in the cache may be expensive, especially if a highly used template is edited, causing lots of pages to be purged in a short period of time while your wiki has lots of concurrent visitors loading a wide spread of pages.
This can be mitigated by reducing the number of pages purged in a short period of time: reduce $wgUpdateRowsPerJob to a small number (20, for example) and also set $wgJobBackoffThrottling for htmlCacheUpdate to a low number (5, for example).
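A sketch of this in LocalSettings.php, using the values suggested above:

$wgUpdateRowsPerJob = 20;
$wgJobBackoffThrottling['htmlCacheUpdate'] = 5;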
Audio and video transcoding
When using TimedMediaHandler to process local uploads of audio and video files, the job queue is used to run the potentially very slow creation of derivative transcodes at various resolutions/formats.
These are not suitable for running on web requests; you will need a background runner.
It's recommended to set up separate runners for the webVideoTranscode and webVideoTranscodePrioritized job types if possible. These two queues process different subsets of files: the first handles high-resolution HD videos, and the second handles lower-resolution videos and audio files, which process more quickly.
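For example, a sketch of two dedicated runners (the installation path is an assumption), reusing the runJobs.php options shown earlier:

# Runner 1: prioritized (faster) transcodes
php /var/www/wiki/maintenance/runJobs.php --type=webVideoTranscodePrioritized --wait
# Runner 2: slow, high-resolution transcodes
php /var/www/wiki/maintenance/runJobs.php --type=webVideoTranscode --wait

Each command would typically run in its own loop or service unit, like the one shown in the Continuous service section.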
Typical values
During a period of low load, the job queue might be zero. At Wikimedia, the job queue is, in practice, almost never zero. In off-peak hours, it might be a few hundred to a thousand. During a busy day, it might be a few million, but it can quickly fluctuate by 10% or more. [1]
Special:Statistics
Up to MediaWiki 1.16, the job queue value was shown on Special:Statistics. However, since 1.17 (rev:75272) it has been removed, and can now be seen with API:Siteinfo:
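For example, a standard siteinfo query (adjust the domain and script path to your wiki):

https://www.mywikisite.example/w/api.php?action=query&meta=siteinfo&siprop=statistics&format=json

The job queue size is returned in the "jobs" field of the statistics result.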
The number of jobs returned in the API result may be slightly inaccurate when using MySQL, which estimates the number of jobs in the database. This number can fluctuate based on the number of jobs that have recently been added or deleted. For other databases that do not support fast result-size estimation, the actual number of jobs is given.
For developers
Code stewardship
- Maintained by the MediaWiki Interfaces Team.
- Live chat (IRC): #mediawiki-core on Libera.Chat
- Issue tracker: Phabricator MediaWiki-Core-JobQueue (https://phabricator.wikimedia.org/tag/mediawiki-core-jobqueue/) (Report an issue: https://phabricator.wikimedia.org/maniphest/task/edit/form/1/?projects=mediawiki-core-jobqueue)