If you plan to use it in such an environment please do exhaustive testing before enabling it in production. Any question related to this matter should be addressed on the Get Support page.
Please note that this plug-in was designed to work with a standard WordPress Network of sites.
The multisite installation is another story. The WordPress installation belongs to the network, not to you. What belongs to you are your site's files (ie. the active themes and plugins, your site's upload folder) and your site's tables only (not the entire database).
However, if you are a Network Super Admin, or to be more explicit if you have the manage_network_options capability, then you may access (in addition to your site's files) all the plug-ins and themes, the admin and includes directories and the whole content directory. In addition, you are entitled to back up all tables within the network database.
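If you are not sure whether your account really is a Network Super Admin, and you happen to have WP-CLI available on the server, a quick check is sketched below (WP-CLI itself is an assumption here, not something the plug-in requires):

    # List the network super admins (run from the WordPress root directory)
    wp super-admin list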
Downloading in the browser is troublesome. Expected `test`, got `%0Ates`.
What does this mean?
How does this affect you? Well, if you want to download the MySQL database script or a remote file via WP MyBackup then it is very likely that WordPress will emit these extra white-spaces before the downloaded file's content is sent to the browser. As a result the file will be prepended with these extra white-spaces. If it is a text file then you should still be able to see its content (although truncated), but if it is a binary file (like a ZIP archive) then one byte more or less makes all the difference.
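As a quick sanity check you can inspect the first bytes of a file you just downloaded and look for whitespace bytes (0x20, 0x09, 0x0a, 0x0d) sitting in front of the real file signature; a ZIP archive, for instance, must start with the two characters PK. A minimal sketch (the file name is just an example):

    # Dump the first 16 bytes of the downloaded archive; any whitespace bytes
    # shown before the ZIP signature "PK" (0x50 0x4b) are the prepended junk.
    head -c 16 backup-example.zip | hexdump -C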
How to fix this issue?
There are at least two different approaches:
- if you are a skilled WordPress user (software developer, website administrator) then first make sure you check the browser's console (for JavaScript exceptions). The test usually runs hourly, but if you want to check sooner then set the Support expert option `Extra-whitespace check` = 1. When that alert is generated, a message like the one below will be printed on the browser's console:
Challenging string `test` but got `XXX`
where `XXX` is the encoded text (eg. `%20%20test`, which means two spaces were prepended to the challenge string) that was received instead of the expected one. That being said:
- if you have access to an SSH console then run the following terminal commands:
WPROOT=<wp-root>, where <wp-root> is the absolute path of your WordPress root directory (or any directory you want to scan)
head -n 1 `find $WPROOT -type f -name "*.php"` | grep -H -E "^\s+<\?php"
The command above might return one or many (troublesome) lines like in the example below:
{space}{space}{space}<?php {some-string} ?>
where {space} denotes a whitespace character (like space, tab, CRLF, etc) and {some-string} denotes the string following the PHP start tag <?php
- now we know which {some-string} causes the problem but not which file; we are going to search for the file(s) that contain this {some-string}:
grep -r "{some-string}" `find $WPROOT -type f -name "*.php"`
- you will get one or more files where {some-string} appears on their first line
- edit these files (make a safe copy of them first) by removing these {space} characters from their first line, just before the PHP start tag <?php (a sed one-liner for this is sketched right after this list)
- if that worked please notify the theme/plug-in author about this issue/fix so that they can update their software
A better version would be the following BASH script (let’s call it find0A.sh):
    #!/bin/bash
    for f in `find $1 -type f -name "*.php"`; do
      a=$(head -n1 $f | grep -E "^\s+(<\?php)?")   # check the file's first line only
      if [ -n "$a" ]; then printf "%s\n%s\n" $f "$a"; fi   # found whitespaces? print them!
      a=$(head -n1 $f | grep -c -E "^$")           # is the file's first line empty?
      if [ "$a" != "0" ]; then printf "Found an empty line at the beginning of %s\n" $f; fi
      b=$(tail -n1 $f | grep -E "(\?>)?\s+$")      # check the file's last line only
      if [ -n "$b" ]; then printf "%s\n%s\n" $f "$b"; fi   # found whitespaces? print them!
      b=$(tail -n1 $f | grep -c -E "^$")           # is the file's last line empty?
      if [ "$b" != "0" ]; then printf "Found an empty line at the end of %s\n" $f; fi
    done
and you call this script like this: bash find0A.sh $WPROOT
- if you are a non-skilled WordPress user then follow the steps below:
- We assume that the main theme is the cause. Try to switch to a WordPress built-in theme (like TwentyFifteen), then try to download the same thing again: does it work? If yes, you are done; otherwise a plug-in is causing the trouble.
- Determine which plug-in does this by deactivating them (except WP MyBackup) one at a time, then retry downloading the same thing.
- If it is not fixed then deactivate the next plug-in and try again.
- When the download finally works you know which plug-in/theme caused the problem. Just report the incident to the plug-in/theme author; hopefully they will fix it.
- if you are a mix of the two above (an adventurous one) then start with the steps at (2) and, if that does not fix the problem, try the steps at (1)
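As promised above, here is a sed one-liner that strips the leading whitespace for you. It is only a sketch, so test it on a copy first; the file path is an example and GNU sed is assumed:

    # Remove any whitespace/blank lines that precede the very first "<?php" tag
    # (-i.bak keeps a backup copy of the original file next to it).
    sed -i.bak -z 's/^[[:space:]]*<?php/<?php/' wp-content/plugins/example-plugin/example.php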
On Unix-like systems the affected file can also be fixed with the aid of tools like hexdump and dd:
hexdump -C -n64 $f # inspect the first 64 bytes of $f; it should not start with whitespaces
n=[number of whitespaces to skip] # determine this number with the aid of the hexdump output
dd bs=1 skip=$n if=$f of=fixed-$f # copy $f without its first $n bytes, ie. trim the whitespaces
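A tiny worked example (the byte values and the file name are made up for illustration): if hexdump printed

    00000000  0a 20 50 4b 03 04 ...

then two junk bytes (a line feed and a space) precede the ZIP signature PK (0x50 0x4b), so you would use n=2:

    n=2
    dd bs=1 skip=$n if=backup-example.zip of=fixed-backup-example.zip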
TypeError: parent is undefined
This JavaScript error usually means that, after a plug-in update, your browser is still using a cached (older) version of the plug-in's JavaScript files. To fix this inconvenience just press the F5 key, which forces your browser to reload the current page content, including the plug-in's new JavaScript files, from the WordPress web server.
Note: starting with version 0.2.2-4 this issue should not happen anymore because the plug-in detects this situation by itself and automatically reloads the page.
Warning: is_dir(): open_basedir restriction in effect
What is this and how does the message affect me?
The `open_basedir` PHP directive restricts the files and directories PHP may access to a list configured by your hosting provider. Perhaps the screen you are seeing has a Path parameter (or similar) which is set to a location (eg. /tmp) with access restricted by this limitation. Just try to set a location that is permitted (eg. a folder within your website boundaries).
Obviously I could catch and mask these messages, but the reason for letting them show in their raw format is to help you notice them and perhaps search Google for a solution (which has nothing to do with the plug-in itself).
Note: this plug-in supports the `open_basedir` PHP directive; it has special routines that do their best to accommodate such an environment.
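If you are not sure which paths your host allows, a quick way to read the directive is sketched below; note that the PHP CLI may use a different php.ini than the web server, so treat the output as a hint only (a phpinfo() page shows the authoritative value):

    # Print the open_basedir restriction as seen by the PHP command line
    php -r 'var_export(ini_get("open_basedir")); echo PHP_EOL;'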
Warning: disk_free_space() has been disabled for security reasons
What is this and how does the message affect me?
This warning is emitted by PHP itself whenever a script calls a function that your hosting provider has listed in the `disable_functions` directive (see php.ini); in this case the disabled function is disk_free_space(), which reports the available disk space.
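To see exactly which functions your host disabled, you can read the directive from the command line (again, the CLI may load a different php.ini than the web server):

    # List the functions disabled by the hosting provider (empty output means none)
    php -r 'echo ini_get("disable_functions"), PHP_EOL;'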
mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead. Why is that and how does this affect the backup?
The old mysql extension was deprecated in PHP 5.5 (and removed in PHP 7) in favour of mysqli and PDO_MySQL. So the message is just a warning thrown by your PHP engine; the backup itself should not be affected.
Please note that starting with v0.2.3-16 MyBackup supports all three MySQL extensions: mysql, mysqli and pdo_mysql (see #1 above).
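If you want to check which of these MySQL extensions your PHP actually loads, a quick sketch:

    # Show which MySQL extensions are available to PHP
    php -m | grep -i -E '^(mysql|mysqli|pdo_mysql)$'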
[!] Google account not linked yet. Please authenticate within the web interface then try again.
What does it mean?
Normally a Google Drive/Dropbox authorization expires after a while; however, the plug-in is programmed to renew the authorization when it detects that it has expired. If for some reason it was not able to do that automatically then this backup target is in a conflicting state: on one hand it is enabled, which means you configured the plug-in to upload the backup (also) to this destination; on the other hand, when the plug-in tries to upload a file there, it notices that the target cannot be accessed due to the lack of authorization.
The solution is trivial: just go to that backup target screen (eg. Backup targets -> Google tab) and make sure you authorize the account again. That's all, folks!
Peer certificate cannot be authenticated with known CA certificates
I see at least two reasons for that:
- your server's SSL certificate is signed by an unknown Certification Authority(*) (eg. self-signed SSL certificates)
- the path where your SSL certificates are stored is not accessible to PHP (check your open_basedir option)
(*) The known Certification Authorities (CA) are stored in the `ssl/cacert.pem` file within the plug-in install directory.
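For the first case you can verify how your server's certificate chain validates against the plug-in's CA bundle. A sketch (replace example.com with your own host name, and <plugin-install-dir> with the actual plug-in install directory):

    # Check whether the site's certificate chain is trusted by the bundled CA list;
    # look for "Verify return code: 0 (ok)" in the output.
    openssl s_client -connect example.com:443 -servername example.com \
      -CAfile <plugin-install-dir>/ssl/cacert.pem </dev/null 2>/dev/null | grep -i 'verify return code'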
is_readable(): open_basedir restriction in effect. File(/dev/urandom) is not within the allowed path(s)
MyBackup does not produce this message (not that I know of). However, when your PHP has the open_basedir option in effect (see php.ini), then depending on what plug-ins you have installed and how they handle the `open_basedir` restriction, such a message may be generated by some other plug-in which runs (even if you are not aware of it) in parallel with the backup/restore jobs. Our plug-in just happens to be there to capture that message, so more or less it acts as an "error handler" not only for itself but also for other WordPress plug-ins that run at the same time. See also question #10.
realpath(): open_basedir restriction in effect. File(/folder) is not within the allowed path(s)
Depending on what directories you included in the backup and what directories you allowed via the PHP `open_basedir` option, there is a possibility you will get such a message.
Make sure you back up only those directories that you are allowed to access. See also question #10.
Fatal error: Maximum execution time of `XX` seconds exceeded
PHP has an option named max_execution_time which limits the number of seconds a script is allowed to run before it is terminated by the parser. The default limit is 30 seconds (ie. `XX`=30). In order to overcome this limit (a backup/restore job might run for minutes) MyBackup has an Expert Setting named `Max execution time` (see the `Backup` tab) which overrides the default PHP value (the plug-in's default is 600s).
In the case of a backup job the solution would be to set up the job so that it takes less time to complete. This might be accomplished by excluding some unnecessary folder(s) from being processed, by using a faster compression method/option, or even by spanning the backup across multiple media (default 150 MiB/media; make it smaller!).
If this message appears while using the UI then this is really odd (a UI request is expected to finish in less than 2s). There is one exception though: when viewing the WP/Source Files with the `Show file size` option ON it may be necessary to re-read the directory size, which may take a while. If that is the case then please try to unset that option.
Of course, if you can edit your website's php.ini file then try to adjust the `max_execution_time` option 🙂
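You can check the limit currently applied to the PHP CLI as follows (the web server's value may differ; a phpinfo() page shows the authoritative one), and raising it is a one-line change in php.ini:

    # Current limit, in seconds (0 means unlimited)
    php -r 'echo ini_get("max_execution_time"), PHP_EOL;'
    # Example php.ini adjustment:
    #   max_execution_time = 600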
When it starts a job the plug-in creates a special locking file (tmp/logs/wpmybackup-jobs.lock) and, when the job is done, the locking file is cleared automatically. If for one reason or another the last job failed without being able to clear that locking file (eg. the web server crashed, PHP failed unexpectedly, etc) then all you need to do is delete it manually.
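If you are not sure where that tmp directory lives on your installation, here is a sketch for locating (and, once you are certain no job is running, removing) the stale lock file; it assumes you run it from the WordPress root and that the temporary directory sits under wp-content, as described in the multisite section below:

    # Find the stale lock file somewhere under wp-content
    find wp-content -name 'wpmybackup-jobs.lock' -print
    # ...and, after double-checking that no backup/restore job is running, delete it
    find wp-content -name 'wpmybackup-jobs.lock' -delete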
To fix that it is enough to go to the `Copy backup to` tab and then to choose the Dropbox|Google sub-tab. If the above is true then you should see a message like this:
Bad Request
Solution: YYY
One way to overcome this limitation would be to use the `CPU throttling` expert option available on MyBackup's `Backup job` tab.
If the plug-in is running on a multi-network | multisite installation then there are multiple locations where the plug-in keeps its temporary files:
- while accessing the Network Admin pages as a multi-network administrator
- wp-content/`uploads/wpmybackup/tmp` – keeps various temporary files
- wp-content/`uploads/wpmybackup_backups` – a working directory if system TEMP is not accessible
- while accessing a multi-network site
- wp-content/`uploads/sites/XXX/wpmybackup/tmp`, where XXX is a numeric site ID
- wp-content/`uploads/sites/XXX/wpmybackup_backups`, where XXX is a numeric site ID
- `<global-working-dir>/XXX`, where `global-working-dir` is given by `Global working directory` option in Network Settings and XXX is a numeric site ID
So on a multi-network | multisite installation each site has its own temporary|working directories, such that one user|site cannot interfere with the files of other users|sites.
{"error_message":"Invalid request or service(0)","error_code":200}
Why? How to continue from here?
Anyway, just hit the Back button and try again.
This usually does not happen again if you accept the "untrusted" certificate by adding a certificate exception in your browser (usually this is just a click away, nothing fancy). So usually, after going back and retrying to authenticate the Dropbox/Google service, it works like a charm.
If it doesn't, then make sure you add (agree to) that exception first, then go Back and try again. This is the working solution.
I could never fix this situation other than by buying a $XXX brand-name SSL certificate. Some day I will rewrite the app from scratch and implement the OAuth2 authorization in a very different way.
add_action("wpmybackup_before_job_starts", "my_custom_hook", 10, 3); function my_custom_hook (int $job_id, String $sender, int $job_type){ // $job_id: an integer that represents the job identifier // $sender : a string that represents the job starter name (like WP-Cron, WP-Admin-Async, etc) // $job_type:{0:backup,-4:restore} }
add_action("wpmybackup_after_job_ends", "my_custom_hook", 10, 3); function my_custom_hook (int $job_id, Array $job_metrics, Array $processed_arcs){ // $job_id: an integer that represents the job identifier // $job_metrics: array of info that provides info about how the job was done // $processed_arcs: array of the processed archives (key:archive name, value:array of destination targets where the archive was uploaded) }
Keep in mind that in case of multiple parallel backup jobs this action is triggered for each distinct job.
In case of the PRO version use the `wpmybackuppro_schedule_last_filter` filter instead.
How to use this WordPress filter:
    $last_schedule = apply_filters('wpmybackup_schedule_last_filter', false);    // free version
    $last_schedule = apply_filters('wpmybackuppro_schedule_last_filter', false); // pro version
Note that the second argument (false) is just a dummy value required to comply with the apply_filters() function signature. In reality you can pass whatever you like (eg. null); this argument is discarded.