
Troubleshooting

Cannot connect by SSH

There are a few known reasons for this:

  • You must have at least one SSH key added to your profile
  • Your SSH key may not be added to your SSH agent; try executing
    ssh-add /path/to/private/key
    
  • Try specifying which key to use:
    ssh user@hostname -i /path/to/private/key
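You can also pin a key to Wodby hosts in your ~/.ssh/config so you don't have to pass -i every time (the key path below is a placeholder):

```
# ~/.ssh/config — always use this key for *.wod.by hosts (example path)
Host *.wod.by
    IdentityFile /path/to/private/key
    IdentitiesOnly yes
```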
    

Email delivery from my application fails

If you're using a server from a public cloud, there's a 90% chance that its IP address was previously abused and is blacklisted by major mail services, so your emails won't be delivered or will land in the spam folder.

If your stack includes the OpenSMTPD mail transfer agent, we recommend integrating it with a 3rd-party email service (relay mode).
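As an illustration, a relay setup in /etc/smtpd/smtpd.conf might look roughly like this (OpenSMTPD 6.4+ grammar; the relay host, port, and credentials label are placeholders for your provider's values):

```
# /etc/smtpd/smtpd.conf — forward all mail through a 3rd-party SMTP relay
table secrets file:/etc/mail/secrets      # file contains: label user:password
listen on localhost
action "outbound" relay host smtp+tls://label@smtp.example.com:587 auth <secrets>
match for any action "outbound"
```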

Host identification has changed

If you see the following error:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ 
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! 
Someone could be eavesdropping on you right now (man-in-the-middle attack)! 
It is also possible that a host key has just been changed. 
The fingerprint for the RSA key sent by the remote host is 
SHA256:XXXXXXXXXXXXX/XXXX. 
Please contact your system administrator. 
Add correct host key in /Users/xxx/.ssh/known_hosts to get rid of this message. 
Offending RSA key in /Users/xxx/.ssh/known_hosts:xx 
RSA host key for [node-xxxxx.wod.by]:xxxx has changed and you have requested strict checking. 
Host key verification failed.

This means that the container you're trying to connect to was recreated and its RSA key has changed.

To avoid this kind of error, you can disable strict host key checking for *.wod.by hosts by adding the following lines to your ~/.ssh/config file:

Host *.wod.by
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
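Alternatively, instead of disabling strict checking you can remove just the stale entry for the affected host (the host and port below are the placeholders from the error message; copy the exact "[host]:port" shown in yours):

```shell
# Remove the stale host key from known_hosts; a backup is saved as known_hosts.old
ssh-keygen -R "[node-xxxxx.wod.by]:xxxx" -f ~/.ssh/known_hosts
```

On the next connection, SSH will prompt you to accept the server's new key.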

My server status is unreachable

This problem can be caused by a lack of memory on your server. Make sure you have enough memory:

$ free -m

If you don't have enough memory, you can use Linux swap.

Make sure you're using swap by executing:

$ sudo swapon -s

If not, follow this guide to add swap (Ubuntu).
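The usual Ubuntu procedure is roughly the following sketch (requires root; /swapfile and the 2G size are example values — size swap to your server's RAM):

```shell
# Create a 2G swap file, restrict its permissions, format and enable it
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make the swap file persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Verify the result with `sudo swapon -s` or `free -m` afterwards.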

The N/A status may also be caused by a hung Wodby agent container; see how you can restart it.

If the problem persists, please contact the Wodby support team.

Cannot connect to server

There are a few known reasons for this:

Application deployment or other tasks fail

This error means that one of the deployment steps exceeded its timeout. There are a few known reasons for this:

  • There's something wrong on our side; see http://status.wodby.com/
  • Something is wrong with your server; make sure you have enough free disk space
  • Check your CPU load average by running top
  • Check that you have enough free RAM by running free -h
  • Check your system log for additional errors: journalctl -f
  • You've reached the containers limit per server (300); contact our support to increase the limit
  • Slow disk read/write speed
  • High latency to your server due to global network issues
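Several of the checks above can be run in one pass directly on the affected server:

```shell
# Quick server health checks before retrying a deployment
df -h /      # free disk space on the root filesystem
free -h      # free and used RAM
uptime       # 1, 5 and 15 minute load averages
```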

Application gives "File not found" error

This error means that the HTTP server could not find the entrypoint (in the case of PHP-based stacks, usually index.php) in a container. This might happen for a few reasons:

  • Your entrypoint (e.g. index.php) is in a subdirectory of your git root and you did not specify it during the initial deployment of the application
  • Your codebase is missing; you may have selected the wrong branch during deployment/build

Cannot update WordPress core or its plugins/themes

See this article.

Infrastructure 5.x known issues ☹️

  • Sometimes we can't get the logs of a task and show the error Container not found. The task may have completed successfully, but we consider it failed
  • Sometimes we can't get the size of a backup archive, so we don't show it in the dashboard
  • If you update a rolling-update container and the update fails, we will not be able to detect the failure. Despite the actual failure, the deployment will be considered successful because the older version of the container is still intact
  • We cannot handle errors of containers that failed to start, so the task will hang until it expires by timeout. Here's how you can manually check your deployment state in such cases:
    1. Access your server via SSH as root
    2. Run the following command (replace [INSTANCE UUID])
      kubectl get po -n [INSTANCE UUID]
      
    3. You will see the statuses of the pods (containers) of your application instance. You can get the logs of a specific pod (container) either by running (if the container is creating or running)
      kubectl logs [POD NAME] -n [INSTANCE UUID]
      
      or (if the container is not currently running or is in an error state)
      kubectl describe po [POD NAME] -n [INSTANCE UUID]
      

Infrastructure 6.x

All of the known issues above will be resolved in Infrastructure 6.x.