This page is updated manually with the status of current and recent (roughly the last 30 days) events.

(Times are US/Arizona, UTC-7, unless noted otherwise.)

Current status: AMBER. We have a big server in need of a reboot and a diagnostics check, but we need to clear sites off it first.

20200211 @7:41PM (Arizona time) – Several things were either causing or contributing to the storage latency (slow responses):

  • The server had been up for 424 days
  • Hack attempts from a Chinese IP address
  • Hack attempts from a Russian IP address
  • VaultPress deciding to run backups even under high load :/
  • A broken-link-checker plugin doing intensive testing – we might need to ban these plugins; they're just too abusive

One virtual web server (whphx20) was offline longer than the rest, since migrating it to a new server took longer than normal due to the latency.

For more details, message us.

We’re going to finish clearing sites off the core server, then reboot it and verify that all hardware is functioning properly.

20200211 @7:27PM (Central time) – Services restored. We’re now investigating and will update this page with the causes.

20200211 @6:31PM (Central time) – The web host server is not responding. We’re currently working to move services off it. We apologize if your site is impacted.

20200211 @6:28PM (Central time) – One of our servers is overloaded. We’re still determining the cause and working to mitigate it. Some sites may be impacted.

20200210 @9:30PM – One of our servers is overloaded due to a viral post. Some sites may be impacted.

20191113 @8:30PM – Did an emergency update of the “email-subscribers” plugin on the four sites running it to patch multiple security vulnerabilities.

20191111 @2PM – *Hosting Improvement* – We’re doing a bit of tuning to our web server config to block some fake browsers. So far, this has reduced login hack attempts by half.
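The page doesn’t say which web server software we run, so purely as an illustration: assuming nginx, blocking fake browsers by User-Agent is typically done with a `map` plus an early `return`. The patterns and server name below are hypothetical examples, not our actual block list.

```nginx
# Illustrative sketch only -- assumes nginx; patterns are examples,
# not the production block list.
map $http_user_agent $block_fake_browser {
    default                        0;
    ""                             1;  # empty User-Agent: almost never a real browser
    "~*MSIE [1-6]\."               1;  # ancient IE versions only bots still claim
    "~*(curl|python-requests)"     1;  # scripted clients posing as browsers
}

server {
    listen 80;
    server_name example.com;  # placeholder

    if ($block_fake_browser) {
        return 403;
    }

    # ...rest of the site config...
}
```

Rejecting these requests before they reach PHP/WordPress is what cuts the login hack attempts: the fake clients get a cheap 403 instead of a full login-page render.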

20191027 @10:00AM – We’ve already patched the remote execution bug in PHP 7 mentioned here –

20191024 @6:38AM: Some bozo at a single IP was hitting us with thousands of simultaneous connection attempts in a DoS (denial-of-service) attack. The bozo has been blocked, and the server has returned to normal speed. There was so much traffic that it overloaded one of the ISP connections, which is why we saw packet loss and made our original (wrong) assessment of what was happening.

20191024 @6:26AM: The server in Quebec is under heavy attack; we looked at the traffic earlier and had guessed wrong. We’re working with our ISP to mitigate the attack. :/

20191024 @5:35AM: One of our ISPs has a now-confirmed packet-loss issue on one of their routers. We’re shifting traffic off that ISP and onto another.

20191019 @12:15AM: One updated server (of 15) didn’t like the updates and has been unstable. We’re currently migrating sites off it & trying to figure out what it didn’t like about the updates. It keeps losing network connectivity.
(One gremlin always has to sneak in.)

20191018 @10:15PM: We need to do some reboots of the various servers to finish updating their operating systems. Most reboots should be <2 minutes, none should be longer than 15 min. (This will be done over multiple days.)

20191018 @3:28 PM: The network link is back up. We’re still waiting on a full answer from our vendor; so far they’ve claimed there wasn’t maintenance and that a cable went bad. (That answer doesn’t seem accurate.)

20191018 @2:08 PM: One of our Phoenix data links is offline. This appears to be planned maintenance that started very late. (Waiting on an update.)
Impact should be minimal, as browsers are generally smart enough to try a site’s other IP addresses.

GREEN: I am completely operational, and all my circuits are functioning perfectly.

AMBER: External network issues.

RED: Zombie apocalypse.

MAGENTA: A service is down, but it’s not an emergency.