Rasmus Lerdorf's Blog

January 22, 2018

Testing VPS solutions


I am trying to see if I should move from my own hardware sitting
in a data center in Milpitas to a VPS. My main criterion is that
I need at least 4 decently fast cores and at least 8GB of memory.
I also need about 500GB of storage and low ping times from home
in Silicon Valley. I was originally just trying to figure out how
much faster DigitalOcean's optimized droplets were compared to
the standard and posted that to Twitter. Scope creep happened and
I ended up testing 10 different providers.




For the lazy, my verdict for all 10 is right here (with a couple of referral links). For the full details including CPU, disk and network tests along with more detailed observations and screenshots, read on.
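For context on the numbers in each summary: "make" is the time to compile PHP from source, and the disk figures are sequential-throughput measurements. As a rough illustration only (a hypothetical sketch in PHP, not the actual tooling behind these tests), sequential disk throughput can be approximated like this:

```php
<?php
// Hypothetical sketch: approximate sequential disk write/read throughput.
// Not the tooling used for the numbers below; dd-style tests are more typical.

$file  = tempnam(sys_get_temp_dir(), 'disktest');
$chunk = str_repeat('x', 1 << 20);   // 1 MB of data per write
$mb    = 64;                         // total MB to write

$start = microtime(true);
$fp = fopen($file, 'wb');
for ($i = 0; $i < $mb; $i++) {
    fwrite($fp, $chunk);
}
fflush($fp);
fclose($fp);
$writeSecs = microtime(true) - $start;

$start = microtime(true);
$fp = fopen($file, 'rb');
while (!feof($fp)) {
    fread($fp, 1 << 20);             // sequential 1 MB reads
}
fclose($fp);
$readSecs = microtime(true) - $start;

printf("write: %.0f MB/s, read: %.0f MB/s\n", $mb / $writeSecs, $mb / $readSecs);
unlink($file);
```

Note that a naive read test like this mostly measures the page cache when the file was just written, which is one reason quoted read figures can exceed raw device speed.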


DigitalOcean         referral - get $10 credit

$40/month 8 GB 4 core 160 GB SSD + $50/month 500GB Volume in SFO2

make 34m38s (2nd try 4m39s), disk write 350MB/s, read 1.8 GB/s(!), net 2 Gbits/s




The provisioning process is amazing. Fast and responsive. Support is quick and effective.
I was a bit disappointed by the performance of the standard droplets, especially the first one I tested, but the $80 8GB/4 core/50GB SSD optimized droplets absolutely scream, beaten on PHP compile time only by a bare-metal Vultr box.



Vultr         referral - get $10 credit

$40/month 8GB 4 core 100GB SSD in Los Angeles

make 3m16s, disk write 306MB/s, read 200MB/s, net 2.5 Gbits/s




A serious competitor to DigitalOcean. I would use this, especially
if they brought block storage to the west coast. Price and performance
are great, and the Web UI for provisioning and managing instances is clear
and easy to use. Even without block storage, the bare-metal instance
with the 2x240GB SSDs has adequate space. Since it is bare metal,
I assume I would need to mirror the two drives for redundancy, so it is still
not close enough to my 500GB target.



Linode          referral

$40/month 8GB 4 core 96GB SSD + $50/month 500GB volume in Fremont, CA

make 4m15s, disk write 634 MB/s, read 355 MB/s, net 1.1 Gbits/s




Everything just worked and performance was acceptable across the board with
the only exception being block volume reads. I found those to be a bit too
slow. The price/performance ratio is good. At the common $80/month price
point you get 12GB of RAM, 6 cores and a 192GB SSD. If block volume read
performance improves, I could use this.



GCP (Google Cloud Platform)

$88/month 8GB 4 core 10GB SSD + $20/month 500GB HDD in Oregon

make 4m8s, disk write 159 MB/s, read 98 MB/s, net 1 Gbits/s




With the lower-cost HDD block volume storage, GCP is interesting. But I ran into some confusing performance results testing
HDD vs. SSD, and for $88 it would be nice to get a larger SSD. On the wrong side of the price/performance ratio for me.



Upcloud         referral - get $25 credit

$80/month 8GB 4 core 70GB SSD + $110/month for 500GB in Chicago

make 2m22s, disk write 481 MB/s, read 420 MB/s, net 438 Mbits/s




Good price/performance ratio, and if they brought their cheaper class of block volume service to the U.S., this
would be an option for me. As it is right now, I would have to pay $110/month for the extra 500GB of space I need on
top of the $80/month for the VPS, and that puts it out of my price range.



AWS Lightsail

$80/month 8GB 2 core 80GB SSD + $50/month for 500GB in Oregon

make 4m23s, disk write 249 MB/s, read 130 MB/s, net 140 Mbits/s




Decent performance for a 2-core VPS. I couldn't figure out how to provision a 4-core one. Probably user error on my
part, but I did try for a while. I only have so much patience for large complex Web UIs. Lightsail also didn't have Debian 9
as an option at the time. Debian 8 only. $80/month for a 2 core VPS with average performance is on the expensive end
of the spectrum, so not for me.



VMHaus

$28/month 8GB 4 core 100GB NVMe SSD in Los Angeles

make 3m11s, disk write 286 MB/s, read 335 MB/s, net 830 Mbits/s




Excellent price/performance ratio and the provisioning process was fast and efficient. But I can't use it since there is no way to add extra storage.



OVH

$35/month 8GB, 4 cores, 100GB SSD + $42/month for 500GB Volume in Beauharnois, Canada

make 4m18s (2 core only), disk write 550 MB/s, read 260 MB/s, net 100 Mbits/s




Terrible provisioning process. The Web UI is atrocious and some things simply didn't work. When I finally did get my VPS, it worked well though, so that one-time hassle shouldn't discourage you. Since I didn't find a free trial, I tested a cheaper 2-core option with no block storage, but it performed well. If they had a west coast POP it could be an option, but without that it isn't viable for me.



Azure

$133/month 4 core 60GB SSD in West US 2 (Oregon?)

make 3m34s, disk write 23 MB/s, read 16 MB/s, net 1.23 Gbits/s




This one was painful. Yes, I have a bit of a Microsoft aversion, but I tried to keep an open mind. Read the full description of my Azure adventure. Expensive, apparently no IPv6, slow disk IO, and I couldn't figure out block storage options. Definitely not for me.



Contabo

$11/month 12GB 4 core 300GB SSD somewhere in Germany

make 3m47s, disk write 144 MB/s, read 308 MB/s, net 88 Mbits/s




I like Contabo. It took perhaps 20 minutes to get my VPS, and there was some packet loss on the network at first, but that was resolved. For $11/month this is an outstanding deal. Being in Europe with no North American POP, it can't be my primary VPS, but I will probably keep this one just to have a personal box in Europe to play with.



Published on January 22, 2018 10:52

February 7, 2017

A bit of PHP history

Ran across this Changelog from a long long time ago. Read from the bottom up. I added the PHP Tools lines at the bottom for context. So many early decisions made on a whim still affecting us today. And then there are things like "Removed **, // and %% operators" which did a vector dot-product and its inverse, I think. I seem to recall deleting it when I tried to document it.


No years on most of the entries, but version 1.90 was on Sept. 18, 1995.



Published on February 07, 2017 08:07

March 3, 2016

megasync for Debian 9 Stretch


Like most of my posts here, this is mostly a note to myself so I don't forget how I did it.



I moved to Debian 9 on my desktop box at home and everything works great, except I occasionally use Mega.nz and they don't provide a Debian 9 build. It would be great if they just provided a statically linked generic Linux binary, but they don't. So, to make it work, grab their Debian 8 .deb file.

Published on March 03, 2016 00:50

January 24, 2016

Upgrading PHP on the EdgeRouter Lite


After nearly 7 years of service I retired my Asus RT-N16 router, which wasn't really a router, but a re-purposed wifi access point running AdvancedTomato. In its place I got a Ubiquiti EdgeRouter Lite. It is Debian-based and has a dual-core 500MHz 64-bit MIPS CPU (Cavium Octeon+), 512MB of RAM and a 4GB removable onboard USB stick, all for under $100. The router is completely open and, in fact, any advanced configuration has to be done from the command line. The Web UI has been improving, but there are still many things you can't do in it. In other words, exactly the type of device I prefer.



Published on January 24, 2016 17:37

November 12, 2013

Building a NAS

The HTPC box and various computers around the house use a mix of internal drives, external USB and eSATA drives. It is quite a mess, and backups are sporadic at best. The HTPC especially has grown organically with USB and eSATA as it needed more and more space.





So it was finally time for a decent NAS.


Published on November 12, 2013 21:47

September 28, 2011

ZeroMQ + libevent in PHP

While waiting for a connection in Frankfurt I had a quick look at what it would take to make ZeroMQ and libevent co-exist in PHP and it was actually quite easy. Well, easy after Mikko Koppanen added a way to get the underlying socket fd from the ZeroMQ PHP extension.

To get this working, install the PHP ZeroMQ extension and the PHP libevent extension.

First, a little event-driven server that listens on loopback port 5555 and waits for 10 messages and then exits.





Server.php




<?php
function print_line($fd, $events, $arg) {
    static $msgs = 1;
    echo "CALLBACK FIRED" . PHP_EOL;
    if ($arg[0]->getsockopt(ZMQ::SOCKOPT_EVENTS) & ZMQ::POLL_IN) {
        echo "Got incoming data" . PHP_EOL;
        var_dump($arg[0]->recv());
        $arg[0]->send("Got msg $msgs");
        if ($msgs++ >= 10) event_base_loopexit($arg[1]);
    }
}

// Create the event base and an event
$base = event_base_new();
$event = event_new();

// Allocate a new ZeroMQ context
$context = new ZMQContext();

// Create the reply socket
$rep = $context->getSocket(ZMQ::SOCKET_REP);

// Bind the socket
$rep->bind("tcp://127.0.0.1:5555");

// Get the underlying stream descriptor
$fd = $rep->getsockopt(ZMQ::SOCKOPT_FD);

// Set event flags
event_set($event, $fd, EV_READ | EV_PERSIST, "print_line", array($rep, $base));

// Set the event base
event_base_set($event, $base);

// Enable the event
event_add($event);

// Start the event loop
event_base_loop($base);




Client.php




<?php
// Create new queue object
$queue = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_REQ, "MySock1");
$queue->connect("tcp://127.0.0.1:5555");

// Assign socket 1 to the queue, send and receive
var_dump($queue->send("hello there!")->recv());



You will notice when you run it that the server gets a couple of events that are not actually incoming messages. Right now ZeroMQ doesn't expose the nature of these events, but they are the socket initialization and client connect. You will also get one for the client disconnect. A future version of the ZeroMQ library will expose these so you can properly catch when clients connect to your server.



There really isn't much else to say. The code should be self-explanatory. If not, see the PHP libevent docs and the PHP ZeroMQ docs. And if you build something cool with this, please let me know.
Published on September 28, 2011 23:10

May 21, 2011

ASRock Sandy Bridge Motherboard notes

I have pieced together two Sandy Bridge machines. This entry contains my notes on the two machines. Mostly for myself to refer back to later, but it might come in handy for others along the way.


Machine 1 - Overkill HTPC

Mythbuntu 10.10 initially but upgraded to full 11.04 when it was released
i5-2500k CPU
ASRock H67M LGA 1155 Intel H67 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard
Seasonic PSU
G.SKILL Ripjaws X Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Model F3-10666CL9D-8GBXL
Crucial RealSSD C300 CTFDDAC064MAG-1G1 2.5" 64GB SATA III MLC SSD
Western Digital Caviar Green WD20EARS 2TB SATA 3.0Gb/s 3.5" HD
ASUS ENGT430/DI/1GD3(LP) GeForce GT 430 (Fermi) 1GB 128-bit DDR3 PCI Express 2.0 x16 HDCP Graphics card
AVS Gear GP-IR01BK Windows Vista Infrared MCE Black Remote Control
SilverStone Aluminum/Steel Micro ATX HTPC Computer Case GD05B (Black)
SiliconDust HDHomeRun HDHR-US Dual Tuner
RCA ANT751 Outdoor Antenna (installed in attic - see http://flic.kr/p/9iFKer)



Machine 2 - Dev Box for the office

Ubuntu 11.04
i7-2600k CPU
ASRock Z68 Extreme4 LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 ATX Intel Motherboard
G.SKILL Ripjaws X Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Model F3-10666CL9D-8GBXL
G.SKILL Ripjaws Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Model F3-12800CL9D-8GBRL
Crucial M4 CT128M4SSD2 2.5" 128GB SATA III MLC Internal Solid State Drive (SSD)
2 x SAMSUNG Spinpoint F4 HD204UI 2TB 5400 RPM SATA 3.0Gb/s 3.5" HD
CORSAIR Builder Series CX430 CMPSU-430CX 430W ATX12V Active PFC PSU
Old Antec case I had lying around

I went scouring slickdeals and other deal sites for most of these components, so there are some mismatches. Like the slightly mismatched ram in the second machine, and the fact that I am using a 2500k in an H67 (B2!) board. No real point in an unlocked cpu in a locked board, but the k was cheaper than the non-k at the time, and who knows, I could swap the motherboard. And yes, it is a B2-stepping board, so the SATA2 ports are iffy. But since I am not using them it doesn't bother me.


Published on May 21, 2011 01:06

May 22, 2010

Writing an OAuth Provider Service

Last year I showed how to use pecl/oauth to write a Twitter OAuth Consumer. But what about writing the other end of that? What if you need to provide OAuth access to an API for your site? How do you do it?



Luckily John Jawed and Tjerk have put quite a bit of work into pecl/oauth lately and we now have full provider support in the extension. It's not documented yet at php.net/oauth, but there are some examples in svn. My particular project was to hook an OAuth provider service into a large existing Kohana-based codebase. After a couple of iterations this should now be trivial for others to do with the current pecl/oauth extension.



Published on May 22, 2010 21:50

February 9, 2010

A quick look at XHP

Facebook released a new PHP extension today that supports inlining XML. This is a feature known as XML Literals in Visual Basic. Go read their description here:
http://www.facebook.com/notes/facebook-engineering/xhp-a-new-way-to-write-php/294003943919


It adds an extra parsing step which maps inlined XML elements to PHP classes. These classes live in core.php and html.php, which cover all the main HTML elements. The syntax of those class definitions is a bit odd. That oddness is explained in the How It Works document.



Essentially, it lets you turn:


<?php
if ($_POST['name']) {
    echo "<span>Hello, {$_POST['name']}.</span>";
} else {
?>
<form method="post">
What is your name?<br>
<input type="text" name="name">
<input type="submit">
</form>
<?php
}


into:


<?php
require './core.php';
require './html.php';
if ($_POST['name']) {
    echo <span>Hello, {$_POST['name']}.</span>;
} else {
    echo
        <form method="post">
            What is your name?<br />
            <input type="text" name="name" />
            <input type="submit" />
        </form>;
}


The main interest, at least to me, is that because PHP now understands the XML it is outputting, filtering can be done in a context-sensitive manner. The input filtering built into PHP cannot know which context a string is going to be used in. If you use a string inside an on-handler or a style attribute, for example, you need radically different filtering than when it is used as regular XML PCDATA in the HTML body. Some will say this form is more readable as well, but that isn't something that concerns me very much.
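To make that concrete, here is a small sketch of my own (plain PHP, not XHP; the greet() handler is hypothetical) showing how the same untrusted string needs different escaping depending on where it lands:

```php
<?php
// The same untrusted input needs different escaping per output context.
$name = '"><script>alert(1)</script>';

// HTML body / PCDATA context:
$html = htmlspecialchars($name, ENT_QUOTES, 'UTF-8');

// Inline JavaScript context (e.g. an on-handler): HTML-escaping alone is
// not enough; encode as a JS string literal first.
$js = json_encode($name, JSON_HEX_TAG | JSON_HEX_APOS | JSON_HEX_QUOT | JSON_HEX_AMP);

echo "<span>Hello, $html.</span>\n";
echo "<button onclick='greet(" . htmlspecialchars($js, ENT_QUOTES) . ")'>hi</button>\n";
```

A single one-size-fits-all filter cannot produce both of those encodings, which is the gap context-aware output generation can close.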



The real question here is what this runtime XML validation is going to cost you. I have given talks in the past where I have used "class br extends html { ... }" as a classic example of something you should never do. A br tag is just a br tag. When you need one, stick a <br> in your page; don't instantiate a class and call a render() method. So, when I looked at html.php and saw:


class :br extends :xhp:html-singleton {
    category %flow, %phrase;
    protected $tagName = 'br';
}


I got a bit skeptical. Another thing I have been known to tell people is, "Friends don't let friends use Singletons." Which isn't something I came up with. Someone, a friend, I guess, told me that years ago. Ok ok, as Marcel points out in the comments, this isn't a real singleton; it is only one in name.



The "singleton" looks like this:


abstract class :xhp:html-singleton extends :xhp:html-element {
    children empty;

    protected function stringify() {
        return $this->renderBaseAttrs() . ' />';
    }
}


which extends html-element which in turn extends primitive. You can go read all the code for those yourself.



Note that to build XHP you will need flex 2.5.35 which most distros won't have installed by default. Grab the flex tarball and ./configure && make install it. Then you are ready to go.



I pointed Siege at my rather underpowered AS1410 SU2300 with the above trivial form examples: the plain PHP one and the XHP version. I ran each one 5 times, benchmarking for 30s each time. The plain PHP one averaged around 1300 requests/sec. Here is a representative sample:


acer:~> siege -c 3 -b -t30s http://xhp.localhost/1.php
** SIEGE 2.68
** Preparing 3 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 38239 hits
Availability: 100.00 %
Elapsed time: 29.60 secs
Data transferred: 3.97 MB
Response time: 0.00 secs
Transaction rate: 1291.86 trans/sec
Throughput: 0.13 MB/sec
Concurrency: 2.93
Successful transactions: 38239
Failed transactions: 0
Longest transaction: 0.05
Shortest transaction: 0.00


And the XHP version:


Transactions: 868 hits
Availability: 100.00 %
Elapsed time: 29.28 secs
Data transferred: 0.08 MB
Response time: 0.10 secs
Transaction rate: 29.64 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 2.99
Successful transactions: 868
Failed transactions: 0
Longest transaction: 0.21
Shortest transaction: 0.05


So, a drop from 1300 to around 30 requests per second and latency from less than 10ms to 100ms. Running XHP on plain PHP is definitely out of the question. But, knowing that Facebook uses APC heavily and looking through the code (see the MINIT function in ext.cpp) we can see that it should play nicely with APC. So, re-running our PHP version of the form, now with APC enabled, that goes from 1300 to around 1460 requests per second, and no measurable latency:


Transactions: 43773 hits
Availability: 100.00 %
Elapsed time: 29.88 secs
Data transferred: 4.55 MB
Response time: 0.00 secs
Transaction rate: 1464.96 trans/sec
Throughput: 0.15 MB/sec
Concurrency: 2.93
Successful transactions: 43773
Failed transactions: 0
Longest transaction: 0.07
Shortest transaction: 0.00


The XHP version of the form now with APC enabled:


Transactions: 9707 hits
Availability: 100.00 %
Elapsed time: 29.45 secs
Data transferred: 0.94 MB
Response time: 0.01 secs
Transaction rate: 329.61 trans/sec
Throughput: 0.03 MB/sec
Concurrency: 2.97
Successful transactions: 9707
Failed transactions: 0
Longest transaction: 0.21
Shortest transaction: 0.00


Much better. But it is still around a 75% performance drop, from 1460 to 330 requests per second, plus a ~10ms latency penalty. And yes, I did have a default filter enabled for these tests, so there was basic XSS filtering in place for the naked $_POST['name'] variable in the plain PHP version. Of course, the default filtering would likely fail if the user data was used in a different context. And this 75% is obviously going to depend on what else is going on during the request. If you are spending most of your time calculating a fractal or waiting on MySQL, you may not notice XHP very much at all.
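For reference, the default filter referred to here is the filter extension's filter.default ini setting; with filter.default=special_chars it applies the same transformation as FILTER_SANITIZE_SPECIAL_CHARS. A quick sketch of what that buys you, and what it doesn't:

```php
<?php
// What filter.default=special_chars does to incoming request data:
// HTML-significant characters become numeric entities.
$raw = '<script>alert("hi")</script>';
$filtered = filter_var($raw, FILTER_SANITIZE_SPECIAL_CHARS);

echo $filtered, PHP_EOL;

// This is adequate for HTML-body (PCDATA) output, but it does not make the
// value safe in every context, e.g. inside a style attribute or a
// JavaScript on-handler.
```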



The bulk of the time is spent in all the tag-to-class interaction. If the core.php and html.php code was all baked into the XHP extension, it would be a lot quicker, of course. So, when you combine XHP with HipHop PHP you can start to imagine that the performance penalty would be a lot less than 75% and it becomes a viable approach. Of course, this also means that if you are unable to run HipHop you probably want to think a bit and run some tests before adopting this. If you are already doing some sort of external templating, XHP could very well be a faster approach.


Published on February 09, 2010 21:22

February 4, 2010

HipHop PHP - Nifty Trick?

In a response to a question from ReadWriteWeb, among other things, I wrote:


My main worry here is that people think this is some kind of magic
bullet that will solve their site performance problems. Generating C++
code from PHP code is a nifty trick and people seem to have gotten quite
excited about it. I'd love to see those same people get excited about
basic profiling and identifying the most costly areas of an application.
Speeding up one of the faster parts of your system isn't going to give
you anywhere near as much of a benefit as speeding up, or eliminating,
one of the slower parts of your overall system.


The "nifty trick" part of that seems to have become the story, and them
injecting a "just" in front of it makes it sound more derogatory. Anyone
who knows me knows that I am a big fan of nifty tricks that solve the problem.
When I first heard about the Facebook effort I was assuming they were writing
a JIT based on LLVM or V8 or something along those lines. Writing a good JIT is
hard. Doing static code analysis and generating compilable C++ from it is
indeed a nifty trick. It's not "just" a nifty trick; it is a cool trick that takes
advantage of a number of characteristics of PHP. The main one being that
you can't overload PHP functions. strlen() is always strlen, for example. In
Python, this would be harder because you can overload everything.



I also noted that most sites on the Web have a lot of lower hanging fruit that
would provide a much bigger performance improvement, if fixed, than doubling
the speed of the PHP execution phase. The ReadWriteWeb site, for example,
needs 160 separate HTTP requests and 41 distinct DNS lookups to load the
front page. And once you get beyond the frontend inefficiencies you usually
find Database issues, inefficient system call issues and general architecture
problems that again aren't solved by speeding up PHP execution.



If you have done your homework and find that your web servers are cpu-bound,
you are already using an opcode cache like APC
and your Callgrind callgraph
shows you that the PHP executor is a significant bottleneck, then HipHop PHP is
definitely something you should be looking at.
Published on February 04, 2010 10:50
