in this series we’re looking at using the amber language to write scripts that transpile into bash. so far we’ve covered using shell commands and handling error cases. in this installment, we’ll be looking at control structures: loops and ifs, basically.
writing shell scripts is zero fun. the bash syntax is a mess, error handling is difficult, and any script longer than a hundred lines is basically unreadable. but we keep writing bash scripts because they’re the right tool for the job and the job must be done.
amber aims to fix this pain by being a language that gives us a sane, readable syntax that transpiles into messy bash so we don’t have to write messy bash ourselves.
this post is a four-parter that will go over the basic features of amber from the perspective of those of us who actually want to use it. we’ll start with calling shell commands and handling errors, then look at loops and if statements, the standard library of commands, and finally investigate functions.
the elegant syntax of amber is pulled away to reveal the messy bash underneath
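to give a taste of where this installment is headed, here's a minimal sketch based on amber's documented loop and if syntax; the variable names are made up for illustration.

```amber
// count through a range and branch on each value;
// amber interpolates {expressions} inside double-quoted strings
let threshold = 3

loop i in 0..5 {
    if i > threshold {
        echo "{i} is over the threshold"
    } else {
        echo "{i} is fine"
    }
}
```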
knowing the geolocation of your site’s users is a handy thing. maybe you want to force your canadian users into a degraded, second-rate version of your ecommerce site, or maybe you want to redirect people from brazil to a frontend you ran through google translate, or maybe you just want to block the netherlands because you hate the dutch. there are reasons.
traditionally, this gets done by calling a third-party geolocation api. you gotta fiddle with api keys and manage rate limits and write a bunch of code. or… we could just let nginx do it all for us.
in this post we’re going to go over how to do ip geolocation for country and city in nginx and get that data into our web app where we can use it. all of this was written for ubuntu-like systems running nginx 1.18.0.
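as a rough preview of the nginx side, here's a sketch using the geoip2 module (on ubuntu, the libnginx-mod-http-geoip2 package) and a maxmind GeoLite2 database; the database path and header name below are placeholders.

```nginx
# map the client ip to a country code using a GeoLite2 database
# (goes in the http block; the .mmdb path is a placeholder)
geoip2 /etc/nginx/geoip/GeoLite2-Country.mmdb {
    $geoip2_country_code country iso_code;
}

server {
    listen 80;

    location / {
        # hand the country code to the web app as a request header
        proxy_set_header X-Country-Code $geoip2_country_code;
        proxy_pass http://127.0.0.1:8080;
    }
}
```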
recursion has a bad reputation amongst programmers; it’s convoluted and complicated and difficult to debug, a real footgun. it’s something you do at school (if you went to school for that sort of thing) and then never touch again if you can avoid it. which is a drag, because there are a lot of use cases for recursion. data structures of arbitrary depth are everywhere: file systems, dom trees, that 32kb json packet your integration partner just shovelled into your api.
in this post we’re going to go over two features of php that help make recursion easier: the RecursiveIterator interface, which provides us with methods and features that make writing recursive functions easier, and the dreadfully-named RecursiveIteratorIterator class which we can use to flatten down arbitrarily-deep data structures.
php to developers: “say ‘iterator’ five times fast”
we’ll be building a recursive function using RecursiveArrayIterator, starting with a simple loop and working up to the full function. then we’ll look at how to leverage RecursiveIteratorIterator to smash that nested array into a single level so we can extract data, either with a simple loop or a more-complex-but-powerful call to iterator_apply.
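as a preview of the flattening trick, here's a minimal sketch with a made-up nested array standing in for that 32kb json packet:

```php
<?php
// a made-up nested array for illustration
$data = [
    'id' => 1,
    'meta' => [
        'tags' => ['php', 'recursion'],
        'author' => ['name' => 'ana'],
    ],
];

// RecursiveIteratorIterator walks every level of the RecursiveArrayIterator,
// so the foreach below only ever sees leaf values
$flat = new RecursiveIteratorIterator(new RecursiveArrayIterator($data));

foreach ($flat as $key => $value) {
    echo "$key => $value\n"; // id => 1, 0 => php, 1 => recursion, name => ana
}
```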
it’s a universal truth that one of the primary jobs of backend developers is drinking from the firehose of json the frontends are constantly spraying at us. normally, we pick apart all those key/value pairs and slot them into the neatly-arranged columns of our db, keeping our database structure nice and normal. but sometimes there are good reasons to just store a whole json object or array as an undifferentiated blob.
in the dark, old days, this json would go in a text column, which was fine if all we needed to do was store it and return it. but if we needed to do something more complex, like extracting certain values from that json or, god forbid, using one in a WHERE clause, things could get out of hand pretty quickly.
fortunately, modern mysql supports JSON as a native data type and offers a whole host of useful json functions, allowing us to work with json data in a way that’s almost pleasurable. in this post we’re going to go over extracting values from json and using json data in WHERE clauses.
this column can fit so much schema-less data in it
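as a taste of what that looks like, assuming a hypothetical events table with a json column named payload:

```sql
-- extract a value from the json blob (-> is shorthand for JSON_EXTRACT)
SELECT payload->'$.user.email' AS email FROM events;

-- ->> also strips the surrounding quotes (JSON_UNQUOTE + JSON_EXTRACT),
-- which makes json values usable in a WHERE clause
SELECT * FROM events WHERE payload->>'$.status' = 'shipped';
```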
there’s an rfc vote currently underway for a number of new array functions in php 8.4 (thanks to symfony station for pointing this out!). the proposal is for four new functions for finding and evaluating arrays using callables. the functions are: array_find, array_find_key, array_any, and array_all.
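to make that concrete, here's a sketch of how the proposed functions are meant to behave, using a made-up array of users (signatures as described in the rfc at the time of the vote):

```php
<?php
$users = [
    ['name' => 'ana', 'admin' => false],
    ['name' => 'bob', 'admin' => true],
];

// first element matching the callable, or null if none does
$firstAdmin = array_find($users, fn($u) => $u['admin']);

// key of the first matching element
$firstAdminKey = array_find_key($users, fn($u) => $u['admin']);

// booleans: does any / do all elements satisfy the callable?
$hasAdmin  = array_any($users, fn($u) => $u['admin']);
$allAdmins = array_all($users, fn($u) => $u['admin']);
```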
there are times when we tar and gzip a directory and the final tarball is just too damn big. maybe it doesn’t fit on any of those old thumbdrives we got at the 7-eleven, or maybe we’re trying to upload it to s3 and aws is complaining it’s too large.
let’s take a look at a quick and dirty way to split our tarballs into a bunch of smaller files of a set size.
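here's the shape of the trick, assuming a directory named my-dir and a 100 megabyte chunk size:

```bash
# stream the tarball straight into split, producing 100MB chunks
# named my-dir.tar.gz.part-aa, my-dir.tar.gz.part-ab, and so on
tar czf - my-dir | split -b 100M - my-dir.tar.gz.part-

# reassembling is just concatenating the chunks back together
cat my-dir.tar.gz.part-* | tar xzf -
```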
the problem is this: we have a bunch of files, pdfs say, on our webserver that we want people to download, but only if they’re registered users. everyone else gets 404s.
there’s no shortage of ways to homeroll a solution to this issue (i often use private s3 buckets), but perhaps the most elegant way is to configure nginx to do it for us. no vendor lock-in with aws, no controller methods struggling under the weight of 50mb pdfs; just nginx serving files.
in this post, we’re going to go over how to use nginx’s X-Accel-Redirect header with a light sprinkling of php to serve files from a restricted directory.
one does not simply download mordor.pdf from the server
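the nginx half of the technique looks roughly like this (the directory and location names are made up for illustration):

```nginx
# files in /var/www/protected/ can only be reached via an internal
# redirect; requests made directly to /protected-files/... get a 404
location /protected-files/ {
    internal;
    alias /var/www/protected/;
}
```

the php side is then a one-liner: once the login check passes, send header('X-Accel-Redirect: /protected-files/mordor.pdf'); and nginx quietly takes over serving the file.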
the cold, hard truth is that your web app or api is running slow because of your database calls. you can migrate everything to frankenphp or put compression in your http server if you want, i’m not going to stop you, but if you want to shave whole seconds off your response times, you should take a hard look at all that janky sql and those yolo orm queries you wrote.
the good news is that mysql has a tool to find and log your slow queries. it’s called the ‘slow query log’, basically the perfect name, and it comes pre-installed. let’s get it turned on, set up, and look at the output.
a developer discovering they’re really bad at writing sql
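a minimal version of the setup we'll walk through; the log file path is just an example, and these runtime settings don't survive a restart unless you mirror them in my.cnf.

```sql
-- turn the slow query log on at runtime
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow-query.log';
SET GLOBAL long_query_time = 1;  -- in seconds; anything slower gets logged
```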
a week ago, i ran this db insert-heavy script on a server and forgot to turn off mysql’s binlog feature first. the result, of course, was that i filled up the disk in about three minutes and brought the whole server down. not great for a tuesday. fortunately, finding and fixing the problem was straightforward, and the downtime was only a couple of minutes.
in this post we’re going to go over inspecting our disk space: figuring out how much we have left and finding out what we spent all those blocks on. we’ll look at three basic tools:
- df for inspecting disk space
- du for getting directory sizes
- find for finding files to delete

these all come pre-packaged with your linux or linux-like operating system, so put that apt back in your pocket.
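as a quick preview, with paths chosen purely for illustration:

```bash
# how much space is left on the filesystem holding /
df -h /

# which directories under /var are eating the disk, biggest last
du -sh /var/* | sort -h

# candidate files to delete: logs over 100 megabytes
find /var/log -type f -name '*.log' -size +100M
```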