A fond farewell to DrupalCon 2023
Wrapping up at #DrupalCon and I’ve had a blast. Great to meet new friends and reconnect with old ones. Also—Pittsburgh is a great town. See everyone online!
Here’s my contribution to the “what apps am I using?” posts.
NEDCamp is great, as always.
I wrote a blog post for Lullabot on Useful git configurations you may have missed.
From Sannie Lee, writing about the value of a liberal arts education in tech.
Whether it was a history or literature class, one common thread across all my courses was thinking critically. Looking at historical events or understanding the meaning in a novel, I learned not to take things at face value.
As a “technical” tech worker, I couldn’t agree more. I credit a lot of my “how to think” skills to my liberal arts education, and think it provides a perhaps not immediately obvious, yet altogether invaluable, benefit.
You know what’s cool? Map(). More invaluable technical insights all the time; subscribe!
As the premier published resource on quirky calendars in macOS, it is my sad duty to inform the world that they have been removed. I’m not sure which update killed them, but they’re gone in Sonoma. Whenever it happened, the world got a little bit less fun.
iOS 17’s Check In safety feature seems really useful when needed. Concerned that either the time-based or location-based flavors will have too many false positives in subway rides tho.
TIL if the data that a GraphQL query is looking for has multiple types, the entire object will be removed. This is communicated via a non-halting warning, letting the “Can’t find the object” error blow up. Spent too much time debugging the error before noticing the warning 🙄
From Luke Plant’s No One Actually Wants Simplicity:
I think a good test of whether you truly love simplicity is whether you are able to remove things you have added, especially code you’ve written, even when it is still providing value, because you realise it is not providing enough value.
Reminder that while USB-C is a welcome universal plug, the rest of the situation is, and will remain, a mess.
Unity exec tells Ars he’s on a mission to earn back developer trust | Ars Technica
“There was a lot more [feedback than we expected] for sure… I think that feedback has made us better, even though it has sometimes been difficult.”
Execs always trot out the “we didn’t realize” line. Frankly, that says they are either disingenuous or impossibly stupid. Either way, not trustworthy.
Reading State of CSS. Some takeaways:

:has()
font-palette
content-visibility use decreased

I’ve been trying to collect my thoughts for several days: I’m so very sad to hear of Bram Moolenaar’s passing. He was the author and primary maintainer of Vim for something like 30 years, and his charitable work and dedication to his software were incredible. We lost a titan.
I limited all my blog’s RSS/JSON feeds to 50 items, after realizing Hugo defaults to no limit. Hopefully that will be kinder to parsers.
TIL that dynamic JavaScript imports are widely supported!
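A quick sketch of the feature, lazily loading a Node built-in (the module specifier is just for illustration; any specifier works the same way):

```javascript
// Dynamic import() returns a promise, so a module can be
// loaded on demand at runtime instead of up front.
import('node:path').then(({ posix }) => {
  console.log(posix.join('blog', 'feed.json')); // blog/feed.json
});
```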
TIL about the From HTTP request header. Now used by crawlers, it was originally intended to allow human users to supply their email address to the server. Innocent times.
Checked the micro.blog logs, and the cert “expired” at 10:06 a.m. (EST), so the site was down for almost 2 hours. Previously.
Site was down for a few minutes (that I know of) due to an expired cert error. I checked the server and my cert was not expired. I rebooted nginx and it’s working again. 🤷
Twitter’s public implosion has redoubled my interest in blogging, and I’ve been posting more.
Through Micro.blog, I syndicate my posts to the fediverse, namely Mastodon at @chrisd@micro.blog. I have a separate Mastodon account I created years ago at @bronzehedwick@mastodon.social. I’m not yet sure how I ultimately want to structure these identities, but at the moment I have micro.blog content appear on mastodon.social as well.
While Mastodon is great, it is not easy to migrate your content between instances, something which blogging excels at. So I wanted to import my mastodon.social archive into my blog, at least for better data ownership, but also in case I decide to make micro.blog my only active identity.
To that end, I downloaded my mastodon.social archive, and wrote a little Node script to output non-reply posts (I don’t currently have a self-hosted reply mechanism) as appropriately formatted markdown files for my Hugo site.
#!/usr/bin/env node
import { readFile, writeFile } from 'node:fs/promises';
import { Buffer } from 'node:buffer';
import { striptags } from 'striptags';

try {
  const filePath = new URL(
    '/Users/chris/Downloads/mastodon-archive/outbox.json',
    import.meta.url
  );
  const contents = await readFile(filePath, { encoding: 'utf8' });
  const data = JSON.parse(contents);
  data.orderedItems.forEach(item => {
    // Skip replies and items with no content.
    if (item.object.inReplyTo || !item.object.content) return;
    const unixTimestamp = Math.floor(
      new Date(item.published).getTime() / 1000
    );
    const template = `+++
title = "Note on ${item.published}"
slug = "${unixTimestamp}"
date = ${item.published}
layout = 'post'
+++
${striptags(unescape(item.object.content), {allowedTags: new Set([
  'a',
  'strong',
  'em',
])})}`;
    const fileData = new Uint8Array(Buffer.from(template));
    const controller = new AbortController();
    const { signal } = controller;
    // The abort signal must be passed inside an options object.
    writeFile(
      `/Users/chris/Sites/Personal/chrisdeluca/content/note/${unixTimestamp}.md`,
      fileData,
      { signal }
    );
  });
} catch (err) {
  console.error(err.message);
}
That worked great, and I published my full mastodon.social archive yesterday!
Turns out that even though all these items are well in the past, they were still ingested as new items, and five years of posts got spammed across the fediverse at once. Oops!
I don’t know how to avoid that in this situation, but apologies to everyone who had to withstand that fire hose. I don’t plan on doing it again, since I already have my archive, but any recommendations on how to avoid this for others would be really helpful.
Thanks for reading!
You may know that you can open man pages in a Neovim buffer with :Man. However, you can also configure your shell to open manual pages in a Neovim buffer when called from the command line.
First, if you’re unfamiliar, Neovim ships with the great :Man command, which opens man pages in a nicely formatted buffer. These buffers are normal Vim buffers, so they come equipped with syntax highlighting, can be easily searched, and links to other manual pages can be followed with C-].
" Open the git manual page.
:Man git
You can also open man pages invoked inside Neovim’s terminal emulator using this same man buffer with a little configuration.
# This opens a man buffer?
man git
The man command can be configured to render pages with any program, controlled by the $MANPAGER environment variable. We could set $MANPAGER to nvim, but that would cause nested Neovim instances if called from inside a Neovim :terminal.
To work around this, we’ll need help from the neovim-remote
project (at least until Neovim core adds --remote
back). With that installed, we can call nvr
inside
a Neovim terminal buffer to open the given file in the same Neovim
instance.
I personally would rather not launch a whole Neovim instance just
to render a man page if I’m not already inside Neovim, so for this
tip we’ll add some detection code to only set the $MANPAGER
value inside Neovim. We can do this by checking the value of the
$NVIM_LISTEN_ADDRESS
environment variable, which will only be set
inside an instance of Neovim.
We’ll use the -o
flag to open the man page in a new split, to help
retain the context of what you’re working on.
In your bash/zsh config file:
if [ -n "${NVIM_LISTEN_ADDRESS+x}" ]; then
export MANPAGER="/usr/local/bin/nvr -c 'Man!' -o -"
fi
Or for the fish shell:
if test -n "$NVIM_LISTEN_ADDRESS"
set -x MANPAGER "/usr/local/bin/nvr -c 'Man!' -o -"
end
And that’s it. Happy RTFM!
I’m working on a short novel, which got me thinking about typesetting (naturally). There are as many ways to style a book as there are to style a webpage [citation needed], but there are rules and conventions just the same. I’m a web developer by trade, so I decided to try my hand at styling a novel in CSS.
The result is a base stylesheet that follows many standard print conventions. The goal here is to have something that looks good out of the box, is a solid base to build on top of if need be, and most importantly, looks like a novel.
Something pleasant I discovered is that browser defaults are largely what you want for a novel, which is unsurprising when you think about the history of the web platform starting in academia.
I’ve annotated my code with explanations for each rule, below; you can download a copy of the code here, and you can see the results in Codepen.
/*! https://chrisdeluca.me/article/base-css-to-style-a-novel
License: MIT
*/
body {
/**
* Starting with the hard stuff: fonts. I chose Palatino because it's a
* classic serif font that's loaded on many systems by default. This would
* be the first thing to change to customize the look and feel. The rest of
* the rules are written using font-relative units, so changing the font
* family here wont change the other sizing ratios.
*/
font-family: Palatino, serif;
/**
* The standard base font size for print is 12pt, which equals 16px, the
* browser default. I increased the value by 2 points since Palatino runs
* small, so this should probably go back to 12pt if the font family is
* different.
*/
font-size: 14pt;
/**
* Increase the line height to 1.6 times the font size. The browser default is
* much smaller, about 1.2 times the font size on desktop browsers, which
* feels cramped—more like a textbook than a novel. The value 1.6 is
* somewhat arbitrary, so play around with it, but do make sure the value
* is unitless. See:
 * https://css-tricks.com/almanac/properties/l/line-height/#aa-unitless-line-heights
*/
line-height: 1.6;
/**
 * Novels are the definition of a long read, so I'm doing everyone's eyes a
* solid by optimizing for legibility, aka glyph clarity, rather than
* render speed or correctness, which are either secondary or irrelevant,
* depending on the medium.
*/
text-rendering: optimizeLegibility;
/**
* Remove any top margin and center the content. Our headings will take
* care of any needed vertical spacing, and making sure content is centered
* is a nice win for readability on large browser windows.
*/
margin: 0 auto;
/**
* The ideal line length for readability is, depending on the study you're
* citing and the font, between 50–75 characters. I'm more or less
* splitting the difference. I use the lesser known ch unit—equal to the
* width of the font's "0" character—for horizontal spacing like this, as
* it feels natural and is easy to reference without doing extra math.
* Since the zero character is usually one of, if not the, widest character
* in a font, the exact number of characters per line will be slightly more
* than the given value, which is perfectly acceptable. I'm also setting
* this as a max-width instead of a width, since we want our content to
* wrap nicely if the screen size is smaller than the desired value, rather
* than being cut off.
*/
max-width: 65ch;
}
/**
* Use the semantic <header> tag to mark up the title page. This text is always
* centered, and, if we're outputting to printed media, we want this to be on
 * its own page.
*/
header {
text-align: center;
break-after: page;
}
/**
* Headings, aka for chapters or sections, should be spaced apart from all
* other content, but should be closer to the content they define. For example,
* the heading "Chapter 2" should be closer to the content of chapter 2 than to
* chapter 1's content, to show its association. Setting these values in ems
* lets us define all the headings in one sweepingly consistent declaration.
*/
h1, h2, h3, h4, h5, h6 {
margin-top: 2em;
margin-bottom: 1em;
}
/**
* A new chapter should always start on a new page in printed media. Since the
* title of the book would be an <h1>, I'm assuming only <h2>s are chapter
* titles. Any heading below an <h2> would be a section title of some sort, and
* not warrant a new page (and probably wouldn't appear in a novel, anyway,
* being relegated to more formally structured content like textbooks).
*/
h2 {
break-before: page;
}
/**
* Browsers default to "web style" paragraph delineation, aka whitespace
* between each one. Most books are not set this way, instead using text
* indentation (see below). Get rid of that space between.
*/
p {
margin: 0;
}
/**
* "Book style" paragraph delineation uses indentation for each paragraph after
* the first. Presumably, you don't need the indentation for the first
* paragraph, since there's a chapter heading or whitespace or just not text
* above it to indicate that what you're reading is, indeed, a paragraph. The
* same is true for any other non-paragraph element, aka a list or an image.
* Using the adjacent sibling combinator, we match every paragraph that follows
* another paragraph, and add our text indent. This is conceptually different
* but has the same effect here as giving every paragraph a text indent, and
* then removing it for :first-of-type. For the indent itself, a value of about
 * two characters is pretty common in print, but occasionally I've seen four
 * used as well.
*/
p + p {
text-indent: 2ch;
}
/**
* The horizontal rule element, now appropriately re-appropriated as the
* thematic break element. Thematic breaks take many forms in different
* printings—a plain horizontal line, a decorative, squiggly line, a glyph
* that looks like a cross between a heart and a radish (this thing: ❧)—but
* the most simple display, and one that will often be present even within
* books that feature a fancy break as well, is plain old whitespace. We could
* achieve this with empty paragraph tags, or a couple of <br> tags, but that's
* not semantic, is hard to type in markdown, and makes me want to vomit.
* We can do better.
*/
hr {
border: none;
margin-bottom: 2em;
}
/**
* What if we want a "harder" thematic break than just plain whitespace? As
* mentioned above, you'll see this in novels a fair amount, mixing both
 * whitespace and a visual thematic break. Again, erring on the side
 * of simplicity and commonness, I'm using the "three stars" break, which is
* three star symbols centered at a slightly larger font size. This could have
* been implemented with a class, but I opted for a data attribute so I could
* only style <hr> elements (enforcing semantics) while not increasing the
* specificity too much.
*/
hr[data-break="hard"] {
/**
* Increase the font size a bit, using ems to keep ratios consistent. A
* visible thematic break should be, well, visible, and having it at the
* same size as the text diminishes it somehow.
*/
font-size: 1.25em;
/**
* Add spacing above and below. I set with ems, as this inherits the larger
* font size we set above, giving this a little bit more room.
*/
margin: 1em auto;
/**
* Browser defaults for the <hr> element are a grey color, receding the
* content into the background, but we want our break to stand out (it's a
* break after all), and we want it to work in print. Although this will
* probably always be black, I use the currentColor keyword to pick up
* whatever the text color is as future proofing (dark mode, anyone?)
*/
color: currentColor;
/**
* Center that thematic break. The stars are added using pseudo content,
* which default to inline display, so we don't need anything fancier than
* text-align here.
*/
text-align: center;
}
/**
* Actually add the stars in, using pseudo content.
*/
hr[data-break="hard"]::before {
content: "* * *";
}
There are plenty of fuzzy search solutions for Neovim, most notably Telescope, but sometimes you just want something fast and simple.
Enter fzy, a fast command line program with a slick search algorithm. It is a good unix citizen, operating on newline delimited lists passed through stdin, making it easy to integrate into all sorts of tools, including editors.
Its own documentation shows an example integration with Vim. However, that implementation relies on the system() function to display the fuzzy finder, which no longer works for interactive commands in Neovim.
Yes, there is a fzy plugin for Neovim, but why not take the opportunity to learn some Neovim Lua and write an implementation ourselves?
Along the way, we’ll learn how to load and test Lua files, invoke floating windows, handle interactive terminal inputs, create flexible functions, and add mappings.
This guide assumes some familiarity with Vim/Neovim, as well as a basic understanding of Lua. If you’re unfamiliar with Lua, I’d recommend reading Learn Lua in 15 minutes before starting. If that sounds fun, fire up your terminal and follow along. Otherwise, skip to the end for the final script.
Neovim picks up Lua files to include in the lua folder, so we’ll create a file there called fuzzy-search.lua.
mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/nvim/lua"
nvim "${XDG_CONFIG_HOME:-$HOME/.config}/nvim/lua/fuzzy-search.lua"
We’ll need a function for our fuzzy searching, so let’s add one with a
debug value to test. We need to access this function from anywhere, so
we’ll make it global by omitting the local
keyword. By convention,
global variables in Lua start with an uppercase letter.
FuzzySearch = function()
  print('Hello, search!')
end
Neovim provides some handy commands for loading Lua files and functions. We’ll use luafile to load our fuzzy-search.lua into Neovim’s memory, and the lua command to then call our newly added FuzzySearch function while we’re testing.
:luafile % " Interpret the current file as lua.
:lua FuzzySearch() " Should print 'Hello, search!' in the message area.
We’ll need to re-run those two commands every time we make a change to see their effects.
We can no longer use the system()
hack to interact with terminal
programs inside Neovim, but we have access to something better: floating
windows! We could make it a split buffer, but since a search interface
is an ephemeral UI component that is fine to overlap existing content
and should be dismissed the moment a selection is made, a floating
window seems ideal.
To do this, Neovim provides the nvim_open_win()
API method,
which we can access from the vim.api
Lua table. This method takes 3
arguments:
{buffer}, for which buffer to display, by buffer ID.
{enter}, a boolean for whether to enter the window or not.
{config}, a table of options.

For {buffer}, we ultimately want to display a new terminal buffer with the search, so we’ll need to create one here. We’ll use the nvim_create_buf API method to create a fresh buffer, and we’ll start a terminal session inside it in a later step. nvim_create_buf returns the ID of the buffer it just created, so it can be passed to nvim_open_win() directly. It has 2 boolean arguments; the first for whether the buffer will be “listed” by commands like :ls, and the second for whether it should be treated as a “scratch” buffer, which sets some options common to throw-away work. Since this is a temporary window, we’ll set this to unlisted and scratch.
For {enter}, we want to start typing our search as soon as the popup window is invoked, without having to do C-w C-l or whatever, so we’ll set this to true.
So far, our function should now look like this:
FuzzySearch = function()
  vim.api.nvim_open_win(
    vim.api.nvim_create_buf(false, true),
    true,
    {}
  )
end
Finally, for {config}, we’ll be setting several options here, largely to position the window. There are five required properties, relative/external, width, height, col, and row, so let’s set them first.
Every Neovim window requires either the relative or external key to be set. external is only relevant for external GUI applications, so we’ll keep it simple and only set relative. relative controls where the window is positioned relative to, aka where its x/y position originates from. Our window can be relative to the editor, the current window, or the cursor position. This is a global search, so we’ll set relative to editor. This means that our new window’s 0/0 x and y position starts at the 0/0 x and y values of the entire editor.
Width and height are simple: how many rows, for height, and columns, for width, does our window occupy? Let’s keep this straightforward for now, and set width to 10 and height to 5.
col and row control where on the grid the window should appear from. These are our starting x and y values. Again, let’s keep this simple and set each to 0.
Our function should now look like this.
FuzzySearch = function()
  vim.api.nvim_open_win(
    vim.api.nvim_create_buf(false, true),
    true,
    {
      relative = 'editor',
      width = 10,
      height = 5,
      col = 0,
      row = 0,
    }
  )
end
Now, if you run luafile on your fuzzy-search.lua file again, and then lua FuzzySearch(), our floating window should appear over the top left of your editor!
Type :bd to close it.
Great, we have a floating window, but it’s not going to be very helpful looking like a postage stamp in the upper left. Let’s adjust the size, and center the window.
To center the window, we’ll need to calculate the mid-point for our window’s horizontal and vertical edge based on the window size and the size of Neovim itself, with our good friend Math.
We can get the width of the editor via the columns global option, exposed in the vim.o options table, and the height via lines, exposed in the same.
Let’s start with the width. Our formula is pretty simple: subtract the width of the popup from the total columns in the editor (the width), and divide that by two to get the midway point. We need to subtract the popup’s width, since it would be pushed too far to the right without compensating for the space it takes up. We’ll finish by wrapping the whole expression in the Lua built-in math.min, since col expects whole numbers.
math.min((vim.o.columns - 10) / 2)
We’ll do something almost identical for row (aka height), but instead of using vim.o.columns, we’ll use vim.o.lines.
math.min((vim.o.lines - 5) / 2 - 1)
Notice that we’re also subtracting an extra one. This is because vim.o.lines returns the total lines in the current window, including the status line and the message area. That’s an extra two lines to account for. Since we want to center the popup vertically, to find how much to compensate by, we divide the extra lines by two, giving us one to subtract.
Our function should now look like this.
FuzzySearch = function()
  vim.api.nvim_open_win(
    vim.api.nvim_create_buf(false, true),
    true,
    {
      relative = 'editor',
      width = 10,
      height = 5,
      col = math.min((vim.o.columns - 10) / 2),
      row = math.min((vim.o.lines - 5) / 2 - 1),
    }
  )
end
Looking over this code, there’s some repetition causing maintenance
overhead: we’re writing literals for the width and height twice. We’ll
need to change these values soon, so let’s refactor to use local
variables for these values. Add a variable for width
and height
at the top of the FuzzySearch
function, since we’ll want them to be
available throughout the scope. Our code should now look like this:
FuzzySearch = function()
  local width = 10
  local height = 5
  vim.api.nvim_open_win(
    vim.api.nvim_create_buf(false, true),
    true,
    {
      relative = 'editor',
      width = width,
      height = height,
      col = math.min((vim.o.columns - width) / 2),
      row = math.min((vim.o.lines - height) / 2 - 1),
    }
  )
end
If you test this code, you’ll get something like this.
Not much to look at, but at least it’s centered. But why is it only one line high, instead of five? Well, it actually is five lines high, but we can’t tell because our window has no outline style or contents. Let’s fix the former, then move on to the latter.
The border option has several built-in styles, as well as an option to define your own border characters (this is what Telescope does). Feel free to play around with the options, but for the purpose of this guide we’ll be using "shadow". I like this style because it’s visually uncluttered, and makes clear that this window is “above” others.
While it’s not styling, let’s take a moment here to set the noautocmd option to true. This disables buffer events for the window, since we won’t be using them, and it’s good practice to limit the scope of our programs as much as is sensible. Feel free to set this to false later if you do end up using those events.
Our function should now look like this.
FuzzySearch = function()
  local width = 10
  local height = 5
  vim.api.nvim_open_win(
    vim.api.nvim_create_buf(false, true),
    true,
    {
      relative = 'editor',
      style = 'minimal',
      border = 'shadow',
      noautocmd = true,
      width = width,
      height = height,
      col = math.min((vim.o.columns - width) / 2),
      row = math.min((vim.o.lines - height) / 2 - 1),
    }
  )
end
Test this code and you should get something like this.
Looking good. Or, at least like a stylish postage stamp. Alright, let’s move on to the contents of the window.
There are several ways Neovim offers for creating a new terminal instance, but we’ll be using the termopen() function, since it offers the most API control. We can ask it to provide a “standard” interactive terminal session, or to launch running a specific command. We’ll call it after our floating window setup code, using a basic command, taken from fzy’s documentation, to gather files for fzy to search, which should work on most systems.
vim.fn.termopen('find . -type f | fzy')
The find command will grab every regular file in your current directory tree, and pass it to fzy. Testing this code will produce a result similar to this.
Hooray! You should be able to search for a file, move up and down in the list via C-n and C-p, and select a file with Enter. However, you may be noticing some slight issues.

1. The window is small, cutting off search results.
2. You have to enter insert mode manually before you can type your search.
3. When the process exits, the buffer hangs around with a [Process exited 0] message, making you press Enter again before continuing.
4. Selecting a file doesn’t actually open it.

Solving the second issue is dead simple: we call startinsert before running termopen() via nvim_command.
vim.api.nvim_command('startinsert')
We’ll address each of the other issues, but let’s tackle the window size first, so we can better see what we’re doing.
Alright, back to window sizing. We can improve the display by taking full advantage of the amount of space we have available to us. Since we already refactored our width and height into single variables, we simply modify them where they are declared.
Wouldn’t it be nice to stretch the width of the popup window to however large the Neovim instance is? Easy. We change the width variable to equal vim.o.columns, minus four. The number four is arbitrary; it gives two columns of space between the edge of the Neovim instance and the popup window, which feels right to me. Feel free to experiment with your own values.
local width = vim.o.columns - 4
For setting the height, we want to show all the results that fzy shows; in other words, we want our popup window to be as tall as the fzy output. fzy defaults to displaying ten search results at a time. This number can be controlled via the --lines option, but changing that will be left as an exercise for the reader. For now, we’ll redefine height to be equal to 11, which is the default 10 results fzy displays, plus an extra line for the search prompt.
local height = 11
We now have an adaptive display window that shows our searches more clearly.
But what happens on very large screens? Our window will stretch all the way across, packing the results at the left, and wasting space on the right. We can spend a moment fixing this by setting a max width for the window. The window will still center, so the eye won’t have to travel all the way to the edge to see results. The standard max line length for Vim is a sensible 80 columns, so we’ll stick to that for our window.
Since we’re subtracting four from the total width, and we want to trigger the max after we would naturally reach 80 columns, we’ll set the width at 85 columns.
After our local variable declarations, we’ll add our conditional.
if (vim.o.columns >= 85) then
  width = 80
end
Now the entirety of our function should look like this.
FuzzySearch = function()
  local width = vim.o.columns - 4
  local height = 11
  if (vim.o.columns >= 85) then
    width = 80
  end
  vim.api.nvim_open_win(
    vim.api.nvim_create_buf(false, true),
    true,
    {
      relative = 'editor',
      style = 'minimal',
      border = 'shadow',
      noautocmd = true,
      width = width,
      height = height,
      col = math.min((vim.o.columns - width) / 2),
      row = math.min((vim.o.lines - height) / 2 - 1),
    }
  )
  vim.fn.termopen('find . -type f | fzy')
end
Let’s move on to solving the third and fourth problems mentioned above—not actually being able to open the file searched for!
We want to perform an action—edit a file—when the terminal process for fzy exits, which happens after the file is selected. We know from the fzy man page that on exit the currently selected item is printed to stdout, which is how we can detect which file is selected.
The termopen() function takes a table of event-driven callbacks as its second argument. We’ll be using the appropriately named on_exit.
vim.fn.termopen('find . -type f | fzy', {on_exit = function()
  -- code goes here.
end})
Let’s get rid of the extra Enter press. Inside the on_exit callback, we’ll call bdelete, meaning that once the terminal process exits, we’ll automatically delete the buffer. We’ll add the ! option, which will delete the buffer even if there are changes to it. This buffer should never have meaningful changes, so we never want that safety (otherwise, if there were changes, bdelete would produce an error).
vim.api.nvim_command('bdelete!')
If you test the function, the popup window should immediately dismiss after a file is selected. Excellent!
Now we can move on to opening the file searched for. We know that fzy prints the path to the selected file to {stdout}. Maybe there’s an argument that Neovim passes {stdout} to the terminal event callbacks? However, the on_exit callback only receives the job id, the exit code, and the event type, which in this case is always “exit”.
There must be a better way to solve this, but the way I’ve figured out is to write the contents of {stdout} to a file as part of the fzy pipeline, then read the file contents back in the on_exit function. If you know of a better method, hit me up on Twitter.
Since the file we’re creating is totally throw-away (you could say temporary), we’ll use Neovim’s tempname() function to generate a unique temporary file name in a clean path.
local file = vim.fn.tempname()
Then we can save the output of fzy (which is {stdout}) to our file with simple Unix redirection and Lua concatenation.
'find . -type f | fzy > ' .. file
Back inside our on_exit callback function, after our bdelete call, is where we can access the file we wrote. Lua provides a robust filesystem API, which we can use to open a stream to the file and read the contents into a variable. We’ll open the file stream as read only, keeping to the principle of only asking for what we need.
local f = io.open(file, 'r')
local stdout = f:read('*all')
We should also clean up after ourselves, removing the temporary file from disk and closing the file stream.
f:close()
os.remove(file)
Now we have the file path stored in the stdout variable; we can use the nvim_command Neovim API method to :edit it!
vim.api.nvim_command('edit ' .. stdout)
Our whole function should now look like this.
FuzzySearch = function()
  local width = vim.o.columns - 4
  local height = 11
  if (vim.o.columns >= 85) then
    width = 80
  end
  vim.api.nvim_open_win(
    vim.api.nvim_create_buf(false, true),
    true,
    {
      relative = 'editor',
      style = 'minimal',
      border = 'shadow',
      noautocmd = true,
      width = width,
      height = height,
      col = math.min((vim.o.columns - width) / 2),
      row = math.min((vim.o.lines - height) / 2 - 1),
    }
  )
  local file = vim.fn.tempname()
  vim.fn.termopen('find . -type f | fzy > ' .. file, {on_exit = function()
    vim.api.nvim_command('bdelete!')
    local f = io.open(file, 'r')
    local stdout = f:read('*all')
    f:close()
    os.remove(file)
    vim.api.nvim_command('edit ' .. stdout)
  end})
end
Test the function; selecting a file should open it. Yay! We have a fully working solution.
Wouldn’t it be nice to be able to access our function outside of our
fuzzy-search.lua
file? Say, in our init.vim
or init.lua
file?
Lua includes a simple yet powerful module system, which we can leverage with only a few changes to our file.
All we need to do is return
our function, and that will expose it
to require
statements. However, to make it possible to add further
exportable functions to this file in the future, and to adhere to
convention, we’ll add our function to a table.
local M = {}
M.FuzzySearch = function()
-- all our code.
end
return M
We name the returned variable M
, again, to follow convention.
This adds fuzzy-search
as a module to the Neovim environment. In a Lua file within the Neovim context, we could add our function to the environment with:
local fs = require'fuzzy-search'
fs.FuzzySearch()
Notice there’s no .lua
extension or leading lua
directory name in
the require
—Neovim/Lua handles this for us so we don’t have to type
all that.
Now, in our init.vim
or init.lua
file, we can create a mapping to
this function by requiring our search file inline, and parsing it with
the built-in lua
command.
Say we wanted to map <leader>f. For init.vim, we would add:
nnoremap <leader>f <cmd>lua require'fuzzy-search'.FuzzySearch()<CR>
Or for init.lua
:
vim.api.nvim_set_keymap('n', '<leader>f', '<cmd>lua require"fuzzy-search".FuzzySearch()<CR>', {noremap = true})
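As a side note, newer Neovim versions (0.7+) also ship vim.keymap.set, which accepts a Lua function directly and avoids the string-wrapped lua command. A sketch, assuming the same module layout as above:

```lua
-- Sketch for Neovim 0.7+: map <leader>f straight to the Lua function.
vim.keymap.set('n', '<leader>f', function()
  require('fuzzy-search').FuzzySearch()
end, { desc = 'Fuzzy file search' })
```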
We did it. Here’s our completed code.
-- ~/.config/nvim/lua/fuzzy-search.lua
local M = {}
M.FuzzySearch = function()
local width = vim.o.columns - 4
local height = 11
if (vim.o.columns >= 85) then
width = 80
end
vim.api.nvim_open_win(
vim.api.nvim_create_buf(false, true),
true,
{
relative = 'editor',
style = 'minimal',
border = 'shadow',
noautocmd = true,
width = width,
height = height,
col = math.min((vim.o.columns - width) / 2),
row = math.min((vim.o.lines - height) / 2 - 1),
}
)
local file = vim.fn.tempname()
vim.fn.termopen('find . -type f | fzy > ' .. file, {on_exit = function()
vim.api.nvim_command('bdelete!')
local f = io.open(file, 'r')
local stdout = f:read('*all')
f:close()
os.remove(file)
vim.api.nvim_command('edit ' .. stdout)
end})
end
return M
This script is just a starting point. Here are some ideas for improvements: swap find for a faster alternative like fd, or my personal favorite, git ls-files. I implemented some of these in my own dotfiles.
That’s it! Thanks for reading.
I’ve been trying to find a good workflow for drawing vector artwork that has the spontaneity and roughness of hand drawn images.
Since my digital medium of choice is the web, SVG is the only output format in town. Which is perfect, since, like everybody, I never liked choice anyway.
There’s some interesting experiments with giving programmatically drawn SVGs a hand drawn quality, but I want to have the artistic freedoms that come with actually drawing the images by hand, not just the feel of being hand drawn.
I need a vector drawing program, but that's where everything falls apart. In my experience, there's no drawing tool that combines the expressiveness of drawing freehand with natively handling SVG, or even reliably outputting it.
Sure, Inkscape and Illustrator provide a lot of power, but they feel more like drafting tools, not drawing tools. I want the experience of drawing something free-hand, without constraints or planning or thinking about the geometry.
The situation has me pining for the bad old days of Macromedia Flash, which, everything else aside, still had the best vector drawing tool for my money.
You could draw all the regular geometry that modern apps offer, but you could also draw free hand. If you drew a closed shape free hand, the tool would recognize the shape, allowing you to apply transforms or color it in with a click.
Yet as much as I look back at the time of Flash with rose-colored glasses, it also sucked. For one, you had to draw with a mouse, since this was before the days of good, relatively cheap touch screens. That made the “free-hand” not quite free; mouse-hand, more like. The lines didn’t look the same.
These days I have access to an iPad with an Apple Pencil, so I could draw my lines there, then convert them to SVG. In fact, that feature is built into several popular drawing applications.
Still, I was curious about a fully hand drawn approach. I'm also a software developer, so I decided to try a workflow that involved both less digital technology and more command line. Both sides of me were happy.
I drew my image traditionally, with a pencil on paper, in black and white, took a picture of it, did some image processing, converted that to SVG, then imported it into an SVG editing application for coloring.
So far, I don’t hate this process.
I started by drawing a lot of doodles using a drawing pencil and a "light touch". By light touch, I mean that I tried not to bear down on the pencil stroke, and to keep my lines light and "exploratory", as my wife put it. This let me keep figuring out the drawing as I drew, without being married to anything other than my wife.
I kept my doodles unrelated, just drawing whatever I felt my lines were creating already.
Once I was happy with that, I used one of my wife’s fancy inking pens to make a clear, dark line over my pencil strokes. The pencil was still visible underneath the ink, but there was no mistaking where the “real” line was.
I took that drawing and photographed it with my iPhone 7 camera under as bright a light as I could manage. In fact, it's the same image (at a higher resolution) as seen below.
I took that image and copied it to my Mac Mini, then into Acorn, which is a streamlined, Mac-centric version of Photoshop that I find easier to use. I adjusted the color levels to omit as many of the smudgy greys from the paper and the pencil strokes as possible, while keeping the ink lines relatively dark.
Then, I switched the graphic to black and white (as distinct from grayscale; every color is either black or white). I played with those "sharpness" levels until all the remaining pencil lines were white, and the ink lines were pure black. I saved the result, below, as a .bmp file, for reasons that will become momentarily apparent.
Remember that big .bmp
mystery from just now? I mean, why save it as
such an archaic file type? Because it’s an uncompressed image type, so
no data is lost, but more importantly, it’s what Potrace,
the bitmap-to-vector conversion tool, expects (it can also ingest .pnm files, but honestly I have no idea what those are and I haven't been bothered to look them up).
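If you'd rather skip the GUI entirely, the level-and-threshold step can likely be approximated on the command line with ImageMagick; a sketch, not what I actually did, and the 60% threshold is just a starting point you'd tune per photo:

```shell
# Hypothetical ImageMagick alternative to the Acorn steps:
# grayscale the photo, hard-threshold to pure black and white, save as BMP.
convert photo.jpg -colorspace Gray -threshold 60% my-image.bmp
```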
Potrace is an open source command-line tool that is also embedded in other commercial and non-commercial software, such as the aforementioned Inkscape. I actively enjoy the command line, so I don't mind the "unglamorous" interface.
potrace --backend svg my-image.bmp --output my-image.svg
I told the program to output SVGs (it can also do EPS and PostScript), and it worked flawlessly. It only operates on black and white images, but I already knew that, so everything was going just fine.
Next step, color. I opened my new SVG in Boxy SVG, which is a simple program that natively handles SVGs.
Since I had lots of doodle “subjects” in my initial drawing, I decided to focus on the character in the upper left with the cape, which I called “Super Wisp”.
If this was Flash, I could have just used the paint bucket tool to fill
in each contiguous shape. However, with all the SVG tools I’ve used,
the shape has to use <polygon>
, <ellipse>
, or the like under the
hood to fill in shapes with a click (the programs just attach a fill
property to the element). Auto-generated SVG code, like the output of Potrace, almost always draws with simple <path> elements, which are hard if not impossible to fill with color.
Instead, my plan was to draw shapes of the same dimensions as the lines they were meant to fill in, give them a background color, then move them behind the lines in the layer view.
I started coloring by making a green ellipse for the head. The head I drew was not a perfect ellipse, so the background color poked out on the edges. However, I felt it added a certain rough feel, like in cheap comics where the color printing is not that exact.
The sloppy, overflowing color look was all well and good, but at the risk of being anti-creative, could I color inside the lines?
Turns out: yes. I used the polygon tool to draw points in an angular version of the shape I wanted to color in.
Once I put the finished shape behind the lines, the lines obscured the pointy edges of the polygon, creating the illusion of continuous color.
Unfortunately, my results weren’t perfect on the first drawing—it was hard to tell if my polygon edge would fall exactly on the line, especially near the top—as illustrated, below.
Fortunately, Boxy SVG provides excellent shape transform tools. I was able to adjust geometry points until the shape fell inside the lines how I wanted.
Once I was happy with my color, I selected only my colored-in Super
Wisp character, and exported it. For a drawing with this kind of hand-drawn irregularity, I was expecting a semi-large file size, but it ended up weighing in at only 64K on disk. This is before optimizations and minification, so the raw SVG code looked like this:
<?xml version="1.0" encoding="utf-8"?>
<svg viewBox="444.783 273.522 749.717 1341.237" xmlns="http://www.w3.org/2000/svg">
<g>
<title>Super Wisp</title>
<g>
<title>Color</title>
<polygon style="stroke: rgb(0, 0, 0); fill: rgb(248, 106, 106);" points="601.655 994.655 551.522 1042.12 540.507 1115.09 559.784 1190.82 630.653 1268.49 761.935 1345.18 883.477 1369.06 954.659 1425.66 971.182 1500.33 960.166 1544.39 984.7 1600.03 1075.78 1605.45 1156.2 1557.99 1190.15 1467.87 1175 1431.5 1148.84 1372.3 1162.53 1287.74 1145.36 1232.1 1072.32 1233.48 984.195 1260.366 912.535 1209.34 870.519 1125.57 886.312 1071.31 909.719 1036.88 900.081 1017.6 798.189 1001.08 765.143 966.659"/>
<!-- The real code keeps going, but one can only stare at so much XML. -->
I ran the file through Svgcleaner, and it cut the file
size by more than half, down to 31k
! Now the code looked like this:
<svg viewBox="444.783 273.522 749.717 1341.237" xmlns="http://www.w3.org/2000/svg"><path d="m601.655 994.655-50.133 47.465-11.015 72.97 19.277 75.73 70.869 77.67 131.282 76.69 121.542 23.88 71.182 56.6 16.523 74.67-11.016 44.06 24.534 55.64 91.08 5.42 80.42-47.46 33.95-90.12-15.15-36.37-26.16-59.2 13.69-84.56-17.17-55.64-73.04 1.38-88.125 26.886-71.66-51.026-42.016-83.77 15.793-54.26 23.407-34.43-9.638-19.28-101.892-16.52-33.046-34.421z" fill="#f86a6a" stroke="#000"/>
<!-- Enough of this; the only thing worse than staring at too much XML is staring at too much minified XML. -->
Here it is, the finished SVG, embedded directly into this page, in all its sloppy glory.
There are still lots of imperfections, especially noticeable at larger screen sizes, that stretch the limits of what can be called artistic roughness. Yet ultimately, I found the test successful. To me, the image has charm and personality, and as my psychologist warned, came directly from me.
The workflow isn’t perfect, but it’s very workable. The whole process took me only a couple of hours, and this was my first time doing it. I’m excited to keep exploring.
I had some trouble after upgrading GPGTools to version 2020.2 on macOS Big Sur, where it would ignore my Yubikey smart card and I couldn’t unlock my stuff.
Every time I tried to use gpg (Yubikey inserted), I would get this error:
gpg: decryption failed: No secret key
This sent me into a wild rage, and after spending far too much time trying to debug with no results, I switched tactics: remove GPGTools and install gpg myself. While it's still early days, and I am by no means a gpg expert (who is?), everything seems to be working fine.
Here’s how I did it.
I downloaded the uninstaller from the GPGTools website; that’s right, it is not included in the standard GPGTools installation. Another reason to ditch it.
https://gpgtools.tenderapp.com/kb/faq/uninstall-gpg-suite
I used Homebrew to install the required packages.
brew install gpg pinentry-mac # pinentry-mac is needed for smart cards.
I also added the two packages to my Brewfile.
diff --git a/Brewfile b/Brewfile
index 683e138..9b0d988 100644
--- a/Brewfile
+++ b/Brewfile
@@ -13,6 +13,7 @@ brew "fzy"
brew "git"
brew "git-delta"
brew "git-standup"
+brew "gpg"
brew "hugo"
brew "imagemagick"
brew "isync"
@@ -27,6 +28,7 @@ brew "pandoc"
brew "par"
brew "pass"
brew "pianobar"
+brew "pinentry-mac"
brew "rename"
brew "ripgrep"
brew "rust"
The gpg installation added a .gnupg/
configuration directory to my
home folder. After some research, I added a few lines to gpg.conf
and
gpg-agent.conf
.
# ~/.gnupg/gpg.conf
ask-cert-level
use-agent
auto-key-retrieve
no-emit-version
default-key D81A4957BAF06BCA6E060EE5461C015E032EF9CB # use your key
# ~/.gnupg/gpg-agent.conf
pinentry-program /usr/local/bin/pinentry-mac
default-cache-ttl 600
max-cache-ttl 7200
debug-level basic
log-file $HOME/.gnupg/gpg-agent.log # helpful for debugging
I was making progress, but when I tried to use gpg I would get this error:
gpg: OpenPGP card not available: No SmartCard daemon
This one took some time to figure out. I checked
my homebrew installation, and scdaemon existed at
/usr/local/Cellar/gnupg/2.2.25/libexec/scdaemon
.
I eventually figured out I needed a scdaemon configuration file, and I needed to pass in the name of my smart card there.
macOS comes with a command line tool for testing smart cards (PC/SC), which I used to get the machine name of my smart card.
I inserted my Yubikey and ran pcsctest
, which gave me this output:
MUSCLE PC/SC Lite Test Program
Testing SCardEstablishContext : Command successful.
Testing SCardGetStatusChange
Please insert a working reader : Command successful.
Testing SCardListReaders : Command successful.
Reader 01: Yubico YubiKey OTP+FIDO+CCID
Enter the reader number :
The “Reader” line is what we’re interested in. I copied the name of my smart card, killed pcsctest with Ctrl-C, and pasted the name into a file called scdaemon.conf.
# ~/.gnupg/scdaemon.conf
reader-port "Yubico YubiKey OTP+FIDO+CCID"
I had to restart gpg-agent before my change would take effect.
killall gpg-agent
gpg-agent --daemon --homedir $HOME/.gnupg
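As an aside, gpgconf, which ships with GnuPG, can also take care of this; the agent restarts on demand after being killed. I haven't verified it behaves identically to the manual restart above:

```shell
# Alternative: let gpgconf kill the agent; gpg restarts it when next needed.
gpgconf --kill gpg-agent
```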
And that’s it, things are working for me again, and I got to replace a large dependency (GPGTools) with a slightly smaller one (GPG).
Some links I found helpful in my journey to figuring this out.
I’ve been working on a graphing project for the Astoria Digital volunteer group in collaboration with Muckrock. The app will visualize data around New York’s 50a police disciplinary record requests.
My first task was a proof of concept - to put some very basic data in a bar chart. The data has a single axis: how much each request cost. We only care about unique costs for the purposes of this visualization, so the list ended up being only 19 values.
[
  "$47,504.00",
  "$30,815.00",
  "$20,510.00",
  "$5,000.00",
  "$846.16",
  "$250.00",
  "$244.66",
  "$200.00",
  "$56.15",
  "$36.00",
  "$28.00",
  "$27.25",
  "$19.39",
  "$17.50",
  "$14.75",
  "$13.25",
  "$8.75",
  "$7.50",
  "$3.00"
]
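Note that these values are formatted currency strings; before charting, they need to be parsed into plain numbers. A minimal sketch of how that could look (parseCost is a hypothetical helper, not part of the project):

```javascript
// Hypothetical helper: turn a currency string like "$47,504.00"
// into the number 47504, suitable for a charting library.
function parseCost(cost) {
  return Number(cost.replace(/[$,]/g, ''));
}

const costs = ['$47,504.00', '$846.16', '$3.00'];
console.log(costs.map(parseCost)); // [ 47504, 846.16, 3 ]
```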
The goal is to show each value in relation to each other with a good old fashioned bar chart. Since the project was already using React, I decided to go with nivo as the graphing library, which uses d3 under the hood.
That’s when things started to get interesting, and by interesting I mean unexpected, and by unexpected I mean annoying.
Nivo has lots of documentation, with a really slick display, interactive widgets, and lots of examples of how to solve complex problems. However, it's noticeably lacking in documentation on how to solve simple problems, like implementing a dead simple bar chart. Equally missing is a robust getting-started document, or an explanation of the architecture of the library, key concepts, assumptions, and the like.
Irritating, but fine. I’ve been doing web development long enough not to be scared away. I dug in, and while it took me longer than I would have liked to piece together what I actually needed for my little bar chart, I had a working widget.
<Bar
width={800}
height={500}
layout="horizontal"
data={prices}
/>
Well, working was a bit of an overstatement. While it rendered, it looked, well, off.
Was my data wrong? I double- and triple-checked it, throwing console.logs all over the place. Everything seemed fine.
After some careful eyeballing, it seemed like the issue was not that the data was wrong, but that the display was being cut off.
I checked the nivo documentation - maybe I missed a vital configuration? Some CSS file I needed to include? Where’s the option for “render correctly”?
Unfortunately all I found was very fancy how-tos on how to render multi-colored bars with multi-layered data and lots of animations. Cool stuff, but not helpful here.
I was starting to get frustrated. How hard could it be to render a dead simple bar chart?
I tried extending the width and height attributes on the chart, but that only made the SVG canvas bigger, leaving the cut-off values intact.
Finally, I broke down and started digging into nivo’s code.
While I couldn’t find any documentation on it, maybe there was a CSS file I needed to include? I poked around in the SVG source that nivo generates, and noticed that the SVG elements did not include any classes or ids. That hinted that there was no default CSS file to include, but to check my assumption, I searched the entire node package for any CSS file.
find ./node_modules/@nivo/ -type f -name "*.css"
That returned no results, so I could confidently cross off “forgot to include library CSS file” from my debugging checklist.
If there was no external CSS library expected, the chart would have to be styled another way. SVGs have lots of styling effects via the use of properties; maybe something in there was amiss? Seemed like a stretch that there was a bug in the library for rendering such a simple graphic, but I didn’t have any better ideas, so I started combing through the SVG markup.
I didn’t notice anything strange about the SVG properties, either.
That made me challenge a core assumption I had been making: that the
library was smart enough to fit the data given inside the SVG canvas.
After checking the API documentation again with my new hypothesis
goggles, I found the margin
property, with this description:
Chart margin.
Not exactly the clarification I was hoping for, but since values being cut off is essentially an offset issue, I figured an explicit margin might be exactly what the chart needed.
I slapped that sucker on the component, and sure enough, it fixed the issue. My assumption was (unfortunately) correct; you have to explicitly set margin values even for a simple bar chart, otherwise nivo might not display it properly.
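For illustration, the fixed component looked roughly like this (the specific margin numbers are placeholders, not the project's real values; the left value needs enough room for the longest axis label):

```jsx
<Bar
  width={800}
  height={500}
  layout="horizontal"
  data={prices}
  margin={{ top: 10, right: 20, bottom: 30, left: 120 }}
/>
```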
This would have been really helpful to have outlined in a core concept document, and was rather disheartening, because it meant that component configuration is tied directly to the shape of the data, adding a maintenance headache. However, it worked, and that issue was for future Chris to deal with.
Unfortunately, I wasn’t completely done. Since my chart was so simple, containing only one value column, I wanted to simplify the display. By default, the nivo chart outputs x and y axis labels, as well as a label overlaying the bar itself. This is great for complex, multi-axis data, but for a simple chart like this was entirely redundant, and also caused overlapping values because of the vast differentials between them.
My ideal state was to hide the x and y axis displays, and just use the label overlaying the bar, since the data isn’t relational. However, nivo does not present an option for this. It does provide an option for hiding the overlay labels, so I went with that.
That fixed the overlapping data issue, yet the y axis was still redundant and potentially confusing.
Again, nivo offers no way to hide this display, so I turned to the dark arts: hacky CSS.
Since nivo doesn’t supply unique identifiers for elements, nor does it supply a method to pass in your own, I resorted to this monstrosity:
.headline__costs + svg > g > g:nth-child(2) {
display: none;
}
Eagle-eyed CSS folk will have noticed that this code is brittle as hell; if the nivo library changes how it outputs its SVG markup, say, moving the y axis to be the first child instead of the second, this code will hide the wrong element.
That being said, this works for now, and provides some real value by making the data visualization more clear.
And that’s my journey with nivo so far: an impressive library with lots of slick functionality, but some strange omissions around dynamically rendering data shapes and a lack of basic documentation.
One improvement I’d like to explore is computing the margin property dynamically for each dataset. Thanks for reading!
I find PHP’s boolean casting rules strange and hard to remember. I know I’m not alone.
To help myself out, and anyone else who happens to stumble upon it, I wrote a commented script that demonstrates how each kind of value is cast to a boolean.
You can download the script to run it yourself, or view it below (or both!).
You can also skip to the results of the script (aka, what the values are cast to!)
I added the comment “Uh oh!” where I thought the result was unexpected or inconsistent.
<?php
class fooClass {
function do_foo() {
echo "Hello, world";
}
}
$fooNull = NULL;
$fooFalse = FALSE;
$fooTrue = TRUE;
$fooFalseyString = "";
$fooTruthyString = "hello";
$fooFalseyArray = [];
$fooTruthyArray = ["hello", "everyone"];
$fooFalseyNumber = 0;
$fooTruthyNumber = 5;
$fooTruthyObject = new fooClass;
$fooFalseyObject = (object) array();
$fooNegativeNumber = -2;
$fooNegativeZero = -0;
echo "PHP Version: " . phpversion() . "\n";
echo "\"Uh oh!\" added after every logically unexpected result.\n\n";
// NULL {{{
if ($fooNull) {
echo "Default if evaluates null to true // Uh oh! \n";
} else {
echo "Default if evaluates null to false\n";
}
if (isset($fooNull)) {
echo "isset() if evaluates null to true\n";
} else {
echo "isset() if evaluates null to false // Uh oh! \n";
}
if (empty($fooNull)) {
echo "empty() if evaluates null to true\n";
} else {
echo "empty() if evaluates null to false // Uh oh! \n";
}
echo "\n";
// }}}
// Unset variable {{{
if ($fooUnset) {
echo "Default if evaluates an unset variable to true // Uh oh! \n";
} else {
echo "Default if evaluates unset variable to false\n";
}
if (isset($fooUnset)) {
echo "isset() if evaluates an unset variable to true // Uh oh! \n";
} else {
echo "isset() if evaluates unset variable to false\n";
}
if (empty($fooUnset)) {
echo "empty() if evaluates an unset variable to true\n";
} else {
echo "empty() if evaluates unset variable to false // Uh oh! \n";
}
echo "\n";
// }}}
// FALSE {{{
if ($fooFalse) {
echo "Default if evaluates false to true // Uh oh! \n";
} else {
echo "Default if evaluates false to false\n";
}
if (isset($fooFalse)) {
echo "isset() if evaluates false to true\n";
} else {
echo "isset() if evaluates false to false // Uh oh! \n";
}
if (empty($fooFalse)) {
echo "empty() if evaluates false to true\n";
} else {
echo "empty() if evaluates false to false // Uh oh! \n";
}
echo "\n";
// }}}
// TRUE {{{
if ($fooTrue) {
echo "Default if evaluates true to true\n";
} else {
echo "Default if evaluates true to false\n";
}
if (isset($fooTrue)) {
echo "isset() if evaluates true to true\n";
} else {
echo "isset() if evaluates true to false // Uh oh! \n";
}
if (empty($fooTrue)) {
echo "empty() if evaluates true to true // Uh oh! \n";
} else {
echo "empty() if evaluates true to false\n";
}
echo "\n";
// }}}
// Falsey string {{{
if ($fooFalseyString) {
echo "Default if evaluates a falsey string to true // Uh oh! \n";
} else {
echo "Default if evaluates a falsey string to false\n";
}
if (isset($fooFalseyString)) {
echo "isset() if evaluates a falsey string to true // Uh oh! \n";
} else {
echo "isset() if evaluates a falsey string to false\n";
}
if (empty($fooFalseyString)) {
echo "empty() if evaluates a falsey string to true\n";
} else {
echo "empty() if evaluates a falsey string to false // Uh oh! \n";
}
echo "\n";
// }}}
// Truthy string {{{
if ($fooTruthyString) {
echo "Default if evaluates a truthy string to true\n";
} else {
echo "Default if evaluates a truthy string to false // Uh oh! \n";
}
if (isset($fooTruthyString)) {
echo "isset() if evaluates a truthy string to true\n";
} else {
echo "isset() if evaluates a truthy string to false // Uh oh! \n";
}
if (empty($fooTruthyString)) {
echo "empty() if evaluates a truthy string to true // Uh oh! \n";
} else {
echo "empty() if evaluates a truthy string to false\n";
}
echo "\n";
// }}}
// Falsey array {{{
if ($fooFalseyArray) {
echo "Default if evaluates a falsey array to true // Uh oh! \n";
} else {
echo "Default if evaluates a falsey array to false\n";
}
if (isset($fooFalseyArray)) {
echo "isset() if evaluates a falsey array to true // Uh oh! \n";
} else {
echo "isset() if evaluates a falsey array to false\n";
}
if (empty($fooFalseyArray)) {
echo "empty() if evaluates a falsey array to true\n";
} else {
echo "empty() if evaluates a falsey array to false // Uh oh! \n";
}
echo "\n";
// }}}
// Truthy array {{{
if ($fooTruthyArray) {
echo "Default if evaluates a Truthy array to true\n";
} else {
echo "Default if evaluates a Truthy array to false // Uh oh! \n";
}
if (isset($fooTruthyArray)) {
echo "isset() if evaluates a Truthy array to true\n";
} else {
echo "isset() if evaluates a Truthy array to false // Uh oh! \n";
}
if (empty($fooTruthyArray)) {
echo "empty() if evaluates a Truthy array to true // Uh oh! \n";
} else {
echo "empty() if evaluates a Truthy array to false\n";
}
echo "\n";
// }}}
// Falsey number {{{
if ($fooFalseyNumber) {
echo "Default if evaluates a Falsey number to true // Uh oh! \n";
} else {
echo "Default if evaluates a Falsey number to false\n";
}
if (isset($fooFalseyNumber)) {
echo "isset() if evaluates a Falsey number to true // Uh oh! \n";
} else {
echo "isset() if evaluates a Falsey number to false\n";
}
if (empty($fooFalseyNumber)) {
echo "empty() if evaluates a Falsey number to true\n";
} else {
echo "empty() if evaluates a Falsey number to false // Uh oh! \n";
}
echo "\n";
// }}}
// Truthy number {{{
if ($fooTruthyNumber) {
echo "Default if evaluates a Truthy number to true\n";
} else {
echo "Default if evaluates a Truthy number to false // Uh oh! \n";
}
if (isset($fooTruthyNumber)) {
echo "isset() if evaluates a Truthy number to true\n";
} else {
echo "isset() if evaluates a Truthy number to false // Uh oh! \n";
}
if (empty($fooTruthyNumber)) {
echo "empty() if evaluates a Truthy number to true // Uh oh! \n";
} else {
echo "empty() if evaluates a Truthy number to false\n";
}
echo "\n";
// }}}
// Truthy object {{{
if ($fooTruthyObject) {
echo "Default if evaluates a Truthy object to true\n";
} else {
echo "Default if evaluates a Truthy object to false // Uh oh! \n";
}
if (isset($fooTruthyObject)) {
echo "isset() if evaluates a Truthy object to true\n";
} else {
echo "isset() if evaluates a Truthy object to false // Uh oh! \n";
}
if (empty($fooTruthyObject)) {
echo "empty() if evaluates a Truthy object to true // Uh oh! \n";
} else {
echo "empty() if evaluates a Truthy object to false\n";
}
echo "\n";
// }}}
// Falsey object {{{
if ($fooFalseyObject) {
echo "Default if evaluates a Falsey object to true // Uh oh! \n";
} else {
echo "Default if evaluates a Falsey object to false\n";
}
if (isset($fooFalseyObject)) {
echo "isset() if evaluates a Falsey object to true // Uh oh! \n";
} else {
echo "isset() if evaluates a Falsey object to false\n";
}
if (empty($fooFalseyObject)) {
echo "empty() if evaluates a Falsey object to true\n";
} else {
echo "empty() if evaluates a Falsey object to false // Uh oh! \n";
}
echo "\n";
// }}}
// Negative number {{{
if ($fooNegativeNumber) {
echo "Default if evaluates a Negative number to true\n";
} else {
echo "Default if evaluates a Negative number to false // Uh oh! \n";
}
if (isset($fooNegativeNumber)) {
echo "isset() if evaluates a Negative number to true\n";
} else {
echo "isset() if evaluates a Negative number to false // Uh oh! \n";
}
if (empty($fooNegativeNumber)) {
echo "empty() if evaluates a Negative number to true // Uh oh! \n";
} else {
echo "empty() if evaluates a Negative number to false\n";
}
echo "\n";
// }}}
// Negative Zero {{{
if ($fooNegativeZero) {
echo "Default if evaluates a Negative Zero to true // Uh oh! \n";
} else {
echo "Default if evaluates a Negative Zero to false\n";
}
if (isset($fooNegativeZero)) {
echo "isset() if evaluates a Negative Zero to true // Uh oh! \n";
} else {
echo "isset() if evaluates a Negative Zero to false\n";
}
if (empty($fooNegativeZero)) {
echo "empty() if evaluates a Negative Zero to true\n";
} else {
echo "empty() if evaluates a Negative Zero to false // Uh oh! \n";
}
echo "\n";
// }}}
/* vim: set foldmethod=marker */
Here’s what the above script evaluates to (on my machine).
PHP Version: 7.3.11
"Uh oh!" added after every logically unexpected result.

Default if evaluates null to false
isset() if evaluates null to false // Uh oh!
empty() if evaluates null to true

Default if evaluates unset variable to false
isset() if evaluates unset variable to false
empty() if evaluates an unset variable to true

Default if evaluates false to false
isset() if evaluates false to true
empty() if evaluates false to true

Default if evaluates true to true
isset() if evaluates true to true
empty() if evaluates true to false

Default if evaluates a falsey string to false
isset() if evaluates a falsey string to true // Uh oh!
empty() if evaluates a falsey string to true

Default if evaluates a truthy string to true
isset() if evaluates a truthy string to true
empty() if evaluates a truthy string to false

Default if evaluates a falsey array to false
isset() if evaluates a falsey array to true // Uh oh!
empty() if evaluates a falsey array to true

Default if evaluates a Truthy array to true
isset() if evaluates a Truthy array to true
empty() if evaluates a Truthy array to false

Default if evaluates a Falsey number to false
isset() if evaluates a Falsey number to true // Uh oh!
empty() if evaluates a Falsey number to true

Default if evaluates a Truthy number to true
isset() if evaluates a Truthy number to true
empty() if evaluates a Truthy number to false

Default if evaluates a Truthy object to true
isset() if evaluates a Truthy object to true
empty() if evaluates a Truthy object to false

Default if evaluates a Falsey object to true // Uh oh!
isset() if evaluates a Falsey object to true // Uh oh!
empty() if evaluates a Falsey object to false // Uh oh!

Default if evaluates a Negative number to true
isset() if evaluates a Negative number to true
empty() if evaluates a Negative number to false

Default if evaluates a Negative Zero to false
isset() if evaluates a Negative Zero to true // Uh oh!
empty() if evaluates a Negative Zero to true
I’ve vaguely known about CSS’s general sibling combinator for a while, but I have never found a practical use case for it, until now. And let me tell you, the results are, get ready for it, underwhelming.
First, a quick review of what the general sibling combinator is. Or, skip ahead to the next section if this is old hat for you.
The general sibling combinator—invoked with ~
between two selectors—
allows you to select every sibling element following the first selector
that matches the second selector.
The general sibling combinator differs from the more common adjacent sibling combinator—invoked with +
between two selectors—in that it selects every following matching element despite intervening elements, while the adjacent sibling combinator only matches the first sibling after the first selector that also matches the second.
To better illustrate the general sibling combinator, this CSS:
h2 ~ p { /* styles */ }
Will match the following elements in this markup.
<p>Eyebrow</p>
<h2>My title</h2>
<p>First paragraph</p><!-- matches -->
<img src="/my/image.jpg" alt="My image">
<p>Second paragraph</p><!-- matches -->
<blockquote>Give me quotes, or give me death.</blockquote>
<p>Third paragraph</p><!-- matches -->
While the same code using the adjacent sibling combinator:
h2 + p { /* styles */ }
Will only match <p>First paragraph</p>.
The behavior of this combinator always felt too greedy to be helpful, or too easily replaced by simply targeting a group of classes, and that’s still the case most of the time; this is a rare pull from the tool belt.
However, I was doing work for the astoria.digital website, a hacker group I volunteer with, and found it useful when working with CSS Grid.
The design called for a border along the grid content columns. If I were using Flex, I could nest each column inside a wrapper and add the border to that.
<!-- …header markup… -->
<div class="wrapper">
<div class="column">
<h2 class="title"></h2>
<p class="item"></p>
<p class="item"></p>
<p class="item"></p>
</div>
<div class="column">
<h2 class="title"></h2>
<p class="item"></p>
<p class="item"></p>
<p class="item"></p>
</div>
<div class="column">
<h2 class="title"></h2>
<p class="item"></p>
<p class="item"></p>
<p class="item"></p>
</div>
</div>
<!-- …footer markup… -->
.column {
border-right: 3px solid #000;
}
This way, I could set display: flex
on .wrapper
, and each .column
becomes a flex item with the proper border.
Similarly with grid, only direct children of .wrapper
become grid
items, aka items that can be laid out using grid. And since I wanted the
header, content columns and footer to all align, they all needed to be
grid items. My code changed to this.
<div class="wrapper">
<!-- …header markup… -->
<h2 class="title"></h2>
<p class="item"></p>
<p class="item"></p>
<p class="item"></p>
<h2 class="title"></h2>
<p class="item"></p>
<p class="item"></p>
<p class="item"></p>
<h2 class="title"></h2>
<p class="item"></p>
<p class="item"></p>
<p class="item"></p>
<!-- …footer markup… -->
</div>
Now my grid worked, but I could no longer target .column
for the
border, since it’s real hard to target an element that doesn’t exist
(believe me, I’ve tried).
I could simply target each class in question…
.title,
.item {
border-right: 3px solid #000;
}
And that works! The only problem is that there’s a border surrounding
.wrapper
, and we don’t want the last column’s border to “double up”
with it.
I could have refactored my styles so that each grid cell abutting an
edge provides the border that the single .wrapper
border does now, but that would be harder to maintain, and was already
looking like a lot of effort, given that the actual code is more complicated
than the simplified examples I’ve contrived for this post.
So what about adding classes? A classic technique: just slap a .last
class on every element that shouldn’t have the final border, then zero it out
with border-right: none.
And yes, that works, but I’ve never liked utility classes like that, since they need to be applied to each element they operate on, which makes code more cumbersome to maintain.
Instead, I often like to use one of the pseudo-classes like :last-child
or :last-of-type
, or the lobotomized owl selector, which uses our more popular friend, the adjacent sibling combinator.
Yet the lack of nesting was really crimping my options. I could have made it work with the above methods, but what I really wanted to do was to cancel the border for the last column, not some random selector.
Yes, it is the moment you’ve all been waiting for. I used… The General Sibling Combinator!
It’s about time this sucker showed up, because this post is almost over.
Why does it work? Since each column starts with a heading, and h2
is an element not otherwise used among these siblings, we can target the final one with :last-of-type
. That only targets the title, however, so we add the general sibling combinator to also target all the items that follow it.
Putting it all together, the code looks like this:
.title:last-of-type,
.title:last-of-type ~ .item {
border-right: none;
}
And only targets this:
<div class="wrapper">
<!-- …header markup… -->
<h2 class="title"></h2>
<p class="item"></p>
<p class="item"></p>
<p class="item"></p>
<h2 class="title"></h2>
<p class="item"></p>
<p class="item"></p>
<p class="item"></p>
<h2 class="title"></h2><!-- targets this -->
<p class="item"></p><!-- targets this -->
<p class="item"></p><!-- targets this -->
<p class="item"></p><!-- targets this -->
<!-- …footer markup… -->
</div>
And there you have it. One long-winded explanation for a very specific, opinionated use. However, if you’ve read this far, first off, Cheese Frogs; that will be our secret code to identify each other.
Second, I do think it’s worth having this one in my tool box, as I have a feeling it’ll come more in handy as I delve deeper into grid.
Thanks for reading.
A lot has changed in the world since I last posted.
I have been extremely lucky during this pandemic. I am still employed, I can work from home, and I have my wife to shelter with. I do not take these things for granted.
And yet.
While my work life has not changed as drastically, my personal life has. Most of the things I did outside work before the pandemic were in person. Can’t do that right now. So it gave me some time to work on home-bound projects that I had pushed to the back of the shelf.
To that end, I’m very excited to introduce PCG, or Point and Click Game engine, an adventure game creation utility for the open web.
I did a talk about it three years ago (ouch), so this project has certainly been a long time coming.
PCG is very much in active development, but I think I’ve made encouraging progress, which I’ll explore in detail later.
But first, what am I talking about?
If this is old hat to you, skip ahead to the next section.
For those not familiar, a point and click adventure game is a style of narrative, story-driven game where progress is made primarily through puzzle solving, rather than violence or reflexes, something I appreciate more and more as I age.
While their popularity in mainstream gaming culture peaked in the early 90s, they have thrived in the indie space over the past decade or so.
Mechanically, many games in the genre use a system of verbs to interact with the world. You click a verb from a menu, for example “push”, and then the person or object in the game you want to apply it to, such as “crate”. Perhaps there would be a trap door below the crate, and a new area is unlocked.
Another method some games employ is to do away with the specific list of verbs, relying instead on pre-determined actions when interacting, or on mechanisms like levers that must be switched in the right order.
Almost all have the player collecting various esoteric items, applying those items to people or objects in the game, or combining them with each other.
A relatively simple system, from a game mechanics perspective, but one that hides a lot of depth, story-telling potential, and that particular player satisfaction from figuring out a puzzle.
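As an illustration, here’s a minimal sketch of how such a verb system might be modeled in JavaScript; the names (interactions, interact) and the specific interactions are my own invention, not PCG’s actual API.

```javascript
// Hypothetical verb-object interaction table for a point and click game.
// Each "verb object" pair maps to an outcome in the game world.
const interactions = {
  'push crate': 'The crate slides aside, revealing a trap door.',
  'open trap door': 'You climb down into a new area.',
};

// Look up the outcome for a verb applied to an object, falling back
// to a stock refusal when no interaction is defined.
function interact(verb, object) {
  return interactions[`${verb} ${object}`] || `You can't ${verb} the ${object}.`;
}
```

Real engines layer state on top (inventory, flags for puzzles already solved), but the core of a verb system can be as simple as a lookup like this.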
Most innovation in the web game space is around the <canvas>
element and Web Assembly, which allows developers to “start from
scratch” and create entirely custom rendering divorced from any of
the preconceptions of the web.
This works well for action games or games with pixel-pushing graphics. However, the goal here is always to emulate a native application, and since games written for the browser cannot by definition ever be native, the best they can be is a close approximation.
While close might be good enough, this always felt like a missed opportunity to me. We spend all these resources trying to get the web to be more like native applications, but hardly any on what new and interesting experiences we can create that are unique to the web. As Marshall McLuhan wrote, an author I’m proud to say I got a few pages into, the medium is the message.
I started thinking about what kind of games would work well inside the traditional web context: HTML, CSS, and JavaScript (and SVG) rendered into a DOM tree.
After some thought, I settled on point-and-click adventure games.
My reasons being:
In short, I thought I could re-create many of the different point and click adventure paradigms on the web, while taking full advantage of the things that make the web the web.
Some of the unique things that are attractive about the web are:
The ultimate goal of PCG is to foster an open, welcoming, and creative community around making point and click adventure games on the web.
In game engine terms, the goal is to create a flexible, modular, and pluggable system of components that can be combined to create most if not all the point and click varieties mentioned above (and many that were not), as well as opening up the possibility for new and unique games only possible in the web format.
After a lot more thought, writing, re-writing, trial and error, and leveraging embarrassingly earned career experience, I settled on some design principles for PCG.
Even having design principles at all was a hard-earned idea, but one I strongly believe in: a north star for how you go about making something out of nothing.
This is a very high level introduction to the ideas surrounding the PCG project. I plan on writing posts going in-depth on each component of the system as they’re built and as updates are made. These posts will hopefully serve as a living progress report.
While I’ve spent a lot of time on PCG already, it is still in the beginning stages. It is very much a leap of faith.
I can’t predict what kind of community it will attract, if any, or what this project may or may not evolve into.
But I am excited to find out.
You can check out the Github repository or the documentation site for PCG, both very much in progress. If you have any feedback or would like to contribute, please don’t hesitate to reach out.
If you’d like to see what PCG is capable of currently (as much as I cringe to reveal the multitude of missing features) my friend made a tiny, rough demo game, and I made a little demo showcasing the text box component.
Thanks for reading all the way to the end, hope you and yours are safe and healthy, and I’ll catch you on the next adventure.
Here’s a little web component I came up with to produce typewriter text, similar to old school SNES RPGs.
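The component itself isn’t inlined here, but a minimal sketch of the idea might look like this; the element name (type-writer), the helper (typewriterFrames), and the 50ms delay are stand-ins of my own choosing, not necessarily what the original uses.

```javascript
// Build each progressive "frame" of the text: 'a', 'ab', 'abc', ...
function typewriterFrames(text) {
  const frames = [];
  for (let i = 1; i <= text.length; i++) {
    frames.push(text.slice(0, i));
  }
  return frames;
}

// Browser-only: a custom element that reveals its text one
// character at a time, SNES-RPG style.
if (typeof customElements !== 'undefined') {
  customElements.define('type-writer', class extends HTMLElement {
    connectedCallback() {
      const frames = typewriterFrames(this.textContent);
      this.textContent = '';
      let i = 0;
      const timer = setInterval(() => {
        this.textContent = frames[i++];
        if (i >= frames.length) clearInterval(timer);
      }, 50);
    }
  });
}
```

In markup it would be used as <type-writer>Hello, adventurer!</type-writer>.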
Allow Satan into your life: chmod 666 my_soul.txt
. The permissions of the beast.
Today I downloaded all the data Facebook has on me, and started poking through it. Since it’s been the focus of every privacy scandal, I went straight to the ad data. I found two items.
advertisers_who_uploaded_a_contact_list_with_your_information.json
Now, Facebook made more than 4 billion dollars last year, and their business model is built entirely on selling user behavior to advertisers, so it seems impossible that not only is this all the data they have on me, but that it’s also this simplistic. Be that as it may, let’s take it at face value.
First up: interests. I’ve been on Facebook since 2010, but was never a heavy user, and have cut back significantly in the past several years, so the data is pleasantly skewed.
Here’s the whole list.
Without giving too much away (because why give advertisers a handout?), the accuracy here is honestly pretty good (for them), considering the kind of dragnet campaigns advertisers run on the web.
Yet some of these keywords are bonkers. Like “Studio”. Studio? What ad exec in their right mind wakes up and says, “You know who we need to target? People with a keen interest in ‘Studio’.”
Here be the glories of the algorithm.
There are 1,349 “advertisers who uploaded a contact list with your information.” There’s no indication of how many times this data was uploaded, where it was uploaded, or even what data was uploaded. It’s just a list of companies.
Some of the companies I recognize, like Airbnb and AARP (zing?), and others I had to look up, like Ebates and ConversioBot (a shady cash loan service and a shady chat bot service, respectively).
And some companies seemed to want to really cover their bases, like:
I sifted through the companies looking for anything overtly political, since that’s the really big focus of Facebook’s scandals. All I found was a single, lonely company:
I’m not entirely sure how I feel about it, but I can pretty safely say they’re barking up the wrong tree.
But remember, this data isn’t just mine (as much as any of this data is ours). It’s yours, too, because this data talks about where I showed up in a contact list, which includes however many other people.
Scary? Don’t worry, things feel so much better from behind the wheel of a brand new Toyota!
I wrote a small git pre-commit
hook to prevent committing certain files.
There are more words to this, but if you’re impatient, you can skip right to
the goods.
At work, we have some configuration files tracked in git that we modify locally to enable debugging options. We don’t want to ignore these files and have to manage them in a different system outside of git, but we also don’t want the debugging options checked in.
So we keep the files tracked in git, and modify them on our local systems, and try to remember not to check in those debugging options.
After the debugging changes ended up in a pull request of mine, I had an idea: since I’m a computer programmer, what if I could use my computer to save myself from myself? It was just crazy enough to work.
What I really wanted was for git to prevent me from committing changes to these files, physically if necessary. The answer: git hooks.
Git hooks are custom scripts that run inside your local repository when one of several actions is taken, like committing, merging, and the like. They’re very powerful, since they can be any script that runs in a shell, but like most things in computer science, they still can’t throw a punch. That meant my script would need to throw an error instead to keep me from committing those debugging changes.
A few minutes later I had cobbled together a git pre-commit
hook script that
prevents any of the unwanted files from being changed. The pre-commit hook
runs, as the name heavily implies, before the commit happens, so if one of
the no-no files is in the changeset, I get a nice big error when I run git commit.
Here’s what I came up with:
#!/bin/sh
#
# This script prevents specific file modifications from taking place.
# We want certain config files checked into git so that builds work on a clone,
# *and* we need to modify these files locally to enable debug options.
# This leads to a scenario where we can accidentally check in the config files
# with our local debug options checked in. This script prevents that.
# Get current revision to check against.
if git rev-parse --verify HEAD >/dev/null 2>&1
then
against=HEAD
else
# Initial commit: diff against an empty tree object
against="$(git hash-object -t tree /dev/null)"
fi
# Redirect output to stderr.
exec 1>&2
# Test staged files against the files we don't want to check in,
# and abort if found. Note: the pipe runs this loop in a subshell, so the
# `exit 1` below works only because the pipeline is the script's last command.
git diff --cached --name-only "$against" | while read -r file;
do
if test "$file" = "path/to/my/unchangeable/file.yml";
then
echo "Don't check in file.yml. Aborting!"
exit 1
fi
if test "$file" = "some/other/file.php";
then
echo "Don't check in file.php. Aborting!"
exit 1
fi
# Repeat pattern as necessary.
done
The magic sauce is near the end; I loop over the output of git diff --cached --name-only
, which shows the name of each staged file, and check if the file
name matches one of the files I don’t want to commit. If the file matches,
exit
with a non-zero status, and git will happily prevent me from making that
commit. Hooray!
I took a week off from work to join a deep dive study group on machine learning. It was an incredible experience and I want to tell you about what I learned.
I learned with very smart and interesting people, and I encourage you to check out their work. You can see a list of everyone at the bottom of this article.
This is a birds-eye-view article, so hopefully it will be decipherable by laypersons, while still remaining valuable to anyone who already knows what this field is about.
How should we delve into this incredibly complex topic? How about using the journalistic method of asking the classic five “W” questions: who, what, when, where, and why.
That seems like a successfully pointless structure, so let’s dive in.
Machine learning is a broad term referring to training a computer on one data set to make predictions about a different data set, without further human input.
For example, say you had a breakdown of sales for the past decade from a particular Orange Julius. Using machine learning, you might be able to have the machine predict when the most popular sales days will be for the mango pineapple smoothie in the upcoming year.
Why do I say might? In short, because it all comes down to the quality, and quantity, of your data.
In the above example, maybe the weather, or the stock market, or the cycle of the moon influence whether people want a mango pineapple smoothie or a raspberry one.
You need to have as much relevant data as you can to make the system work. If there’s a correlation between people wanting strawberry banana smoothies and the groundhog mating season, and you don’t have that data, guess what? You can’t make that connection.
However, if there is a connection between two data points—aka, groundhog mating and strawberry banana smoothies—you, the human, need to tell the machine about it to make the system go.
At the end of the day, computers are dumb. They literally don’t know their ass from their elbow. Contrary to every movie featuring robots, computers are self aware much the same way a brick is.
A computer is so dumb, it can’t figure out the relationship between data in both directions. This means that if you think there’s a relationship between Guatemalan migratory bird patterns and an uptick in blueberry smoothie sales, and you’re wrong, the computer won’t know! If you plug in that relationship, the machine will happily spit out a number. Never mind that it’s meaningless, the computer did its job.
So it’s our job, as the humans, to really make sure our thought quality is good before ascribing meaning to things. That’s probably a good call in general.
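As a toy illustration of training on one data set to predict another, here’s the smoothie example reduced to a least-squares line fit; this is my own contrived sketch, not a real machine learning library, and the sales numbers are made up.

```javascript
// Toy illustration: fit a straight line to past sales with least
// squares, then "predict" the next point on the trend.
function fitLine(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n; // mean of x
  const my = ys.reduce((a, b) => a + b, 0) / n; // mean of y
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    den += (xs[i] - mx) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: my - slope * mx };
}

// Smoothie sales for years 1-4; predict year 5.
const { slope, intercept } = fitLine([1, 2, 3, 4], [10, 12, 14, 16]);
const prediction = slope * 5 + intercept; // 18
```

With only a trend line, anything that actually drives sales (weather, moon cycles, groundhogs) is invisible to the model, which is exactly the point about data quality above.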
Oddly enough, in all these Orange Julius examples, nobody wants orange.
Machine learning exists for the same reason we do anything: first curiosity, then art, and then finally money. Currently, all three reasons exist in perfect harmony (read: tension).
All over the place. Here are just a few examples.
Some of these applications use algorithms like neural networks, deep learning, and others that we didn’t get into in this article. While the specifics (read: math) differ greatly, the basic principles still apply.
Now, and into the foreseeable future.
In short: not many people. It’s a specialty inside (at least one other) specialty. Practitioners need an understanding of programming, statistics, calculus, plus the vagaries of whatever aspect of life they are trying to make predictions on. That’s on top of machine learning specifics.
That’s one of the reasons I wanted to study it. The more people know about machine learning, the more we as a society will be able to deal with its consequences.
At least that’s the prediction; I haven’t proved that in a model yet 🤓.
I’d like to give a big shout out to the super smart and interesting folks I worked with over the week. Here they are, along with links to what they’re up to. I highly encourage you to check them out.
This is not meant to be a take down piece, and I realize I’m not providing any direct solutions. I wouldn’t be writing about Drupal if I didn’t care about it. My hope is that my thoughts will prove useful in a much larger discussion.
I’ve been working with Drupal professionally since around 2011, and had experimented with it on and off for a few years before that. I was introduced to it on a fluke. I asked a programmer friend what I could use to help my mom build a website, and he suggested Drupal.
I had no real programming experience, and was teaching myself HTML and CSS through crappy tutorial sites and trial and error. The other options I had explored for website building were various horrendous website builders bundled with shared hosting providers, and Wordpress, which, at the time, seemed only suited for blogs. Other options existed, but they all required a lot more technical knowledge than I had, and I was scared off.
Drupal was a revelation. Without writing any code, I could snap together modules to create complex functionality. I remember reading the documentation on Drupal’s modular architecture, and feeling like I was interacting with a serious tool. I only understood about half of it, even after a few readings, but that contributed to the mystique that I was touching real power.
I built that first iteration of my mom’s website in Drupal 5, using modules for calendars and email notifications. No CCK, no Views; nothing much that a modern Drupal site would recognize, but it worked, and I was happy. I was hooked.
As I started my career in software at a small mobile games company, I advocated for Drupal for their company website, replacing Joomla. Again, I was able to single-handedly build their site by snapping together modules. At that point I had some programming skills, but I never touched Drupal’s code. I didn’t need to.
Eventually I went on to work at various media-centric companies that were already using Drupal in production and had teams of experienced Drupal developers. While mainly focusing on the front end, I started having to write Drupal code as a matter of necessity to interact with the theme layer, and since the teams were small, I was given more responsibility writing more “backend” Drupal code. Although, to my eye, “backend” and “frontend” Drupal code looked pretty similar: you find the place you want to hook into, write logic to modify the behavior, and send the data along.
This is when I first started chafing against Drupal. I had no concept of “developer experience” or any thoughts on programming architecture, but I could hear the senior devs talk about them, and they seemed fairly annoyed with Drupal. I figured my pains were just part of learning to program, but hearing them talk made the downsides more “real”. Ironically, the issues with Drupal stemmed from the very things that drew me to it in the first place: the ability to do amazing things without touching code. Turns out, that comes with a price once you do start touching code.
The main problems I encountered, as compared to other platforms or solving problems outside a framework/CMS were:
Let me explain.
Drupal’s documentation is spotty at best. I am blowing nobody’s mind by saying this. It is a well-worn problem area. However, the effects are still tangible. It can be really frustrating to figure something out.
Imagine you’re a new developer trying to build a simple site, evaluating CMSs. You have the greatest need for documentation to help you figure it all out. If that doesn’t pan out, since the site is relatively simple, it could be easier to just redo everything in Wordpress. This decision will likely shape the rest of your career.
One of my biggest complaints with the documentation is that even when it is accurate and up to date (and exists at all), it often explains only what something does, not why. That second part is critical to developing a deeper understanding of any system.
Drupal is insanely complex. That is often touted as an advantage, and there certainly are upsides, but it also hurts adoption and developer experience, and probably is a major reason why the documentation is sub-par.
Drupal has been around for a long time, and as a general purpose CMS, is bound to pick up a vast technical scope. That doesn’t change how painful it can be to program an otherwise simple mechanism that now has to be routed through a byzantine labyrinth of sub-systems. You end up finishing your trivial feature, but not without half-jokingly dreaming of redoing the site in hand-coded HTML.
This problem has gotten somewhat better with Drupal 8, as it aligns with the PHP community more generally, but there’s still so much domain knowledge associated with coding for Drupal, things you just have to know.
More broadly, the kind of problems you solve in Drupal are too often not fun. This might seem trivial, but I think this is hugely important. Without a good developer experience, eventually folks flee the system. This is partly a problem with frameworks in general, but with something as venerable as Drupal that tries to be all things to all people, it’s especially problematic.
If I were coding in raw PHP, using a library, or something else less opinionated than an entire framework, the problem I’d have to solve would be the thing itself.
For example, if you’re tasked with outputting an HTML table from an uploaded CSV file, starting from nothing, you have some interesting problems to solve. How will you accept user input? How will you parse the CSV file? What method will you use to render the eventual output?
While Drupal makes all of this much easier, the actual task you as a developer are left with is glue code. If there’s a problem, the problem will be with Drupal; getting the right hook, the right syntax, the right incantation for Drupal to be happy with what you’ve glued together. That’s not a fun problem.
Obviously, once you level up a bit in your Drupal programming knowledge, you start writing modules that provide a more logically direct solution; however, this takes a good amount of investment. Even then, much of your day-to-day will still be glue code.
I hope I have expressed myself well, that my love for Drupal and the community comes across, and that my criticisms are taken as fair and in the spirit of constructive feedback.
I look forward to working with Drupal now and in the future. Thanks for reading.
I had the privilege of giving a talk at QueensJS last week. QueensJS is a wonderful JavaScript meetup, with a really diverse and welcoming crew of folks attending.
I was pretty nervous about doing the presentation; I’ve been on stage a lot before, but this was the first time everything I talked about had to make sense, and, one would hope, be valuable to technical folks. It ended up going fine, and people seemed to enjoy it.
My talk was on a point and click adventure game I am working on for the web. You can check out my slides here.
Welcome, Internet folks! If you’ve been following along, this is the final post in my three-part series on Internet security and privacy. In this article, we’re going to get into improving and—just as importantly—understanding Internet privacy.
If you’re unsure about what privacy on the Internet is, please read my first article in the series. Of the three posts, this one is the longest and least focused. This “cram-it-all-in-part-three” approach will be familiar to anyone who watched The Lord of the Rings: The Return of the King. While this article is similar to that movie in almost every respect, I will try to keep false endings and ghost armies to a minimum.
Don’t have time to read a long-ass article? Skip straight to the collected resources.
Why should you care about Internet privacy? Unless you’ve been burned by your loose digital info, it’s hard to feel like it’s important. Especially when ignoring it is so easy and rewarding: you get real benefits from the services you surrender your data to.
However, there is also real harm that comes from lack of privacy. Even if you think you have nothing to hide, you probably don’t mean that.
If nothing else, much of the personally identifying information collected about you is done invisibly. That disenfranchises you, the user, by removing your control over your data.
After that Orwellian last paragraph, what can we actually do to take back our online privacy? Below is an explanation of some methods to do just that, as well as the types of invasions they protect against.
The first step to better privacy is understanding the lay of the land. It’s nothing fancy, just common-sense economics.
If you’re not paying for the product, you are the product.
That is to say, no for-profit company is giving away their service. If you aren’t paying them directly, they still need to make their money. Given how valuable your data is, it’s very likely that any service you get for free is reselling your personal data.
Keep that in mind as we delve into the specifics.
Almost every website contains hidden code that tracks your interactions, collecting an incredible amount of personal data. For example, the most popular web analytics tool by far, Google Analytics, collects your usage habits across every site it is installed on to create a centralized profile of who you are and what you like.
The only way to prevent this type of data collection is a good blocker. A blocker can also give you insight into what types of scripts are trying to track you, shining a light on the previously hidden.
There are a lot of blockers on the market, some of the most popular being AdBlock Plus and Ghostery. Do not use these blockers. While they are reasonably well-functioning tools, they are also businesses that offer free services. What are you giving up to use these tools?
AdBlock Plus lets advertisers pay to unblock their ads, bringing with them whatever tracking they include, a common business model among ad blockers. Ghostery actually resells the data it collects about you to pay its bills, pretty much defeating its whole purpose.
This section is called “Recommended blockers”, not “Watch out for shitty blockers”, so which ones should you actually use?
Privacy Badger is a browser plugin with a large, community-run database of trackers to block. Free to use, with a simple interface, it is run by the non-profit Electronic Frontier Foundation, a reputable Internet advocacy group.
However, it is only available for Chrome and Firefox, on desktop and Android. If you use a different browser, see the below options.
Note that when using Privacy Badger you should enable "Do Not Track" to make sure everything is blocked.
uBlock Origin is another browser plugin that blocks many different types of trackers, including ads. It is open source and run by a group of volunteers, and has the advantage of running on all major desktop and Android browsers.
The downside to this plugin is that its interface isn't as simple as Privacy Badger's.
Note that there is another plugin, confusingly named uBlock, which has nothing to do with uBlock Origin. Use uBlock Origin.
Because they are a little buried, here are the download links for Chrome, Firefox, Edge, and Safari.
For iOS, Firefox Focus is hands down my recommended blocker.
Run by the non-profit Mozilla Foundation, Firefox Focus is both a private browser by default (but see above), and ad/tracking blocker.
While the private browsing is nice, the blocker is the real reason to download. Even if you don't use the Firefox Focus browser itself, the blocker can be integrated into Safari, so you won't have to change your browsing habits.
Yes and no.
Yes, in the short term, blocking ads will hurt a business’s bottom line. However, the tech industry is a place where failing to innovate is a death sentence. If a company cannot adapt to shifting user demands (in this case, not being tracked), they weren’t going to make it much longer, anyway.
That’s right, the private browsing, or incognito mode, in your browser isn’t really private. What is this foul deception, you cry? Have you been lied to? Not really, but the name only tells part of the story.
Private browsing does not record history, and has a separate database for cookies and other web storage. This does not prevent your data from being collected by outside parties. It just means other people who use that same browser on that same computer won’t be able to see what sites you’ve visited.
A quickly growing number of websites use an encrypted connection, what’s called HTTPS. This makes it very difficult for someone to snoop on the data you send back and forth from your browser to a website while it’s in transit.
Many browsers (but not IE/Edge and Safari) display a lock icon when the connection uses HTTPS.
While not all websites support HTTPS on every page, you can force HTTPS by using a browser plugin called HTTPS Everywhere.
HTTPS deals only with connections, so while it protects your data while it's being sent, its encryption ends once your data reaches its destination (either your computer or the server). While you at least have control over your own device, you need to trust that the site you're sending your data to is protecting it properly.
Have something sensitive you want to send, like a credit card number, or your old live journals? Don’t use email; it’s the Internet equivalent of sky-writing. Not to mention, popular email solutions like Gmail read all your email to build a consumer profile on you to deliver to buyers and governments.
Instead, use an encrypted messaging platform. The most secure at the time of this writing is an application called Signal, which comes highly recommended. Signal is run by a non-profit, and uses open source, regularly audited security practices.
WhatsApp is also a fairly good alternative, and it actually uses some of Signal's technology under the hood. However, it's not as private as Signal, since WhatsApp shares some data with its parent company, Facebook.
Free public Wi-Fi is usually free because the providers track everything you do on their networks and then sell that information. Library networks are also monitored by the FBI. Unless you subscribe to a VPN or use the Tor browser, don't do anything on a public Wi-Fi network you don't want every corporation to know. Yeah, even that one you hate. You know the one.
If you have any questions on any of this, please don’t hesitate to reach out to me on any of the social networks I use.
I’d like to take a moment to thank Ania Stypulkowski, who has provided all the lovely artwork you’ve seen throughout these posts. Please visit her interwebs places.
Surprise!
👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻👻
Ghost army.
You have a list of items that you need to render with comma separation, and an “and” at the end.
For example: “Cookies, rice, and farts”.
This display is traditionally done in business logic, creating more complexity than this simple output warrants.
We can do this in pure CSS instead, using well-supported pseudo-classes and generated content. Behold!
<ul class="list">
<li class="item">Cookies</li>
<li class="item">rice</li>
<li class="item">farts</li>
</ul>
/* Boilerplate to inline the list. */
.item {
list-style-type: none;
display: inline-block;
}
/* Add a comma after each item. */
.item::after {
content: ', ';
}
/* Add the word "and" between the last two items. */
.item:last-of-type::before {
content: ' and ';
}
/* Remove the comma after the last item. */
.item:last-of-type::after {
content: '';
}
The beauty of this solution is that it's simple, idiomatic, and declarative, and it has the added benefit of taking a hard stand on Oxford commas.
I use this solution here on this site for the “Topics” display. If it worked for my little blog, it’ll work for anything else!1
The opinions expressed on my blog are not necessarily endorsed by me. ↩︎
Welcome internet traveler, you’ve found your way to my guide on how to improve your online security. Already feel confident in your security abilities? Then get lost! I don’t need that kind of arrogance around here, especially when I’m trying to sound smart. The rest of you nice, humble people, read on.
No discussion of security can be complete, so I have focused on the most common threats that the majority of folks will encounter and easy and effective solutions for them. However, despite what diet commercials tell us, nothing worthwhile comes without effort. If you’re serious about improving your security, it will take at the very least some focus and attention.
Are you a person? If so, chances are good that your idea of what’s going to kick your ass online is wrong. How’s that for an introduction? Always lead with insulting your audience, I always say (to an empty room).
But I digress.
Many folks implicitly believe that they just need to stay ahead of a sweaty teenage anarchist who’s real good at guessing passwords, and their accounts will be safe. Or, if you’re on the more paranoid side, you might think you need to keep away from some shady hacker collective that has a personal vendetta against you.
Both of these scenarios, and most likely that other one you're thinking of right now, don't really happen much, if at all. No one is sitting there guessing your password (no human, at least).
So what does happen?
The vast majority of the time, hacks to your online life come in three forms:
- Automated brute-force attacks that guess your password
- Cracking of passwords exposed in large-scale database leaks
- Scams that trick you into handing over your credentials yourself
In the first two cases, the attacks rely on a high degree of automation, using a computer to randomly guess millions of passwords until they find yours. Most passwords can be cracked in this way in seconds. Attackers aren’t targeting you specifically, instead hitting thousands, millions, or even more than millions of accounts at once. Even in the third case, the attacker is trying countless people with the same trick, looking for that perfect sap. Yes, even when you get hacked, you’re just a number.
If you already knew that, please see my introductory paragraph (geez).
Okay, here it is, what you’ve been waiting for: actual steps to make your digital life more secure. Before we dive in, a few words on how to implement these new strategies, taken from experience.
I recommend making improvements incrementally, strategy by strategy, account by account. It can be tempting to try to “solve” security all at once; however, this approach is usually overwhelming, and often leads to burnout. Ultimately, that means less security, not more.
A more measured, incremental approach is many times more successful, and more accurately reflects that security isn’t an end-goal, but rather a continuing process. Thems the breaks.
Two factor authentication just might be the biggest bang for the buck in terms of boosting your internet security.
Two factor authentication is an additional layer of security on top of your password. It works by requiring a code sent to another device—like your phone—via text, an app, or voodoo magic (still in beta). This code regenerates every few minutes, making it very hard to hack without your second device.
This means that if an attacker cracks your password, they still need to get that time-sensitive code from your phone to get into your account. This is usually a big enough deterrent that the attacker stops right there.
Many popular sites support two factor authentication, but you might want to start with the biggest ones, and any site that handles your money.
The big tech giants all offer comprehensive guides on how to setup two factor on their platform, the links for which are listed below, because I’m nice.
You can see an extensive list of services that provide two factor authentication over at 2fa.directory.
Admit it, your password sucks. It uses words from the dictionary, doesn’t use special characters, and is just your social security number. Well, good news; you’re not alone.
A data analysis company, SplashData, reviews publicly leaked databases for the most popular passwords every year, and every year, there’s a common trend. See if you can spot it. Here’s the top five of 2016:
Yep, they’re all terrible. And lazy. And can be cracked instantly. Why? Partially because they’re simple for a computer to guess, but mostly because they’re public knowledge to attackers, too. Their programs can guess these first.
The problem is, a strong password, one that takes way too long for a computer to guess to be worth a hacker's time (think years), is really hard for a human to remember. So what's the solution, other than replacing passwords entirely (which will happen, but not for a while)?
I can hear the groans already.
“I don’t wanna use a password manager, it’s so much woooork!” You scream hypothetically, writhing in privilege.
“Besides,” you whine, “I already turned on two factor authentication, I’m fine!”
Well, congratulations, person-I-made-up-for-convenience, for turning on two factor authentication, but if the second factor is the only thing reasonably protecting your account, it's really just single factor authentication, which defeats the purpose. Passwords, however unfortunately, still matter.
How do password managers improve your security? They leverage the same thing attackers use to crack your accounts to protect them: a computer. Can’t remember 200 unique and secure passwords for all your online accounts? I don’t blame you. So let the computer do what it does best: hard, boring crap humans are bad at. Like generating and remembering lots of passwords.
Don’t want to take the plunge into a password manager, or want to know how to write a better password for your password manager? There are many different strategies out there, but the one that is the most human-friendly while balancing strong security is not using a password at all, but rather a passphrase.
A passphrase is just what it sounds like, a whole sentence instead of a word. While longer than a password, they’re easier for a human to remember than some random string of characters, and many times easier even than a bad password. We think in sentences, not in individual words.
It should be pretty long and, if you can manage it, contain upper- and lowercase letters, numbers, and special characters, but it can be more natural to remember.
For your next password, instead of something like `password`, try something like `This is my really gr8t passphrase, you weirdo goof balls!`1
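To get a feel for the idea, here's a toy sketch (assuming GNU `shuf` is available, as on most Linux systems). It is not a real generator; the word list is tiny and hard-coded, and a proper tool would draw from a large dictionary and mix in numbers and symbols:

```shell
#!/bin/sh
# Toy passphrase sketch: pick four random words from a tiny built-in list.
# A real generator would use a large dictionary, e.g. /usr/share/dict/words.
WORDS="weirdo goof balls cookies rice passphrase pony hills sun night"
PASSPHRASE=$(shuf -e -n 4 $WORDS | xargs)
echo "$PASSPHRASE"
```

Even four common words make a phrase far longer than most passwords; capitalize and punctuate to taste.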
Think you’ve been the victim of a hack? Suspicious that your private details might be leaked to the general public? Read this far and are somehow not concerned about this?
Regardless, how can you really be sure whether you have been hacked, or which accounts have been broken into? That's where a very useful personal project by a Microsoft engineer comes in handy: Have I Been Pwned.
The site searches through all the publicly leaked databases from big sites that have been hacked. You just enter your email address, and it will come back with all the compromised sites that that email appears in. In all likelihood, there will be multiple results (there were around 9 for me).
This should give you a better idea what passwords to change first.
You can even sign up for email alerts if your email address is found in future data breaches. It's all around a great service, and it's free (although the author accepts donations).
Please note that this site does not hack sites; it just aggregates data that has already been publicly leaked. As such, it is very ethical, but it also won't know about a security breach that isn't publicly known, which can sometimes take years after the event itself.
Software developers and attackers are in a constant arms race to find holes in software, and depending on which side you’re on, plug them or poke through them. In that way, it’s a lot like the E.R., except in addition to the doctor patching victims up, there’s another guy poking their wounds. Not my best analogy, I admit.
Anyway, it’s important to stay on top of software updates to your operating system (Windows, macOS), and your other installed applications. Luckily, this is now pretty painless. Just click the button. Yay, you won!
The last important thing to do to improve your security is to be careful of being tricked into giving up your account information.
There are countless scams out there, and many of them are easy to see through, like a message from someone you don’t know asking for your login to help their sick grandpa (somehow, someone keeps falling for this kind of stuff…).
However, some scams have become more sophisticated, usually masquerading as a legitimate source, like Google or your Uncle Steve. Sometimes, it’s easy to tell, like if they outright ask for your password (a Google or an Uncle Steve would never do this), but others are trickier.
One scam going around was a page that looked exactly like the Google login page, which tricked users into entering their password and even their two factor authentication code to a malicious dingwad. How can you tell the difference?
Consider these safe examples:
https://www.google.com
https://whoops.google.com
And these unsafe examples:
https://google.whoops.com
https://whoops.com/google
What makes them different?
Any words coming before the domain you expect (in this case, `google`), separated by dots, are subdomains, and are owned by the same company that owns the domain. The domain is the word that comes right before the top-level domain, e.g. `.com`, `.net`, or `.pizzarat`. Using this logic, we can highlight the main domain in each of our examples.
Highlighted safe examples:
https://www.**google.com**
https://whoops.**google.com**
Highlighted unsafe examples:
https://google.**whoops.com**
https://**whoops.com**/google
And voilà! Now you too can play Whoops vs Google! It’s great at parties.
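If you'd rather let the computer play Whoops vs Google for you, the same logic can be sketched in a few lines of shell. This is a naive illustration only; real URL parsers must handle multi-part top-level domains like `.co.uk`, which this deliberately ignores:

```shell
#!/bin/sh
# Naive sketch: extract the "main" domain (the label right before the TLD).
main_domain() {
  # Strip the scheme and any path, then keep the last two dot-separated labels.
  echo "$1" | sed -e 's|^[a-z]*://||' -e 's|/.*$||' | awk -F. '{print $(NF-1)"."$NF}'
}
main_domain "https://whoops.google.com"   # google.com  (safe)
main_domain "https://google.whoops.com"   # whoops.com  (unsafe)
```

Run it against the examples above and the “owner” of each link pops right out.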
Perfect! That means you don’t already know all this stuff, and I stand a good chance of sounding smart.
I don’t host comments on this site, but feel free to contact me from the social media networks listed via the icons below, or on my about page.
Obviously, don’t actually use this passphrase. ↩︎
Update: The next article in this series is now published.
Confused about what privacy versus security means in the context of the internet? Haven't heard these terms before, but are intrigued nonetheless? Not interested in any of this, but are inexplicably still reading? Whatever your deal is, welcome—you've found the layperson's guide to all this crap!
But first, a big, pink cat.
First, let’s start with security. We all have a decent grasp of what security means in the physical world—keeping bad folks out of your stuff—but what does it mean online?
The basic principles are the same; security means defense against unwanted—and usually criminal—intrusion. In the physical world, that would mean access to your house or apartment (or pockets), to steal your cash, jewelry, credit cards, or vintage My Little Pony collection. On the Internet, it would mean access to your various online accounts, to pilfer your emails, photos, banking information, or your really embarrassing emails, photos, and banking information, like the time you bought that vintage My Little Pony collection.
In a nutshell, Internet security is the set of measures that prevent unwarranted access to your digital life.
Okay, so we know what Internet security is (as long as I did an okay job of explaining above), but what about privacy? How is that different?
Internet privacy is the defense against unwanted—and usually legal—intrusion by trusted entities.
Behind that jargon is this simple explanation: companies profit from your personal information, the depth and breadth of which you might not realize is being collected. This usually comes in the form of trackers hidden on most sites which aggregate personal information about you. This information will usually be re-sold any number of times.
In the physical world, this would be like if General Motors hired a private investigator to trail you everywhere you went, and report back every piece of information about you that they could. And that private eye had a personal cloaking device. And then sold that data to Comcast, the federal government, and that Free Home Loan guy. And then everyone else.
Privacy concerns are arguably more important than security concerns, simply because they are more invisible to the general public.
In both cases, for security and privacy concerns, the answer is the same: they are after data. This is the information age; data is the most valuable commodity on the Internet (and everywhere else, for that matter).
Anyone who can get a piece of your data can make a profit, and lots of times that payout can be immense. This gives both legitimate, legally-operating corporations, as well as criminals, the same powerful incentive: collect as much of your data as possible.
No, not necessarily, but it depends on your comfort level. Criminal activity is obviously always a bad thing, but you may be okay with certain uses of your private information by corporations or others. Whatever you decide, choosing to share your data should be just that: your choice.
Yep!
Over the next few days, I will be publishing two new articles, each dealing with simple but effective ways to improve your online security and privacy, respectively.
These articles also will get into why security and privacy are important, although hopefully this is already self-evident. In addition, they will cover in more detail the types of attacks, schemes, and dangers that affect the common person, especially as the Trump administration ramps up.
I will update this article with the new links as they are published, so if you’re reading this in the future, you can click them now! Otherwise, just chill.
Update: It is the future—the next article in this series is published.
Great! I’m happy to answer any questions you might have. I don’t host comments on this site, but please feel free to reach out on any of my social networks listed via the icons below, or on my about page.
A quick, fun tip for Mac and command line users who are fans of The Lord of the Rings;
Mac OS X ships with the BSD command line calendars, which show important dates in history for various interests. Among the likely candidates of holidays, famous birthdays, and music, there’s a little easter egg for all the fantasy nerds out there (and let’s be honest, if you know how to work a terminal, chances are you’ve read a fantasy novel or two). That’s right, important dates in The Lord of the Rings.
Someone must have spent a good amount of time in the appendices to figure out these forty-plus events. To see if today is one of those events, just run this command:
cat /usr/share/calendar/calendar.lotr | grep $(date +"%m/%d")  # bash/zsh
cat /usr/share/calendar/calendar.lotr | grep (date +"%m/%d")   # fish
Or maybe you'd like to set up a function that calls the command for easy use, or to add to your login message? While we're at it, why don't we throw in a few more calendars as well.
Update: I added `-A 0` to the `calendar` command, which limits the display to only events that happened on today's date, instead of tomorrow and yesterday as well.
today() {
calendar -A 0 -f /usr/share/calendar/calendar.birthday
calendar -A 0 -f /usr/share/calendar/calendar.computer
calendar -A 0 -f /usr/share/calendar/calendar.history
calendar -A 0 -f /usr/share/calendar/calendar.music
calendar -A 0 -f /usr/share/calendar/calendar.lotr
}
function today
calendar -A 0 -f /usr/share/calendar/calendar.birthday
calendar -A 0 -f /usr/share/calendar/calendar.computer
calendar -A 0 -f /usr/share/calendar/calendar.history
calendar -A 0 -f /usr/share/calendar/calendar.music
calendar -A 0 -f /usr/share/calendar/calendar.lotr
end
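Since both versions repeat the same call five times, the bash function can also be written as a loop over the calendar names. This is just a sketch; it assumes the calendar files live at the usual `/usr/share/calendar` paths:

```shell
# Loop variant of the bash function above.
today() {
  for cal in birthday computer history music lotr; do
    calendar -A 0 -f "/usr/share/calendar/calendar.$cal"
  done
}
```

Adding another calendar is then a one-word change to the list.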
The other day I had some time in between work and an improv show I was doing that night, so to pass the time, I created a simple web experiment in CSS.
I call it, Day and Night.
The quick pitch is that you can control the sun with your mouse, and if you bring the sun down behind the hills in the foreground, the scene changes from day to night.
The experiment is written entirely in HTML, CSS, and JavaScript. You can check out the code on GitHub.
The HTML is very simple. There’s a div for the sun, and both hills, and the body serves as the sky.
<body id="sky" class="sky">
<div id="sun" class="sun"></div>
<div class="hills-wrapper">
<div id="hill-background" class="hill-background"></div>
<div id="hill-foreground" class="hill-foreground"></div>
</div>
</body>
Next, I stuck the hills to the bottom of the page.
.hills-wrapper {
position: relative;
}
.hill-background,
.hill-foreground {
position: fixed;
width: 100%;
bottom: 0;
}
I made use of `border-radius` to make the hills look like, well, hills, instead of color swatches, in addition to the sun, to make it look round instead of dumb.
I wrote a color scheme for day and night, taking advantage of the Compass `shade` mixin as much as looked good. I also made use of adjoining classes to define the night colors (stay tuned for how they're utilized in a moment), to a) keep things clean and easy to read, and b) flip a big ol' bird to IE6 (seriously, why does CSSLint still consider them a bad practice? Let IE6 die!).
/* variables */
$night_sky : #001f3f;
$night_clouds : shade($clouds, 70%);
$night_grass : shade($grass, 70%);
/* classes */
.sky.night { background-color: $night_sky; }
.clouds.night { background-color: $night_clouds; }
.hill-background.night { background-color: darken($night_grass, 3); }
.hill-foreground.night { background-color: $night_grass; }
Moving on to the JavaScript, I first attached the sun div to mouse movement. Hello `addEventListener` firing on `mousemove`.
// Mouse move event listener.
document.addEventListener('mousemove', function(e) {
var top = e.clientY - 100,
left = e.clientX - 100,
aboveTheHills;
// Stick the sun to the cursor.
sun.setAttribute('style', 'left:' + left + 'px;top:' + top + 'px');
Finally, I needed to detect if the sun was below the hills. The method I came up with isn't perfect, since the sun can still be peeking out from behind the rounded hills, but it would be way too hard to teach the DOM how to “see” whether the sun was visible.
However, my solution does consistently change from day to night, no matter the screen size, with the power of math!
// Detect night.
aboveTheHills = window.innerHeight - hillBg.clientHeight + (sun.clientHeight / 2);
if (e.clientY > aboveTheHills) {
// It's night.
sky.className = 'sky night';
hillBg.className = 'hill-background night';
hillFg.className = 'hill-foreground night';
}
else {
// It's day.
sky.className = 'sky';
hillBg.className = 'hill-background';
hillFg.className = 'hill-foreground';
}
Finish it off by adding a `background-color` transition to all the affected elements, and you get to be the god of the sun!
Jekyll is a tool for static site generation, and it's what powers GitHub Pages; the two generate and host this site, respectively.
It's a fantastic tool for hackers to create simple and fast (see static site, above) blogs using minimal configuration, the Liquid templating engine for layouts, and Markdown for posts.
I had previously used WordPress as my blogging platform of choice, like so many others, and came to Jekyll because of its simplicity, its speed, and its hackability, not to mention that as a GitHub user, you are entitled to a free Jekyll (or static HTML) site hosted on their blazing fast setup.
One minor pain point for me is that when creating a new post in Jekyll, you need to create a new file following the convention `year-month-day-title-of-your-post.md`. In addition, each post needs some metadata at the top of the markdown file.
Now, this could all be typed out each time without too much time being eaten up, but we’re programmers, damn it, so why do things manually when they can be reasonably automated? To the shell!
#!/bin/bash
# Create a new post for a Jekyll blog.
cd /path/to/your/jekyll/repo/_posts
FILETITLE=$(echo "$1" | tr " " "-" | tr '[:upper:]' '[:lower:]')
POSTDATE=$(\date +"%Y-%m-%d")
POSTNAME="$POSTDATE-$FILETITLE.md"
POSTBODY="---
layout: post
title: $1
date: $POSTDATE $(\date +"%H:%M:%S")
summary:
categories:
---"
cat <<EOF >> "$POSTNAME"
$POSTBODY
EOF
open "$POSTNAME"
This code is also available as a gist.
To use the script, just download it, modify the path on the `cd /path/to/your/jekyll/repo/_posts` line to point to your Jekyll installation's `_posts` directory, and save the script somewhere in your `PATH`.
Now, if you saved the file as `new_post`, for example, you can call it like `new_post "My Sick Blog Post"`, and an appropriately named file with most of the metadata filled out will appear in your `_posts` directory.
Not only that, but it will immediately launch the default editor for markdown files, so you can start writing straight away!
Enjoy.
When a developer moves to a new company, one of the biggest transitions is adapting to the new code base. While most companies will be understanding of newcomers making their way into the rigorous complexities of unknown machine-speak, that slack isn’t limitless, and it’s important to understand the code quickly. It is, after all, the point of the developer’s employment.
However, understanding anything, much less all the working parts of a production level code base, takes time. You get there through practice; “front loading” your brain by reading all the code up front may work for some - and let me be clear, it certainly can’t hurt - but for me, I only get a top-glaze of understanding. Real knowledge comes from doing. That is, working with the code.
But what happens when you are still building your knowledge of the code base, and you have to fix something? Or build a new feature? Or take something out? Thankfully, we work with computers, and we have many great tools at our disposal to quickly find the code we’re looking for. And, as an added bonus, these tools can be used any time, even when you’re not new. If that’s not future proof, I don’t know what is.
Spanfeller Group is a Drupal shop, so some of the examples are specific to that platform. We also use git and mostly run Unix-like platforms (Mac, Linux), and all the examples use the Unix Command Line. If you’re on Windows, I would highly suggest installing cygwin. If the thought of using the command line gives you the creeps, I would humbly ask that you read one or two examples before running for the hills; you may have a change of heart. If not, the hills will always be there.
So without further ado, let's dive into some killer ways to find what you're looking for, fast.
Anyone who has been even briefly introduced to their inner CLI nerd probably knows about grep, the Swiss Army knife of searching within multiple text files. With a fairly straightforward invocation, you can search recursively through directories, and get line numbers to boot.
grep -rn [pattern] .
However, it's 2013, and we've made some serious progress as a file-searching society. Enter git grep. As you might think, it only works inside git repositories, but it has many distinct advantages over traditional grep. For starters, it's faster. A lot faster. Because it uses git's internal file index, searches are blisteringly speedy. It also does not search inside `.git` directories, or any patterns matched by `.gitignore`, by default, which really cuts down on junk results. Also by default, git grep is recursive, so you won't have to throw in an `-r` switch, and it comes with color results turned on, so you can more easily see your matches.
So to search your repository for a preprocess function called “mytheme_preprocess_node”, you would run the following:
git grep -n "mytheme_preprocess_node"
The `-n` switch turns on line numbers for the search matches. To turn them on permanently, run this command from your terminal:
git config --global grep.lineNumber true
Now you can run the `git grep` command above without the `-n` switch and still get line numbers. Some other useful switches are `-i`, which makes your search case-insensitive, `-I`, which ignores binary files, and `-p`, which shows the function name of the match. This last one is smart, knowing whether the match is the function name itself or inside a function, and prints the information accordingly.
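git grep also accepts a pathspec after `--`, which lets you scope a search to (or away from) particular files. The file patterns and the excluded path below are made up for illustration:

```shell
# Search only .module files (quote the glob so the shell doesn't expand it).
git grep -n "mytheme_preprocess_node" -- "*.module"

# Exclude a directory from the results (the path here is hypothetical).
git grep -n "mytheme_preprocess_node" -- ":!sites/all/libraries"
```

Handy when you already know roughly where (or where not) the code lives.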
`git grep` is great, but what if you need more flexibility? Or you need to find something outside of a git repo? You could go back to `grep`, or you could use ack. Ack is grep, but built for programmers. Like `git grep`, it is recursive and uses colors by default, and it also uses the Perl regex engine, which many people find more powerful and intuitive than standard grep regex. It likewise ignores `.git`, `.svn`, and `.hg` directories, and binary files, among other unwanted data that would normally be searched. While not quite as fast as git grep, it is still much faster than regular grep.
Installing ack is easy. You can install it from CPAN, through Homebrew or MacPorts on the Mac, or through the package manager on major Linux distributions. A caveat for Debian and derivative distributions (Ubuntu, Mint, etc.): the package is called ack-grep, so to keep your fingers typing less, you can rename the package locally with this command:
sudo dpkg-divert --local --divert /usr/bin/ack --rename --add /usr/bin/ack-grep
Using ack is simple:
ack "mytheme_preprocess_node"
The results will be returned in color, separated by file, and with line numbers by default.
Unlike grep or git grep, ack searches work off of a file-type whitelist, only searching the files that appear in that list. Ack ships with pretty sane defaults, including a wide range of programming languages. However, to tell it about Drupal-specific file types, as well as some relatively newer types such as Sass (and Python, for some reason), you'll need to do a little configuration. Nothing heavy; just copy the below into a new file, and save it as `.ackrc` in your home directory.
# Custom types and abbreviations.
--type-set=py=.py
--type-set=sass=.sass
--type-set=coffee=.coffee
--type-set=module=.module
--type-set=info=.info
--type-set=inc=.inc
You can follow that pattern for any new file types you may want to add. Just like git grep, you can add the `-i` switch to ignore case, while `-l` just prints the file names containing the matches. The one ack option I use the most, without question, is the ability to limit searches by file type. This is incredibly powerful. Frequently, I find myself needing to search for a CSS class I found in the source code, but I don't want style definitions, just where it's being printed in the template. No problem! Just use ack to filter out CSS results.
ack --no-css "myclass"
You can also limit your search to a single file type.
ack --type=css "myclass"
Saves me tons of time, all the time.
So far, we’ve focused solely on searching inside files, and while that’s hugely important, it’s hardly the whole story. This is where folks not running Drupal can tune out.
If you do work with Drupal and aren't familiar with drush, you owe it to yourself to check it out. Most of its extensive feature set is outside the scope of this post, but in very brief summary, drush is a tool to maintain and manipulate your Drupal installation from the command line.
To install on Unix-like systems, you can use pear.
pear channel-discover pear.drush.org
pear install drush/drush
On the Mac, you can also use Homebrew, and it is in many Linux distribution’s package managers. There is also an installation guide for Windows.
Once you have drush installed, you can find valuable information about your Drupal instance with a few keystrokes. Navigate inside your Drupal installation (it doesn’t matter where), and run:
drush st
That will give you a high-level view of your Drupal installation, including Drupal version, database information, and currently enabled theme.
How about if you need to find out what modules are installed? Normally, you would have to sift through the modules page, looking for enabled checkboxes. With drush, you can cut to the chase.
drush pml | ack "Enabled" | less
The `pml` parameter lists all the modules currently available, and piping it to ack filters for all that are enabled. The final pipe to less just gives us a nice pager if our results are long. You can also filter on “Disabled” and “Not installed”.
The sites at Spanfeller run a lot of modules, some of which we probably don’t need. To get a count of how many modules are in the code base, I ran the following command:
drush pml | wc -l
Or, for only enabled modules:
drush pml | ack "Enabled" | wc -l
`wc` stands for “word count”, and the `-l` parameter counts lines. Subtract 4 from the final result, since the drush output adds some lines for formatting and information.
You can also quickly get more detailed information about the modules in your code base. Say you wanted to get information about the `views` module; you would run:
drush pmi views
For the module release information, use:
drush rl views
And for the module release notes, run:
drush rln views
One last convenient drush trick is using it to connect to your database CLI. Say you've figured out all you can from the code, and you need to poke around in the database. Normally, you would less or vi the settings.php file, find the database details, then run your connection command, something like mysql -umyuser -pmypass mydatabasename for MySQL.
With drush, you could run drush st to save you from manually looking at settings.php, but we can take it a step further.
drush sql-cli
Run that baby, and you will be automatically dropped into a SQL command line using the credentials from settings.php. Pretty sweet, right?
That’s about it. Hopefully these tips and tricks are useful to someone, and all the text-based command line goodness has convinced the GUI-inclined of its merits. If not, the hills are calling.
Yep, I’m going to do it. Why? Well, because I don’t feel comfortable with their privacy policies anymore. When I first joined back in the spring of 2004, facebook had a strict sense of user privacy. Here’s a press release circa 2004,
No personal information that you submit to Thefacebook will be available to any user of the Web Site who does not belong to at least one of the groups specified by you in your privacy settings.
Contrast that to their current policy, as of April 2010,
When you connect with an application or website it will have access to General Information about you. The term General Information includes your and your friends’ names, profile pictures, gender, user IDs, connections, and any content shared using the Everyone privacy setting. … The default privacy setting for certain types of information you post on Facebook is set to “everyone.” … Because it takes two to connect, your privacy settings only control who can see the connection on your profile page. If you are uncomfortable with the connection being publicly available, you should consider removing (or not making) the connection.
If this is how facebook feels, I must then choose not to make the connection. To me, their policies are clearly in service of the corporations they sell my information to; none of these “expanded” privacy policies help me. I do believe one of the primary functions of the internet is sharing, and it is a wonderful tool to do so. However, I don’t believe that this sharing should, or, as facebook would have us believe, must, come at the sacrifice of privacy.
My interests and opinions are now converted directly into objects suitable only for data miners’ and advertisers’ consumption. You can read more about this policy in this article.
Now, many may argue that privacy settings can still be set on facebook, and while I respect that opinion, I offer this refutation: first, facebook can change your privacy settings at any time without notice. Not only can they, but they have. Repeatedly. This may seem like a minor inconvenience, a simple trip to the settings page, but without notice, your data can be exposed for however long you remain unaware of the change. I know it happened to me. It’s also greatly discomforting to me to be a part of a company that treats me and my data in this way.
I recently did a privacy check of my facebook account with a tool called ReclaimPrivacy. It found most of my settings secure; however, friends could still “accidentally” share my information with corporations. I was unable to fix this problem. However, even if my facebook page had come back with a clean bill of health, I would still have quit on principle and for fear of the future: I cannot get behind a company that has such disregard for my privacy, and whose track record suggests a continual “big brother” decline. Conjuring Orwell may seem prosaic, but I cannot help but feel it is deserved.
So that’s my reasoning. Facebook is very convenient and useful in many ways, not to mention addicting; however, it’s also really evil. There, I said it. If you feel the same way, consider quitting facebook too.
As for alternatives, unfortunately there is nothing directly like facebook to switch to that has the same level of popularity. However, the Diaspora Project looks very promising, and there will be a release at the end of the summer. (Thanks to Jack Donovan for introducing me to it!) Also, there’s always twitter, which is very open about being public, so you know exactly what you’re sharing.
Good God, #Drupal7 alpha 4 is amazing! Cannot wait for a stable release so I can upgrade and start going out!!!
Original post: <twitter.com/chrisjohn…>
Folks, I have stumbled upon something so nerdy, so impressive, and so potentially useless that I had to share it.
That thing is ASCII Star Wars. Let me explain. Someone has animated the movie Star Wars Episode IV: A New Hope entirely with ASCII characters; in other words, with punctuation like @$&()_?”:}| etc. There is no sound, so all the dialog is shown in the form of subtitles. How cool is that???
Even if you don’t like Star Wars (you should all be ashamed of yourselves), how impressive is it that one guy had THAT much time on his hands to animate an entire movie with text? I am personally not acquainted with the animation methods of arranging text symbols into recognizable shapes and making them move, but I’m sure it must be tedious. I have not seen the whole thing (I intend to one of these days), but I hear that the movie ends when Princess Leia is rescued. However, the guy is apparently still working on it, so maybe in a couple of years we’ll have the whole thing, and if he’s REALLY committed, all the sequels. This might start a new movie-to-ASCII trend; thousands will work tirelessly to convert high-budget, multi-million-dollar Hollywood classics into text (imagine Titanic made completely out of $’s). Or maybe it’s a totally isolated incident, and no one else would spend years of their life doing this. It’s a toss-up. Predictions aside, if you would actually like to SEE ASCII Star Wars, here’s how:
Open a command prompt or terminal and type telnet. This will start a telnet prompt, which looks the same as a DOS prompt. Then type:
open towel.blinkenlights.nl
…or you could just click here for the web version, but it’s not as cool.
And there you have it; type it, watch it, love it (or maybe just the first two, I don’t know).
Rejoice in the nerd!