In the early to mid-90s, I ran a BBS out of my bedroom. It wasn’t very popular, but I did have a lot of time on my hands and I spent a lot of that time modding it. Initially, I wrote mods for myself, but eventually I started releasing them to other people. As a teenager, this was the first software that I wrote that other people used. This was my contribution to “the scene” and now my only claim to fame from that time is that I’m part of the final member list of ACiD Productions under BBS Modifications.

During this era, I asked one of my users to create some ANSI art for me:

me asking someone to create ansi art

And they agreed!

user agreeing to create ansi art

11 days later, they uploaded the ANSI art to my BBS.

delivering the ansi art

You may be wondering what the deal is with these Windows XP-era screenshots. Well, I used to back up my BBS with a Colorado Tape Backup drive. Here’s a picture of one I found on the Internet:

colorado tape backup

In 2001, I found a 250 MB tape that was the last backup of my old BBS and restored it to my computer. Feeling nostalgic, I logged in to it locally and took a few screenshots. When I first logged in, it displayed the ANSI art that my user had made for me:

screenshot of ansi art

It scrolled by pretty fast, so I took a series of screenshots of it and then used Photoshop to combine them. I made a little webpage for it so I could look at it every now and then and went on with my life.

Shortly after this, my hard drive died. I think it was an IBM Deathstar. I have no idea what happened to the 250 MB tape, but I think it’s safe to say that even if I do find it, it probably isn’t going to work.

Luckily, thanks to the power of the web, I still have these screenshots. Unluckily, I never actually uploaded the original ANSI file, so I don’t have that. Every now and then I’ll search for it on the Internet, but I’ve concluded that it never made it into an artpack and thus I had the only copy of it, until I didn’t.

It’s not the most beautiful ANSI art, but it is something that someone made for me and I’ve always been a little bummed that I can’t look at it in one of the many ANSI viewers (or DOS emulators) that exist today.

Let’s fix that!

The first thing to understand about ANSI art is that it combines two things: characters from IBM Codepage 437, and ANSI escape sequences that do things like change colors and move the cursor around.

I found this extremely handy page that shows all the different characters in Codepage 437.

Codepage 437

Characters #0 - #31 are control characters, and #127 is DEL, so we can ignore those. The rest are used in ANSI art, although the shade blocks and half blocks are predominantly used.

To type one of these weird characters on an IBM PC, you would hold down ALT, type the character code on the numpad, and release ALT to make the character appear. But most artists used a program like TheDraw or ACiDDraw to design their art.

Speaking of TheDraw, let’s take a look at the color selection screen from it:

TheDraw color selection screen

There are sixteen foreground colors and eight background colors. Ignore 16-31, I captured this screenshot mid-blink.

Changing the foreground and background colors and writing the characters from Codepage 437 produces the ANSI art that we know today:

LU-Holiday.ans by Luciano Ayres from Blocktronics Blockfury

To accurately display ANSI art, it’s important to use an appropriate IBM PC font, like this one. This ensures that the art looks the way the artist intended.

The conversion strategy I came up with was to split the screenshot into individual character cells. For each cell, generate every possible permutation of background color, foreground color, and Codepage 437 character, compare it to the cell from the screenshot, and pick the one that is most similar.

There’s probably a lot of different ways to do this, but I figured the easiest would be to make a webpage and use the Canvas API.

As a test case, I used a program called ansilove to take an existing .ANS file and generate a PNG of it. It even came with an example ANS file:

example input

The image is 640x464. Assuming an 8x16 font, this means it contains 80x29 characters.

We create a canvas and load the image into it:

const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d", { willReadFrequently: true });

const img = new Image();
img.addEventListener("load", (e) => {
    ctx.drawImage(img, 0, 0);
});

img.src = 'input.png';

Next, we need a list of the foreground and background colors. I found that iTerm2 has a color scheme for CGA that looks accurate, so I loaded it into iTerm and extracted the hex codes from it, double-checking against TheDraw’s color picker. This gave me two lists:

var fgColors = [
    "000000",
    "aa0000",
    "00aa00",
    "aa5500",
    "0000aa",
    "aa00aa",
    "00aaaa",
    "aaaaaa",
    "555555",
    "ff5555",
    "55ff55",
    "ffff55",
    "5555ff",
    "ff55ff",
    "55ffff",
    "feffff"
];

var bgColors = [
    "000000",
    "aa0000",
    "00aa00",
    "aa5500",
    "0000aa",
    "aa00aa",
    "00aaaa",
    "aaaaaa",
];

Now we need all the characters in Codepage 437 to loop over. Luckily, Unicode publishes a text file that maps every CP437 code to its Unicode equivalent.

I took this text file, removed the control characters and DEL, and created an array of the Unicode counterparts.
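Building that array can be sketched like this - assuming the tab-separated layout of Unicode’s mapping file (CP437 code, Unicode value, comment), which is my reading of the file rather than the exact code from the page:

```javascript
// Build the `chars` array from Unicode's CP437 mapping file, where each
// data line looks like "0x41\t0x0041\t#LATIN CAPITAL LETTER A" and
// comment lines start with "#". (Format assumed from the mapping file.)
function buildCharTable(mappingText) {
    var chars = [];
    mappingText.split("\n").forEach(function (line) {
        if (line.startsWith("#")) return;       // skip comment lines
        var parts = line.split("\t");
        if (parts.length < 2) return;           // skip blank/malformed lines
        var code = parseInt(parts[0], 16);      // CP437 code
        var codepoint = parseInt(parts[1], 16); // Unicode equivalent
        if (code < 32 || code === 127) return;  // drop control chars and DEL
        chars.push(String.fromCharCode(codepoint));
    });
    return chars;
}
```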

Then I created another canvas, looped over each background color, foreground color, and character, and wrote them to the canvas using the IBM PC font.

const char2canvas = document.getElementById("char2");
const char2ctx = char2canvas.getContext("2d", { willReadFrequently: true });

char2ctx.fillStyle = "#000000";
char2ctx.fillRect(0,0,8,16);

var imgData = [];

for (var i = 0; i < bgColors.length; i++) {
    for (var j = 0; j < fgColors.length; j++) {

        if (bgColors[i] == fgColors[j]) {
            continue;
        }

        for (var k = 0; k < chars.length; k++) {

            char2ctx.fillStyle = "#" + bgColors[i];
            char2ctx.fillRect(0,0,8,16);

            char2ctx.fillStyle = "#" + fgColors[j];

            char2ctx.font = "16px xx437";
            char2ctx.fillText(chars[k], 0, 12);

            if (!imgData[i]) {
                imgData[i] = [];
            }

            if (!imgData[i][j]) {
                imgData[i][j] = [];
            }

            imgData[i][j][k] = char2ctx.getImageData(0,0,8,16);

        }
    }
}

For some reason I had to offset the fillText by 12 pixels to get the character to land where I expected. This turns out to be a Canvas quirk rather than CSS: fillText positions text at its alphabetic baseline by default, so the y coordinate is the baseline, not the top of the cell. After each character is written, we store the image data of the result in a lookup table, keyed by background color, foreground color, and character.

Next, we loop over each character section of the original image’s canvas and extract the image data of this section:

for (var y = 0; y < 29; y++) {         // 29 rows of characters in the 640x464 image
    for (var x = 0; x < 80; x++) {     // 80 columns
        char1ctx.drawImage(canvas, x*8, y*16, 8, 16, 0, 0, 8, 16);
        var imgChar1 = char1ctx.getImageData(0,0,8,16);
        // ...compare imgChar1 against the lookup table here
    }
}

I found a library called pixelmatch that compares two sets of ImageData. It returns the number of mismatched pixels and, if you want it, a diff of the two.

var result = pixelmatch(imgChar1.data, imgChar2.data, null, 8, 16, {
    threshold: 0.1,
});

So then we can loop over every permutation of background color, foreground color, and character and compare it to the image’s character and pick the one that has the lowest number of mismatched pixels - ideally 0.
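That selection step can be sketched like this - the table layout matches the imgData[bg][fg][char] lookup built earlier, but the function name is mine, and the comparison function is passed in so the logic stands alone (on the page it would be pixelmatch):

```javascript
// Find the (background, foreground, character) triple whose rendered cell
// has the fewest mismatched pixels against a cell from the screenshot.
// `compare(a, b)` returns a mismatch count, as pixelmatch does.
function findBestMatch(target, imgData, compare) {
    var best = { bg: -1, fg: -1, ch: -1, mismatches: Infinity };
    for (var i = 0; i < imgData.length; i++) {
        if (!imgData[i]) continue;
        for (var j = 0; j < imgData[i].length; j++) {
            if (!imgData[i][j]) continue;
            for (var k = 0; k < imgData[i][j].length; k++) {
                var mismatches = compare(target.data, imgData[i][j][k].data);
                if (mismatches < best.mismatches) {
                    best = { bg: i, fg: j, ch: k, mismatches: mismatches };
                    if (mismatches === 0) return best; // exact match, stop early
                }
            }
        }
    }
    return best;
}
```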

Next we create an output canvas and using the identified combinations for each character, write back the same ANSI art to that canvas:

example comparison

The image on the left is the input image and the image on the right is the generated ANSI art. It looks pretty good! The heart in the middle of ANSI and LOVE has been converted to a rectangular bullet - the heart is character #3 in Codepage 437, which falls in the control-character range I excluded.

But this is only useful if we can generate the ANSI art file. Let’s go back to the ANSI escape codes:

\033[0m resets everything back to normal.

\033[31m sets the foreground to red. The possible foreground codes are 30-37.

\033[44m sets the background to blue. The possible background codes are 40-47.

To get the bright foreground colors, we simply set the bold attribute with \033[1m, remembering to reset it with \033[0m when we’re done.

So for each character, we reset the attributes, set the background color, set the foreground color, optionally set bold, and then write the character.
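A minimal sketch of that emit step, using the ordering of the fgColors and bgColors arrays above (the index math is my inference from those tables, and \x1b is the ESC byte written as \033 earlier):

```javascript
// Emit one character with its attributes. Index fg (0-15) into fgColors maps
// to SGR codes 30-37, with 8-15 being the bold/bright variants; index bg
// (0-7) into bgColors maps to SGR codes 40-47.
function emitChar(bg, fg, ch) {
    var seq = "\x1b[0m";                       // reset all attributes
    if (fg >= 8) {
        seq += "\x1b[1m";                      // bold, for the bright colors
    }
    seq += "\x1b[" + (30 + (fg % 8)) + "m";    // foreground color
    seq += "\x1b[" + (40 + bg) + "m";          // background color
    return seq + ch;
}
```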

But if we just write this to a string, this will give us a file that contains ANSI escape codes but UTF-8 characters - we need to convert it to a DOS format.

There’s a useful package called iconv-lite that converts between character encodings, so we can do something like:

iconv.encode(ansiTxt, 'cp437');

I reverse engineered how this CP437 converter outputs its file, and I thought the way it triggered the download was pretty clever, so I incorporated that as well:

var c = document.createElement("a");
c.href = "data:text/plain;base64," + iconv.encode(ansiTxt, 'cp437').toString('base64');
c.download = 'output.ans';
document.body.appendChild(c);
c.click();
document.body.removeChild(c);

This automatically started downloading the ANS file with the correct encoding.

Finally, it was time to send my screenshot through it!

The screenshots that I had taken in 2001 were one screen at a time, or 80x25 characters. This was the first one:

first screenshot

The dimensions were 560x300, which means each character was 7x12 instead of 8x16. And this is when I remembered that I had taken the screenshots in a DOS window in Windows, which used whatever font Windows used for DOS prompts. What if we just… resized the image so that each character was 8x16? I resized the image to 640x400, making sure to use “Nearest Neighbor” to keep the pixels as faithful as possible, and it sort of worked:

first comparison

The text detection is laughably bad, but it seemed to get the solid and half blocks right - it was mostly struggling with the shade blocks. There are only three shade blocks, four if you count the completely solid one.

I wrote an ANS file that output just the three different shade blocks and exported an image of it using ansilove, and it was clear - whatever font Windows was using for the DOS prompt had a completely different idea of what a shade block looked like than the IBM PC font did.

Then I had an idea - what if I just taught my program what the weird Windows shade blocks looked like?

I zoomed in on the shade blocks in the screenshot and extracted one of each of the different types. From left to right, these represent light, medium, and heavy:

shade blocks

I loaded each one into a canvas and looped over each pixel. When I encountered red, I stored a 1 in a multi-dimensional array; otherwise I stored a 0. Using this mapping, I again generated every permutation of background color, foreground color, and these new shade blocks, and stored their image data in a lookup table. When an image character matched one, I selected the appropriate shade block character.
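That pixel-to-bitmap step might look something like this - the function name and the exact redness test are mine; ImageData stores pixels as flat RGBA, four bytes per pixel:

```javascript
// Turn one extracted shade block into a 0/1 bitmap: 1 where the pixel is
// red, 0 everywhere else. `data` is the flat RGBA array from getImageData.
function bitmapFromImageData(data, width, height) {
    var bitmap = [];
    for (var y = 0; y < height; y++) {
        bitmap[y] = [];
        for (var x = 0; x < width; x++) {
            var p = (y * width + x) * 4;  // offset of this pixel's R byte
            var isRed = data[p] > 128 && data[p + 1] < 128 && data[p + 2] < 128;
            bitmap[y][x] = isRed ? 1 : 0;
        }
    }
    return bitmap;
}
```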

And that worked! Here you can see that the difference between the screenshot shade blocks and the actual shade blocks is quite stark:

second comparison

Apparently I hadn’t seen what this ANSI art correctly looked like since the 90s. The text was still wrong, but I figured I could easily recreate it from the screenshots. So I started processing the other screenshots, which was working great until I got to this one:

third comparison

The shading on the flames was completely gone, which meant that it more closely identified the character with the solid block than the shade block. That’s weird. I zoomed in closer to the block on the original screenshot. Here it is in the middle next to two of the original shade blocks that I mapped.

new shade block

It seemed to be very similar to the one on the left, but flipped. I’m not sure how it got flipped - maybe something to do with the screen capture or resizing process - but I used the same procedure to map it: storing a 1 for each red pixel, generating all the permutations, and adding them to the lookup table. Then I ran it again. And… it got the same result.

I was starting to feel like I was going crazy when I remembered the TheDraw color selection screen:

TheDraw color selection screen

There are two foreground colors in that family: brown and yellow, where yellow is the bold version of brown. But there is only one background color, and it happens to be brown - there is no yellow background. It dawned on me that what I was looking at was not a red shade block on yellow, but a yellow shade block on red. It really makes you appreciate the constraints of the 90s ANSI artist. After retraining it to look for yellow pixels, it correctly identified the block.

The text was still a garbage fire, but when I combined the ANSI files in PabloDraw, I just retyped it.

Here’s what it looked like after I cleaned it up:

final output

And here is the ANSI file: mp-h&c.ans.

You can view it in something like nfov or PabloDraw, or even with iconv -f 437 mp-h\&c.ans if you’ve got the correct colors and font set up in your terminal.

I don’t intend on losing it again.