
Getting Mermaid JS and ob-mermaid running on my system - needed to symlink Chromium for Puppeteer

Tags: emacsconf, nodejs, org, emacs

I wanted to use Mermaid to make diagrams, but I ran into this issue when trying to run it:

Error: Could not find Chromium (rev. 1108766). This can occur if either
 1. you did not perform an installation before running the script (e.g. `npm install`) or
 2. your cache path is incorrectly configured (which is: /home/sacha/.cache/puppeteer).
For (2), check out our guide on configuring puppeteer at https://pptr.dev/guides/configuration.

It turns out that I needed to do the following:

sudo npm install -g puppeteer mermaid @mermaid-js/mermaid-cli --unsafe-perm
# Cache Chromium for my own user
node /usr/lib/node_modules/puppeteer/install.js --unsafe-perm
sudo npm install -g mermaid @mermaid-js/mermaid-cli
# Puppeteer looked for the revision-numbered directory from the error message,
# so point it at the Chromium that actually got downloaded
ln -s ~/.cache/puppeteer/chrome/linux-117.0.5938.149 ~/.cache/puppeteer/chrome/linux-1108766
# It also expected the older chrome-linux directory layout
ln -s ~/.cache/puppeteer/chrome/linux-117.0.5938.149/chrome-linux64 ~/.cache/puppeteer/chrome/linux-117.0.5938.149/chrome-linux

(The exact versions might be different for your installation.)
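(The configuration guide that the error message links to describes another way around this: instead of symlinking, Puppeteer can be told to skip its own download and use a browser you already have. Here's a minimal sketch of a .puppeteerrc.cjs, assuming a system Chromium at /usr/bin/chromium; I went with the symlinks instead, so I haven't tested this.)

// .puppeteerrc.cjs - untested sketch; adjust executablePath for your system
module.exports = {
  // Don't download Chromium; use the one already installed
  skipDownload: true,
  executablePath: '/usr/bin/chromium',
};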

Then I could make a Mermaid file, test it with mmdc -i input.mmd -o output.svg, and confirm that it also works directly from Org with ob-mermaid:

sequenceDiagram
    input ->> res: original.mp4;
    res ->> backstage: reencoded.webm;
    backstage ->> laptop: cached files;
    laptop ->> backstage: index;
    input ->> res: main.vtt;
    res ->> backstage: main.webm;
    backstage ->> laptop: cached files;
    laptop ->> backstage: index;
    input ->> res: normalized.opus;
    res ->> backstage: main.webm;
    backstage ->> laptop: cached files;
    laptop ->> backstage: index;
    backstage ->> media: public files;
media-flow.png
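For reference, here's roughly what that looks like as an Org source block; a minimal sketch, assuming ob-mermaid is installed and mermaid is enabled in org-babel-load-languages (the :file header argument is needed because ob-mermaid shells out to mmdc to produce an image):

#+begin_src mermaid :file media-flow.png
sequenceDiagram
    %% ... the sequence shown above ...
#+end_src

If Emacs can't find mmdc on its own, setting ob-mermaid-cli-path to the full path of the executable should sort that out.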

Compiling selected blog posts into HTML and EPUB so I can annotate them

Tags: blogging, 11ty, nodejs, supernote

[2023-01-04 Wed] Added a screenshot showing annotation.

I was thinking about how to prepare for my next 10-year review, since I'll turn 40 this year. I've been writing yearly reviews with some regularity and monthly reviews sporadically, and I figured it would be nice to have those posts in an EPUB so that I can read them on my e-reader and annotate them as I do my review.

I use the 11ty static site generator to publish my blog as HTML files, since I currently can't keep more than Emacs Lisp, JavaScript, and Python in my brain. (No Hugo or Jekyll for me at the moment.) I briefly thought about getting 11ty to create that archive for me, but I realized it would be easier to write an external script than to figure out how to get 11ty to export one thing conditionally.

One of the things I've configured 11ty to make is a JSON file that includes all of my posts with dates, titles, permalinks, and categories. It was easy to then parse this list and filter it to get the posts I wanted. I parsed the HTML out of the _site directory that 11ty produces instead of fetching the pages from my webserver. I got the images from my webserver, though, and I made a local cache and rewrote the URLs. That way, the EPUB conversion could include the images.
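Each entry in that JSON file has the fields the script relies on: date, title, permalink, and categories. Roughly like this (the values here are made up):

[
  {
    "date": "2023-01-02",
    "title": "Some post title",
    "permalink": "/blog/2023/01/some-post-title/",
    "categories": ["monthly"]
  }
]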


blog.js
const blog = require('/home/sacha/proj/static-blog/_site/blog/all/index.json');
const cheerio = require('cheerio');
const base = '/home/sacha/proj/static-blog/_site';
const fs = require('fs');
const path = require('path');

// Make sure the local image cache directory exists before downloading into it
fs.mkdirSync('images', { recursive: true });

function slugify(p) {
  // Turn a permalink like /blog/2023/01/some-post/ into an anchor id
  return p.permalink.replace('/blog', 'post-').replace(/\//g, '-');
}

async function processPost(p) {
  console.log('Processing ' + p.permalink);
  // Load the rendered page from 11ty's _site output and drop the comment form
  let $ = cheerio.load(fs.readFileSync(base + p.permalink + 'index.html'));
  $('#comment').remove();
  let images = $('article img');
  await Promise.all(images.map((i, e) => {
    let url = $(e).attr('src');
    const outputFileName = 'images/' + path.basename(url).replace(/ |%20|%23/g, '-');
    $(e).attr('src', outputFileName);
    $(e).attr('style', 'max-height: 100%; max-width: 100%; ' + ($(e).attr('style') || ''));
    $(e).attr('srcset', null);
    $(e).attr('sizes', null);
    $(e).attr('width', null);
    $(e).attr('height', null);
    if (!fs.existsSync(outputFileName)) {
      console.log('fetch', outputFileName);
      // fs.promises.writeFile waits for the whole file to be written;
      // createWriteStream().write() returned before the data was flushed
      return fetch(url).then(res => res.arrayBuffer()).then(data => {
        return fs.promises.writeFile(outputFileName, Buffer.from(data));
      });
    } else {
      console.log(outputFileName, 'exists');
      return null;
    }
  }).get()); // .get() converts the cheerio collection to a plain array for Promise.all
  console.log('Done ' + p.permalink);
  let slug = slugify(p);
  $('article h2').attr('id', slug);
  let header = $('article header').html();
  let entry = $('article .entry').html();
  return `<article>${header}${entry}</article>`;
}

let last10 = blog.filter((p) => p.date >= '2013-08-01');
let posts = last10.filter((p) => p.categories.indexOf('yearly') >= 0)
    .concat(blog.filter((p) => p.title == 'Turning 30: A review of the last decade'))
    .concat(last10.filter((p) => p.categories.indexOf('monthly') >= 0));

let toc = '<h1>Table of Contents</h1><ul>' + posts.map((p) => {
  return `<li><a href="#${slugify(p)}">${p.title}</a></li>\n`;
}).join('') + '</ul>';

// Process the posts sequentially, concatenating their HTML in order
let content = posts.reduce(async (prev, val) => {
  return await prev + await processPost(val);
}, '');
content.then((data) => {
  fs.writeFileSync('archive.html',
                   `<html><body>${toc}${data}</body></html>`);
});
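With Node 18 or newer (the script uses the built-in fetch), running it looks like:

node blog.js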

This created an archive.html with my posts, using the images/ directory for the images. Then I used my shell script for converting and copying files to turn the HTML into an EPUB and copy it over to my e-reader.
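(That script isn't included here, but the conversion itself boils down to a single pandoc call; a minimal sketch, assuming pandoc is installed and with a made-up title:

pandoc archive.html -o archive.epub --metadata title="10-year review archive"
)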

On the SuperNote, I can highlight text by drawing square brackets around it. If I tap that text, I can write or draw underneath it. Here's what that looks like:

20230104_090739.png
Figure 1: Writing an annotation

These notes are collected into a "Digest" view, and I can export things from there. (Example: archive.pdf)

2023-01-04_09-23-57.png
Figure 2: Here's what that digest is like when exported.

(Hmm, maybe I should ask them about hiding the pencil icon…)

Anyway, I think that might be a good starting point for my review.