{ "version": "https://jsonfeed.org/version/1", "title": "Tom Oliver's Blog - Articles Only", "home_page_url": "https://www.tomoliver.net/", "feed_url": "https://www.tomoliver.net/feeds/articles/feed.json", "description": "Personal blog about programming, Linux, and much more", "icon": "https://www.tomoliver.net/logo.svg", "author": { "name": "Tom Oliver" }, "items": [ { "id": "https://www.tomoliver.net/posts/coding-on-eink", "content_html": "
It was the spring of 2020; I had just received my アベノマスク (Abe masks) and I could finally breathe easy. Only the people of Japan know the reassurance provided by barely 2 millimetres of coarsely stitched linen, loosely sliding across your nose and mouth as you board a socially-distanced train. Not a single coronavirus would make it through Abe's impregnable shield, and the country knew it.
\nA month later, the prime minister of Japan (who would ultimately resign and later be assassinated) wisely chose to capitalize on the collective cough of relief brought about by his thought-leading mask policy. He decided to give each and every one of us ten thousand of his finest yen in order to stimulate the economy. And how should I choose to do my bit for the greater good? I made my first e-ink monitor purchase, namely the Dasung Paperlike HD-FT
. Although I may have stimulated the Chinese economy instead of Japan's, it's always the thought that counts when it comes to emergency macro-economic stimulus packages.
You've used an e-reader before, right?
\nWell, it's basically that but bigger and more responsive.
\nE-ink monitors reflect the ambient light in the room in the same way a page in a book does, so no backlight is required and there is much less eyestrain. The "less eyestrain" thing is actually the only benefit of e-ink monitors; they are inferior to conventional monitors in every other way (except maybe power consumption, but honestly, who cares?).
\nBecause I have no choice.\n...And given the downsides that I'll get to later, unless you're in the same boat, neither should you seriously consider buying one.
\nI get a headache and my eyes begin to hurt when I look at a conventional screen for too long. If it's a really bright screen, this might be a really short time, like less than 5 seconds. I'm not sure exactly why this happens, but it's definitely something to do with having photons blasted into my eyes. I have been this way since I was a child (in those days we had a CRT). Upon reflection, this may have been a sign from the universe not to pursue a career that involves staring at a computer for 90% of my time. Oops... Luckily e-ink monitors exist and they are a total game changer for people like me. If you feel like you have an eyestrain problem then I would definitely recommend trying one out. If you're lucky you can pick up e-ink monitors for cheap on this subreddit.
\nDasung
has recently prototyped a color monitor, so this may change soon. Unfortunately, Dasung
, the current industry leader when it comes to e-ink monitors, seems not to grasp the reason people would ever consider buying an e-ink monitor in the first place. The gimmicky features these monitors are being crammed with often do nothing for the average eyestrain sufferer; in fact, a few of them actually make eyestrain worse. Take a look at this feature from a monitor they call the Dasung 253 Dark Knight Version.
Ahh yes, the shimmering neon light on this monitor is just what my eyestrain needed!
\nUnless you have a lot of natural light in your room all year round, you are going to struggle to use your monitor without a lamp of some sort. Ambient lights are not enough! Some e-ink monitors come with integrated LED frontlights, which I advise against using.
\nInstead, I recommend picking a desk lamp that is tall enough to shine down on the monitor; otherwise you are going to get a lot of reflections. I would also recommend getting a light bulb with a warm hue to minimize blue light exposure (which tends to aggravate eyestrain). Although you can't really buy halogen bulbs anymore, it is possible to find LED bulbs that are pretty close substitutes (minus the heat).
\n\nBecause messing around with the physical buttons every 5 seconds is obviously a waste of time, you need to keep that to a minimum. To do that, you need to stay on the same settings for as long as possible. For me, that means staying in the "text" mode most of the time (think high contrast + low refresh rate). To do this I almost exclusively use my keyboard to interact with my computer, because accurately positioning a mouse cursor on something you want to click in a low refresh rate setting is like aiming while peeing drunk. So I try to just use my keyboard. Which naturally leads us to my preferred e-ink stack:
\nIn an ideal world we would be able to use software to control the settings of e-ink monitors so that we don't have to mess around with any of the physical buttons. Unfortunately, my experience with the vendor-provided software is pretty bad. I can, however, recommend this awesome tool that lets you set the contrast, light and mode on the Dasung Paperlike
. There is currently an issue open regarding support for the Dasung 253
...
E-ink does not have good enough contrast for you to discern the differences between many of the colors the terminal uses. Any text that isn't black on white will be difficult to read, so I recommend setting your terminal to a monochrome theme. White-on-black text also has decent contrast, but the ghosting is much worse; apparently this can cause the e-ink panel to wear out quicker too. My experience was that at first, the lack of syntax highlighting greatly increased the cognitive load of reading code. After a while, though, I began to get used to this and now actually prefer it. That is why you may notice that the code snippets on my site do not have any syntax highlighting.
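As a purely illustrative sketch, if your terminal happens to be Alacritty, a monochrome theme is only a few lines (the file path and keys below follow Alacritty's TOML config format; other terminals have their own equivalents):

```toml
# ~/.config/alacritty/alacritty.toml
# Plain black text on a white background renders best on e-ink.
[colors.primary]
background = "#ffffff"
foreground = "#000000"
```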
\nTo sum it up, if you are wondering whether you can be a software developer if you suffer from eyestrain, the answer is yes. It's not always straightforward and can be frustrating at times, but with e-ink it's definitely possible. There probably aren't many professions that are as text-heavy and therefore as well suited to e-ink as ours, so it makes sense for adoption to start with us. Who knows where we'll be in a decade or two...
\nGood luck!
", "url": "https://www.tomoliver.net/posts/coding-on-eink", "title": "3.5+ Years of Coding on E-Ink", "summary": "Tips I learned programming on e-ink.", "image": "https://www.tomoliver.net/img/dasung-paperlike-hdft.jpg", "date_modified": "2024-01-13T15:44:44.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "linux", "e-ink", "lifestyle", "japan" ] }, { "id": "https://www.tomoliver.net/posts/counting-occurrences", "content_html": "Often we have a list of things and would like to know how many there are of each thing in the list.\nThis is one of those handy one-liners that will give you an object mapping an item to the number of occurrences for a given list of things.
\nconst getFreqMap = (list) => \n list.reduce((acc, cur) => \n ({ ...acc, [cur]: (acc[cur] ?? 0) + 1 }), {})\n\ngetFreqMap(['a', 'a', 'c', 'b', 'd', 'd', 'd']) \n// {a: 2, c: 1, b: 1, d: 3}\ngetFreqMap([0, 1, 2, 2, 6, 2, 1, 0, 6]) \n// {0: 2, 1: 2, 2: 3, 6: 2}\n
\nOkay, so maybe it's not just one line.
\nBut that's because I formatted it nicely!
\nAt least its not as bad as this:
const getFreqMap = (list) => {\n const res = {}\n for (let i = 0; i < list.length; i++) {\n if (res[list[i]] !== undefined){\n res[list[i]]++\n }\n else{\n res[list[i]] = 1\n }\n }\n return res\n}\n
\nWow, what a waste of lines...
\nAnyway, let's rewrite our one-liner in TypeScript because it's better.
So let's just change the file type to .ts
.
\n...And immediately we get a bollocking like so:
const getFreqMap = (list) =>\n list.reduce((acc,cur) => \n ({ ...acc, [cur]: (acc[cur] ?? 0) + 1 }), {})\n
\nSo let's add a type annotation.
\nconst getFreqMap = (list: string[]) =>\n list.reduce((acc,cur) => \n ({ ...acc, [cur]: (acc[cur] ?? 0) + 1 }), {})\n
\nLooks like we need to tell TypeScript that the type of the second argument of reduce is an object mapping type string
to type number
.
\nWe can do this using as
.
const getFreqMap = (list: string[]) =>\n list.reduce(\n (acc, cur) => ({ ...acc, [cur]: (acc[cur] ?? 0) + 1 }),\n {} as { [key: string]: number }\n )\n
\nAnd this works just fine.
\nBut... wouldn't it be nice if it worked for any kind of list?
\nSo let's do that then...
\nUsing generics, of course.
const getFreqMap = <A>(list: A[]) =>\n list.reduce(\n (acc, cur) => ({ ...acc, [cur]: (acc[cur] ?? 0) + 1 }),\n {} as { [key: A]: number }\n )\n
\nSo we just replaced string
with a generic type A
.
\nBut alas this is not correct...
\nWe get an error saying we can't use something with type A
as an index on {}
.
\nSo let's see about the Record
type instead.
const getFreqMap = <A>(list: A[]) =>\n list.reduce(\n (acc, cur) => ({ ...acc, [cur]: (acc[cur] ?? 0) + 1 }),\n {} as Record<A, number>\n )\n
\nOmfg, another error, srsly???
\nOh. Looks like our generic type is a little too generic...
\nLet's take a look at the type definition for the Record type.
type Record<K extends keyof any, T> = {\n [P in K]: T;\n};\n
\nIt says that the type of K
(the key of the Record) must be a key of something.
\nWhich gets reduced to string
, number
, symbol
or any
as stated by the error message above.
\nOK. So let's just add that type constraint to A
.
const getFreqMap = <A extends keyof any>(list: A[]) =>\n list.reduce(\n (acc, cur) => ({ ...acc, [cur]: (acc[cur] ?? 0) + 1 }),\n {} as Record<A, number>\n )\n
\nAnd here is the finished product!
\nHow beautiful!
But wait, what if we had a list of something more complicated, like a list of objects?
\nAn exercise for the reader perhaps...
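If you fancy a hint, here is one possible sketch (the helper name getFreqMapBy is mine, not from any library): sidestep the keyof any constraint by asking the caller for a function that turns each item into a string key.

```typescript
// Hypothetical sketch: count arbitrary items by a caller-supplied key function.
const getFreqMapBy = <A>(list: A[], getKey: (a: A) => string) =>
  list.reduce(
    (acc, cur) => ({ ...acc, [getKey(cur)]: (acc[getKey(cur)] ?? 0) + 1 }),
    {} as Record<string, number>
  )

// Counting objects by one of their properties:
getFreqMapBy([{ name: "ann" }, { name: "bob" }, { name: "ann" }], (u) => u.name)
// { ann: 2, bob: 1 }
```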
The web application that I am currently working on uses several third-party auth providers as part of its authentication flow. Manually testing the application, I could see that everything was working. However, when running Cypress tests I was getting a strange warning mark on a header of the response from my auth provider. This header was asking my browser to set the session cookie, but the browser was refusing to do so.
\nNaughty browser.
\nThe SameSite
attribute is basically a measure taken to mitigate CSRF attacks. It tells the browser how it should treat cookies across multiple domains.\nThe SameSite
attribute can have one of three values:
None: the cookie is sent in all contexts, including cross-site requests. Modern browsers will only accept SameSite=None if the Secure attribute is also set.
\nLax: the cookie is sent on same-site requests and on top-level navigations to the cookie's site. Top-level simply means there is no parent context. For example, a tab is a top-level browsing context whereas an iframe exists within a tab (as a child browsing-context) and so cannot be considered top-level.
\nStrict: the cookie is only sent on same-site requests; even a top-level navigation from another site will not include it.
\nNote: Should the SameSite
 attribute be omitted, a default of Lax
will be applied by most browsers.
During the test, redirecting to the auth provider does not change the address in the URL bar of the Cypress test runner. This is because the entire application being tested, including any external redirects, is contained within an iframe
. This does not constitute a top-level browsing context and so violates the SameSite=Lax
restrictions. Outside of Cypress there is no iframe
and so the redirect takes place without breaking the rules.
Luckily Cypress provides the cy.intercept
function that will allow us to intercept and modify all responses including redirects. We can use it to rewrite the headers of any response to: SameSite=None
, thus allowing the browser to set our cookies.
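Stripped of the Cypress plumbing, the transformation itself is just a regex replace on the header value. A quick sketch (the cookie below is made up):

```typescript
// Swap any SameSite=Lax/Strict directive for "secure; samesite=none".
const rewrite = (header: string) =>
  header.replace(/samesite=(lax|strict)/gi, "secure; samesite=none")

rewrite("session=abc123; Path=/; HttpOnly; SameSite=Lax")
// "session=abc123; Path=/; HttpOnly; secure; samesite=none"
```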
We can abstract this into a Cypress command so we can call it from any test.
\n\n// commands.ts\nCypress.Commands.add("rewriteHeaders", () => {\n cy.intercept("*", (req) =>\n req.on("response", (res) => {\n const setCookies = res.headers["set-cookie"]\n res.headers["set-cookie"] = (\n Array.isArray(setCookies) ? setCookies : [setCookies]\n )\n .filter((x) => x)\n .map((headerContent) =>\n headerContent.replace(\n /samesite=(lax|strict)/gi,\n "secure; samesite=none"\n )\n )\n })\n )\n})\n\n\n\n
\nWe can now use our custom command before each test run begins.
\n// mytest.cy.ts\ndescribe("Logs in", () => {\n beforeEach(() => {\n cy.rewriteHeaders()\n })\n it("should log in without errors", () => {\n cy.contains("LOGIN").click()\n ...\n
\nAnd that's all! Hopefully this saves someone a few hours of debugging!
", "url": "https://www.tomoliver.net/posts/cypress-samesite-problem", "title": "Why your auth provider isn't working in Cypress", "summary": "Auth provider not working in Cypress? Cookies not being set properly inside the Cypress test runner? In this guide we explain the cause and solution.", "image": "https://www.tomoliver.net/img/samesite-small.png", "date_modified": "2023-11-20T15:40:50.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "nextjs", "react", "cypress" ] }, { "id": "https://www.tomoliver.net/posts/japanese-people", "content_html": "\n(日本人) (Japanese People) by 橘 玲 (Akira Tachibana)
\nFirst a disclaimer...
\n"Why read a book in Japanese and then write about it in English?"
\nFor fun!
\nThe author presents this book as a new theory on what makes Japanese people unique and why. Previous books have attempted to answer the same question, however, the author explains that these are unduly influenced by orientalist (prejudiced outsider-interpretations of the Eastern world) thought.
\nThis is not a review!!! It is just some musings on things I found interesting while reading this book...
\nThe book starts off with a chapter on how in 2011, some power plant technicians appeared to be smiling as they delivered some very bad news to the people of Japan. This news being that there was a triple nuclear reactor meltdown happening at Fukushima. Of course they were not smiles of happiness but smiles of a more complex nature (nervousness/guilt/dread...). The book explains how this is not a behaviour unique to Japan but found in lots of Asian countries, most notably in Thailand where there are 13 different categories of smile! This idea is further expanded to show that a lot of what is thought to be typical Japanese behaviour is actually common to many Asian countries with Japan usually being the most watered down version. Somehow, Japan has been convinced (probably due to western influence) that "Japanese" attributes such as 空気を読む (Reading the air) are unique, not to be found anywhere else in the world. But how could this happen...
\nTime for some trivia!
\nThe current national sport of Japan is to compare Japan to the west (specifically America) and lament at how Japan will never catch up to it.
\nWhy does it compare itself to the west and not its neighbours? It might be because, until very recently, Japan was pretty much the only modern developed economy in Asia; having no worthy adversary in its home court, it naturally looked to the west as a benchmark. During the period I lived in Japan, when meeting someone for the first time I was often asked "Why did you come to Japan?", and I would give the same old cringe response: "Erm... I like Anime and stuff...". Then they would ask, in a much more roundabout way than this: "The future is bleak here. Will you be moving back one day?". After a while I came to interpret this as Japanese people not realising that the west is in reality very flawed, and that Japan does a lot of things way way way better than anywhere else. I hope that as Japan inevitably compares its pandemic response with the west's, it can feel proud of itself for once.
\nThe book mentions that a family unit quite often can be under the same roof but living very separate lives. It is not unusual for members of a family to eat separately and at different times without the sort of group communication that western families would consider normal. There is also the phenomenon of the ワンルームマンション (studio flat). This is the default choice for anyone single without kids. Even university students tend not to share a flat. In general, when you are poor and single in the west you economise by sharing rent with housemates. In Japan you just get a smaller studio flat. Personally this suited me just fine, but it does make me wonder if the average foreigner is much more susceptible to loneliness than the average Japanese...?
\nFacebook never really made the kind of impact in Japan that it did in the west during the late noughties. Upon registering, Facebook encouraged you to submit your photo, real name, workplace, university, and even political persuasion in order to connect online with your friends. We have become a lot more squeamish about privacy these days, and the general use case of Facebook has since changed, but at the time a lot of people in the west were more than happy to publicise all this juicy personal information without any coercion. The Japanese (probably wisely) seem to have much more of a natural aversion to making their personal lives public, and, as a consequence, Facebook never enjoyed widespread adoption. That is not to say they don't use social media; Twitter in particular is pretty popular, however, the vast majority of its users register with a pseudonym. The author makes the case that in the west, the reward of social credit, whether that be reviews on Amazon, retweets, Instagram followers etc., outweighs the risks involved with participating on the internet under one's real name. The reverse is true in Japan: the (perceived) risk of humiliation, stalking, or some kind of harassment is too great, and so using a nickname became the default.
\nThe author ended the book by saying that Japan is often at the forefront of societal changes that other countries will eventually have to catch up to (aging population, loneliness, slowing economy...), and while this obviously presents massive challenges, it does also mean that Japan has the unique potential to create a new future, one nothing like anything that has come before it.
", "url": "https://www.tomoliver.net/posts/japanese-people", "title": " (日本人) (Japanese People) ", "summary": "How do the Japanese see themselves? Is it an accurate reflection of reality or just a western delusion? ", "image": "https://www.tomoliver.net/img/japanese-people-book.jpg", "date_modified": "2023-11-20T15:40:50.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "book", "japanese", "japan" ] }, { "id": "https://www.tomoliver.net/posts/nextjs-docker-public-env-vars", "content_html": "\nPicture the scenario: you have finished developing a web app using your favourite react framework.
\nIt's a pretty simple web app; all it does is display a greeting message.
\nThis greeting message may change depending on which environment you deploy it in, for example, in your development environment you want it to say:
\n"Hello from development!"
\n...and when it's deployed in your staging environment you want it to say:
\n"Hello from staging!".
\nSo naturally you put the greeting message in an environment variable so that you do not need to build a different version of the code for each environment.
\n// .env\n...\nNEXT_PUBLIC_GREETING="Hello from development!"\n...\n
\nYou, being the good soydev you are, decide to deploy it using Docker.
\nSo you write a simple Dockerfile to build an image, but quickly realise something.
\n...\nRUN npm i\nARG NEXT_PUBLIC_GREETING="Hello from development!"\nRUN next build\n...\n
\nSince we have to set this environment variable when we are building the next app, our image will forever have this value hardcoded into its very essence.
\nThis means that even if we try to set the value of the variable at runtime like so:
\ndocker run -e NEXT_PUBLIC_GREETING="rekt m8" my-image/latest\n
\nIt won't work because NEXT_PUBLIC_GREETING
has already been inlined to be Hello from development!
in the compiled code.
So what can we do?
\nLike many a brave soul that came before me, my approach to tackling this problem was to set the value of the environment variable to a placeholder that can be switched out at runtime.
\nTo achieve this I created a shell script that can be broken into two parts. The first part is to create the sed commands needed to replace each placeholder with the correct value:
\n\n# Get all the environment variables currently loaded\nprintenv | \\\n # Filter for ones that start with NEXT_PUBLIC\n grep '^NEXT_PUBLIC' | \\\n # Replace the = sign with a space\n sed -r "s/=/ /g" | \\\n # Feed as arguments to sed so they can be used in a find and replace command\n xargs -n 2 bash -c 'echo "sed -i \\"s#APP_$0#$1#g\\""' \n\n
\nLet's test it!
\n~>> export NEXT_PUBLIC_GREETING=HELLO!\n~>> export NEXT_PUBLIC_THEME=Dark\n~>> printenv | \\\n grep '^NEXT_PUBLIC' | \\\n sed -r "s/=/ /g" | \\\n xargs -n 2 bash -c 'echo "sed -i \\"s#APP_$0#$1#g\\""'\n\nsed -i "s#APP_NEXT_PUBLIC_GREETING#HELLO!#g"\nsed -i "s#APP_NEXT_PUBLIC_THEME#Dark#g"\n
\nAs we can see it spits out a sed
search and replace command for each environment variable we have.
Now for the second part, to execute these commands against the compiled code files.
\n#!/usr/bin/env bash\n\n# The first part wrapped in a function\nmakeSedCommands() {\n printenv | \\\n grep '^NEXT_PUBLIC' | \\\n sed -r "s/=/ /g" | \\\n xargs -n 2 bash -c 'echo "sed -i \\"s#APP_$0#$1#g\\""'\n}\n\n# Set the delimiter to newlines (needed for looping over the function output)\nIFS=$'\\n'\n# For each sed command\nfor c in $(makeSedCommands); do\n # For each file in the .next directory\n for f in $(find .next -type f); do\n # Execute the command against the file\n COMMAND="$c $f"\n eval $COMMAND\n done\ndone\n\necho "Starting Nextjs"\n# Run any arguments passed to this script\nexec "$@"\n
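If you want to convince yourself the substitution works before baking it into an image, here is a throwaway sanity check (the file name and value are invented; note that because the pipeline splits on whitespace, it only copes with values that contain no spaces):

```shell
# Simulate a compiled chunk containing the placeholder...
export NEXT_PUBLIC_GREETING="Hi_from_staging"
echo 'var g = "APP_NEXT_PUBLIC_GREETING"' > chunk.js

# ...generate the sed command and run it against the fake chunk.
printenv | \
  grep '^NEXT_PUBLIC' | \
  sed -r "s/=/ /g" | \
  xargs -n 2 bash -c 'echo "sed -i \"s#APP_$0#$1#g\" chunk.js"' | \
  bash

cat chunk.js
# var g = "Hi_from_staging"
```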
\nOK cool, we have our script ready! Now, how do we use it with Docker?
\n...\nARG NEXT_PUBLIC_GREETING=APP_NEXT_PUBLIC_GREETING\nRUN next build\n# Copy our script somewhere into the image\nCOPY entrypoint.sh .\n# Make it executable\nRUN ["chmod", "+x", "/app/entrypoint.sh"]\n\nEXPOSE 3000\n\nENTRYPOINT ["/app/entrypoint.sh"]\n\nCMD npm run start\n
\nLet's remind ourselves that we are literally doing a search and replace on some compiled code, so don't expect everything to be plain sailing.
\nOne thing I have discovered is that because Next.js splits your code into chunks during a production build, and we are simply doing a find and replace on those chunks, the names of the chunk files will not change. This means the browser can't tell that you have changed the chunks, so it will continue to serve the cached version unless you explicitly clear the cache in your browser. I haven't had the time to see if there is a workaround for this, but presumably renaming the chunk files (without somehow breaking everything) would fix the problem.
\nAnd voila! Bit of a hacky solution, but what can you do?\nIf you know of a less hacky way of doing this then please tell me!
", "url": "https://www.tomoliver.net/posts/nextjs-docker-public-env-vars", "title": "How to set NEXT_PUBLIC_* environment variables in Docker", "summary": "In this post we explain how it is possible to set NEXT_PUBLIC_* environment variables baked into docker images", "image": "https://www.tomoliver.net/img/wojac-next-docker-small.png", "date_modified": "2023-11-20T15:40:50.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "docker", "nextjs", "react" ] }, { "id": "https://www.tomoliver.net/posts/nextjs-styled-components-without-js", "content_html": "Wait, did you say without JavaScript?
\nWe are React developers, why should we care about making sites that work without JavaScript?
TLDR;
\nIf you are using Next.js (pages router), then its likely that the excellent SSG and SSR features were one of the reasons that made you consider it for your project. Specifically I would like to talk about SSG in this post.
\nBasically if you don't know, when you run next build
, Next.js will try to pre-render as much of each page as possible. This is what gets sent to your users when they first access the site. Since there is already some content thats been pre-rendered, React only needs to rehydrate whats already there. This means there is less work done by your browser which translates to quicker execution times and better user experience.
But there is a much bigger reason that SSG pages appear to load quicker. The secret lies in the sheer magnitude of the JavaScript payload that most sites require. This bundle will include:
\npackage.json
The size of this bundle will dwarf the pre-rendered HTML. As an example of bundle sizes, for any given page on this blog you are reading, the JavaScript bundle will be roughly 10 times the size of the static HTML.
\nIn a traditional client rendered SPA, the index.html
file contains almost zero markup. Usually it will just have an empty <div id="root"></div>
that React will append DOM nodes on to. This means most (if not all) of the JavaScript needs to be downloaded before anything can be rendered at all. By pre-rendering the HTML, the browser can display something without having to wait for any JavaScript, this can improve the first contentful paint (FCP). Using this tactic Next.js lets us get away with having large bundle sizes whilst delivering the user a decent experience.
Although for the most part Next.js takes care of SSG for us, it does require some careful thought when we implement responsive design. One thing to consider with SSG is that everyone gets sent the same static HTML. Whether you're on desktop or mobile you will be given the same initial DOM. This can be a problem if you rely on JavaScript to make your website responsive. Lets say your site uses a hook like useWindowSize()
, and depending on the width of the window you will render either the desktop layout or the mobile layout. Well this can only happen after the JavaScript bundle has been fetched and React has rehydrated the page. By which time the user will have already seen the bare static HTML. And so will potentially experience a brief flicker as the correct layout is rendered (probably with a few rehydration errors). One solution could be to simply not render anything until rehydration has finished. But of course that defeats the object of SSG in the first place.
My advice is to rely solely on CSS for your responsivity (via media queries). This way both the desktop layout and mobile layout will be present in the initial HTML that gets sent out to every device.
\nBut wouldn't this make my HTML file bigger?
\nYes. But not by that much. You have to remember that the JavaScript bundle will usually be at least an order of magnitude bigger.
\nSaying all that, if you are a fan of rather creative designs that require dynamically calculating the positions of elements (like using getBoundingClientRect()
for example), you must also further consider how your designs will work without the presence of JavaScript. For example on this blog I often use an annotation component that draws a line connecting a sliver of code with a message. For this I need to use JavaScript to get the position of elements as they depend on the screen size of the end device and therefore are impossible to know beforehand. To solve this, I have another design which does not require any JavaScript to work properly. I simply assign a number in the top left corner of each message bubble that corresponds to the number given to the code segment it refers to. This is the design that the static HTML ships with before ultimately it gets rerendered on the client (assuming JS is enabled!). If I simply excluded the annotations entirely from the initial SSG'd HTML, when the JavaScript eventually did load they would be freshly inserted into the page causing layout shift. This is bad for UX, SEO and all manner of things. Not to mention, some people (who can blame them) browse with JavaScript disabled so they won't be able to see anything at all! For these reasons its worth thinking of a design that can handle both scenarios reasonably well, lets call it progressive design enhancement.
As a general rule, when JavaScript is not available always try to show something, and preferably something that takes up the same amount of vertical space so as not to cause a layout shift.
\nWhen Next.js performs SSG it does not execute any effect hooks in your code.\nIf a useEffect
hook executed at least once then this means JavaScript must be loaded. We can add a CSS class to the root node to indicate this.
// _app.jsx\nfunction MyApp({ Component, pageProps, router }) {\n const [mounted, setMounted] = useState(false)\n useEffect(() => setMounted(true), [])\n return (\n <Component className={mounted ? "has-js" : "no-js"} {...pageProps} />\n )\n}\n
\nFrom anywhere in our App we can show and hide components easily depending on if there is JavaScript.
\nexport const ShowWhenJSLoaded = styled.span`\n display: none;\n .has-js & {\n display: contents;\n }\n`\nexport const ShowBeforeJSLoaded = styled.span`\n display: none;\n .no-js & {\n display: contents;\n }\n`\n\nconst MyComp = () => (\n <>\n <ShowBeforeJSLoaded>\n <Placeholder />\n </ShowBeforeJSLoaded>\n <ShowWhenJSLoaded>\n <SomeComponentThatNeedsJS />\n </ShowWhenJSLoaded>\n </>\n)\n
\nTo manually disable JavaScript in the developer tools type Ctrl+Shift+p
to open the command pallet and then search for disable JavaScript
. If you now reload the page you will be seeing what non JavaScript users and search engine crawlers see.
If you use Styled Components, its worth checking that all your links function at this point. When using the next/link
component around a styled component you will find that the passHref
must be passed for the links to work without JavaScript. This is important for SEO because search engine crawlers use links on your site to discover its pages.
// Example\nexport const Comp = () => {\n return (\n <Link passHref href={`/index`}>\n <MyStyledComponent>Click Me!</MyStyledComponent>\n </Link>\n )\n}\nconst MyStyledComponent = styled.a`\n padding: 20px;\n`\n
\nBe aware that when using yarn dev
you are not seeing the finished product.\nThis is because SSR/SSG is not done in development mode.\nTo be able to test your app locally you need to run yarn build && yarn start
which will generate the production code.\nUnlike other workflows, with Next.js and any other SSR/SSG supporting framework, production code does not just mean "optimised" or "minified", the way your code is delivered and performs will be fundamentally different. In particular, hydration related bugs will only be apparent in the production build.
First lets make using media queries nicer.
\nLets create a file that exports a media
object with all the media queries we will need.
// media.ts\nexport const desktopWidth = 670\n\nconst mq = (strings: TemplateStringsArray, width: number) =>\n `@media only screen and (${\n strings.slice(0, strings.length - 1).join("") +\n width +\n strings[strings.length - 1]\n })` as const\n\nexport const media = {\n desktop: mq`min-width: ${desktopWidth}px`,\n mobile: mq`max-width: ${desktopWidth - 1}px`,\n}\n
\nImport the media
object into the file of your component and use like so:
// myComponent.tsx\nimport { media } from "./media"\n\nconst MyComponent = styled.span`\n ${media.desktop}{\n background-color: green;\n }\n ${media.mobile}{\n background-color: pink;\n }\n`\n
\nMuch nicer than typing out a media query every 5 seconds right?
\nWhat about hiding/showing components depending on the device size?
\nLets add some more to the media.ts
file.
// media.ts\n...\nexport const hideOn = (size: keyof typeof media) =>\n `display: contents;\n ${media[size]} {\n display: none;\n }\n `\n\nconst hideOnDesktop = hideOn("desktop")\n\nexport const HideOnDesktop = styled.div`\n ${hideOnDesktop}\n`\n
\nNow we can wrap our components ensuring they are only rendered on the appropriate device:
\nimport { media, HideOnDesktop } from "./media"\n...\n<>\n <HideOnDesktop>\n <MobileOnlyContent />\n </HideOnDesktop>\n</>\n...\n\n
\nFor more tips on implementing responsive design with styled-components and NextJS, see this article.
", "url": "https://www.tomoliver.net/posts/responsive-design-with-styled-components", "title": "Responsive Design with Styled Components", "summary": "In this tutorial we will create some useful shortcuts to make implementing responsive designs easier to implement.", "image": "https://www.tomoliver.net/img/responsive.png", "date_modified": "2023-11-20T15:40:50.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "css", "nextjs", "react", "styled-components" ] }, { "id": "https://www.tomoliver.net/posts/small-commit-shortcut-for-vim", "content_html": "Sometimes I just change 1 or 2 lines and want to commit & push straight away.\nIts times like these where I don't really want to have to check the diff and stage each file etc...\nBut I still want to enforce some kind of commit message convention, namely:
\n<BRANCH_NAME> <COMMIT_TYPE> <MESSAGE>\ne.g. main fix: made background white\n
\nSo <COMMIT_TYPE>
is fix
in this case.
\nBecause I use vim, I want to introduce a shortcut into my workflow in order to make this process easier.\nSo my goal is to press a key combination in vim and have the below command populate the command input box:
:G commit -am "<BRANCH_NAME> fix: <cursor here>"\n
\nIt should show up like this:
\n\nFirst of all let's decide what the keybinding will be.
\nI'm a simple man, I'm gonna bind it to ,f
so I remember it.
\n(because it's f
for fix
XD)
noremap ,f ...\n
\nNext we need to create the git command.\nI am using :G
that comes from the vim-fugitive plugin by tpope. It's basically just a shortcut for :!git
so feel free to do that instead.
Next is commit -am
. Most people know about -m
but fewer have heard of -am
.
\nIt's basically the same as -m
but it automatically stages every change to files git is already tracking (brand-new untracked files still need a git add first).
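If you want to convince yourself of the -am behaviour outside of vim, here is a throwaway-repo sketch in plain shell (the file name and messages are just examples):

```shell
# Create a scratch repo to demonstrate that -a stages tracked changes automatically.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb main
git config user.email "you@example.com"
git config user.name "you"
echo hello > file.txt
git add file.txt
git commit -qm "main feat: initial"

# Change an already-tracked file, then commit with -am: no separate `git add` needed.
echo world >> file.txt
git commit -am "$(git branch --show-current) fix: made background white"
git log -1 --pretty=%s   # → main fix: made background white
```

Note that -a only picks up files git already tracks; a brand new file would still need an explicit git add.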
noremap ,f :G commit -am ""\n
\nNow we just need to generate the commit message, and for that we need to know what the name of the current branch is. Usually we can do that with git branch --show-current
\nhowever, since git is external to vim we need some special syntax.
To execute an external command we can use the system
function like so:
..."<C-R>=system("git branch --show-current")<CR>\n
\nThe system() call keeps the command's trailing newline, which shows up as ^@
stuck to the end of the branch name.\nTo get rid of it we can backspace <BS>
once.\nNow we press the <Left>
key to move the cursor one space left inside the quotes.
..."<C-R>=system("git branch --show-current")<CR><BS>"<Left>\n
\nNow leave a <Space>
and write the commit type, in this case fix
.
\nFinally add a colon and another <Space>
for the commit message and we are done.
..."<C-R>=system("git branch --show-current")<CR><BS>"<Left><Space>fix:<Space>\n
\nPutting it altogether we get the finished product:
\nnoremap ,f :G commit -am "<C-R>=system("git branch --show-current")<CR><BS>"<Left><Space>fix:<Space>\n
\nNow you too can make a smol commit from vim with just a few keystrokes!
", "url": "https://www.tomoliver.net/posts/small-commit-shortcut-for-vim", "title": "Are you committing from Vim the fast way?", "summary": "Tired of suspending Vim to commit from the command line? Why not generate your commit command from within Vim using this simple shortcut.", "image": "https://www.tomoliver.net/img/smol-commit.png", "date_modified": "2023-11-20T15:40:50.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "vim", "linux" ] }, { "id": "https://www.tomoliver.net/posts/using-an-slr-as-a-webcam-nixos", "content_html": "\nThis year after largely abandoning my macbook in favour of a nixos machine, I started getting requests to "turn my camera on" when video calling people. This was a problem because I didn't actually have a webcam. I thought about buying one but then I realised that I had a perfectly good Canon EOS rebel XS DSLR circa 2008 lying around on my shelf. This camera has a mini-USB port, so naturally I pondered: DSLR + mini-USB + desktop PC = possible webcam ?
\nBut there is just one problem. My Canon EOS rebel XS isn't actually capable of recording video. It can take some nice pictures but that's about it. So that's the end of that then.
\nOr is it?
\nThere happens to be some amazing open source software called gphoto2.\nOnce installed it will allow you to control an array of different supported cameras from your computer (find out if yours is supported with gphoto2 --list-cameras
). This includes taking photos and videos.\nAfter installing, try taking a picture with it like so:\ngphoto2 --capture-image-and-download
. You should hear the shutter activate and the image will be saved to your current working directory.
Despite the aforementioned lack of video functionality on my camera, I decided to try gphoto2 --capture-movie
anyway. Somehow, although my camera does not support video natively, this tool still manages to spit out an mjpeg stream (for my camera I first needed to put it in "live view" mode: set it to portrait mode, then press the "set" button so that the viewfinder is off and the screen is displaying an image). Unfortunately this is not enough to use it as a webcam; it still needs to be assigned a video device such as /dev/video0
.
First of all if you haven't already, you're gonna want to grab gphoto2
and ffmpeg
.
And maybe mpv
also.
# configuration.nix\n...\nenvironment.systemPackages = with pkgs; [\n ffmpeg\n gphoto2\n mpv\n...\n
\nTo create the virtual video device we will need to make use of the v4l2loopback Linux kernel module. It can be installed by adding it to the extra module packages in configuration.nix.
\n# configuration.nix\n...\nboot.extraModulePackages = with config.boot.kernelPackages;\n[ v4l2loopback.out ];\nboot.kernelModules = [\n "v4l2loopback"\n];\nboot.extraModprobeConfig = ''\n options v4l2loopback exclusive_caps=1 card_label="Virtual Camera"\n'';\n...\n
\nYou will now need to run sudo nixos-rebuild switch
and also reboot your computer since we have made some changes to the kernel.
Now try running this command:
\n gphoto2 --stdout --capture-movie |\n ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0\n
\nYou should see output like this:
\nffmpeg version 4.4.1 Copyright (c) 2000-2021 the FFmpeg developers\n built with gcc 11.3.0 (GCC)\n configuration: --disable-static ...\n libavutil 56. 70.100 / 56. 70.100\n libavcodec 58.134.100 / 58.134.100\n libavformat 58. 76.100 / 58. 76.100\n libavdevice 58. 13.100 / 58. 13.100\n libavfilter 7.110.100 / 7.110.100\n libavresample 4. 0. 0 / 4. 0. 0\n libswscale 5. 9.100 / 5. 9.100\n libswresample 3. 9.100 / 3. 9.100\n libpostproc 55. 9.100 / 55. 9.100\nCapturing preview frames as movie to 'stdout'. Press Ctrl-C to abort.\n[mjpeg @ 0x1dd0380] Format mjpeg detected only with low score of 25, misdetection possible!\nInput #0, mjpeg, from 'pipe:':\n Duration: N/A, bitrate: N/A\n Stream #0:0: Video: mjpeg (Baseline), yuvj422p(pc, bt470bg/unknown/unknown), 768x512 ...\nStream mapping:\n Stream #0:0 -> #0:0 (mjpeg (native) -> rawvideo (native))\n[swscaler @ 0x1e27340] deprecated pixel format used, make sure you did set range correctly\nOutput #0, video4linux2,v4l2, to '/dev/video0':\n Metadata:\n encoder : Lavf58.76.100\n Stream #0:0: Video: rawvideo (I420 / 0x30323449) ...\n Metadata:\n encoder : Lavc58.134.100 rawvideo\nframe= 289 fps= 23 q=-0.0 size=N/A time=00:00:11.56 bitrate=N/A speed=0.907x\n
\nNow try this command:
\nmpv av://v4l2:/dev/video0 --profile=low-latency --untimed\n
\nYou should now be able to see the video feed from your webcam.
\n\nIt is a bit annoying to have to execute a command every time we want to use our webcam. Luckily there are ways for this command to be automatically run on startup.
\nI have decided to implement this webcam startup command as a systemd service.
\n# configuration.nix\n...\n systemd.services.webcam = {\n enable = true;\n script = ''\n ${pkgs.gphoto2}/bin/gphoto2 --stdout --capture-movie |\n ${pkgs.ffmpeg}/bin/ffmpeg -i - \\\n -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0\n '';\nwantedBy = [ "multi-user.target" ];\n };\n...\n
\nNow if you do a quick sudo nixos-rebuild switch
and reboot your computer you should find that the webcam service is running.
To check for any problems we can use systemctl status webcam
which will tell us the last time the service was run as well as log of its last output. Handy for debugging.
It's very tempting to stop here.\nHowever, considering the current global crises it may be pertinent to wonder whether it is necessary to have the webcam on all the time. It strikes me as sub-optimal for 3 obvious reasons:\n1. It wastes power.\n2. It is a potential privacy risk.\n3. It burns CPU decoding video that nobody is watching.
\nMy camera has a lens cap, so to be honest the second point does not really bother me. I can always put the lens cap on when I am not using the webcam to make sure certain government agencies aren't being entertained at my expense.\nHowever, leaving a big power hungry DSLR camera on 24/7 certainly is not doing anything for my electricity bill. That's not to mention the CPU overhead required for decoding the video... which upon measurement with htop
looks to be around 10%. Not insignificant especially when considering that I'm not exactly running a podcast here, I don't have that many video calls in a day.
The ideal scenario: the webcam service only runs while the camera is actually switched on and connected.
\nTo achieve this we need to make use of a custom udev rule.\nA udev rule is something that tells your computer to perform a certain task when it discovers that a device has become available. This could be an external hard drive or even non-USB devices.\nIn our case we need it to recognise the camera through its USB connection.
\nWe need to specify what command is to be run when the udev rule is triggered.\nFor that I am creating a derivation (nix package) that simply restarts the systemd service. You could also add logging to this for debugging purposes.
\n# start-webcam.nix\nwith import <nixpkgs> { };\n\nwriteShellScriptBin "start-webcam" ''\n systemctl restart webcam\n # debugging example\n # echo "hello" &> /home/tom/myfile.txt\n # If myfile.txt gets created then we know the udev rule has triggered properly\n''\n
\nNow to actually define the udev rule.\nFirst of all we need to find out the device and vendor id of the camera.\nThis is done using the lsusb
command. Since I don't see myself using this particularly often I will install it temporarily using nix-shell
.
This can be done like so: nix-shell -p usbutils
Then running lsusb
we get the following output:
[nix-shell:~/environment]$ lsusb\n...\nBus 002 Device 008: ID 04a9:317b Canon, Inc. Canon Digital Camera\n...\n
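As an aside, if you ever want to pull those two IDs out of lsusb output in a script, a small sed sketch over the sample line above works (this parsing step is my own addition, not part of the article's setup):

```shell
# Extract the vendor and product IDs from an lsusb line (sample line copied from above).
line="Bus 002 Device 008: ID 04a9:317b Canon, Inc. Canon Digital Camera"
vendor=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]*\):\([0-9a-f]*\).*/\1/p')
product=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]*\):\([0-9a-f]*\).*/\2/p')
echo "$vendor $product"   # → 04a9 317b
```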
\nWe can see from this output that the vendor ID is 04a9
and the product ID is 317b
.
We can now create the udev rule.
\n# configuration.nix\n...\nlet\n startWebcam = import ./start-webcam.nix;\n...\nservices.udev.extraRules = ''\n ACTION=="add", \\\n SUBSYSTEM=="usb", \\\n ATTR{idVendor}=="04a9", \\\n ATTR{idProduct}=="317b", \\\n RUN+="${startWebcam}/bin/start-webcam"\n'';\n...\n
\nWe just need to remove the wantedBy = [ "multi-user.target" ];
line in our systemd service. If we leave this in then the service will start automatically when we next reboot whether the camera is switched on or not.
One more sudo nixos-rebuild switch
and we are finished!
Thanks for reading this far.\nI hope this article has made you think twice before chucking some of your old tech.
", "url": "https://www.tomoliver.net/posts/using-an-slr-as-a-webcam-nixos", "title": "How to use your DSLR from 2008 as a webcam in 2022 (NixOS)", "summary": "Why throw away your old Camera? In this guide we'll show you how simple it is to reuse your old Camera as a webcam using the NixOS linux distribution.", "image": "https://www.tomoliver.net/img/webcam-small.jpg", "date_modified": "2023-11-20T15:40:50.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "nixos", "hardware", "linux", "lifestyle" ] }, { "id": "https://www.tomoliver.net/posts/vim-print-debug-text-object", "content_html": "First things first, dependencies:
\n\nAnd add this to your vim config:
\n\nfunction! LogInline(text)\n return 'console.log(' . a:text . ')'\nendfunction\n\nfunction! LogBelow(text)\n let endPos = g:TextTransformContext['endPos']\n let lineNumber = endPos[1]\n call append(lineNumber, split('console.log(' . a:text . ')', "\\n"))\nendfunction\n\ncall TextTransform#MakeMappings('', '<Leader>l', 'LogInline')\ncall TextTransform#MakeMappings('', '<Leader>L', 'LogBelow')\n\n
\nFirst let's try doing an inline log.
\n"hello world"\n
\n\nconsole.log("hello world")\n
\nNext we'll log an existing variable.
\nconst myvar = "hello world"\n
\n\nconst myvar = "hello world"\nconsole.log(myvar)\n
\nIt even works on more complex text objects like HTML
\n<div>\n <p>hello</p>\n</div>\n
\n\n<div>\n <p>hello</p>\n</div>\nconsole.log(<div>\n <p>hello</p>\n</div>)\n
\nHopefully this saves someone's fingers.
", "url": "https://www.tomoliver.net/posts/vim-print-debug-text-object", "title": "Easily print-debug any text object in Vim", "summary": "This is something that I do most days at least once so I made a shortcut for it", "image": "https://www.tomoliver.net/img/vim-selection.png", "date_modified": "2023-11-20T15:40:50.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "vim", "linux" ] }, { "id": "https://www.tomoliver.net/posts/you-dont-need-a-modern-computer", "content_html": "broke
?damaged
planet?hipster
?... then don't buy a new computer! You really don't need anything modern to have a great computing experience. I should know, pretty much all the software I have written since the beginning of 2022 has been done using hardware from 2012! Normies who just poke around in a web browser all day can get by on something even older, like my Mum who uses a laptop from 2009 as her main machine.\nOf course, this is only possible if the hardware in question has been liberated from the shackles of Microsoft/Apple, i.e running Linux. Unfortunately normies usually aren't aware of this so they trash their perfectly good hardware without realising its true potential. Think of the amount of e-waste that could have been prevented if Linux on desktop was the norm.... Where is Greta on this? She likes penguins right?
\n"But I'm a soyftware developer! I need 64 CPU cores and 128GB of RAM and 32GB of VRAM and raytracing cores and...."
\n\nDepending on what you're developing you might need modern hardware, but I'd like to posit that most developers are web developers, and web applications are usually deployed on some flaccid single-core EC2 instance with 1GB of RAM anyway. Trust me, if you pick your development environment correctly and stay away from bloated IDEs and other soydev hallmarks, you probably wont even be able to tell that you're typing on something pre-Trump.
\nYou really don't have to give anything up either, my GPU from 2013 can support 5 displays with a max resolution of 4K, my motherboard supports 32GB of RAM, my CPU is quad core with hyperthreading (I stress the hype). There really isn't much I would gain by shelling out the big bucks on a new workstation.
\nFirst of all, laptops suck. They are less powerful, more noisy, and more fragile than desktops, not to mention they are way more expensive! And that's without the physiotherapy you'll need to correct your neck posture after a decade of using them.
\n"But my (work/school/oppressive entity) requires that I own a laptop!"
\nThe cold hard reality of life strikes again. Laptops are less straightforward, generally you can't upgrade them. The tide might be turning on this however, see framework for example. If you find a framework laptop going for a reasonable price it might well be worth the investment. A safer bet would be to find a Thinkpad on Ebay or FB marketplace (eww), nice and cheaply. No other laptop brand has a better reputation when it comes to longevity and repair-ability than the humble Thinkpad. Unfortunately the modern Thinkpad is not quite as maintenance friendly as its ancestors were, but it is still considered the default choice for people who have principles/value their money. You can even find boomers on youtube who still use the Thinkpad X220 from 2011!
\nFor the CPU we have the i7-3770K, famous for being the best money can buy (in 2012) and being a great overclocker. I managed to get her up to 4.4GHz without having to fiddle with any scary voltage settings.
\nThe Radeon HD 7990 is more than enough for my usage, in fact, since it technically has 2 GPUs running on the same graphics card I could probably even pass one of them through to a VM if I wanted to.
\nI have 32GB of RAM installed but to be honest this is totally overkill for what I do, I rarely break 12GB while developing.
\nIt goes without saying but, GET A MODERN SSD! They are cheap and really do make your system start up a lot quicker.
\nI have found things like Electron apps to be pretty bad when it comes to hogging resources. When using old hardware I would say never use Electron based apps when an alternative exists. Things like Slack or Microsoft Teams will offer a web version which I have found to not only use fewer resources, but to be more reliable too.
\nLeave a webmention about your vintage hardware specs and we can waste time comparing gigahertz and terabytes.
", "url": "https://www.tomoliver.net/posts/you-dont-need-a-modern-computer", "title": "You don't need a modern computer!", "summary": "Using hardware a decade old (2012) as a software developer in 2023", "image": "https://www.tomoliver.net/img/zoomer-boomer-dad.jpg", "date_modified": "2023-11-20T15:40:50.000Z", "author": { "name": "Tom Oliver" }, "tags": [ "boomer", "lifestyle" ] } ] }