In this post, I would like to document my process of learning to use the
react-draggable component. The basic usage is quite simple: you only need to wrap your element in a
Draggable component. To learn how, you can refer to the original repo or read a write-up by Victoria Lo here on Medium.
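To make the basic usage concrete, here is a minimal sketch of what wrapping an element looks like. The component and class names are my own placeholders, not from the repo:

```javascript
// Minimal react-draggable usage: wrap the element you want to move.
import React from "react";
import Draggable from "react-draggable";

function Card() {
  return (
    <Draggable>
      <div className="card">Drag me</div>
    </Draggable>
  );
}

export default Card;
```

That is all it takes for plain dragging; the interesting part comes when you combine this with CSS transitions.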
What I wanted to do was a little different: I wanted not only to drag elements but also to attach CSS transition animations to them, and that's what I want to cover in this post.
I will use
create-react-app for simplicity.
$ npx create-react-app…
I love making websites with Gatsby and recently started using Mapbox, so I thought I could combine the two. The idea is to create a travel blog (for traveling after the virus goes away, of course) where the map supports the text content by dynamically zooming into the area of interest and placing markers for the places mentioned in a blog post. When navigating between posts, the map will smoothly move to highlight the new areas instead of refreshing the whole map each time. This will give readers better spatial context.
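The behavior described above can be sketched with Mapbox GL JS's `flyTo` and `Marker` APIs. This is a rough outline under my own assumptions about the shape of a `post` object (`center`, `zoom`, `places` are hypothetical fields), not the final implementation:

```javascript
// Sketch: when a new post comes into view, fly the existing map to
// its area and drop markers, instead of recreating the map.
// Assumes `map` is an initialized mapboxgl.Map instance.
function showPost(map, post) {
  // Smoothly animate to the post's area of interest.
  map.flyTo({ center: post.center, zoom: post.zoom, essential: true });

  // Add a marker for each place mentioned in the post.
  post.places.forEach(({ lngLat }) => {
    new mapboxgl.Marker().setLngLat(lngLat).addTo(map);
  });
}
```

Because the same `map` instance persists across navigations, the transition between posts is animated rather than a hard reload.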
Before I dive in, I just want to let you know that I am still learning React, and this post is the documentation of my learning process. I will do my best to highlight some mistakes and key findings I made along the way, but if you have any suggestions on better ways of doing things, please leave them in the comments so that I and others can learn from them. It was a really great exercise for me to get familiar with the Gatsby and Mapbox GL JS APIs, as well as with using React hooks in functional components. …
Here is a breakdown of what we will look at. First, we will write down some text, but instead of storing it in a single
string, we will break it into individual words so that our program can remember each word's original position. We will store each word along with its position in JS objects. …
p5.js has a function called
textToPoints(), which is somewhat hidden in the reference. This function returns an array of points that lie on the outline of the text. It does the resampling for you (and you can control the sampling resolution), so it is great for quick typographic experiments. In this post, I will show you the basics of how to use it, along with a few use cases and challenges, such as dealing with multiple letters and counter shapes (i.e. holes in letters: 0, a, p, B, etc.)
The basic usage is very simple. First, you need to load an external OTF file. It only works with externally loaded font files, and because
loadFont() is asynchronous, you will need to place the call in
preload() so that the
font object is loaded before it is used in the setup and draw functions. …
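Putting those pieces together, a minimal sketch looks something like this. The font path is a placeholder for whatever OTF file you use, and the `sampleFactor` value is just one I picked:

```javascript
// Minimal textToPoints() sketch.
let font;
let points = [];

function preload() {
  // loadFont() is asynchronous, so it must run in preload().
  font = loadFont("assets/my-font.otf"); // placeholder path
}

function setup() {
  createCanvas(600, 200);
  // textToPoints(text, x, y, fontSize, options) returns resampled
  // outline points; sampleFactor controls the sampling resolution.
  points = font.textToPoints("hello", 40, 140, 120, { sampleFactor: 0.2 });
}

function draw() {
  background(255);
  noStroke();
  fill(0);
  for (const p of points) {
    circle(p.x, p.y, 4);
  }
}
```

From here you can animate, scatter, or connect the points however you like.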
The p5.js library itself does not have a way to export a video out of a sketch, but because everything you do is simply drawing on an HTML5 Canvas, any library that can export what's on the Canvas should work. I recently stumbled upon a really good and simple library that exports H.264 mp4 video straight out of the browser (no server needed) and made a simple example to make it work with a p5.js sketch.
If you would rather check out the original library, here is the link, and I will also link to a few other resources at the end of this post. …
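For comparison, the general "capture what's on the Canvas" pattern can also be done with the browser-native MediaRecorder API. This is not the library from this post, and it records WebM rather than H.264 mp4, but it shows the idea in a few lines:

```javascript
// Record a canvas for a fixed duration using the browser-native
// MediaRecorder API, then trigger a download of the resulting WebM.
function recordCanvas(canvas, durationMs = 3000) {
  const stream = canvas.captureStream(30); // capture at 30 fps
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const url = URL.createObjectURL(new Blob(chunks, { type: "video/webm" }));
    const a = document.createElement("a");
    a.href = url;
    a.download = "sketch.webm";
    a.click();
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}

// e.g. recordCanvas(document.querySelector("canvas"));
```

The dedicated library is the better choice when you specifically need mp4 output, which is why I went that route.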
Let’s pick up from where we left off in the last post. We cleaned up the JSON annotations and exported them as a CSV file. Now, let’s load the CSV file back in, filter the fonts with conditions we define, and generate PNG images that we can use to train convolutional neural networks.
Again, I am treating this post as a Jupyter Notebook, so if you want to follow along, open a Jupyter Notebook in your own environment. You will also need to download the Google Fonts GitHub repo and generate the CSV annotations. …
Note: This is the first post of a two-part series in which I go over downloading Google Fonts and cleaning up the JSON annotations before we can filter fonts and generate PNG images. The link to the second part is at the bottom.
The first time I tried to learn machine learning a few years ago, I felt miserable. I had a hard time installing dependencies, let alone understanding any basic machine learning algorithms. Fast forward to 2020, with more time at home, I decided to give ML another try. This time, it was much easier to find friendly learning resources and tools. …