I've been nerd sniped about a million times by those around me, but Jake Fee (fee.cool) has got to be up there on the leaderboard.
We've spent many nights conversing about technology that uses new forms/UIs/metaphors to more interestingly/helpfully/playfully get ideas across. A favorite of mine from the early days was this companion planting tool.
Companion planting is the act of planting a garden by grouping plants that benefit one another (a nitrogen fixer near nitrogen-needing plants) and giving space to plants that may not like co-existing.
Most of the time, companion planting information is stored in books, in tables, or in prose, making it difficult to parse when you're actually trying to figure out what plays well with what. Node-and-edge graphs, on the other hand, let you quickly and obviously find connections, building new guilds of plants with no headache. This is information that's existed for generations, and I can't find anyone else who's thought to organize it so thoughtfully.
Roughly the same time this tool was being jammed on, we were discussing a new sort of search engine.
The central tenets were:
1. Google getting to decide what was popular or not is lame.
2. Creating your own search index is cool, but tedious, and doesn't allow for much serendipity, as you know all of the sites that are in the index.
3. Human curation is imperative to an interesting internet (shout out https://internetphonebook.net/).
4. An index should be as big or as small as folks want.
Out of this, the mold web/fungo zone/mycil network formed: a search engine based on the metaphor of fungi. First, you'd create a spores file (see https://fee.cool/spores-mini.json). This file contains a few things:
1. All of the sites you'd like to index (websites you find interesting, blogs you read, recipe directories, esoteric forums; the smaller and more intentional the sites, the better). These are your 'spores'.
2. A list of all of the other spore lists you'd like to have access to (a friend who knows all the good recipes, a blogger who has great taste, your mom who can always find a proper sewing pattern). This is your 'hyphae' list.
3. A name, being the id of your spore list.
4. Where that spore list is stored on the internet.
This is all stored in JSON like below, and currently needs to be personally hosted (but I'm looking to fix that!).
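Here's a sketch of what such a file might look like. The field names and layout here are my own illustration of the four pieces above, not the actual schema of spores-mini.json, and the URLs are placeholders:

```json
{
  "name": "jt",
  "url": "https://example.com/spores.json",
  "spores": [
    "https://fee.cool",
    "https://internetphonebook.net"
  ],
  "hyphae": [
    "https://example.com/friend-spores.json"
  ]
}
```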
Now, instead of having a search engine that is dictated to you, you've got one for your sites, and your friends' sites, and their friends' sites, etc. etc., branching out like an infinite tree. Nothing showed up for your query? Go one level deeper; maybe a friend has the perfect resource stored. Another bonus is that you now know exactly who appreciates a site, allowing you to ask your friend what they like about the recipe site they have saved.
How to actually get this working was a bit of a journey. First we were thinking of an embedded iframe of a Google search page that auto-injected 'site:xxx' into whatever you searched, but this was very limited. Then we were thinking of a custom SearX instance with some added code, but our instance got rate limited by Google very quickly.
The idea sat dormant for a while until I joined the Recurse Center, where I met someone who was trying to create a personal search engine using a killer project called Lieu.
Lieu was built to be a search engine for the https://webring.xxiivv.com/ webring that's organized by the 100 Rabbits folks (probably the coolest people on the internet btw). It was close to what we wanted: a search program that let you put in a list of sites, index those sites, and spin up a site to search. But it was missing the tree-like capabilities we were looking for (it is also written in Go, a lang I had no history in).
I got to work, tweaking Lieu's precrawler to jump between spore files and to add each found site's depth to the crawl list. Work that would have been easy in Python or JS had me stretching my brain in this new language, but that was a central idea of the Recurse Center; I was in good company.
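The precrawler tweak boils down to a breadth-first walk over spore files that records how far from your own file each site was found. Here's a minimal sketch in Go of that idea; the `Spores` struct, the `fetch` callback, and every name here are my invention for illustration, not Lieu's actual code or the real file schema:

```go
package main

import "fmt"

// Spores mirrors the spore-file layout described above. The struct and
// field names are my own sketch, not the real schema.
type Spores struct {
	Name   string   // id of this spore list
	URL    string   // where the list lives on the internet
	Spores []string // sites to index
	Hyphae []string // other spore files to branch into
}

// crawlTarget pairs a site with how many hops from the root file it was found.
type crawlTarget struct {
	URL   string
	Depth int
}

// collect walks spore files breadth-first, recording every site along with
// its depth. fetch is a callback so the sketch stays self-contained; a real
// version would HTTP-GET and JSON-decode each hyphae URL.
func collect(root string, fetch func(string) Spores, maxDepth int) []crawlTarget {
	var out []crawlTarget
	seen := map[string]bool{}
	queue := []crawlTarget{{root, 0}}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		if seen[cur.URL] || cur.Depth > maxDepth {
			continue
		}
		seen[cur.URL] = true
		file := fetch(cur.URL)
		for _, site := range file.Spores {
			out = append(out, crawlTarget{site, cur.Depth})
		}
		for _, h := range file.Hyphae {
			queue = append(queue, crawlTarget{h, cur.Depth + 1})
		}
	}
	return out
}

// A toy two-file network standing in for files fetched over the web.
var toyFiles = map[string]Spores{
	"me":     {Name: "me", Spores: []string{"a.example"}, Hyphae: []string{"friend"}},
	"friend": {Name: "friend", Spores: []string{"b.example"}},
}

func toyFetch(url string) Spores { return toyFiles[url] }

func main() {
	for _, t := range collect("me", toyFetch, 2) {
		fmt.Println(t.URL, t.Depth) // a.example 0, then b.example 1
	}
}
```

Keeping the depth around is what makes the "go one level deeper" search possible: your own spores sit at depth 0, a friend's at depth 1, and so on out the tree.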
Sometime during this period, I had a semi-working demo up with a single spore file entry point. It was cool, but lacked many of the features that would let it grow...
1. A way for people to quickly build their own spore files
2. Automatic crawl updates when a site's index got too old
3. A coat of paint to differentiate itself from Lieu.
The project sat dormant for a while again, but this year, some four years after this whole journey started, I redownloaded a fresh copy of Lieu, spent some time re-exploring Go, and got a version up on Fly that I'm happy with. You can play around with it here! Yeah, it's still missing all three of those features, but this fresh version has much less spaghetti, and I've got a solid trajectory towards answering all of them, along with other bits and bobs that would make the experience work well.
This is all to say: I'd love to know what sites you'd like to index and what elements of a micro distributed search engine you feel are missing. This project exists to let those annoyed with the rot of the internet return to enjoying it; there's so much out there to explore! (Reach out on Bluesky (@yet.earth) or email (jt@yet.earth).)
Fair winds,
JT