we provide through the search bar on KamusiGOLD are only a small part of the project. However, due to our infinitesimal budget, we cannot currently afford to open our data to the public in the ways we would like.
Take, for example, the word "infinitesimal" from the previous sentence. We offer you the opportunity to search for that word in English, and to find its equivalents in all the languages for which we have corresponding data. You can enter any word
in the search bar, and we'll happily share much of what we know about it. What we cannot open up is a permanent address for that term: the starting point that would let you link to it, dig into any rich information we have about it, improve and expand it, or use it as data in other applications.
The problem is that when terms such as "infinitesimal" are all given fixed URLs, they transform from data points into Big Data. Each entry becomes not just a web page, but as many web pages as the word has senses; while "light" is one page on Merriam-Webster.com, concept-specificity makes it 48 different pages on Kamusi. Every web page requires all of the ornamentation that makes it beautiful. Each entry has extended information that we either need to present on its web page or provide links to within the code. Every term links to translations, ancestors, or other entries (e.g. infinity), with code that you don't see but your device does. The code should also contain a lengthy RDF description so the data can be pinpointed by other projects. While each word takes infinitesimal resources when we serve it from our database, its associated web pages can be 🐘 elephantine. With millions of words, offering a web page for each demands far more resources than offering a simple query to our database.
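To give a sense of what such a machine-readable description involves, here is a sketch of an RDF description for one sense of an entry, serialized as JSON-LD. The URIs, the sense numbering, and the use of the SKOS vocabulary are illustrative assumptions, not Kamusi's actual scheme.

```python
import json

# Hypothetical RDF description of one sense of "infinitesimal",
# serialized as JSON-LD. All URIs below are placeholders.
entry = {
    "@context": {
        "skos": "http://www.w3.org/2004/02/skos/core#",
    },
    # A hypothetical permanent address: one URI per sense, not per word.
    "@id": "https://kamusi.example/eng/infinitesimal#sense-1",
    "@type": "skos:Concept",
    "skos:prefLabel": {"@value": "infinitesimal", "@language": "en"},
    "skos:definition": {"@value": "extremely small", "@language": "en"},
    # A link another project (or a crawler) could follow:
    "skos:related": {"@id": "https://kamusi.example/eng/infinity#sense-1"},
}

jsonld = json.dumps(entry, indent=2)
```

Multiply a description like this by every sense of every word, in every language, and the "lengthy" part becomes clear.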
Moreover, each static page is a target for the search engines that constantly crawl the web to index what's out there: not only Google and Bing, but also Baidu, Sogou, Yandex, and several others you've never heard of. This is what overloaded our system and kept us offline for a year before we turned to today's query-only light-access solution. Most websites have a few dozen pages, or maybe a few hundred. A dictionary with a healthy 100,000 terms would have 100,000 web pages, which is already a lot of 🐘🐘 for the crawlers. As a dictionary of many languages that is attempting to provide you with every word ever known to be spoken, our data contains millions of 🐘🐘🐘🐘, with links from one to the next that are irresistible for the crawlers to follow.

It takes a fraction of a second to send each 🐘 back to you, and when a dozen search engines hit at the same time, our server has to give each of them the same attention. Unlike human users, though, the robots do not pause. As soon as we give them one 🐘, they ask for the next. And the next. And the next, following the chains through millions of links. At one second per entry, 10 million entries would take a single search engine nearly four months to crawl, by which time it's time to start all over again. With each 🐘 exposed, we spent all our time telling the robots what we had, with no power left over to serve the data to actual people.
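The back-of-the-envelope math behind that four-month figure works out like this:

```python
# Rough crawl-time estimate, using the figures from the text above:
# one second per entry, one crawler, 10 million entries.
entries = 10_000_000
seconds_per_entry = 1

total_seconds = entries * seconds_per_entry
days = total_seconds / 86_400   # seconds in a day
months = days / 30.44           # average days per month
```

That comes to roughly 116 days, or about 3.8 months, for a single well-behaved crawler; a dozen crawlers working the same links multiply the load accordingly.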
We do have ways to solve the problem, but they involve money
to pay our developers. Basically, we need to create a very limited index page for each entry that the robots can read quickly, and require authenticated login
to get at the real pages with the rich data. Preferably, we will be able to afford a multi-server solution, with robots crawling on one machine while confirmed humans enjoy full data access on hardware that is not inundated by automatons. When we do this, we will be able to open up many more services that take advantage of our precise concept-based data, but are contingent on fixed URLs.
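As a rough illustration of that two-tier plan, the sketch below shows the split between a crawler-friendly stub and the full view for confirmed humans. The function name, page formats, and authentication check are all hypothetical, purely to make the idea concrete.

```python
def render_entry(term: str, senses: list[str], authenticated: bool) -> str:
    """Serve either a tiny index stub or the full rich entry.

    Hypothetical sketch: crawlers (unauthenticated requests) get a page
    that is cheap to generate but still gives them a stable URL to index;
    logged-in humans get the expensive, data-rich version.
    """
    if not authenticated:
        # Very limited index page: robots can read it quickly.
        return f"<title>{term} - Kamusi</title><p>{len(senses)} senses. Log in for full data.</p>"
    # Full page: one section per sense (with fixed URLs, each sense
    # would instead get its own address).
    return "".join(f"<section>{term}: {gloss}</section>" for gloss in senses)

stub = render_entry("light", ["not heavy", "visible radiation"], authenticated=False)
full = render_entry("light", ["not heavy", "visible radiation"], authenticated=True)
```

The multi-server version of this idea simply routes the two branches to different machines, so the robots' endless chain of requests never competes with human traffic.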
Giving you everything we have won't take a miracle, but does demand adequate sponsorship
to implement known solutions. Meanwhile, our query-only search is not exposed to search engines, so we are happy to offer it to you.