Presearch Nodes Are Coming

In our July update, we provided a first public teaser of one of the most exciting developments for the Presearch platform this year: the release of our decentralized Presearch Nodes. We promised we’d share more details on the Presearch Nodes in the lead-up to the release of our new white paper on July 31st, so without further ado: we’d like to introduce the world to Presearch Nodes!

The Presearch platform has always been planned in phases, with the first phase focused on providing a useful search interface, growing the Presearch community and user base, and ultimately implementing our unique keyword-staking-based advertising model. The end goal has always been to build out a decentralized search engine, and under the leadership of our new CTO, we are now excited to be well on the way toward this next phase of the platform.

Search engines are complicated, and building a decentralized search engine presents unique challenges that don’t apply to centralized search engines. For example: How do you prevent malicious actors from running nodes and either stealing user information or returning dangerous or unwanted content? How do you get fast (hundreds of milliseconds) response times across a massively distributed network of servers with drastic variability in their performance and reliability? How do you properly incentivize people to run nodes and fairly balance supply and demand for nodes and searches within the network? Some of these challenges we’ve already solved; others will require continued experimentation and innovation over time. We’ll be rolling out both the platform and its decentralization incrementally to ensure an optimal ongoing search experience for our amazing Presearch community.

Search Architecture

For our initial phase two platform release, there are six core layers of the Presearch search engine architecture:

Core Services: Advertising service, Account management, Search Rewards Tracking, Keyword Staking, Marketplace, and other critical Presearch services which are centrally managed by Presearch.

Web Server: Receives Search Requests from Presearch users and passes them on to the Gateway to get results. Returns a final rendered results page to the user.

Node Registry: Manages the identity of all Nodes, their stats, and any rewards payouts to the Node operators.

Node Gateway: Receives requests from the Web Server, removes personally identifiable information from the search request, and passes the search to one or more healthy Nodes.

Nodes: Decentralized search “workers” which connect to the Node Gateway and perform search operations.

Search Packages: Open source plugins which bring back intelligent answers and info boxes for specific queries.
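To make the request path through these layers concrete, here is a minimal sketch in Python. All class and function names are hypothetical stand-ins, not the actual Presearch implementation; it simply illustrates the Gateway scrubbing personally identifiable information and routing the query to a healthy Node.

```python
import re

def strip_pii(query: str) -> str:
    """Illustrative PII scrub: mask email addresses and long digit runs
    before a query leaves the Gateway. A real implementation would be
    far more thorough."""
    query = re.sub(r"\S+@\S+", "[email]", query)
    query = re.sub(r"\b\d{7,}\b", "[number]", query)
    return query

class Node:
    """Stand-in for a decentralized search worker."""
    def __init__(self, healthy: bool = True):
        self.healthy = healthy

    def run_search(self, query: str) -> dict:
        # A real Node would run the query against its index or a
        # federated source; here we just echo the cleaned query.
        return {"query": query, "results": []}

class Gateway:
    """Stand-in for the Node Gateway: scrubs the query, then routes it
    to the first healthy Node."""
    def __init__(self, nodes):
        self.nodes = nodes

    def search(self, raw_query: str) -> dict:
        clean = strip_pii(raw_query)
        for node in self.nodes:
            if node.healthy:
                return node.run_search(clean)
        raise RuntimeError("no healthy nodes available")
```

The key property the sketch captures is that user-identifying details never reach a Node: only the Gateway ever sees the raw query.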

Our first phase of decentralization will be the release of the Nodes software, which anyone can run and for which we’ll need to build out an expansive and highly decentralized network.

The Gateway is intended to be decentralized in a future phase after the core search network has proven to be robust and secure. Gateway operators will eventually be a smaller network of trusted providers that the Presearch community can choose from (they would have access to and need to protect some private user information). For now, Presearch will operate as a single, trusted Gateway provider while the network is being built out. More and more of the platform will be decentralized over time as the project continues to evolve. The availability of the Nodes in this first software release will already enable the vast majority of the work performed during each search to be decentralized.

Node Operations

Node operators will ultimately be compensated in PRE for the work they provide to the network based upon the amount of value they add to the network. There are at least seven different kinds of operations that a Search Node can perform (with more likely in the future):

  1. Registering (all nodes). Registering with a Node Gateway, which can then route queries to the node if it passes health and security validation.
  2. Validating (all nodes). The Node will coordinate with a Node Validator to ensure that each node is only running trusted Presearch software (to avoid security issues from potential bad actors).
  3. Coordinating. Processing and distributing queries sent by Node Gateways. This may require routing to multiple “Serving” Nodes and aggregating the results.
  4. Federating. Proxying other data sources and returning their data as part of search results.
  5. Serving. Hosting portions of the search index used to process queries.
  6. Crawling. Crawling websites to build out search indexes.
  7. Indexing. Writing federated or crawled data to search indexes to be served by Serving Nodes.
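The first two operations, Registering and Validating, can be pictured with a toy registry that only admits nodes whose software matches a known trusted release. The hash allow-list and names below are invented for illustration; the real validation protocol is not yet public.

```python
import hashlib

# Hypothetical allow-list: digests of signed, trusted node builds.
TRUSTED_RELEASE_HASHES = {
    hashlib.sha256(b"presearch-node-v1.0").hexdigest(),
}

class NodeRegistry:
    """Toy stand-in for the Node Registry: admits a node only after
    validating its software, then tracks its identity and stats."""
    def __init__(self):
        self.nodes = {}

    def register(self, node_id: str, software: bytes) -> bool:
        digest = hashlib.sha256(software).hexdigest()
        if digest not in TRUSTED_RELEASE_HASHES:
            return False  # untrusted software: refuse to route queries here
        self.nodes[node_id] = {"software": digest, "searches_served": 0}
        return True
```

The point of the sketch is the ordering: validation gates registration, so a Gateway never routes a query to a node that failed the software check.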

Different kinds of servers are better suited to running Nodes for different use cases. For example:

  • Coordinating will work best on nodes with low network latency, high network bandwidth, and high memory (disk space unimportant; CPU useful but can vary).
  • Federating will work best on nodes with low network latency (high bandwidth preferred; CPU, memory, and disk space unimportant).
  • Serving will work best on nodes with high memory, high uptime, and low network latency (CPU and disk space useful but can vary).
  • Crawling will work best on nodes with high disk space and high network bandwidth (CPU, memory, network latency, and uptime unimportant).
  • Indexing will work best on nodes with high disk space, high network bandwidth, low network latency, and reasonable CPU, memory, and uptime.
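One way to read the list above is as a per-operation weighting of resources. The sketch below scores a server’s capabilities against each role and picks the best fit; the numeric weights are my own rough translation of the guidance, not official Presearch figures.

```python
# Unofficial weights distilled from the guidance above: how much each
# resource matters (0 = unimportant) per operation. "latency" is a
# capability score, so 1.0 means very LOW network latency.
ROLE_WEIGHTS = {
    "coordinating": {"latency": 3, "bandwidth": 2, "memory": 2, "cpu": 1, "disk": 0, "uptime": 1},
    "federating":   {"latency": 3, "bandwidth": 1, "memory": 0, "cpu": 0, "disk": 0, "uptime": 1},
    "serving":      {"latency": 2, "bandwidth": 1, "memory": 3, "cpu": 1, "disk": 1, "uptime": 3},
    "crawling":     {"latency": 0, "bandwidth": 3, "memory": 0, "cpu": 0, "disk": 3, "uptime": 0},
    "indexing":     {"latency": 2, "bandwidth": 3, "memory": 1, "cpu": 1, "disk": 3, "uptime": 1},
}

def best_role(server: dict) -> str:
    """Return the operation this server is best suited for.
    `server` maps each resource to a 0..1 capability score; each role's
    score is normalized by its total weight so that undemanding roles
    (like Federating) can win on a cheap, low-latency box."""
    def fit(role: str) -> float:
        weights = ROLE_WEIGHTS[role]
        total = sum(weights.values())
        return sum(weights[r] * server.get(r, 0.0) for r in weights) / total
    return max(ROLE_WEIGHTS, key=fit)
```

For instance, a cheap VPS with good latency but little memory or disk scores highest as a Federating node, while a slow box with a large disk and fat pipe lands on Crawling.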

What’s in this First Release?

For the very first release of the Nodes, we will be focused on the following operations:

  • Registering
  • Validating
  • Federating

This means that we will initially be aiming for a network of as many nodes as possible, provided as cheaply as possible, so long as their network latency is low. When we later add in Serving and Coordinating (enabling the Presearch search index to be decentralized), this will introduce the need for different and more powerful servers to join the network. When our Crawling becomes fully decentralized and supported by Nodes, the capacity of the network to crawl the web and refresh data will then be tied directly to the size of the network and its growth.
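Since this first release centers on Federating, a minimal sketch may help show what proxying other data sources looks like. The sources here are plain callables invented for illustration; real Nodes would proxy actual external APIs.

```python
def federate(query: str, sources: dict) -> list:
    """Federating sketch: fan a query out to external data sources
    (plain callables standing in for real APIs) and merge whatever each
    returns into one result list tagged by source. A failing source is
    skipped so it cannot break the whole search."""
    results = []
    for name, fetch in sources.items():
        try:
            for item in fetch(query):
                results.append({"source": name, "item": item})
        except Exception:
            continue  # tolerate flaky or unreachable sources
    return results
```

Because federation is mostly waiting on other services, this is exactly the workload where many cheap, low-latency nodes beat a few powerful ones.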

When will Nodes be Ready?

We are excited to announce that we are only months away from the public release of the Presearch Nodes software! We have a number of other announcements coming between now and then, including an updated explanation of the evolving Tokenomics of the project and how rewards for running Nodes, performing searches, and other activity will evolve over time.

In the meantime, we are excited to announce a Beta Testing program for our Presearch Nodes. We are not ready to begin beta testing just yet, but we are now opening sign-ups for those interested in helping us test and improve our Nodes software on a testnet prior to the mainnet launch. If you are interested in being a beta tester, you can request to be included using the following signup link:

Beta Test Signup Form (Waitlist)

Want to Learn More?

The new Presearch Whitepaper is coming out on July 31st (just 8 days away!) and will contain more detail on the Presearch Roadmap, Nodes, and other exciting updates. In parallel, we are also working on significant enhancements to the Presearch user experience and rewards model that will enable Presearch to transition smoothly into this next phase of the project, building on the enhanced search experience made possible by the upcoming release of the Presearch Nodes.

For the over 1.5 million registered Presearch users and thousands of Advertisers currently staking nearly 65 Million PRE tokens on the Presearch Keyword Staking platform, there has never been a more exciting time to be a part of the Presearch community!

Don’t miss any updates and join our community:

Presearch is building a decentralized search engine framework — visit or for more info