Just my two cents on reducing the number of nodes:
I think reducing node rewards will tank the token price and upset many node runners. The only smart/fair way to do it is to get rid of grandfather status. This would not only reduce the number of nodes, it would also put upward pressure on the PRE price: a person with grandfathered nodes would have to decide whether to consolidate their stake into fewer nodes or purchase more PRE to bring each node's stake up to the new minimum.
It could be done in stages to lessen the blow and give node runners time to either cancel VPSs or obtain more stake. I dunno.
This is against my interests, as I have a lot of grandfathered nodes, but I don't see any other way to do it. It's my opinion that this project should never have had grandfathered stake minimums, because grandfathering makes it impossible to regulate the number of nodes the project needs. Adjustments in stake should apply to everyone and should be made in small increments with plenty of notice to node runners.
I don't think Presearch should make any changes to nodes until there is a definitive understanding of how many nodes per million searches is optimal. Yes, there is a cost to keeping things the way they are, but if funding has been shored up and there is no solvency risk, why not keep it that way for now? Gather the data before you potentially make rash decisions that affect the core business. The data, and how it was analyzed, should then be shared so the community can flag any blind spots not considered in the analysis; then, and only then, should we begin to discuss the best solutions to properly align the number of nodes with the number of searches. The risk of implementing node changes without the data to back them up is that if nodes get cut too low just as you are trying to bring on users, and those users have a bad experience, that would be a complete failure and would turn them away.
You need a quantitative understanding of the right number of nodes, with some level of redundancy. Too much redundancy can be costly, but what are the risks, and what level of redundancy is the right amount? Also, what mechanism or reward construct should be developed to quickly spin up constellations of nodes in catastrophic scenarios to keep the core business up and running?
The two methods mentioned by Tim were reducing rewards and eliminating grandfathered node status.
I think the team needs to better understand what the right number of nodes should be before making any decisions. However, I will address the two mentioned ideas and some of their drawbacks, then provide recommendations:
A flat reduction in rewards is not a good way to reduce nodes. Although it would likely shrink the node count, it would have the unintended consequence of higher-quality nodes being replaced by lower-quality ones, ultimately producing less reliable results and longer response times for searches. This would be catastrophic for the project. You can't afford for users to have a bad search experience, especially while you are trying to onboard new users. If they try it and don't get results, or it takes too long, they will leave and may never come back.
Eliminating grandfathered nodes may be a better alternative than a flat reduction in rewards, but it also cuts into the backbone of the community-derived network, and it is a slap in the face of early supporters. And although it doesn't raise the quality concern as much, it would still drastically cut the number of nodes very quickly, by an unknown amount, which is a concern.
*Full disclosure: most of my nodes are grandfathered, so both of these options would affect me personally, as they would likely affect most node runners.
However, if it is determined that fewer nodes are indeed required and the project is overpaying for needless redundancy, then we should consider ways to address this that are beneficial to all.
Are there options, other than the two already mentioned by Tim, to cut back on the number of nodes that don't use a flat reduction of rewards and don't eliminate grandfathered status, or that at least take early supporters in the community into consideration?
I think there are ways to reward early supporters and keep the nodes scaling with network requirements. I just don't agree with the currently offered solutions.
If we are rewarding people by the level and quality of the contribution to the network, as the vision paper lays out, then I think we will always be doing the right thing for the network.
Recommendations: Scaled rewards
Instead of a flat reduction of rewards, there could be a scaling of rewards. This scaling would take multiple factors into consideration: most obviously node quality (per the current node scoring), but also factors like diversity of providers, diversity of node jurisdictions, and time of service, just to name a few things that come to mind.
What this would do is take these factors into account and, based on the total number of nodes required per million searches, prioritize them and pay out on a scaled PRE rewards system. Nodes with the highest uptimes and highest quality, plus quotas for different VPS providers based on risk and quotas for nodes in different jurisdictions, would make up the core decentralized node pool that receives the highest PRE rewards. Nodes outside the defined pool (sized per million searches) would have scaled-back rewards, with some on the fringe earning more than newer, lower-quality nodes that might get next to nothing.

This rolling construct is the ultimate model to ensure constant competition for the fully rewarded node slots. Those being paid less may choose to keep their nodes running, even at a slight or major loss, in order to preserve their status and time with the network; if a node runner in the full-reward pool does a poor job managing their nodes, they might fall out of the primary pool, and those fringe nodes could immediately be pulled into fully rewarded status. To compete for or break into the higher-rewarded primary pool, you would have to bring on better-quality nodes in all the above categories and/or other key metrics I may not be considering.

This would disincentivize more of the same, ensure all nodes are of the highest quality, and discourage the practice of current node runners spinning up new nodes every time they accumulate 4-8k PRE in rewards. Current and potential node runners would begin to track the node network and the fully rewarded pool, trying to break in, and would spin up new nodes when the network demands it. You might get to have your cake and eat it too, because current node runners may still run their nodes at a loss in order to be first in when the pool scales and expands with new users and searches; they may even spin up nodes at a loss in anticipation of growth. A construct like this seems fairer and more rewarding than flat cuts or eliminating grandfathered nodes. It simply rewards value in proportion to the value provided to the network, and it rewards older nodes (early supporters) unless a better location, provider, jurisdiction, or quality can out-compete them.
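To make the construct concrete, here is a minimal sketch of how such a scaled payout could be computed. The factor names, weights, and pool sizing are all my own illustrative assumptions; the real inputs would come from Presearch's existing node scoring and whatever quotas the team defines:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    quality: float     # 0..1, e.g. from the current node scoring (assumed normalized)
    tenure_days: int   # time of service with the network
    provider: str      # VPS provider, or "self-hosted"
    jurisdiction: str  # country/region the node runs in

def composite_score(node, provider_counts, juris_counts):
    """Blend quality, tenure, and diversity into a single ranking score.
    The 0.5/0.2/0.15/0.15 weights are illustrative guesses, not Presearch's formula."""
    tenure = min(node.tenure_days / 365.0, 1.0)           # tenure bonus capped at 1 year
    provider_div = 1.0 / provider_counts[node.provider]   # rarer provider -> higher score
    juris_div = 1.0 / juris_counts[node.jurisdiction]     # rarer jurisdiction -> higher score
    return 0.5 * node.quality + 0.2 * tenure + 0.15 * provider_div + 0.15 * juris_div

def allocate_rewards(nodes, pool_size, full_reward, fringe_reward):
    """Top `pool_size` nodes (sized per million searches) earn the full reward;
    everyone else earns the scaled-back fringe reward but keeps a place in line."""
    providers = Counter(n.provider for n in nodes)
    jurisdictions = Counter(n.jurisdiction for n in nodes)
    ranked = sorted(nodes, key=lambda n: composite_score(n, providers, jurisdictions),
                    reverse=True)
    return {n.node_id: (full_reward if i < pool_size else fringe_reward)
            for i, n in enumerate(ranked)}
```

Under this sketch, a long-running node on a rare provider in an underrepresented jurisdiction naturally outranks a newer node of equal quality, which is the loyalty-plus-decentralization behavior described above.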
The above idea is primarily designed for search nodes, but other considerations may be required for new node types. I think some of the nodes that might be rejected or eliminated by potential short-term changes in the structure could actually be better suited for some of the other node roles. For this reason, before we chop a bunch of nodes without the data to know what we need, we should probably also consider some of these nodes for new roles on the horizon, even if just for beta testing.
Option to address grandfathered nodes
A potential option to reward early supporters, if the decision is made to cut grandfathered nodes and merge all nodes into one reward structure, might be to give them access to an NFT prior to the change. The NFT would allow the holder to earn a much higher reward rate on their nodes; this might only last until the difference from the grandfathered status is recouped.
What would this look like in execution?
Let's say you have 1k, 2k, and 4k nodes, but the decision is made to increase the node minimum to 8k. Presearch would allow the owners of those ~73k grandfathered nodes to mint new one-time NFTs representing their supporter status. Those users could then attach the NFTs to any node staked at the new 8k minimum, allowing it to earn higher rewards, maybe double the normal rate, until the node has earned the difference between the grandfathered amount and the current minimum stake. A 1k-node NFT would earn double rewards until 7k PRE are earned, then the NFT would be expended and the node would start earning regular rewards (a 2k NFT would earn double rewards until 6k are earned; a 4k NFT until 4k are earned). This would prioritize and honor early supporters while allowing nodes and costs to be reduced and merged into a single format in line with the scaling of the network.
Ex: If the node stake minimum changes to 8k, and ~73k nodes are grandfathered at 1k, 2k, and 4k:
-> ~70k nodes actually mint one-time NFTs (this assumes a little breakage).
-> A massive consolidation of nodes happens: VPSs are shut down and nodes are re-staked to 8k, likely reducing the total to ~18k-25k nodes.
-> Each user with minted NFTs could then attach them to one of their 8k nodes, and each node with an NFT would start earning double the current reward rate, as described above.
-> Not everyone would immediately use their NFTs for the higher earn rate, so the total cost to the network would be reduced immediately; eventually, once all the NFTs are expended, the network is fully merged into a single format for all node rewards.
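As a sanity check on the arithmetic above, here is a tiny sketch of the NFT bonus accounting; the doubling rate and the 8k minimum are the assumptions from this example, not anything Presearch has committed to:

```python
def nft_bonus_cap(grandfathered_stake, new_min_stake=8_000):
    """The NFT stays active until the node earns the gap between its old
    grandfathered stake and the new minimum (e.g. a 1k node earns until 7k)."""
    return new_min_stake - grandfathered_stake

def reward_with_nft(base_reward, earned_under_nft, cap):
    """Pay double the base rate while the NFT is active, then revert to normal.
    Returns (payout, updated earned_under_nft)."""
    if earned_under_nft >= cap:
        return base_reward, earned_under_nft        # NFT expended: regular rewards
    payout = 2 * base_reward                        # double rate while NFT is active
    return payout, earned_under_nft + payout

cap = nft_bonus_cap(1_000)                          # 7_000 for a 1k grandfathered node
payout, earned = reward_with_nft(10.0, 0.0, cap)    # 20.0 PRE paid this period
```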
If my first scaled-rewards recommendation is implemented, this grandfathered situation may not need to be addressed. However, if it is determined, for simplicity's sake, to merge to a single minimum stake and reward model, this option could easily be implemented in conjunction with the first recommendation and keep all the loyal community node runners happy.
Why not just reduce the rewards for nodes under 4k, as long as they are still making a certain dollar amount to cover the average cost of a node (using the previous-10-minutes calculation, as now)? In many cases that cost is NOT $1/month (unless every node is on Racknerd and in NY): EU nodes cost more, Asia nodes cost even more, and other remote locations cost more still. Where would the global distribution be if it were only done from people's home PCs? VPSs are required at this point; there certainly are not enough nodes running on home PCs, and those are unreliable too.
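One way to picture that cost-coverage floor, with made-up regional hosting costs and a spot PRE price as inputs (nothing here reflects actual Presearch parameters):

```python
# Illustrative monthly hosting costs in USD by region; real costs vary widely.
REGION_COST_USD = {"NA": 1.0, "EU": 3.0, "ASIA": 5.0, "OTHER": 7.0}

def min_monthly_pre(region, pre_price_usd):
    """Minimum PRE/month a node must earn to cover its hosting bill."""
    return REGION_COST_USD[region] / pre_price_usd

def floored_reward(proposed_monthly_pre, region, pre_price_usd):
    """Cut rewards for sub-4k nodes, but never below cost coverage for the region."""
    return max(proposed_monthly_pre, min_monthly_pre(region, pre_price_usd))

print(floored_reward(50.0, "ASIA", 0.05))  # floor is 100 PRE/month at $0.05/PRE
```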
It's already sad enough that node runners are limited to any dollar amount. The original white paper stated that 20% of all revenue would go towards rewarding node runners.
Hopefully any moves that inflict more pain on node runners are temporary. They provide the literal infrastructure for the entire project.
Haven't read the walls of text here, but I think the best way to reduce the number of nodes is:
a) Take a look at the uptime / performance of the nodes and kick the bad performers from earning
b) Introduce other token sinks for PRE so there are more ways to spend it
Many interesting comments here and I just want to add my 2 cents to it:
I think it's a valid argument that you can't just erase all the grandfathered nodes and set a new minimum collateral; also, bringing down node rewards may have the same negative effects. I want to emphasize in this context that in the current market, liquidity seeks rates higher than 30% APR, and such rates can be found in many places with elaborate projects. As PRE intends to go to Cosmos, my best advice for comparing APRs would be to look at the OSMO exchange now: anything below 30% APR is experiencing huge withdrawals of capital (unless it's at the stage of ETH or BTC). Whether such APRs are healthy or not would be a different discussion, but this is the market, and the market rules out any other consideration, especially in these (hopefully) last 18 months of a bear market.
I would recommend consolidating the node count by changing the algorithm that defines rewards and adding a mechanism like that of node projects such as Streamr/DATA, where you can stake anything from 1 to 20,000 tokens and your rewards depend on that. I would particularly suggest making it a floating or tier system that emphasizes the amount of staked PRE in determining daily rewards. I tested running nodes with 200,000 PRE staked and with other sums, and the results are pretty much what I expected: the way the algorithm calculates rewards right now, it's almost mandatory to set up a new node whenever you accumulate 4,000 PRE instead of, say, staking a higher sum on each existing node. A tier system could, for example, say: every 10,000 additional PRE per node enables 100 more potential searches per day (I know these numbers aren't right for the actual situation; it's just an easy numerical example).
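A toy version of that tier idea, using the poster's own placeholder numbers (10,000 extra PRE per 100 extra daily searches; the base figures are likewise invented):

```python
def daily_search_capacity(staked_pre, base_stake=4_000, base_capacity=100,
                          tier_size=10_000, capacity_per_tier=100):
    """Each full `tier_size` of PRE staked beyond the base stake unlocks
    `capacity_per_tier` more potential searches per day. All constants are
    illustrative, mirroring the example in the post."""
    if staked_pre < base_stake:
        return 0
    tiers = (staked_pre - base_stake) // tier_size
    return base_capacity + tiers * capacity_per_tier

print(daily_search_capacity(4_000))    # 100
print(daily_search_capacity(24_000))   # 300
```

The design point is that adding stake to an existing node would then pay off directly, removing the current incentive to spin up a new node every 4,000 PRE.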
Also - and this is particularly important on Cosmos - any staked and locked collateral for a PRE node on an upcoming Cosmos chain can automatically be forwarded to a chain validator. Validators are able to 1) hold the stakes in the classic way, or 2) use superfluid staking: besides classic staking, the tokens can also be put into a liquidity contract serving exchanges like OSMO, which brings an extra (quite big) portion of revenue, since the stake can be used to provide liquidity (e.g. OSMO/PRE, USDT/PRE, anything advertisers will need in large amounts to pay in tokens). I think it's particularly important to discuss all options here with Cosmos and Osmosis team members, because there is huge potential in the cumulative sum of locked node collateral to supply some of the needed APR through liquidity funding, not only bare node staking and search rewards. Features like these are the reason projects move away from Ethereum L2s to Cosmos, so why not use them?
Going that route, I think it will be unavoidable to slowly lose the old grandfather system, but OG PRE users could perfectly well be compensated with special NFTs offering benefits that offset any losses (revenue-booster NFTs, advertising discounts, etc.).
The problem with the high number of nodes must be addressed soon, as it might drain a lot of capital from the project, but I think it should be done in steps, with constant regard for new data coming in. Also, as mentioned above, this whole discussion must take all the optional tools of the Cosmos system into account. Right now, I think there's a lot still unknown to the dev team that could potentially be a big part of the overall solution. Again: a close conversation with the technical team of Cosmos and the financial people at Osmosis might help a lot here.
I'm cautious of any change to the node framework. A few things not mentioned yet:
Be careful of changes that could alienate node operators while the project is still young. Multiple node roles are still envisioned; be careful of driving away early supporters.
I see talk about kicking out bad-performing nodes. One of my lower-performing nodes is my home-based Raspberry Pi. Yes, it gets a worse score than my VPS-based ones, but when/if a server farm goes down, like earlier this year, it's the backbone of the network. I think it got 2,000 hits in a day when one of the NA gateways had a problem. Be careful about getting rid of lower-performing nodes (likely home-based) that we want as redundancy.
If the project wants to lower the number of nodes, go by launch date/order rather than anything else. It's the only fair way to do it. I have a node that's launch order ~200; it should have priority over one made yesterday, because I'm an early supporter. Maybe lower-quality nodes will be included, but why alienate early supporters? We helped make this network.
Let's say the total number of nodes is reduced to 40k and something like Flux fails with 10k nodes.
Why would you want a ton of bad-performing nodes to take over? I think the remaining "good" nodes should provide the redundancy required.
Thanks for the good discussion and ideas. I just wanted to comment on this tier system idea.
The issue I see is that with staking tiers, unless node quality and other factors are taken into consideration, this system could push lower-quality nodes into performing more, less reliable searches. Node quality and performance should always be the strongest consideration, because that is what provides the service for all searches.
Agreed.
I think my scaled rewards solution addresses all of your concerns.
Scroll down to recommendations
When I said quotas for different providers, locations, and jurisdictions (state and national), the point is that this would provide a healthy mix of self-hosted Pi or home setups, so that all nodes are not with a handful of providers. And if you have already been running nodes for a long time, your nodes, if equal or better in quality, will always be considered over a newer node of equal or lower quality.
In this model, the only way to break into the fully rewarded pool is if the new node is better quality, provides some other great decentralization benefit, or the required pool grows due to adoption. It's a competition model that rewards loyalty and values things that are important and good for the network.
You're right that you wouldn't want a bunch of bad-performing nodes to take over, but it's all about some level of decentralization and redundancy, so that if catastrophes happen to the network, there is a good mix of nodes to step in and do the job.
Also, not all self-hosted nodes are bad quality; my self-hosted nodes are performing equal to or better than some of my VPS providers.
The node performance right now is okay enough to provide relatively acceptable responsiveness of the service. I agree that other things also need to feed into node rewards, but I want to stress that it is crucial to keep up a good investment climate; a healthy technical ecosystem is the second, but not unimportant, thing. Look at projects like FLUX, where node requirements keep tightening while the clients/projects using the service aren't even respectable ones bringing in loads of revenue; even in that example, PRE is the biggest client. There, node runners go home with a huge loss and big promises, while the only ideas are another cloud storage service and some mystical proof-of-useful-work that's been around for years with no real practical execution. PRE has a strong use case and a real chance of being the first sustainable, working business model in this space.

But concerning equality: you can't have it all. Another argument for the "capitalist approach": if I spend $50,000 (example) on collateral, I don't go cheap on Racknerd; I rent a private cloud at, e.g., open metal, or use dedicated servers from renowned providers. It might not be a popular opinion, but PRE is entering a phase where it can't handle the future with OG home-hosters on an old iMac running Linux or an RPi. There needs to be a healthy portion of bigger node runners who feel comfortable betting on PRE and supplying robust infrastructure. And no, it doesn't particularly need to be totally decentralized; all these things sound nice, but they don't lead to success in an acceptable time frame. In this phase, I think the only important thing is to get front and center and raise the number of participating users, i.e. more searches, more publicity. Once that is done, you will see much bigger investors entering the node and hosting space as well, and then it might be time for geolocation and technical restrictions and discussions.
Decentralization of nodes is a unique feature of Presearch, and it is also very important for marketing. If the network is left with two big node operators (exaggerating), what's the difference between Presearch and centralized search engines? Decentralization is usually difficult to sustain in the long term, but decentralization of nodes is relatively easy to maintain.
I agree with you, which is why I made the recommendations I did on the future of nodes. They take those quality nodes into consideration, but they also ensure we protect the project from potential failure. If the majority of nodes end up with a few reputable providers (Google Cloud, AWS, Oracle, DigitalOcean, etc.) and, for some reason, governments or those entities decide node running is no longer legal, or they start to shadow-ban those providers from accessing the APIs that provide the search results, Presearch could fail. You want decentralization and some level of backup redundancy (just not too much; the right amount is yet to be determined, per Tim's own words). There is a place for many of these nodes, whether as search nodes, in other node roles, or as the right level of redundancy. Competition always creates better value and products, so a reward model that takes that into consideration should be very effective at providing everything the network requires to thrive and survive. People running bare-metal server nodes will make the cut if their results are better than current nodes, but people pouring on more of the same cheap, low-quality nodes will quickly be replaced to a large degree.
I agree with you when it comes to redundancy and the overall danger that Presearch could be locked out of the Google etc. APIs, but I think the new node role(s) should be ready this year at the earliest, and then the indexing can start. I personally know too little about the technical setup of the PRE Docker apps, but maybe all the excess nodes not needed for search requests in a particular hour of the day could dynamically get a different role, perhaps as part of a large Docker swarm that indexes as one big virtual machine? I don't know... I am sure there is room for something ingenious if you have tens of thousands of small VPS nodes at hand.
Thankfully, Tim and the team came to the right way of looking at this issue. The premise of the question is not quite right: Presearch does not want to reduce nodes; quite the contrary, more nodes are always a good thing. What it must reduce, however, are expenses.
I have run the calculations for all stake sizes, with and without VPS costs, etc. The bottom line is that 4k nodes will get you 30-50% APY, assuming you compound at least weekly, with the exact figure depending on your server costs (i.e., 50% means free hosting). As stake sizes approach very high levels, we reach a limit of 20% APY. Yes, grandfathered nodes are even higher than 50% (but low in actual dollar terms).
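For reference, the weekly-compounding math behind APY figures like these (the weekly yield plugged in below is a placeholder, not a measured node return):

```python
def apy_from_weekly_yield(weekly_yield):
    """Effective APY when a constant weekly yield is restaked 52 times a year."""
    return (1 + weekly_yield) ** 52 - 1

# e.g. a ~0.59% weekly yield compounds to roughly 36% APY
print(f"{apy_from_weekly_yield(0.0059):.1%}")  # ~35.8%
```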
The point is that all of these APYs are too high relative to the risk-free rate, other crypto yields, other investment classes, etc.
One could argue that grandfathered nodes should be eliminated and the stake size raised, but that still wouldn't fix the problem. Add to this the issue that PRE is actually hard to purchase for US residents (Uniswap incurs a tax event on the ETH disposal; KuCoin is not an option for US residents). Given this, raising the minimum stake is not a good solution.
Instead, I believe the right path was chosen: simply reduce the existing "pool", the input capital fed into the existing formulaic method of PRE dispersal to node runners.
I think we will find that, for most, the decreased returns on nodes are still better than most other investment options, regardless of 1k/2k/4k configurations, and will allow the project to survive long enough for organic sources of revenue to offset or exceed the burn rate.