I want to inform you that due to a recent cyber attack, the HiveSQL infrastructure was taken down yesterday around noon (UTC).
The (not so) funny thing is that it happened while I was giving a presentation about Hive and the services I have built on top of it. I was embarrassed to see some hiccups, but I couldn't decently drop everything on the spot to investigate. I did cut the presentation short though... you know... experience and foreboding.
Since then I have worked tirelessly to fix the problem and restore full functionality, which required restoring the system from a backup.
I apologize for any inconvenience this may have caused. We are taking steps to prevent similar incidents in the future. Thank you for your patience and understanding as we work to get everything back up and running.
UPDATE:
As of this afternoon around 5pm UTC, HiveSQL is fully recovered and working normally. All applications that rely on this service are therefore also operational.
Thank you to everyone who supported me during these stressful times.
Attack Description
Since HiveSQL is a community-funded service, I find it important to be as transparent as possible and to share my experience with others, hoping this can help them not to fall victim to a similar issue.
A wave of attacks is currently targeting ESXi servers. These attacks have been detected globally, and especially in Europe. According to experts from the ecosystem as well as the authorities, they might be related to the Nevada ransomware and use the CVE-2021-21974 vulnerability as a compromise vector.
Despite the protection systems in place, the attackers succeeded in accessing one of the infrastructure's servers, probably via another internal host under their control, although investigations are still underway to confirm these assumptions.
The attackers took the opportunity to encrypt the disks of the virtual machines, making them unusable unless I paid a ransom of around $50,000. Bad luck for them: the HiveSQL infrastructure is regularly backed up to a remote and secure site.
However, before restoring the backup, I worked with our service provider to identify how the attack was carried out, so that it could not be executed again once the infrastructure was restored.
Once the identified security holes were closed, we were able to start the restoration process. Given the volume of data handled by HiveSQL (over 4TB), this is still ongoing at the time of writing.
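For reference, VMware's published workaround for CVE-2021-21974 is to disable the OpenSLP service on affected ESXi hosts. The commands below are an illustrative runbook of that workaround, not the exact steps taken on the HiveSQL infrastructure; adapt them to your own environment before use:

```shell
# Run in the ESXi host shell as root.
/etc/init.d/slpd stop                               # stop the SLP daemon
esxcli network firewall ruleset set -r CIMSLP -e 0  # block SLP in the host firewall
chkconfig slpd off                                  # keep it disabled across reboots
chkconfig --list slpd                               # verify: should report "off"
```

Disabling the service removes the exposed attack surface until the host can be patched to a fixed build.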
The attack was what I consider a surface attack, meaning that it targeted the infrastructure itself but did not result in the exfiltration of any data. An analysis is underway to validate this presumption. But even if it turns out to be correct, we are auditing sensitive information and will reset anything that could have been exposed.
Lesson learned
As the manager of HiveSQL, I have closely monitored its performance and security for the past 6 years. Despite constant attack and intrusion attempts, the infrastructure has remained strong and reliable. That is, until now.
I can say that I was caught off guard by this incident. However, I am determined to learn from this experience and take the necessary steps to prevent similar incidents from happening in the future. I can assure you that I'm taking every measure to ensure the infrastructure's security and stability.
1. Backups, Backups, ... and Backups
The backup system put in place saved the day!
I have always placed a strong emphasis on the importance of regular backups. Backups provide a crucial safety net for any infrastructure and ensure that valuable data and information are protected in the event of an unexpected outage.
This incident highlights the significance of regular backups. By having a reliable backup system in place, we were able to quickly recover from the attack and minimize its impact. This serves as a powerful reminder of the importance of having a robust backup strategy in place, as well as the need for constant monitoring and updating of security systems.
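A backup is only useful if you can verify it restored intact. As an illustration of the principle (this is not HiveSQL's actual tooling; the file names and data below are invented), a minimal checksum-based verification sketch could look like this:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of a blob of backup data."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(files: dict[str, bytes]) -> dict[str, str]:
    # Record a checksum for every file in the backup set at backup time.
    return {name: sha256(data) for name, data in files.items()}

def verify_backup(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    # Return the names of files that are missing or whose content changed.
    bad = []
    for name, digest in manifest.items():
        if name not in files or sha256(files[name]) != digest:
            bad.append(name)
    return bad

# Example: a pristine restore verifies clean, a tampered one is flagged.
original = {"db.bak": b"hive data", "config.bak": b"settings"}
manifest = build_manifest(original)
tampered = dict(original, **{"db.bak": b"ENCRYPTED"})
print(verify_backup(original, manifest))  # []
print(verify_backup(tampered, manifest))  # ['db.bak']
```

Running a check like this against the remote copy on a schedule catches silent corruption before you actually need the backup.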
2. Insider Threats - The Possibility of an Attack From Within
Our protection against outside attacks has proven to be highly effective, providing a secure barrier for the past 6 years.
However, it is important to consider the possibility that the attack originated from within the data center. We realized that the servers we rent there were not completely isolated in their own network segment, leaving a potential attack surface exposed to other machines physically connected to the same network.
It is crucial to consider the possibility of an insider threat and investigations are also carried out in this direction. As a precaution, I am conducting a thorough analysis and implementing additional security measures to ensure the protection of our systems and data.
3. Technology Watch
Keeping software up to date is a critical component of maintaining the security and stability of any infrastructure. Technology watch and patching software are essential tools for staying ahead of potential threats and fixing vulnerabilities as soon as they are discovered.
Despite my diligent efforts in this area, I recently missed an important patch that left one of our servers vulnerable to attack. This serves as a reminder that no security measure is foolproof, and even the most proactive approach to security can sometimes fall short. I am taking steps to ensure that all critical patches are promptly installed in the future to minimize the risk of similar incidents happening again.
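One lightweight safeguard is an automated check that every host's build number meets a known-patched baseline. The sketch below makes assumptions: the baseline build numbers are placeholders, not verified values (real ones come from the vendor's security advisory), and the host list is invented for illustration:

```python
# Placeholder baseline: minimum build number considered patched for each
# release line. These values are invented examples; take real ones from
# the vendor security advisory for the CVE you are tracking.
BASELINE = {"6.5": 17477841, "6.7": 17499825, "7.0": 17325551}

def is_patched(release: str, build: int, baseline: dict[str, int]) -> bool:
    """Return True if the host's build meets the patched baseline."""
    required = baseline.get(release)
    if required is None:
        return False  # unknown release line: flag for manual review
    return build >= required

# Usage: flag hosts that fall behind the baseline.
hosts = [("esx1", "7.0", 17325551), ("esx2", "6.7", 16000000)]
stale = [name for name, rel, build in hosts if not is_patched(rel, build, BASELINE)]
print(stale)  # ['esx2']
```

Wiring such a check into monitoring turns "I missed a patch" into an alert instead of an incident.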
Conclusion
I understand that this downtime may have caused some inconvenience to users and apps relying on HiveSQL, and I would like to extend my sincerest apologies. I can assure you that I'm constantly working to improve and provide the best service possible.
Thank you for your continued support and understanding. I look forward to the next 6 years of providing a secure and reliable HiveSQL infrastructure.
I will let you know as soon as this recovery process is complete. All communication and support related to this issue will be handled in the HiveSQL Discord Channel.