#database

11 posts · 11 participants · 0 posts today

#Biketooter: When planning a #bike #route with #FOSS and #OSM based #routing engines, such as #Brouter / #Bikerouter, the result analysis frequently shows some #surface percentage as "unknown". I guess that's because the #information is missing in the #OpenstreetMap #database. However, when I upload the same track as #GPX into a #commercial #biking application, such as #Komoot or #Cyclers, the unknown has disappeared, while they are also based on OSM. How come?

Database for an internal chat with millions of chat messages and over 130,000 files with PII and PHI from the United States exposed publicly for over a month.

Contacted the company responsible for setting up the chat and one of their clients, a Mental Health Clinic, but no one replied to me; they just silently fixed the issue.

jltee.substack.com/p/internal-

The Hub of Stupi.. *misconfigs · Internal chat database for multiple US companies exposed publicly · By JayeLTee

We've switched to my tool to measure latency and availability of our databases at work. This brand-new tool, not yet open source, leverages Prometheus histograms to measure those metrics at fine granularity.

I had to use DNS names instead of IP addresses to reach the endpoints, because it was easier to use (label = name).

But the farther away the infrastructure was, the more degraded the measurements were. Everything ran locally, so what the hell? The level of degradation was not random: it matched the light-speed distance to France.

It was DNS. It's always DNS.

The private zones were not deployed on the local resolvers, so from Australia we had a 200 ms round trip to reach resolvers in France before actually connecting to the database.

The DNS cache didn't help at all.

#qos #latency #dns
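Not the author's tool, but a minimal sketch of the approach described above: a Prometheus histogram labelled by the endpoint's DNS name, timing the whole probe. Since the connection call resolves the name itself, DNS resolution time lands inside the measurement — which is exactly how a 200 ms resolver round trip would show up. The metric name, endpoint names, and the use of psycopg2 are assumptions.

```python
# Minimal sketch: per-endpoint database probe latency as a Prometheus histogram.
import time
import psycopg2  # assumed client; any database driver works the same way
from prometheus_client import Histogram, start_http_server

DB_PROBE_SECONDS = Histogram(
    "db_probe_seconds",                       # hypothetical metric name
    "Time to connect to and ping a database endpoint",
    ["endpoint"],                             # DNS name used as the label
)

def probe_db(endpoint: str) -> None:
    # The timer wraps connect + query, so name resolution is part of the sample.
    with DB_PROBE_SECONDS.labels(endpoint=endpoint).time():
        conn = psycopg2.connect(host=endpoint, dbname="postgres", connect_timeout=5)
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
        conn.close()

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for Prometheus to scrape
    while True:
        for ep in ("db-eu.example.internal", "db-au.example.internal"):  # hypothetical hosts
            probe_db(ep)
        time.sleep(15)
```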

Mississippi libraries ordered to delete academic research in response to state laws
Lawmaker says the removal of scholarly material from library databases would provoke backlash in a state where minorities have fought for equal access to education.
mississippitoday.org/2025/04/0
#Libraries #Research #Mississippi #Academia #AcademicResearch #ResearchCollections #Database #ScholarlyMaterials #Race #Gender

Some time ago I mentioned here, in a half-joking way, the self-fixing software I work with. I said Patroni #Postgres has the best regeneration ability I've ever seen. And currently "the best ability" includes:

> After a network migration the servers changed IP addresses. That broke the etcd config, so I had to delete it completely and initialize the etcd cluster again, which also forced cleaning and renewing the Patroni config, because it depends heavily on etcd. Even while the configuration temporarily didn't exist, the connection to the WAL archives (technically another, separate server) wasn't interrupted (I'm not even sure whether real data transfer could happen at that time). That was apparently enough to start a new #database cluster from the last timeline. I don't know WHAT made the servers immediately pull that data on a fresh start. At migration time there was no real production data, so I didn't even deliberately try to restore anything.

> Not long after (and now with real production data), some script tests that caused a lot of database changes in a relatively short time, beyond the old server's capacity, killed the master server. Patroni switched over as intended, and I could work on increasing the server's capacity (had to do it live, not very convenient). The first server finally decided the data corruption was too big and, to fix it, automatically deleted the whole /var/lib/postgresql/* directory and started recreating everything from scratch, using data from the new master server (and was doing it at a speed of at least 2 GB/s, because why not? :blobcatjoy:).

> During the above-mentioned process an impatient tester hit again with their unoptimised scripts, finally killing the whole cluster. Swearing silently, I scaled up the remaining servers, as that was the only thing I really could do. The PostgreSQL API was mostly unresponsive; it had only limited info about the last state before the final failure. It wasn't possible to force any change or affect it in any way.
The first server decided to delete the whole directory again and recreate it (at least this time I saw the exact moment in the logs); at the same time the second server did a rewind to the state of the third server (why??). All these things happened automatically, without my help. I wouldn't even have known what to do :blobcatsweat:

And this is only the beginning of using it in production. Now I'm waiting for stubborn users to run some more unintended durability tests... Maybe I'll find out it's even more invincible :blobcatamused:

#admin #sysadmin #it
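As a side note on watching this kind of automatic recovery: Patroni exposes a REST API (port 8008 by default), and polling its /cluster endpoint shows each member's role, state and timeline while reinitialization or a rewind is in progress. A minimal sketch, with hypothetical host names:

```python
# Minimal sketch (not from the post): inspect a Patroni cluster via its REST API.
import json
import urllib.request

PATRONI_NODES = ["pg-node1:8008", "pg-node2:8008", "pg-node3:8008"]  # hypothetical hosts

def cluster_view(node: str) -> dict:
    # GET /cluster returns every member's name, role, state and timeline as JSON.
    with urllib.request.urlopen(f"http://{node}/cluster", timeout=3) as resp:
        return json.load(resp)

def print_cluster_state() -> None:
    for node in PATRONI_NODES:
        try:
            info = cluster_view(node)
        except OSError:
            print(f"{node}: unreachable")
            continue
        for member in info.get("members", []):
            print(member["name"], member["role"], member["state"], member.get("timeline"))
        return  # any reachable member can report on the whole cluster
    print("no Patroni member reachable")

if __name__ == "__main__":
    print_cluster_state()
```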

🐱💻🎩 Oh wow, someone decided to turn #Postgres into a "task orchestrator"! Because, you know, managing #database queries wasn't already exciting enough. 🙄 Just imagine all those #developers rushing to #GitHub, eager to trade their robust tools for this groundbreaking #innovation. 🚀
github.com/hatchet-dev/hatchet #Task #Orchestrator #HackerNews #ngated

GitHub · hatchet-dev/hatchet: Run Background Tasks at Scale
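For context on what "Postgres as a task orchestrator" usually boils down to, here is a minimal sketch of the classic SELECT ... FOR UPDATE SKIP LOCKED dequeue pattern that Postgres-backed queues build on. This is not Hatchet's actual code; the tasks table, its columns, and the connection string are assumptions.

```python
# Minimal sketch of a Postgres-backed task queue using FOR UPDATE SKIP LOCKED.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS tasks (
    id      bigserial PRIMARY KEY,
    payload text      NOT NULL,
    status  text      NOT NULL DEFAULT 'queued'
);
"""

def claim_one_task(conn):
    # SKIP LOCKED lets many workers poll concurrently without blocking each other:
    # each worker claims a different queued row, or nothing if all rows are taken.
    with conn.cursor() as cur:
        cur.execute(
            """
            UPDATE tasks
               SET status = 'running'
             WHERE id = (
                   SELECT id FROM tasks
                    WHERE status = 'queued'
                    ORDER BY id
                    LIMIT 1
                    FOR UPDATE SKIP LOCKED)
         RETURNING id, payload
            """
        )
        return cur.fetchone()  # None when the queue is empty

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=queue_demo")  # hypothetical DSN
    with conn:  # commits on success
        with conn.cursor() as cur:
            cur.execute(DDL)
            cur.execute("INSERT INTO tasks (payload) VALUES ('send-email')")
    with conn:
        print("claimed:", claim_one_task(conn))
```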

Check out Kothari Nishchay's session "PostgreSQL pgroll – Zero-Downtime, Reversible Schema Migrations" at #PostgreSQLDayBangkok to explore pgroll, a powerful tool for seamless database migrations.

🔗 Click here youtu.be/uB7egck68Js?si=WD2PXV to watch on the FOSSASIA YouTube channel

#PostgreSQL #pgroll #Database