Why can't there be a Database tool that has the best of everything?

  • Thread starter: SlurrerOfSpeech
  • Tags: Database
SlurrerOfSpeech
Having worked with both SQL Server and Cosmos DB (more the former), the following are the biggest pain points I've experienced with each.

SQL Server:
  • Complicated to get working. There are 100+ different ways to authenticate, for example. All this complicated user-role mapping, Managed Identity, Multiple Active Result Sets (MARS), blah blah.
  • Syntax is bulky and complicated, even for simple things. Inserting test data means I have to examine the schema, triggers, constraints, etc. of multiple tables, then insert into each one in a specific order, sometimes temporarily dropping constraints, etc.
  • Lots of annoying things about working with it locally. For example, if you have both SQL Server 2016 and SQL Server 2012 installed, you have to figure out which instance localhost points to, and stuff like that.
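The insert-ordering complaint above can be seen in miniature with Python's built-in sqlite3 module standing in for SQL Server; the customer/orders tables here are invented for illustration:

```python
import sqlite3

# In-memory database standing in for a real server.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite has FK enforcement off by default

conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id)
)""")

# Parent rows must exist before child rows -- this is the "specific order"
# pain point: insert customers first, then the orders that reference them.
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'alice')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")

# Inserting a child that references a missing parent fails the constraint.
try:
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 999)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The same ordering rules apply in SQL Server; tools can sort the inserts topologically for you, but the dependency graph is inherent to the schema.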
Cosmos DB (a schema-less, document database):
  • "Schema-less" is not as practical as it sounds. The business entities you're representing with documents will change over time, so now you have to come up with a strategy to handle missing/extra properties on your documents.
  • Document sizes are capped at 2MB. This means I can't represent one-to-many foo-bars in documents unless I'm sure there aren't too many bars with each foo.
  • The "DB" itself is not infinitely scalable. You can scale up Collections finitely (up to 10k requests/second) and scale them out infinitely, but you have to come up with a strategy and write code to create new collections on the fly and balance the load. I don't understand how this is any different from scaling SQL Server by creating new databases.
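One common strategy for the missing/extra-properties problem in the first bullet is to stamp each document with a schema version and upgrade lazily on read. A minimal sketch in Python, with all field names invented for illustration:

```python
# v2 of our hypothetical document added a 'tags' property; old v1
# documents in the store don't have it, so we fill in the default.
def upgrade_v1_to_v2(doc):
    doc.setdefault("tags", [])
    doc["_version"] = 2
    return doc

# Map each old version to the function that upgrades it one step.
UPGRADES = {1: upgrade_v1_to_v2}

def load(doc):
    # Documents written before versioning are treated as version 1.
    version = doc.get("_version", 1)
    while version in UPGRADES:
        doc = UPGRADES[version](doc)
        version = doc["_version"]
    return doc

old = {"name": "foo"}  # a v1 document, written before 'tags' existed
print(load(old))       # {'name': 'foo', 'tags': [], '_version': 2}
```

The chain of one-step upgrade functions means code only ever handles the latest shape, at the cost of keeping every migration around forever.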
A DB I would like to have:
  • Simple plug-and-play
  • I can define an entity in JSON, it can be arbitrarily large, and I can insert it into this DB
  • I can query entities of a specific type (it doesn't need to be fast)
  • I can keep throwing entities at it and it will do any scaling automatically
Why doesn't something like this exist? CAP Theorem limitation?
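Most of the wishlist above (everything except automatic scaling) can be approximated with an embedded store. A rough sketch using Python's sqlite3, with hypothetical put/query helpers:

```python
import json
import sqlite3

# Arbitrary JSON entities tagged with a type, queryable by type.
# sqlite does not auto-scale, so the last wishlist bullet is out of scope.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entity (type TEXT NOT NULL, body TEXT NOT NULL)")

def put(entity_type, obj):
    db.execute("INSERT INTO entity VALUES (?, ?)",
               (entity_type, json.dumps(obj)))

def query(entity_type):
    rows = db.execute("SELECT body FROM entity WHERE type = ?", (entity_type,))
    return [json.loads(body) for (body,) in rows]

put("foo", {"bars": list(range(1000))})  # size limited only by disk, not 2MB
put("baz", {"x": 1})
print(len(query("foo")))  # 1
```

This sketch suggests the hard part of the wishlist isn't storage or querying; it's the "keep throwing entities at it and it scales automatically" bullet, which is exactly where the CAP trade-offs live.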
 
Well, you could try H2 Database? It's designed as a more application-specific database and supports SQL pretty well.

h2database.com

It can run standalone with a SQL command session and has a web interface so you can work with your tables, or it can run embedded within an application, with other apps able to connect to it.

Some folks have used it in microservice apps with docker.
 
You can use tools to make managing SQL much easier. I recommend Liquibase.

Also, for test data, wouldn’t it be much easier to create a csv and have a script create the inserts of your test data?
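A minimal sketch of that suggestion in Python, with an inlined CSV and a hypothetical csv_to_inserts helper (the quoting here is naive, so real test scripts should use parameterized inserts instead of string-building):

```python
import csv
import io

# Test data inlined for the example; normally this would be a .csv file.
csv_text = """id,name
1,alice
2,bob
"""

def csv_to_inserts(table, text):
    # Column names come from the CSV header row.
    reader = csv.DictReader(io.StringIO(text))
    stmts = []
    for row in reader:
        cols = ", ".join(row)
        vals = ", ".join("'%s'" % v for v in row.values())
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return stmts

for stmt in csv_to_inserts("customer", csv_text):
    print(stmt)  # e.g. INSERT INTO customer (id, name) VALUES ('1', 'alice');
```

One file per table, loaded in dependency order, sidesteps most of the constraint-ordering pain described earlier in the thread.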
 
It's true that setting up a local test environment for SQL Server can be challenging, but once I get it set up and can do the programming, I've been extremely happy with the T-SQL language and programming environment, especially with the enhancements in 2012 and later versions.

Yes, dealing with triggers and constraints is a little cumbersome when inserting data, but those are what keep your data uncorrupted when there are multiple users who may not understand why they are there. Trust me, you want them. You really do.
 
SlurrerOfSpeech said:
Syntax is bulky and complicated, even for simple things. Inserting test data means I have to examine the schema, triggers, constraints, etc. of multiple tables, then insert into each one in a specific order, sometimes temporarily dropping constraints, etc.
A comment:
This sounds like more of a schema setup problem. I do not know what you did, but I do not encounter problems like those you describe very often, and I am definitely not a DBA. You probably accidentally violated some basic design principles. See if you can find an existing, functioning schema for some simple requirements and play with it. Then read through the objects, constraints, etc., to see how it was defined.
PS: there is no "best" DB, just the one you can use well and that meets your needs. This is why there are so many different DBs out there.
 
SlurrerOfSpeech said:
Inserting test data means I have to examine the schema, triggers, constraints, etc. of multiple tables, then insert into each one in a specific order, sometimes temporarily dropping constraints, etc.
I don't have experience with either SQL Server or Cosmos DB, but that sounds more like a misuse of triggers and constraints.

Why would you need to examine triggers & constraints? Their purpose is to automate data manipulation so that you DON'T have to examine multiple columns/tables when inserting, updating, or deleting data in one table.

And why would you need to temporarily drop a constraint when inserting data? If you do, why is there a constraint in the first place? You are either misusing them or you aren't inserting your data in the proper order.
SlurrerOfSpeech said:
I can define an entity in JSON and it can be arbitrarily large and I can insert it to this DB
With MySQL/MariaDB, you can make routines that will manipulate JSON objects.
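I can't run MySQL routines here, but sqlite's JSON1 functions are a close analogue of MySQL's JSON_EXTRACT family. A sketch via Python's sqlite3 (assuming the JSON1 functions are compiled in, as they are in most modern builds):

```python
import sqlite3

# JSON documents stored as text in an ordinary relational column.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE doc (body TEXT)")
db.execute("INSERT INTO doc VALUES (?)",
           ('{"name": "foo", "bars": [1, 2, 3]}',))

# Pull individual properties out of the JSON with path expressions,
# much like MySQL's JSON_EXTRACT(body, '$.name').
name = db.execute(
    "SELECT json_extract(body, '$.name') FROM doc").fetchone()[0]
count = db.execute(
    "SELECT json_array_length(body, '$.bars') FROM doc").fetchone()[0]
print(name, count)  # foo 3
```

So a relational engine with JSON functions gets you queryable JSON entities without a document database, though without Cosmos-style automatic indexing of every property.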
 