[-] imperator@sh.itjust.works 6 points 1 year ago

2 decades, this game is about 20 years old

[-] imperator@sh.itjust.works 6 points 1 year ago

Interesting. I find that Lemmy seems to have picked up a lot of the bad. Too many memes, very shallow discussions. Maybe I'm just not in the right communities.

[-] imperator@sh.itjust.works 7 points 1 year ago

I use samba and digikam

[-] imperator@sh.itjust.works 8 points 1 year ago

Yeah, I built a new PC at the beginning of the pandemic and went Linux. I don't even run Windows and play all my games on there.

[-] imperator@sh.itjust.works 8 points 1 year ago

My assumption was this would be a cash grab. I'll still watch it but likely won't see it in theaters

14

Wondering if anyone here has some advice or a good place to learn about dealing with databases with Python. I know SQL fairly well for pulling data and simple updates, but I'm running into potential performance issues with the way I've been doing it. Here are two examples.

  1. Dealing with Pandas dataframes. I'm doing some reconciliation between a couple of different data sources. I do not have a primary key to work with; instead I have some very specific matching criteria that determine a match, and the matching process is all built within Python. Is there a good way to do the database commits with updates/inserts en masse rather than line by line? I've looked into upserts (or inserts with a clause to update existing data), but pretty much every example I've seen relies on a primary key, which I don't have, since the data has four columns I'm matching on.

  2. Dealing with JSON files that have multiple layers of related data. My database is built in such a way that I have a table for header information, one for line-level detail, then a third level with specific references assigned to the line-level detail. As with a lot of transactional databases, there can be multiple references per line and multiple lines per header. I'm currently looping through the JSON file: starting with the header information to create the primary key, then going to the line-level detail to create a primary key for the line (which also includes the foreign key for the header), and doing the same for the reference data. Before inserting, I do a lookup to see if the data already exists, then update if it does or insert a new record if it doesn't. This works fine, but it's slow, taking several seconds for maybe 100 inserts in total. While that's not a big deal since it's a low volume of sales, I'd rather learn best practice and do this properly with commits/transactions, rather than inserting and updating each record individually, with the ability to roll back should an error occur.
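For example 1, one common pattern: if you declare a UNIQUE constraint over the columns you match on (a composite "business key"), the database can do the upsert for you, and `executemany` sends all the rows in one call instead of committing line by line. A minimal sketch using SQLite (the table and column names here are made up for illustration; Postgres supports the same `ON CONFLICT ... DO UPDATE` syntax):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
# Hypothetical reconciliation table; the four match columns get a
# UNIQUE constraint so they act as a composite key for upserts.
conn.execute("""
    CREATE TABLE recon (
        vendor TEXT, doc_no TEXT, line_no INTEGER, amount REAL,
        status TEXT,
        UNIQUE (vendor, doc_no, line_no, amount)
    )
""")
conn.execute("INSERT INTO recon VALUES ('ACME', 'D1', 1, 9.99, 'old')")
conn.commit()

# Output of the Python-side matching process, as a dataframe.
df = pd.DataFrame({
    "vendor":  ["ACME", "ACME"],
    "doc_no":  ["D1", "D2"],
    "line_no": [1, 1],
    "amount":  [9.99, 5.00],
    "status":  ["matched", "new"],
})

# One round trip for the whole frame: rows that collide on the four
# match columns are updated, the rest are inserted.
rows = list(df.itertuples(index=False, name=None))
conn.executemany("""
    INSERT INTO recon (vendor, doc_no, line_no, amount, status)
    VALUES (?, ?, ?, ?, ?)
    ON CONFLICT (vendor, doc_no, line_no, amount)
    DO UPDATE SET status = excluded.status
""", rows)
conn.commit()
```

After this runs, the existing `('ACME', 'D1', 1, 9.99)` row has its status updated to `matched` and `D2` is inserted as a new row, without any per-row existence checks in Python.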

[-] imperator@sh.itjust.works 5 points 1 year ago

This is a Lemmy instance 😊. This isn't kbin.

[-] imperator@sh.itjust.works 9 points 1 year ago

Awesome. Thank you for running this!

[-] imperator@sh.itjust.works 5 points 1 year ago

My work is all Windows too. I work in finance, so Linux unfortunately won't work there. But at home I'm all Linux!

42
Linux Gamers? (sh.itjust.works)

Just curious if anyone else here primarily games on Linux? Steam Deck counts!

[-] imperator@sh.itjust.works 16 points 1 year ago

Yes, but the sign-up process can be annoying. I tried signing up at a bunch of different instances and it never went through. In addition, finding communities is a little painful. But all in all, I'm a big fan of it.

[-] imperator@sh.itjust.works 12 points 1 year ago

This is likely the case. I'm sure we'll see an IPO this year or early next after they've pumped up numbers.

[-] imperator@sh.itjust.works 11 points 1 year ago

Let's hope it stays that way and grows!

[-] imperator@sh.itjust.works 4 points 1 year ago

Same. Bit bummed out. But I can't see Lemmy taking the reins, unfortunately. It took me 5+ instances to find a place where I could finally sign up.


imperator

joined 1 year ago