Using Jesus as a reference is unfortunate, yeah, but any other world calendar has to pick a nearly equally arbitrary event to anchor where its year count starts.
Take your pick: https://en.m.wikipedia.org/wiki/Template:Year_in_various_calendars
I personally use "2024 CE" for "common era", with BCE referring to "before common era". This allows us to communicate relatively clearly with other people who use the Gregorian calendar without explicitly endorsing the birth of Jesus as the important event defining the switch-over between CE and BCE... A bit of a cop-out, but it works.
Anyway have fun, there are lots of options
Edit: also the one you're referring to in your post is the Holocene Calendar
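(In case it helps: as I understand it, the Holocene calendar just shifts the count by 10,000 years, so you add 10000 to the CE year-- 2024 CE becomes 12024 HE-- and for BCE dates you subtract the year from 10001, so 44 BCE becomes 9957 HE. Double-check the Wikipedia article before leaning on that, though.)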
I guess this isn't NO context but:
Innkeeper married to a nixie: “The Fey never do anything without a price...”
“... How much did you pay for your wife?”
Contrary to most of the opinions in this thread, I think this (and the Van Gogh incident) is a great and appropriate protest.
It causes a knee-jerk reaction to be mad that they are harming a precious piece of history and culture, which is a perfect juxtaposition to how climate change harms our precious natural resources and will harm us, and
It achieves this without actually causing permanent damage to the subject artifact, and
It is incendiary enough to remain in our public consciousness long enough for it to affect the discourse.
I only wish there were a more direct way to protest the people most responsible for the worst effects (oil executives, politicians, etc.), but the truth is that the "average middle-class Westerner" (most of the people who get to enjoy these particular cultural relics) is globally "one of the worst offenders". While I firmly believe that individuals have less power to enact change than corporations and policymakers, this protest does achieve the goal of prompting reflection in people who have the power to make changes.
You say you don't like poetry, yet you write a lovely free-form poem. Suspicious...
Oooh it's even cooler than that!! You're spot on, acid is the problem. And acid from food, candy, coffee, etc. is harmful to enamel for sure.
But sugary stuff that isn't acidic also rots teeth. Why? Because the bacteria in your mouth do what's called lactic acid fermentation. Basically, when they take a sugar molecule and want to make "usable" energy out of it (in the form of something called ATP, or adenosine triphosphate), they end up creating lactic acid as a byproduct. In essence, the stuff living in your mouth makes acid out of sugar.
We also break sugar down to make ATP, but we do it through something called cellular respiration instead. It uses oxygen and creates CO2 as a byproduct! That's why we need oxygen to breathe, and why we breathe out carbon dioxide. But when you work your muscles hard (lifting weights, sprinting), you might use the ATP in your muscles faster than your body can make it with cellular respiration. In that case, your cells will also do lactic acid fermentation! That's what we're feeling when we "feel the burn" (well, that and micro-tears in the muscle, in some cases).
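If you want the chemistry spelled out (very simplified, skipping all the intermediate steps, and going from memory here), the overall equations look roughly like:
lactic acid fermentation: C6H12O6 (glucose) -> 2 C3H6O3 (lactic acid), netting about 2 ATP per glucose
cellular respiration: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O, netting roughly 30 ATP per glucose (the exact count depends on who's doing the bookkeeping)
So respiration squeezes way more energy out of each sugar molecule, but fermentation doesn't need oxygen-- and the acid is the price your enamel pays.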
Source: I'm a biologist! And I love sharing weird facts like this! Thank you for the excuse to write this out :-)
So I think I can make the claim that I am an expert in this, at least compared to 95%+ of biological researchers. My research foci include epigenetic and emergent interactions like the ones discussed in the article, and although I am not going to back this up by identifying myself, please believe me when I say I've written some papers on the topic.
The concept of junk DNA is perhaps the problem here. Obviously there are large swaths of our genome that do not encode anything, i.e., that carry no instructions for proteins. However, dismissing all non-coding DNA as "junk" is a critical error.
Your telomeres are a great example. They don't contain vital information so much as they serve a specific function-- providing a buffer region to be consumed during replication in place of DNA that does contain vital information. Your cells would work less well without telomeres, so calling them junk is inaccurate.
Other examples of important non-coding regions are enhancers and promoters. Papers tracing the conceptual shift toward stochastic models of cellular function note how enhancers are vital for increasing the likelihood of transcription by making it more likely that specific proteins floating around the nucleus actually interact with each other. Promoter regions are something most biologists understand already, so I won't describe them here (apologies to anyone who needs to go read about them elsewhere!). Some regions also inform the 3D structure of the genome, creating topologically associating domains (TADs) that bring regions of interest closer together.
Even the sequences with less obvious non-coding functions often have some emergent effect on cellular function. Transcription occurs in these "nonsense" regions even though no mRNA is created; instead, tiny, transient non-coding RNAs (ncRNAs) are produced. Because RNA can have functional and catalytic properties like proteins do, these small RNAs "do jobs" while they exist. What they do before being degraded is less well defined than the mechanistic models we have for proteins, but as stochastic models improve, we are beginning to understand how they work.
One last type of DNA that we used to consider junk: binding sites for transcription factors, nucleosome remodelers, and other DNA-binding proteins. Proteins are getting stuck to DNA all the time, and then doing things while they're stuck there. Sometimes just being a place where a nucleosome with an epigenetic flag can camp out and direct other cellular processes is enough to invalidate calling that region "junk".
Anyway, I'm done giving my spiel, but the take-home message here is that all DNA causes stochastic effects and almost all of it (likely all of it, we just haven't figured it out yet) serves some function in context. Calling all DNA that doesn't encode a protein "junk" is outdated-- if anything, the protein-coding regions are the boring parts.
Wikipedia link to the Radium Girls: https://en.wikipedia.org/wiki/Radium_Girls
I think you got the right idea but that description is missing the big points.
They were painting watch dials with radium paint, and their employers told them to use their lips to shape fine points on the brushes, meaning they ingested a ton of the paint. The employers told them it was harmless despite evidence to the contrary. The workers stuck with lip-pointing rather than other methods because it was faster and they were paid per watch.
I don't think you meant to imply that they were doing it for trivial reasons, but I do think mentioning that they were doing it for a job and that their employers were intentionally deceiving them is important context!
Huge disclaimer that I'm not a plumber or even close to a plumber, but I did have a house and think about houses:
Isn't the current "standard" plumbing PEX plumbing, which is basically just a bunch of hoses?
Like I think you're on to something but the industry beat you to the punch 😉
I'm not sure about the time scale you're referring to, but I have some experience with dog training and I've been interested in dog training history lately, so maybe I have some insight for you. Also, I want to qualify this whole tirade by saying this is a USA-centric breakdown; other countries have different cultural histories with their dogs, and while the underlying animal behavior is the same, I can't speak to whether dogs in other countries are "well" or "poorly" trained.
Prior to the 1900s, dogs weren't really thought of as companion animals the way they are now. Dogs were usually from working lines-- hunting dogs, setters, pointers, terriers, ratters, herders, shepherds, guard dogs, sled dogs, etc. They were considered somewhat adjacent to livestock. In these situations, dogs were often "trained" by their breeding. You don't have to tell a working line rat terrier to kill rats, they just do. Sheepdogs will herd children if there aren't sheep around. Just try keeping a working line husky from pulling in a harness... you can do it, but it's working against its nature. For the most part, a person at this time had multiple dogs of breeds with natural instincts to do the job they wanted done, and the dogs did it. The ones that did it best were bred by their owners, and the next generation was better than the last. It's also important to note that the major written documents describing dog training at this time mostly emphasized rewarding the dogs with meat and praise when they were good, and ignoring them when they were bad.
During and around WWII, there was a new interest in training dogs for policing, warfare, and personal protection. It became more common to have one-dog-one-handler arrangements, and since most working lines of guard dogs were more "bark at intruders and bite strangers" kinds of dogs instead of "dutifully and silently stand by until ordered to kill" dogs, there was an interest in developing training methods to achieve the desired result without needing to breed new working lines.
From this desire during WWII, two schools of thought arose. One was the "traditional" method (not very traditional after all...), which came from trainers like William Koehler. These methods emphasized discipline, "corrections", and punishment. The other school of thought had its roots in behaviorists like Marian Breland Bailey (an advisee of BF Skinner), who illustrated the power of operant conditioning and positive reinforcement. Both started around the same time (1930s-1960s), but for one reason or another the traditional methods were more popular, and the reinforcement methods were dismissed as lesser "tid-bit training techniques" based in "the prattle of 'dog psychologists'".
It turns out they were both working within a similar framework-- dogs learn by associating an action or stimulus with a positive or negative outcome. The argument was over whether positive or negative outcomes were better at inducing learning gains. At this point, mountains of research show that positive reinforcement wins out every time, meaning the behaviorists were more correct than the traditionalists.
Still, as I mentioned, the traditional methods were more popular for a long time. People still think they need to "be an alpha" or leader to their dogs, that they need to discipline the dog so it respects them, that punishing the dog is the way to achieve good behavior. Choke and shock collars, leash corrections, and "alpha rolls" are still common training techniques despite the evidence that they are counterproductive. Additionally, you'll remember what I said about the behaviorist/reinforcement methods being more aligned with training techniques recorded before WWII-- when farmers were training herding dogs, they weren't "alpha roll"ing them, they were giving them meat when they did their job and ignoring them when they didn't.
Anyway that's a whole fucken essay in itself, but the point I'm trying to make is this: prior to WWII, dogs were trained by being paid in daily food and by having the chance to breed. Many working dogs are still trained like this, perhaps giving you the impression that dogs "used to be trained well". Companion dogs are a more modern development, and there continue to be two schools of thought about how to train them. People who look deeply into evidence-based dog training methods train their dogs with positive reinforcement-- these dogs are usually what we consider "well trained" dogs, and overwhelmingly they live in affluent areas where owners have the money to pay for expensive trainers and the free time to train the dog consistently. As class disparity grows, it is becoming more common for people in poorer areas to lack access to education about the best methods, so they tend to default to the "traditional" methods that were more popular in the 20th century. These dogs are... less "well trained". Even if someone wants to put in a lot of effort to learn how to train dogs, they might just not have access to the most up-to-date knowledge. Additionally, there's evidence that dogs trained with these methods generalize their training less well than reinforcement-trained dogs, which is to say they might act fine in most situations but act worse (more fearfully, less predictably) in novel scenarios. That's part of why you might see "well trained" dogs who suddenly and disastrously act out.
One last side note: often dogs who are reactive (the term for dogs who freak out and start screaming when they see a person or a dog or a bike, etc.) are not necessarily untrained. Reactivity is a fear response; you can imagine they might be like a normal human with a spider phobia. They might be 100% perfectly behaved in every situation... except for when a dog walks by. In this situation, the other dog is like a spider.
Traditional training might suggest that you order the dog to stop freaking out and punish them if they don't stop when they see another dog, but that's like punishing someone with a spider phobia for freaking out when they see a spider. The reinforcement methods instead try to convince the dog that other dogs (spiders) are actually harmless. This is shown to reduce reactivity much more than punishment does. Still, reducing reactivity is like really really hard, just as fixing phobias in humans is. Even if someone is working very hard with training and using the best available techniques, the dog might still freak out when they see another dog (thus looking like they "aren't trained", according to your post).
And LAST last note, maybe the difference you're perceiving is from covid? A lot of people got a lot of dogs but couldn't take them out to socialize and train them due to lockdown. Additionally, during covid a lot of adoption agencies literally ran out of dogs, meaning that dogs that would usually be euthanized because of behavioral issues were instead adopted out to families. Compound that with the lack of socialization and the fact that many people still use "traditional" training methods, and maybe you're just seeing a lot of reactive, fearful dogs? Hopefully that will improve over time!
Anyway thanks for reading my whole fucken essay, lol... I wrote this while on a plane so I guess that's why I was bored enough to write this much. Hope you get something out of it!
I don't want this to be an accusation about lemmy's user base, so take this next comment with a grain of salt:
I feel like lemmy slants male the same way early reddit did, and the same way a lot of more technical communities seem to. I've definitely seen threads where the perspectives being shared feel alien and out of touch, and although I'm sure those users have valuable insights about many topics, it does feel kind of... homogeneous?
One specific example is the threads arguing about whether to make lemmy more like reddit or not. Often, there are a lot of commenters arguing that they don't want to change lemmy in ways that would make it more inviting, because then more people from reddit would show up. The implication is that the average reddit user is an idiot or should be unwelcome in some way, but to me it seems like they're just trying to select for men in technical disciplines who have similar world views to the current user base. Idk, it's not a great look.
Anyway, I'm non-binary and I don't have endometriosis, so I'm probably not part of the communities you're looking for... still, I wish you all the best looking for your space. I think it's truly less homogeneous here than it seems... We'll get more diverse perspectives over time!
This article is garbage but I'm a molecular biologist and the publication they're talking about is really neat.
The "ELI5 to the point of maybe reducing out the truth" way to explain it is that the researchers can add "flags" to proteins associated with immune responses that make cells pick them up and examine them. This is shown to work for allergins (so say, add a flag to peanut protein and the cells can look at it more closely, go "oh nvm this is fine" and stop freaking out about peanuts) as well as autoimmune diseases (where cells mistake other cells from the same body as potential threats).
It's nowhere near the treatment stage yet, but tbh this is one of the more exciting approaches I've seen, and I do similar research and thus read a lot of papers like this.
There's a lot of evidence that we are entering a biological "golden age" and that we will discover a ton of amazing things very soon. It's worrisome that we still have to deal with instability in other parts of life (climate change, wealth inequality, political polarization) that might slow down the process of turning these discoveries into actual treatments we can use to make lives better...
Still, don't doubt everything you read! A lot of cool stuff is coming; the trick is getting it past the red tape.