I heard it was supposed to be human body temperature, but they used horse body temperature instead because it was close to human body temperature but more… stable.
A vector space is a collection of vectors in which you can scale vectors and add vectors together such that the scaling and addition operations satisfy some nice relationships. The 2D and 3D vectors that we are used to are common examples. A less common example is polynomials. It’s hard to think of a polynomial as having a direction and a magnitude, but it’s easy to think of polynomials as elements of the vector space of polynomials.
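To make that last example concrete (a small illustration of my own): treat each polynomial as a vector whose coordinates are its coefficients. Scaling and addition then work exactly the way you’d expect:

p(x) = 1 + 2x + 3x², q(x) = 4 − x²
2·p(x) = 2 + 4x + 6x²
p(x) + q(x) = 5 + 2x + 2x²

Neither p nor q has an obvious direction or magnitude, but under these two operations they behave just like ordinary vectors.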
Language parsing is a routine process that doesn’t require AI, and it’s something we have been doing for decades. That phrase in no way plays into the hype of AI. Also, the weights may be random initially (though not uniformly random), but the way they are connected and relate to each other is not random. And after training, the weights are no longer random at all, so I don’t see the point in bringing that up. Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt, because they would have to evaluate every possible response - even the nonsensical ones - before returning the best answer. They’re better described as a greedy algorithm than a brute-force algorithm.
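To illustrate the difference (a toy sketch in Python, not how any real model works - the vocabulary and scoring function are made up purely for illustration): a brute-force generator would score every possible output before answering, while a greedy one commits to the best-looking next token at each step.

```python
import itertools

VOCAB = ["the", "cat", "sat", "on", "mat"]

def score(seq):
    # Made-up stand-in for a trained model's scoring of a sequence.
    return sum(len(w) for w in seq) - 2 * sum(a == b for a, b in zip(seq, seq[1:]))

def brute_force(length):
    # Evaluates every candidate: |VOCAB| ** length sequences, even nonsensical ones.
    return max(itertools.product(VOCAB, repeat=length), key=score)

def greedy(length):
    # Builds the answer one token at a time, keeping only the best next choice.
    seq = []
    for _ in range(length):
        seq.append(max(VOCAB, key=lambda w: score(seq + [w])))
    return tuple(seq)

print(brute_force(4))  # cost grows exponentially with length
print(greedy(4))       # cost grows linearly with length
```

Even with this five-word vocabulary, brute force over a 20-token response would mean scoring 5²⁰ ≈ 10¹⁴ candidates, which is why nothing practical works that way.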
I’m not going to get into an argument about whether these AIs understand anything, largely because I don’t have a strong opinion on the matter, but also because that would require a definition of understanding, which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and how LLMs are encoded in binary (which is somehow related to the point you’re making in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and philosophy in general. Your understanding of these AIs is just as shallow as that of those who claim that LLMs are intelligent agents with free will, complete with conscious experience - you just happen to land closer to the mark.
You’re mistaken unfortunately. The books don’t start that way. They start by describing Arthur Dent’s house.
Nobody is arguing that a grocery stocker requires less skill and training than brain surgery. Literally nobody. And yet you people repeat this idea over and over.
We know you aren’t arguing that every job requires the exact same degree of skill. All that we want to do is say that there are jobs whose required skills are quick to acquire and are therefore easily replaceable. Meanwhile, there are other jobs whose skills take a long time to acquire and are not easily replaceable. We use the term “unskilled labor” to refer to the former group and “skilled labor” to refer to the latter group as a point of convention. When people claim that unskilled labor doesn’t exist, they imply that every single job requires skills that are slow to obtain and therefore every worker is difficult to replace, which is clearly false.
I mean this not as an attack on you but a chance to expand your worldview. Cognitive dissonance hurts, and it’s important to recognize when it’s happening so you can ask further questions.
Where is the cognitive dissonance? Where is the contradiction in distinguishing between jobs that require trained applicants and jobs that don’t require trained applicants?
There is no such thing as an “unskilled worker” because all jobs require skill. It’s called human skill, and it’s what enables us to build societies greater than the sum of their citizens.
If you decide to use “skilled worker” to mean a worker who has a skill, then you are correct that “unskilled workers” do not exist. Unfortunately, that’s not what the phrase “skilled worker” means. If that’s how you use the term, then you’re talking about something different from what everyone else means.
The logical conclusion you are suggesting is that because some humans are less capable, they don’t deserve basic needs such as a home, reliable transportation, internet, food, utilities, etc.
The logical conclusion of “unskilled labor exists” is simply that unskilled labor exists. You cannot jump from the observation that “unskilled labor exists” to the claim that “some people don’t deserve their basic needs.” It’s a non sequitur, and it’s not a position anyone in this thread would support.
And if your basic premise starts with the notion that society should not be meeting the basic needs of its people, then there’s only one thing that would convince you anyway.
This is a straw man. No one here has expressed the position that society shouldn’t meet the basic needs of its people. The position you are arguing against is the position that some jobs require training before hiring and others don’t. Again, that’s just what people mean when they refer to skilled labor and unskilled labor.
Of course! I’m always excited for an opportunity to discuss these sorts of things, so I should be thanking you instead.
I’ll preface this with the fact that I am also not a physicist. I’m also simplifying a few concepts in modern physics, but the general themes should be mostly accurate.
String theory isn’t best described as a genre of physics - it really is a standalone concept. Dark matter and black holes are subjects of cosmology, while string theory is an attempt to unify quantum physics with general relativity. Could string theory be used to study black holes and dark matter? Sure, but it isn’t like physicists are studying black holes and dark matter using methods completely independent from one another and lumping both practices under the label string theory as a simple matter of categorization.
You are correct to say that string theory is an attempt at a theory of everything, but what is a theory of everything? It’s more than a collection of ideas that explain a large swath of physical phenomena wrapped into a single package tied with a nice bow. Indeed, when people propose a theory of everything, they are constructing a single mathematical model for our physical reality. It can be difficult to understand exactly what that means, so allow me to clarify.
Modern theoretical physics is not described in the same manner as classical Newtonian physics. Back then, physical phenomena were essentially described by a collection of distinct models whose effects would be combined to come to a complete prediction. For example, you’d have an equation for gravity, an equation for air resistance, an equation for electrostatic forces, and so on, each of which makes contributions at each point in time to the motion of an object. This is still how things are done today in applied physics and engineering, but modern theoretical physics - e.g., quantum mechanics, general relativity, and string theory - is handled differently. These theories tend to have a single equation that is meant to describe the motion of all things, which often gets labeled the principle of stationary action.
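To give a rough sense of what that single equation looks like (schematically - the details of course differ from theory to theory): the content of a theory is packaged into an action S built from a Lagrangian L, and the physically realized motions are the ones that make S stationary.

S = ∫ L dt, and the principle of stationary action reads δS = 0.

Newtonian mechanics, quantum field theory, and general relativity each supply their own L, but the governing statement takes this same form in all of them.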
The problem that string theory attempts to solve is that the principle of stationary action that arises in quantum mechanics and the principle of stationary action that arises in general relativity are incompatible. Both theories are meant to describe the motion of everything, but they contradict each other - quantum mechanics works to describe the motion of subatomic particles under the influence of the strong, weak, and electromagnetic forces, while general relativity works to describe the motion of celestial objects under the influence of gravity. String theory is a way of modeling physics that attempts to do away with this contradiction - that is, string theory is a proposal for a principle of stationary action (a single equation) that is meant to unify quantum mechanics and general relativity, thus accurately describing the motion of objects of all sizes under the influence of all known forces. It’s in this sense that string theory is a standalone concept.
There is one caveat however. There are actually multiple versions of string theory that rely on different numbers of dimensions and slightly different formulations of the physics. You could say that this implies that string theory is a genre of physics after all, but it’s a much more narrow genre than you seemed to be suggesting in your comment. In fact, Edward Witten showed that all of these different string theories are actually separate ways of looking at a single underlying theory known as M-theory. It could possibly be said that M-theory unifies all string theories into one thus restoring my claim that string theory really is a standalone concept.
You have the spirit of things right, but the details are far more interesting than you might expect.
For example, there are numbers past infinity. The best way (imo) to interpret the symbol ∞ is as the gap in the surreal numbers that separates all infinite surreal numbers from all finite surreal numbers. If we use this definition of ∞, then there are numbers greater than ∞. For example, every infinite surreal number is greater than ∞ by the definition of ∞. Furthermore, ω > ∞, where ω is the first infinite ordinal number. This ordering is derived from the embedding of the ordinal numbers within the surreal numbers.
Additionally, as a classical ordinal number, ω doesn’t behave the way you’d expect it to. For example, we have that 1+ω=ω, but ω+1>ω. This of course implies that 1+ω≠ω+1, which isn’t how finite numbers behave, but it isn’t a contradiction - it’s an observation that addition of classical ordinals isn’t always commutative. It can be made commutative by redefining the sum of two ordinals, a and b, to be the max of a+b and b+a. This definition is required to produce the embedding of the ordinals in the surreal numbers mentioned above (there is a similar adjustment to the definition of ordinal multiplication that is also required).
Note that infinite cardinal numbers do behave the way you expect. The smallest infinite cardinal number, ℵ₀, has the property that ℵ₀+1=ℵ₀=1+ℵ₀. For completeness’ sake, returning to the realm of surreal numbers, addition behaves differently from both the cardinal numbers and the ordinal numbers. As a surreal number, we have ω+1=1+ω>ω, which is the familiar way that finite numbers behave.
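To summarize the three behaviors side by side (same notation as above):

Ordinals: 1+ω = ω, but ω+1 > ω (addition is not commutative)
Cardinals: 1+ℵ₀ = ℵ₀ = ℵ₀+1
Surreals: 1+ω = ω+1 > ω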
What’s interesting about the convention of using ∞ to represent the gap between finite and infinite surreal numbers is that it renders expressions like ∞+1, 2∞, and ∞² completely meaningless as ∞ isn’t itself a surreal number - it’s a gap. I think this is a good convention since we have seen that the meaning of an addition involving infinite numbers depends on what type of infinity is under consideration. It also lends truth to the statement, “∞ is not a number - it is a concept,” while simultaneously allowing us to make true expressions involving ∞ such as ω>∞. Lastly, it also meshes well with the standard notation of taking limits at infinity.
I don’t know the reason. I think not having the extra blank lines would be better, but it works just fine as is - even the post admits this much. That’s why it’s an enhancement. It’s possible for software to be functional and consistent and still have room for improvement - that doesn’t mean there is a bug.
My point is that someone made the decision for it to do that and that the software works just fine as is. It’s not a bug, it’s just a weird quirk. The fact that they made the enhancement you requested doesn’t make the old behavior buggy. Your post title said “it’s not a bug, it’s a feature!”, but the behavior you reported is not accurately classified as a bug.
It’s not a bug just because the software doesn’t conform to your personal preferences. You’re asking for what would be considered an enhancement - not a bug fix.
It depends. If the variable names are arbitrary, then a map is best. If the variable names are just x_1, x_2, x_3, …, x_n, then a list or dynamic array would be more natural. If n is constant, then a vector or static array is even better.
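A rough sketch of what that looks like in Python (the variable names are made up for illustration):

```python
# Arbitrary names: a map keyed by name is the natural fit.
named = {"temperature": 21.5, "pressure": 101.3, "humidity": 0.45}

# Names that are really just indices x_1, ..., x_n: a dynamic array.
x = [3.2, 1.7, 4.4]      # x[i] stands in for x_(i+1)
x.append(2.9)            # n can grow at runtime

# n fixed in advance: a fixed-size structure makes that explicit.
point = (1.0, 2.0, 3.0)  # a tuple here; a static array in languages that have them
```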
I don’t recall any socialized courier or food delivery services.
This is just a continuous extension of the discrete case, which is usually proven in an advanced calculus course. It says that given any finite sequence of non-negative real numbers x,
lim_{n→∞} (Σ_i x_i^n)^(1/n) = max_i(x_i).
The proof in this case is simple. Indeed, we know that the limit is always greater than or equal to the max, since each term of the sequence is at least the max: keeping only one term of the sum that attains the max gives (Σ_i x_i^n)^(1/n) ≥ (M^n)^(1/n) = M, where M = max_i(x_i). Thus, we only need an upper bound for each term of the sequence that converges to the max as well, and the proof will be completed via the squeeze theorem (sandwich theorem).
Now let k = dim(x) be the number of terms. Since each x_i is at most M, the term in the limit is at most (kM^n)^(1/n). This upper bound is easy to handle, since (kM^n)^(1/n) = k^(1/n)·M and k^(1/n) converges to 1, so the bound converges to M. We have now squeezed the sequence between two bounds converging to max_i(x_i), which completes the proof.
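A quick numerical sanity check (a Python sketch with made-up values, not part of the proof):

```python
# Check that (sum_i x_i**n) ** (1/n) approaches max(x) as n grows.
x = [0.5, 2.0, 3.7, 1.2]

for n in (1, 5, 20, 100, 500):
    value = sum(xi ** n for xi in x) ** (1.0 / n)
    print(n, round(value, 6))   # creeps down toward max(x) = 3.7

print("max:", max(x))
```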
Can you extend this proof to the continuous case?
For fun, prove the related theorem:
lim_{n→∞} (Σ_i x_i^(-n))^(-1/n) = min_i(x_i).
2 may be the only even prime - that is, it’s the only prime divisible by 2 - but 3 is the only prime divisible by 3 and 5 is the only prime divisible by 5, so I fail to see how this is unique.
Just say you recently came into some inheritance and that you are looking into investment opportunities. Then they will expect you to be out of your element, so you won’t need to try to pretend you’re someone you’re not. If they ask about the inheritance, say your grandfather made a fortune selling lumber or something boring like that.