Common misconceptions about the differentiation operator

This is the differentiation operator in single variable calculus:

$$\frac{d}{dx}$$

Since several procedures in calculus happen to involve manipulating the “numerator” and “denominator” of the differentiation operator, people have gained the incorrect idea that it IS in fact a ratio of two “infinitely small” quantities. This is absolutely not true! Modern mathematics does indeed have a notion of infinitely small quantities (the infinitesimal hyperreal numbers), but they are defined within an entirely different mathematical system called nonstandard analysis. In fact, the whole motive behind formulating a rigorous definition of the limit was to avoid reasoning about infinity directly. That is to say, $\dfrac{dy}{dx}$ is not a ratio of an infinitely small change in $y$ to an infinitely small change in $x$; instead, it is shorthand for the limit of the difference quotient of some function $y(x)$. By definition,

$$\frac{d}{dx}y(x)=\lim_{\Delta x\to 0}\frac{y(x+\Delta x)-y(x)}{\Delta x}$$

The limit denotes the quantity that the difference quotient “approaches” as $\Delta x$ shrinks toward $0$. More precisely: for any arbitrarily small tolerance $\varepsilon>0$, there exists a $\delta>0$ such that whenever $0<|\Delta x|<\delta$, the value of $\frac{y(x+\Delta x)-y(x)}{\Delta x}$ lies within $\varepsilon$ of the limit. Nothing in this definition has anything to do with “infinitely small” quantities; it deals only with arbitrarily small ones. In the end, the limit is a separate, ordinary number that happens to be associated with the expression: the difference quotient can be brought within any desired distance of it by restricting $\Delta x$ appropriately, but $\Delta x$ itself is always a nonzero real number.
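
For concreteness, here is the definition applied to the familiar example $y(x)=x^2$:

$$\frac{d}{dx}x^2=\lim_{\Delta x\to 0}\frac{(x+\Delta x)^2-x^2}{\Delta x}=\lim_{\Delta x\to 0}\frac{2x\,\Delta x+(\Delta x)^2}{\Delta x}=\lim_{\Delta x\to 0}\left(2x+\Delta x\right)=2x$$

Notice that $\Delta x$ is never “infinitely small” here: the cancellation is valid for every nonzero $\Delta x$, and the resulting expression $2x+\Delta x$ can be brought within any $\varepsilon$ of $2x$ simply by keeping $|\Delta x|<\varepsilon$.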

Now that the definition is out of the way, it seems necessary to address several instances in calculus where it seems okay to manipulate the differentiation operator as if it were a fraction. I will cover the most common one here (and ALL others can be explained, rigorously, using a concept called “differential forms”, which is currently beyond my understanding). The two most common are the separation of variables for solving simple differential equations and the method of integration by substitution (which I was planning to cover as well, but it turns out Wikipedia has an excellent explanation here).

Separation of variables

After separating the variables in a differential equation, we are left with something of the form:

$$f(y)\frac{dy}{dx}=g(x)$$

At which point we can take the antiderivative of both sides with respect to $x$ (since antiderivatives are unique up to a constant):

$$\int{f(y)\frac{dy}{dx}}dx=\int{g(x)}dx$$

The left hand side of this equation essentially asks, “what function, when we take its derivative with respect to $x$, yields $f(y)\frac{dy}{dx}$?” If we let

$$u:=\int{f(y)}dy$$

then, by the chain rule,

$$\frac{du}{dx}=\frac{du}{dy}\frac{dy}{dx}$$

which, since $\frac{du}{dy}=f(y)$ by the fundamental theorem of calculus, is of course equivalent to

$$\frac{du}{dx}=f(y)\frac{dy}{dx}$$

Therefore, integrating both sides with respect to $x$,

$$u=\int{f(y)\frac{dy}{dx}}dx$$

which justifies the final equation

$$\int{f(y)}dy=\int{g(x)}dx$$

So in practice it looks like you’re “multiplying both sides of the equation by $dx$”, but that shortcut is really just shorthand for the chain-rule argument shown above.
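
To see the whole procedure in action, take the simple separable equation $\frac{dy}{dx}=xy$ (a standard example, chosen here purely for illustration). Separating the variables gives $\frac{1}{y}\frac{dy}{dx}=x$, so $f(y)=\frac{1}{y}$ and $g(x)=x$, and the final equation above becomes

$$\int\frac{1}{y}\,dy=\int x\,dx\implies\ln|y|=\frac{x^2}{2}+C\implies y=Ae^{x^2/2}$$

At no point did we actually multiply anything by $dx$; we integrated both sides with respect to $x$ and applied the chain rule, exactly as above.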