yes, it may be mathematically correct, but it’s insane.
what they have there is matrix multiplication and inversion.
this is what they calculate:
{}^{c_2}M_{c_1} = {}^{c_2}M_o \cdot {}^{o}M_{c_1} = {}^{c_2}M_o \cdot \left({}^{c_1}M_o\right)^{-1}
{}^{c_2}R_{c_1} and {}^{c_2}t_{c_1} are exactly the blocks that make up {}^{c_2}M_{c_1}.
that calculation involves the inverse of a matrix. computing an inverse costs a little more than a transpose (which suffices in special cases) and it can be numerically unstable in the general case, but not here: the rotation blocks are orthonormal, so these matrices are very tame.
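for illustration, a minimal numpy sketch of that exact calculation (the variable names are mine, not theirs), assuming you already have the two object-to-camera transforms as 4x4 homogeneous matrices:

    import numpy as np

    # hypothetical names: c1_M_o and c2_M_o are 4x4 homogeneous transforms
    # that map points from the object frame into camera-1 / camera-2 coordinates.
    def relative_transform(c2_M_o, c1_M_o):
        """Return c2_M_c1 = c2_M_o @ inv(c1_M_o)."""
        o_M_c1 = np.linalg.inv(c1_M_o)  # invert to go camera-1 -> object frame
        return c2_M_o @ o_M_c1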
so to apply that “trick” they take the transformation matrices apart:
M = \begin{pmatrix}
R & t \\
0_{1 \times 3} & 1
\end{pmatrix}
that’s a “block matrix”, a big matrix composed of blocks, which are smaller matrices (vectors, scalars).
and instead of writing M^{-1}, which is a “simple” matrix inversion in math and in code, they take it apart and calculate with the individual parts. how block matrices are inverted in general can be found on wikipedia. if you apply those rules to this specific case:
(if R is orthonormal, meaning its column vectors are mutually orthogonal and have unit length (i.e. it’s exactly a rotation, no scaling, no shearing), this simplification holds: R^{-1} = R^T )
M^{-1}
= \begin{pmatrix}
R & t \\
0_{1 \times 3} & 1
\end{pmatrix}^{-1}
= \begin{pmatrix}
R^T & -R^T \cdot t \\
0_{1 \times 3} & 1
\end{pmatrix}
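if you want to convince yourself that this blockwise formula really is just M^{-1}, here’s a quick numpy check on a random rigid transform (again a sketch, with made-up names):

    import numpy as np

    # build a random rigid transform: orthonormal R via QR, arbitrary t
    A = np.random.randn(3, 3)
    R, _ = np.linalg.qr(A)
    R *= np.sign(np.linalg.det(R))  # force a proper rotation (det = +1)
    t = np.random.randn(3)

    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t

    # blockwise inverse: R^T in the rotation block, -R^T t in the translation block
    M_inv_block = np.eye(4)
    M_inv_block[:3, :3] = R.T
    M_inv_block[:3, 3] = -R.T @ t

    assert np.allclose(M_inv_block, np.linalg.inv(M))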
doing that may be okay in code (code obscures the meaning anyway), if it’s properly documented, but for explaining and understanding the math, it’s absolute cancer.
you’ll understand the math a lot more easily if you realize it only involves multiplying transformation matrices, and occasionally inverting them to transform in the other “direction”.
skip over all the insane stuff where they juggle individual R and t.
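practical note: if your library hands you the pose as a rotation matrix plus a translation vector (or a rotation vector that you convert to a matrix first, e.g. via Rodrigues), pack it into a 4x4 matrix once and from then on only multiply and invert whole matrices. a minimal sketch, with made-up names:

    import numpy as np

    def to_homogeneous(R, t):
        """Pack a 3x3 rotation R and a 3-vector t into one 4x4 transform."""
        M = np.eye(4)
        M[:3, :3] = R
        M[:3, 3] = np.asarray(t).reshape(3)
        return M

    # assumed: (R1, t1) and (R2, t2) are the object -> camera poses of the two views
    # c1_M_o = to_homogeneous(R1, t1)
    # c2_M_o = to_homogeneous(R2, t2)
    # c2_M_c1 = c2_M_o @ np.linalg.inv(c1_M_o)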