The simplest case is a differential equation of the form
$$
\frac{d}{dt} M (t) = A (t) M (t)
$$
where ##M(t)## and ##A(t)## are matrices, with ##A(t)## a given matrix which we take to be time dependent.
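Throughout, it may help to have a concrete numerical reference. Here is a minimal sketch that integrates the equation directly for a hypothetical non-commuting ##2 \times 2## example ##A(t)## (the example matrix, interval, and tolerances are arbitrary choices for illustration, not part of the derivation):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical example generator; A(t1) and A(t2) do not commute for t1 != t2
def A(t):
    return np.array([[0.0, t],
                     [-1.0, 0.0]])

def rhs(t, m_flat):
    # Flattened form of dM/dt = A(t) M(t)
    M = m_flat.reshape(2, 2)
    return (A(t) @ M).ravel()

M0 = np.eye(2)                      # initial condition M(0) = identity
sol = solve_ivp(rhs, (0.0, 1.0), M0.ravel(), rtol=1e-10, atol=1e-12)
M_ref = sol.y[:, -1].reshape(2, 2)  # reference value of M(1)
print(M_ref)
```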
The differential equation can be rewritten as an integral equation,
$$
M (t) = M (0) + \int_0^t A (t_1) M (t_1) dt_1
$$
which can be substituted into itself:
\begin{align*}
M (t)
& =
M (0) + \int_0^t A (t_1) M (0) dt_1 + \int_0^t A (t_1) \int_0^{t_1} A (t_2) M (t_2) dt_2 dt_1
\end{align*}
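This repeated substitution is just Picard iteration, and it can be carried out numerically. A sketch, reusing the hypothetical ##A(t)## above, with a uniform grid and trapezoidal rule as illustrative choices:

```python
import numpy as np

def A(t):
    return np.array([[0.0, t],
                     [-1.0, 0.0]])

ts = np.linspace(0.0, 1.0, 2001)
h = ts[1] - ts[0]
As = np.stack([A(t) for t in ts])         # A(t) sampled on the grid

M = np.tile(np.eye(2), (len(ts), 1, 1))   # zeroth iterate: M_0(t) = M(0) = identity
for _ in range(12):
    integrand = As @ M                    # batched products A(s) M_k(s)
    # cumulative trapezoidal rule: integral from 0 to each grid point
    cum = np.concatenate([np.zeros((1, 2, 2)),
                          np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * h,
                                    axis=0)])
    M = np.eye(2) + cum                   # M_{k+1}(t) = M(0) + ∫_0^t A(s) M_k(s) ds
print(M[-1])                              # converges to the reference M(1) above
```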
Continuing the substitution in this way, we obtain:
\begin{align*}
& M (t) = M (0) + \sum_{n=1}^\infty
\int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n
A (t_1) \cdots A (t_n) M (0) \qquad (*)
\end{align*}
(assuming convergence). Let us look in detail at the case of two integrals, i.e.,
$$
\int_0^t dt_1 \int_0^{t_1} dt_2 A (t_1) A (t_2)
$$
We'll write the matrices in component form, so that the ##(j,k)## component of the above expression reads
$$
\sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{j k_1} (t_1) A_{k_1k} (t_2)
$$
We have that
\begin{align*}
& \sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2)
\nonumber \\
& = \frac{1}{2} \sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2) + \frac{1}{2} \sum_{k_1} \int_0^t dt_2 \int_{t_2}^t dt_1 A_{jk_1} (t_1) A_{k_1k} (t_2)
\end{align*}
where in the second integral on the RHS we integrate over the same region but with the order of integration reversed (compare figures (a) and (b)). By renaming the integration variables in the second integral, we have
\begin{align*}
& \sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2)
\nonumber \\
& = \frac{1}{2} \sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2) + \frac{1}{2} \sum_{k_1} \int_0^t dt_1 \int_{t_1}^t dt_2 A_{jk_1} (t_2) A_{k_1k} (t_1) \qquad (**)
\end{align*}
We define the time-ordered product of two matrices ##A(t_1)## and ##A(t_2)##,
$$
\mathcal{T} \{ A_{j k_1} (t_1) A_{k_1 k} (t_2) \} := \left\{
\begin{matrix}
A_{j k_1} (t_1) A_{k_1 k} (t_2) & : t_1 > t_2 \\
A_{j k_1} (t_2) A_{k_1 k} (t_1) & : t_2 > t_1
\end{matrix}
\right. \qquad (***)
$$
Using ##(***)##, it is easily verified that ##(**)## can be written as
$$
\sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2)
= \frac{1}{2} \sum_{k_1} \int_0^t dt_1 \int_0^t dt_2 \mathcal{T} \{ A_{jk_1} (t_1) A_{k_1k} (t_2) \} .
$$
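As a sanity check, this identity can be verified numerically for the hypothetical ##A(t)## used above; a crude Riemann sum is enough, and the two sides agree up to discretization error:

```python
import numpy as np

def A(t):
    return np.array([[0.0, t],
                     [-1.0, 0.0]])

t_max, n = 1.0, 400
ts = np.linspace(0.0, t_max, n)
h = ts[1] - ts[0]

lhs = np.zeros((2, 2))   # nested integral over the triangle t2 <= t1
rhs = np.zeros((2, 2))   # (1/2) x time-ordered integral over the square
for t1 in ts:
    for t2 in ts:
        ordered = A(t1) @ A(t2) if t1 >= t2 else A(t2) @ A(t1)  # T-product
        rhs += 0.5 * ordered * h * h
        if t2 <= t1:
            lhs += A(t1) @ A(t2) * h * h
print(np.max(np.abs(lhs - rhs)))     # small, and shrinks as n grows
```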
The definition ##(***)## obviously generalises: if ##t_{\alpha (1)} > t_{\alpha (2)} > \cdots > t_{\alpha (n)}##, where ##\alpha## is a permutation of ##\{ 1, 2, \dots, n \}##, then
$$
\mathcal{T} \{ A_{j k_1} (t_1) A_{k_1 k_2} (t_2) \cdots A_{k_{n-1} k_n} (t_n) \} := A_{j k_1} (t_{\alpha (1)}) A_{k_1 k_2} (t_{\alpha (2)}) \cdots A_{k_{n-1} k_n} (t_{\alpha (n)})
$$
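In code, the general definition amounts to sorting the time arguments into decreasing order before multiplying. A minimal sketch (the function name is our own):

```python
import numpy as np

def time_ordered_product(A, times):
    """T{A(t_1) A(t_2) ... A(t_n)}: the latest time acts on the far left."""
    dim = A(times[0]).shape[0]
    out = np.eye(dim)
    for s in sorted(times, reverse=True):   # decreasing times, left to right
        out = out @ A(s)
    return out
```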
It can be shown that we can then formally write the solution, ##(*)##, as:
\begin{align*}
& M (t) = M (0) + \sum_{n=1}^\infty \frac{1}{n!} \int_0^t dt_1 \int_0^t dt_2 \cdots \int_0^t dt_n \mathcal{T} \left\{ A (t_1) \cdots A (t_n) \right\} M (0)
\end{align*}
or
$$
M (t) = \mathcal{T} \exp \left( \int_0^t A (s) ds \right) M (0)
$$
From this we see that the time-ordered exponential
$$
T [A] (t) := \mathcal{T} \exp \left( \int_0^t A (s) ds \right)
$$
is the solution to the initial value problem:
\begin{align*}
\frac{d}{dt} T [A] (t) & = A (t) T [A] (t)
\nonumber \\
T [A] (0) & = \mathbb{1}
\end{align*}
This is the generalisation, to time-dependent ##A(t)##, of the familiar identity ##d/dt \exp (t {\bf A}) = {\bf A} \exp (t {\bf A})## for constant ##{\bf A}##.
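Numerically, the time-ordered exponential can be approximated by an ordered product of short-time propagators, with later times placed to the left. A sketch for the same hypothetical ##A(t)## (midpoint sampling and step count are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

def A(t):
    return np.array([[0.0, t],
                     [-1.0, 0.0]])

t_max, n = 1.0, 2000
dt = t_max / n

T_exp = np.eye(2)
for k in range(n):
    tk = (k + 0.5) * dt               # midpoint of the k-th subinterval
    T_exp = expm(A(tk) * dt) @ T_exp  # later factors multiply on the left
print(T_exp)                          # approximates T[A](1), i.e. M(1) with M(0) = identity
```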