Fix your matrix ##M = \begin{pmatrix} a & b\\ c & d \end{pmatrix}##, and notice that ##M+\epsilon I \longrightarrow M## as ##\epsilon \longrightarrow 0##.
If we can show that the determinant of ##M+\epsilon I## is nonzero for small enough nonzero ##\epsilon##, then we know that matrices with nonzero determinant are dense.* So let's show it.
The determinant of ##M+\epsilon I## is just ##(a+\epsilon)(d+\epsilon) - bc##, which equals ##(ad-bc) + (a+d)\epsilon + \epsilon^2##.
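If you want to double-check that expansion, here's a quick symbolic sanity check (a sketch using sympy; the library choice is just for illustration and isn't needed for the proof):

```python
from sympy import symbols, Matrix, eye, collect, expand

a, b, c, d, eps = symbols('a b c d epsilon')
M = Matrix([[a, b], [c, d]])

# Expand det(M + eps*I) and group the result by powers of eps.
det_perturbed = expand((M + eps * eye(2)).det())
print(collect(det_perturbed, eps))  # (a*d - b*c) + (a + d)*eps + eps**2
```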
- Exercise: If ##ad-bc\neq 0##, then ##(ad-bc) + (a+d)\epsilon + \epsilon^2\neq 0## for small enough nonzero ##\epsilon##. (Though you don't really need to do this one, since ##ad-bc\neq 0## means ##M## already has nonzero determinant.)
- Exercise: If ##ad-bc=0## but ##a+d\neq 0##, then ##(ad-bc) + (a+d)\epsilon + \epsilon^2\neq 0## for small enough nonzero ##\epsilon##.
- Exercise: If ##ad-bc = a+d = 0##, then ##(ad-bc) + (a+d)\epsilon + \epsilon^2\neq 0## for small enough nonzero ##\epsilon##. (This one is basically immediate; there's also a numerical spot-check of all three cases after this list.)
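None of this replaces the exercises, but if you want to see numbers, here's a quick spot-check of the three cases; the example matrices are my own picks, one per case, and ##\epsilon = 10^{-3}## is an arbitrary small value:

```python
# One example matrix per case; "trace" below means a + d.
cases = {
    "ad-bc != 0":          [[1.0, 0.0], [0.0, 1.0]],  # det = 1
    "ad-bc = 0, a+d != 0": [[1.0, 1.0], [1.0, 1.0]],  # det = 0, trace = 2
    "ad-bc = 0, a+d = 0":  [[0.0, 1.0], [0.0, 0.0]],  # det = 0, trace = 0
}

eps = 1e-3
for name, ((a, b), (c, d)) in cases.items():
    det = (a + eps) * (d + eps) - b * c  # det(M + eps*I)
    print(f"{name}: det(M + eps*I) = {det}")  # nonzero in every case
```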
*[Indeed, the claim gives some ##n\in\mathbb N## such that ##M+\frac1k I## has nonzero determinant for every ##k\geq n##. Then ##\left(M+\frac1k I\right)_{k=n}^\infty## is a sequence of matrices with nonzero determinant, converging to ##M##.]
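To make the footnote concrete, here's the same determinant computed along the sequence ##\epsilon = \frac1k##, using the singular matrix ##\begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}## as ##M## (my choice of example):

```python
# M = [[0, 1], [0, 0]] is singular; M + (1/k)I has determinant 1/k^2 != 0,
# while the perturbation (hence the distance to M, entrywise) shrinks to 0.
a, b, c, d = 0.0, 1.0, 0.0, 0.0

for k in (1, 10, 100, 1000):
    eps = 1.0 / k
    det = (a + eps) * (d + eps) - b * c
    print(f"k = {k:4d}: det(M + I/k) = {det:.2e}, max-entry distance to M = {eps:.0e}")
```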