The set of 2x1 matrices and the set of 1x2 matrices are really the same set. It's just ##\mathbb R^2##. The row notation and the column notation for an element of ##\mathbb R^2## are just ways to indicate what multiplication operations you intend to apply to it.
If you denote your basis vectors by i,j, and write an arbitrary vector v as ##v=ai+bj##, then the numbers a,b are called the components of v with respect to the ordered basis (i,j). The matrix of components of v with respect to (i,j) is, by convention, the column matrix ##\begin{pmatrix}a\\ b\end{pmatrix}##.
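To illustrate with arbitrarily chosen numbers: if ##v=3i+4j##, then the matrix of components of ##v## with respect to (i,j) is
$$\begin{pmatrix}3\\ 4\end{pmatrix}.$$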
You can use a row matrix instead of a column matrix if you want to. There is, however, a reason to prefer the column matrix:
Linear transformations can also be represented by matrices. (See https://www.physicsforums.com/showthread.php?t=694922 for more about this.) Suppose that L is a linear transformation and that we want to represent v, L and Lv by matrices. I will denote them by [v], [L] and [Lv]. If we take [v] and [Lv] to be column matrices, we can define [L] so that [Lv]=[L][v]. But if we take [v] and [Lv] to be row matrices, the best we can do is to define [L] so that [Lv]=[v][L].
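To make the difference concrete, here is a small example with an arbitrarily chosen matrix (the specific numbers don't matter). With the column convention,
$$[L][v]=\begin{pmatrix}1 & 2\\ 3 & 4\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix}=\begin{pmatrix}a+2b\\ 3a+4b\end{pmatrix}=[Lv].$$
With the row convention, the matrix representing L is the transpose of the one above, and it has to multiply from the right:
$$[v][L]=\begin{pmatrix}a & b\end{pmatrix}\begin{pmatrix}1 & 3\\ 2 & 4\end{pmatrix}=\begin{pmatrix}a+2b & 3a+4b\end{pmatrix}=[Lv].$$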
Are you saying that you want to write (5,6)+(3,9)=(8,15) as a matrix equality? It doesn't matter if you choose to write them as rows or as columns, as long as you make the same choice for all of them. Either write all three as rows, or write all three as columns.
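For example, with all three written as columns,
$$\begin{pmatrix}5\\ 6\end{pmatrix}+\begin{pmatrix}3\\ 9\end{pmatrix}=\begin{pmatrix}8\\ 15\end{pmatrix},$$
and with all three written as rows,
$$\begin{pmatrix}5 & 6\end{pmatrix}+\begin{pmatrix}3 & 9\end{pmatrix}=\begin{pmatrix}8 & 15\end{pmatrix}.$$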
It sounds like you want to transform two vectors from two different vector spaces into a single vector. There's no natural way to do that. What I mean is that you can certainly let u and v be vectors from two different spaces and let f be a function such that f(u,v) is a vector in a third space, but there's no preferred choice of f.