By trial and error, using common sense. (Smile)
First, we need to cover the empty string, so the grammar should include the rule
\[
S\to\epsilon
\]
Next, an obvious approach is to have every other rule introduce one $b$ and two $a$'s. By induction on the length of a derivation, this restriction guarantees that every generated word belongs to the language (although there are correct grammars for the language that don't obey it). The challenge is to make sure that every string of the language is generated. The newly introduced $b$ may end up to the left of the two new $a$'s, between them, or to their right, so we may define the following rules.
\[
\begin{aligned}
S&\to SbSaSaS\\
S&\to SaSbSaS\\
S&\to SaSaSbS
\end{aligned}
\]
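As a quick sanity check of the invariant (not a proof, just an experiment), here is a small Python sketch. It assumes the target language is the set of strings over $\{a,b\}$ with exactly twice as many $a$'s as $b$'s, samples random derivations from the grammar above, and verifies that every terminal word it produces satisfies $\#_a(w) = 2\,\#_b(w)$.

```python
import random

# The grammar above: "" encodes S -> epsilon, the rest are the three long rules.
PRODUCTIONS = ["", "SbSaSaS", "SaSbSaS", "SaSaSbS"]

def derive(max_expansions=20):
    """Randomly rewrite the leftmost S until no nonterminal remains.

    After `max_expansions` rule applications, every remaining S is
    rewritten to the empty string so the derivation always terminates.
    """
    word, budget = "S", max_expansions
    while "S" in word:
        rhs = random.choice(PRODUCTIONS) if budget > 0 else ""
        word = word.replace("S", rhs, 1)  # expand the leftmost S
        budget -= 1
    return word

for _ in range(10_000):
    w = derive()
    # Every production adds either nothing or exactly one b and two a's,
    # so (assuming the language is #a = 2*#b) every sample should pass.
    assert w.count("a") == 2 * w.count("b"), w
print("all sampled words satisfy #a = 2 * #b")
```

Of course this only exercises one direction (everything generated lies in the language); the converse still needs the inductive argument.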
By inserting $S$ between the symbols we don't lose the ability to expand the intermediate string (the sentential form, which still contains nonterminals) into an arbitrary string of the language. We don't want to commit, for example, to $baa$ with no way to insert anything between the $b$ and the first $a$.
It seems to me that these rules cover the language, though the grammar is probably highly ambiguous, i.e., a terminal word typically has many distinct parse trees (equivalently, many leftmost derivations). In other words, inserting $S$ everywhere is probably overkill. In any case, the next step is to prove that this grammar generates precisely the required language.
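The ambiguity is easy to exhibit concretely. Below is a sketch of a memoized parse-tree counter for this grammar (the helper names `count_trees` and `match` are mine, nothing standard); it counts leftmost derivations, which correspond one-to-one with parse trees, and reports a single tree for $baa$ but more than one for $baabaa$.

```python
from functools import lru_cache

# Right-hand sides of the three long rules; S -> epsilon is handled separately.
RULES = ("SbSaSaS", "SaSbSaS", "SaSaSbS")

@lru_cache(maxsize=None)
def count_trees(word: str) -> int:
    """Number of distinct parse trees deriving `word` from S."""
    if word == "":
        return 1  # only S -> epsilon applies
    return sum(match(word, rhs) for rhs in RULES)

def match(word: str, rhs: str) -> int:
    """Number of ways `word` splits to fit the pattern `rhs`, where 'S'
    stands for any word of the language and other characters are literal."""
    if rhs == "":
        return 1 if word == "" else 0
    head, rest = rhs[0], rhs[1:]
    if head != "S":  # a terminal must match the next character exactly
        return match(word[1:], rest) if word[:1] == head else 0
    total = 0
    for i in range(len(word) + 1):  # the leading S derives word[:i]
        ways_rest = match(word[i:], rest)
        if ways_rest:  # skip impossible splits (also keeps the recursion finite)
            total += count_trees(word[:i]) * ways_rest
    return total

print(count_trees("baa"))     # 1: a unique parse tree
print(count_trees("baabaa"))  # > 1: the grammar is ambiguous
```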
One could also try to find a more economical grammar that is less ambiguous and better suited for parsing.