A couple of pointers. First, don't call it just g(x). Better would be g(x;x_1,x_2,x_3): you have four variables here, the point x and the three parameters x_1, x_2, x_3.
Second, don't be so quick to expand. For example, instead of expanding g(x;x_1,x_2,x_3)^2 first and then differentiating, do it the other way around:
\frac{\partial}{\partial x}\, g(x;x_1,x_2,x_3)^2 = 2\,g(x;x_1,x_2,x_3)\;\frac{\partial}{\partial x} g(x;x_1,x_2,x_3)
Setting this to zero yields
g(x;x_1,x_2,x_3) \;\frac{\partial}{\partial x} g(x;x_1,x_2,x_3) = 0
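If you want to check this chain-rule step mechanically, a computer algebra system will do it. Here is a minimal sympy sketch; the product form chosen for g is just a placeholder of mine (the identity holds for any differentiable g):

```python
# Symbolic check that d/dx (g^2) = 2 g dg/dx, using sympy.
import sympy as sp

x, x1, x2, x3 = sp.symbols('x x1 x2 x3')
g = (x - x1)*(x - x2)*(x - x3)   # placeholder form of g, for illustration only

lhs = sp.diff(g**2, x)
rhs = 2*g*sp.diff(g, x)
print(sp.simplify(lhs - rhs))    # prints 0: the two forms agree
```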
The points x at which g(x;x_1,x_2,x_3)=0 are the local minima of g(x;x_1,x_2,x_3)^2: the square attains its smallest possible value, zero, there. You want the local maxima, not the local minima, so you can ignore the solutions of g(x;x_1,x_2,x_3)=0. The local maxima are among the solutions to
\frac{\partial}{\partial x} g(x;x_1,x_2,x_3) = 0
This derivative is a quadratic in x, with potentially two solutions. Find these; they are the interior points at which |g(x)| can reach a local maximum. The function |g(x)| might also reach a maximum at the boundary points 0 and 1. The function will reach its maximum value over [0,1] at one of these four candidate points.
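As a concrete sketch of this candidate search, assuming for illustration that g(x;x_1,x_2,x_3) = (x-x_1)(x-x_2)(x-x_3) and some made-up trial parameter values:

```python
# Find max of |g| over [0,1]: roots of the quadratic dg/dx plus the endpoints.
import numpy as np

x1, x2, x3 = 0.15, 0.5, 0.85           # trial parameter values (assumed)

def g(x):
    return (x - x1)*(x - x2)*(x - x3)

# g = x^3 - (x1+x2+x3) x^2 + (x1 x2 + x1 x3 + x2 x3) x - x1 x2 x3,
# so dg/dx = 3 x^2 - 2 (x1+x2+x3) x + (x1 x2 + x1 x3 + x2 x3).
a = 3.0
b = -2.0*(x1 + x2 + x3)
c = x1*x2 + x1*x3 + x2*x3
crit = np.roots([a, b, c])             # up to two interior critical points

# Keep real roots inside [0, 1], then add the boundary points.
crit = [r.real for r in crit if abs(r.imag) < 1e-12 and 0 <= r.real <= 1]
candidates = crit + [0.0, 1.0]

values = [abs(g(t)) for t in candidates]
print(max(values))                     # max of |g| over [0, 1]
```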
Now you want the partial derivatives of g(x;x_1,x_2,x_3)^2 with respect to each of the x_i, evaluated at this maximal x, to be zero. That gives three simultaneous equations in three unknowns.
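Here is one way to form those three equations symbolically, again under my assumed product form of g; the symbol xm stands for the maximizing x found in the previous step:

```python
# Build the three stationarity conditions d(g^2)/dx_i = 0 at x = xm.
import sympy as sp

x, x1, x2, x3, xm = sp.symbols('x x1 x2 x3 xm')
g = (x - x1)*(x - x2)*(x - x3)   # illustrative form of g, not given in the problem

# Partial derivatives of g^2 with respect to each parameter, evaluated at x = xm.
eqs = [sp.diff(g**2, p).subs(x, xm) for p in (x1, x2, x3)]
for e in eqs:
    print(sp.simplify(e))
```

Together with the condition that dg/dx vanish at x = xm, these are the equations to solve for the parameters.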
Minimizing a maximum (or maximizing a minimum) comes up in many places in mathematics. One area is game theory, where the goal is to find the move that maximizes the "me versus the other guy" score even when the other guy picks the move that minimizes that same score.
Another area is function design. Suppose you are asked to design an approximation g(x) to some hard-to-calculate function f(x) over some range. Most people will use a least squares approximation, which minimizes the root mean square error. But a user of the approximation is more likely concerned with the worst case error, \max |f(x)-g(x)| over the range, than with the root mean square error. Minimizing that worst case error is exactly what you are asked to do in this problem.
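To see the two error measures disagree in practice, here is a small numerical sketch; the target f(x) = e^x and the quadratic least-squares fit are arbitrary choices of mine, not part of the problem:

```python
# Compare RMS error against worst-case error for a least-squares fit.
import numpy as np

f = np.exp
xs = np.linspace(0.0, 1.0, 1001)

coeffs = np.polyfit(xs, f(xs), deg=2)   # degree-2 least-squares fit on a fine grid
approx = np.polyval(coeffs, xs)

err = f(xs) - approx
rms = np.sqrt(np.mean(err**2))
worst = np.max(np.abs(err))
print(rms, worst)                       # the worst-case error exceeds the RMS error
```

A minimax (Chebyshev-style) approximation would instead push that worst-case number down, at the cost of a slightly larger RMS error.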