maalpu said:
The rule then seems to be that a bra-ket has a common integration variable, and a ket-bra different ones, so <f|g><a|b> becomes f*(x) g(x) a*(y) b(y) whichever way they are grouped.
Correct, except that you also need \int{dx\:dy} in there for it to make sense, but I think you probably got that already.
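Written out in full, with the integrals included, that product is

\langle f|g\rangle\langle a|b\rangle = \int{f^*(x)\,g(x)\,dx}\int{a^*(y)\,b(y)\,dy} = \int{\int{f^*(x)\,g(x)\,a^*(y)\,b(y)\,dx\,dy}},

and the grouping genuinely doesn't matter, because x only ever pairs f with g and y only ever pairs a with b.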
maalpu said:
For the operator, here it was just a simple number, a multiplier, but in general it might be e.g. a partial derivative - can it then still simply be applied either left or right?
In that case, you pretty much have to keep the operator to the left of the quantity, but that's just because our rules for writing down derivatives say that the \partial always applies to the thing to the right of it. This \Omega(x,y) notation for operators doesn't really handle derivatives very well--you can do it, but it involves a bunch of funny Fourier transforms that make it look a lot more complicated than it really is.
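Just to illustrate the awkwardness (this is an aside, not something you need for what follows): if you do insist on the \Omega(x,y) form, the derivative ends up hiding inside a delta function. For the momentum operator, for instance, one common way to write the kernel is

\Omega(x,y) = -i\hbar\,\frac{\partial}{\partial x}\delta(x-y), \qquad \int{\Omega(x,y)\,g(y)\,dy} = -i\hbar\,\frac{dg(x)}{dx},

so the machinery still works, but it's clearly easier to just write the derivative acting on whatever sits to its right.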
maalpu said:
And I notice with |g> above you didn't introduce an ∫dx, but in the final <ψ|Ω you did (and it is by definition just some other bra <ξ|), an inconsistency I see in many places - when does a bra or ket outside a bra-ket imply integration, and when not?
Basically, any time you have a | in an expression with something on both sides of it, you will integrate over a common variable (I think Feynman even once said that the great rule of quantum mechanics is simply that | = \int). So |g\rangle = g(x) doesn't get an integral, because there's nothing on the other side of the | to integrate it against, but \langle \psi|\Omega = \int{\psi^*(x)\,\Omega(x,y)\,dx} does, because there are two things to multiply together.
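Putting both pieces together makes the | = \int rule very visible: \langle\psi|\Omega|g\rangle has two |'s with something on both sides of each, and correspondingly two integrals,

\langle\psi|\Omega|g\rangle = \int{\int{\psi^*(x)\,\Omega(x,y)\,g(y)\,dx\,dy}}.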
Have you taken a course in linear algebra? If so, then this may be familiar to you in other terms. Technically speaking, when we put together a bra and a ket, what we're really doing is taking the inner product of two vectors. In an ordinary finite-dimensional vector space, you can take an inner product by expanding both vectors in a common basis, multiplying the corresponding components together (with one of them complex-conjugated), and adding them all up, i.e. the dot product. Doing an integral is just the continuous version of this: the basis vectors are a continuum of position eigenstates, and g(x) tells us the weight of each one in the vector. Similarly, an operator is technically a rank-2 tensor (the continuous analogue of a matrix, with a row index and a column index) that we contract with other vectors. That's why it has two different integration variables instead of just one.
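In symbols (with f_i, g_i, \psi_i the components of the vectors and \Omega_{ij} the matrix elements of the operator in some chosen basis), the finite-dimensional and continuous versions line up like this:

\langle f|g\rangle = \sum_i f_i^*\,g_i \;\longrightarrow\; \int{f^*(x)\,g(x)\,dx}, \qquad \langle\psi|\Omega|g\rangle = \sum_{i,j}\psi_i^*\,\Omega_{ij}\,g_j \;\longrightarrow\; \int{\int{\psi^*(x)\,\Omega(x,y)\,g(y)\,dx\,dy}}.

The sum over a repeated index becomes an integral over a repeated variable; nothing else changes.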
If you haven't taken linear algebra, that paragraph may not make much sense. But even if you're not familiar with what's technically going on, the key thing to remember is that any time you stick two things together in Dirac notation, you have to integrate over their product in integral notation. Doing that kills their common variable and leaves behind any other variables that might have been lying around, which you can then use to hook up with other functions later on.
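For instance, hitting \Omega with a ket on its right uses up y and leaves x behind:

\Omega|g\rangle = \int{\Omega(x,y)\,g(y)\,dy} = h(x),

which is just some new ket |h\rangle, with its free variable x ready to be integrated against a bra later on; it's exactly the mirror image of the \langle\psi|\Omega = \langle\xi| case above.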