# Encryption question

1. Apr 30, 2006

### jeffceth

I hope this is the right place to ask this:

So, looking at a symmetric encryption scheme: a simple XOR of the data with the secret key is broken by a single known-plaintext attack. It then seems intuitive to pad all plaintext with a random value, and to perform a simple reversible operation to obfuscate the content, so that portions of the key cannot be retrieved piecemeal in a known-plaintext attack. One could, for example, compress the plaintext into an encrypted archive using a random value as the archive key, then append that random value to the archive and XOR the whole thing with the secret key. I understand that this is not actually cryptographically secure; I just don't understand why. What form would attacks against such a method, or a similar one, take?
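To make the first point concrete, here is a minimal sketch (names are illustrative, not from any particular library) of why plain XOR with the key falls to one known plaintext: since ciphertext = plaintext XOR key, XORing the ciphertext with the known plaintext hands back the key directly.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = b"sekritkey"
plaintext = b"attack at"                  # attacker happens to know this plaintext
ciphertext = xor_bytes(plaintext, key)

# Known-plaintext attack: (p ^ k) ^ p = k, so one plaintext/ciphertext
# pair recovers every key byte it covers.
recovered = xor_bytes(ciphertext, plaintext)
assert recovered == key
```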

(level of math I can handle: out of practice, but first year university math I generally remember)

sincerely,
thatwouldbeme

2. May 9, 2006

### jeffceth

bump. Should I have posted this in a different place?

3. May 9, 2006

### 3trQN

Two bad methods don't make one good :) Why would that be any more secure?

Your scheme makes too many assumptions, starting with "in the event of a single known-plaintext attack". You can't guarantee those conditions, and you should assume your attacker has all the information you do except the secret key, including the exact method you use to encrypt.

Kerckhoffs's principle, I think it's called.

4. May 13, 2006

### jeffceth

I'm afraid I was unclear. I am not making any of those assumptions. I was simply pointing out that, informationally speaking, there is a one-to-one and onto relationship between plaintexts and ciphertexts in the first scheme, and thus a single known plaintext obviously holds enough information to 'break' the key. By contrast, it would initially seem (prior to close examination) that, since the second scheme destroys the one-to-one character of the encryption over any individual bit, one would not be able to retrieve information about the key from a known-plaintext attack. However, in reality many such systems can be cryptanalysed over a series of known plaintexts. How would this occur in this case?

If it would help send things in the right direction, my current hypothesis is that successful cryptanalysis of this scheme would involve capitalising on the relationship between the bits of the archive key and the bits of the archive itself, since obviously, taken individually, each 'block' of information is 100% unpredictable (the archive key is random, and the archive differs for each key), so any cryptanalysis must capitalise on the relationship between them. Of course, this information would be limited and/or probabilistic, so it would have to be compiled over multiple known plaintexts. Does this seem correct, or am I making fatal flaws in how I view this situation?
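One concrete leak worth noting: the outer layer is still a plain XOR with the secret key, and real archive formats start with fixed "magic" bytes. Here is a hedged sketch (the "compress with key r" step is a stand-in, and the ZIP `PK\x03\x04` signature is used only as an example of a fixed header) showing that those predictable positions leak the corresponding secret-key bytes from every ciphertext, without the attacker needing any plaintext at all:

```python
import os

MAGIC = b"PK\x03\x04"   # e.g. a fixed archive signature, as in ZIP files

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(plaintext: bytes, secret_key: bytes) -> bytes:
    # Stand-in for the proposed scheme: "archive" the data under a random
    # key r, append r, then XOR everything with the (repeated) secret key.
    r = os.urandom(8)
    archive = MAGIC + xor_bytes(plaintext, (r * 8)[:len(plaintext)])
    blob = archive + r
    return xor_bytes(blob, (secret_key * 16)[:len(blob)])

secret_key = b"topsecretkeybytes"
ct1 = encrypt(b"first message", secret_key)
ct2 = encrypt(b"other message", secret_key)

# The fixed header acts like a known plaintext at a known position:
# ciphertext[:4] ^ MAGIC recovers the first 4 secret-key bytes, and the
# same bytes fall out of every ciphertext regardless of the random r.
leaked = xor_bytes(ct1[:len(MAGIC)], MAGIC)
assert leaked == secret_key[:len(MAGIC)]
assert leaked == xor_bytes(ct2[:len(MAGIC)], MAGIC)
```

This is exactly the "piecemeal retrieval" worry in the original post: the random archive key protects the archive *contents*, but any structure in the archive container itself sits directly under the XOR layer and gives key bits away.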

sincerely,
thatwouldbeme