No need, all you have to do is read the whitepaper. They home-brewed the encryption algorithm and nobody actually knows if it’s worth a damn. That’s not exactly a secret.
After all these years, security researchers still don’t know if the encryption is any good?
On that level it usually falls to computer scientists. Formal methods can prove that an implementation is correct against its specification, but proving the absence of unintended attacks is a lot harder.
Needham-Schroeder comes to mind as an example from back when I was studying these things.
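To make that concrete: the original Needham-Schroeder public-key protocol was formally reasoned about and trusted for almost two decades before Lowe published a man-in-the-middle attack on it. Below is a minimal Python sketch of that attack; the party names and the encrypt/decrypt helpers are illustrative stand-ins for public-key encryption, not real crypto or anyone's actual implementation.

```python
# Minimal sketch of Lowe's man-in-the-middle attack on the original
# Needham-Schroeder public-key protocol. encrypt/decrypt only model
# "who can read this message", not real cryptography.

def encrypt(recipient, payload):
    # Models public-key encryption: only `recipient` can open the message.
    return {"for": recipient, "payload": payload}

def decrypt(reader, message):
    assert message["for"] == reader, "only the intended recipient can decrypt"
    return message["payload"]

NA, NB = "Na", "Nb"  # nonces chosen by the honest parties

# 1. Alice starts a perfectly legitimate session with Mallory:
#    A -> M : {Na, A} encrypted for Mallory
m1 = encrypt("Mallory", (NA, "Alice"))

# 2. Mallory re-encrypts Alice's opening message for Bob, posing as Alice:
#    M(A) -> B : {Na, A} encrypted for Bob
na, sender = decrypt("Mallory", m1)
m2 = encrypt("Bob", (na, sender))

# 3. Bob answers what he thinks is Alice:
#    B -> A : {Na, Nb} encrypted for Alice (Mallory just forwards it along)
m3 = encrypt("Alice", (na, NB))

# 4. Alice decrypts it, sees her nonce Na, and replies to *her* peer Mallory:
#    A -> M : {Nb} encrypted for Mallory
_, nb = decrypt("Alice", m3)
m4 = encrypt("Mallory", nb)

# 5. Mallory now knows Nb and finishes Bob's run of the protocol as "Alice":
#    M(A) -> B : {Nb} encrypted for Bob
m5 = encrypt("Bob", decrypt("Mallory", m4))

assert decrypt("Bob", m5) == NB  # Bob believes he shares Na, Nb with Alice
```

Lowe's fix is small: Bob also includes his own identity in his reply, so Alice can see she is completing a run with the wrong peer. That is exactly the kind of property nobody thought to write down and prove in the first place.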
And not a single one has been able to analyze the encryption in all these years? Fact is, Telegram is the tool the Russian opposition and even Ukrainians use to communicate without Putin being able to infiltrate.
No. It kind of comes down to Dijkstra’s old statement: “Testing can only prove the presence of bugs, not their absence.”
You can prove logical correctness of code, but an abstract thing such as “is there an unknown weakness” is a bit harder to prove. The tricky part is coming up with the correct constraints to prove.
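As a toy illustration of that last point (a hedged Python sketch, not taken from any real codebase): both functions below meet the obvious spec, return True exactly when the two MACs are equal, and you could prove that. But the first one bails out at the first differing byte, so its running time leaks how much of the guess was right, and nothing in the spec you proved ever mentions timing.

```python
import hmac

def verify_naive(expected: bytes, received: bytes) -> bool:
    # Functionally correct against the spec "True iff equal"...
    if len(expected) != len(received):
        return False
    for a, b in zip(expected, received):
        if a != b:
            # ...but the early exit makes running time depend on how long
            # the matching prefix is, which an attacker can measure.
            return False
    return True

def verify_constant_time(expected: bytes, received: bytes) -> bool:
    # Same spec, but compare_digest runs in time independent of where
    # (or whether) the inputs differ, closing the timing side channel.
    return hmac.compare_digest(expected, received)
```

Unless someone thinks to add “running time must not depend on secret data” to the properties being proved, the proof happily certifies the leaky version too.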
Security researchers tend to be on the testing side of things.
A notable example is how DES got its S-boxes changed between proposal and standardisation. The belief at the time was that the new S-boxes hid some unknown backdoor for the NSA. AFAIK, it has never been proven.
And it isn’t even end-to-end encrypted by default; you have to enable that manually. By default, all your messages are stored on their servers in a form Telegram can read.