Source: https://www.linkedin.com/feed/update/urn%3Ali%3Ashare%3A6549488012206632960
Beyond the ‘Naive’ AI-ML-DL-DNN Generator and Interpreter Premises of LSTMs to ‘Real World’ Adversarial Contexts

Thank you for the great explanations and discussions. Here is a thought for future advancements in LSTMs, attention mechanisms, and related architectures. The key premise of these models is that specific word and sentence structures are taken at ‘face value’, based on whatever text (data) they contain. However, as is known in (defensive) cybersecurity and cryptography, one can obfuscate text (data) by replacing and altering its contents, so that the ‘hidden’ meaning is shared only with those who have access to the cipher. How can LSTMs account for such substitutions and transpositions in order to ‘decipher’ the ‘real’ meanings?

Furthermore, as is known in (offensive) cybersecurity and cryptography, misinformation and disinformation (familiar to those in business and competitive intelligence, as well as in military and corporate surveillance) can be used to intentionally mislead, as in ‘fake news’. How do LSTMs account for such misinformation? (I am not yet considering more ‘interesting’ techniques such as man-in-the-middle (MITM) attacks, which can radically alter the input and/or output streams of text.)

Here are some clues. On Next Generation Encryption: https://lnkd.in/dPkWnef
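As a minimal illustration of the substitution problem raised above (this sketch and its function names are my own, not from the post): a classical monoalphabetic substitution cipher remaps every letter, so the token stream a model sees no longer matches the vocabulary it was trained on, and ‘face value’ interpretation fails without access to the key.

```python
import random
import string

def make_substitution_cipher(seed=42):
    """Build a toy monoalphabetic substitution cipher: a random
    permutation of the lowercase alphabet, plus its inverse.
    A stand-in for the obfuscation described in the post."""
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    encrypt = str.maketrans(dict(zip(letters, shuffled)))
    decrypt = str.maketrans(dict(zip(shuffled, letters)))
    return encrypt, decrypt

enc_table, dec_table = make_substitution_cipher()
plaintext = "attack at dawn"
ciphertext = plaintext.translate(enc_table)
# To a model trained on ordinary text, the ciphertext looks like
# gibberish; only a reader holding the inverse mapping recovers it.
recovered = ciphertext.translate(dec_table)
assert recovered == plaintext
```

A transposition cipher poses the analogous problem at the word-order level rather than the character level: the symbols are intact but their sequence, which is exactly what an LSTM conditions on, is scrambled.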