
Talk:Neural machine translation


Adding simple Introduction, History and Challenges section

Hello, neural machine translation is an important and promising technology to which all the big companies (Google, Facebook, Microsoft) are switching. I found the current introduction to the topic too technical; it could be explained in simpler language. Not every reader has knowledge of neural networks or deep learning. Also, there is not much information about how this technology became so popular and who introduced it first. I would like to add these sections to the article to make it more useful and informative. The research paper I am referring to for the Challenges section is the one by Luong, Sutskever, Le, Vinyals and Zaremba.[1] For the Introduction, I am referring to an article posted on Medium.[2] The History section would be written with reference to the paper by Kalchbrenner and Blunsom.[3] Prekshabehede (talk) 20:29, 5 October 2017 (UTC)

References

  1. ^ "Luong, M. T., Sutskever, I., Le, Q. V., Vinyals, O., & Zaremba, W. (2014). Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206" (PDF). Cornell University Library.
  2. ^ "History and Frontier of the Neural Machine Translation". medium.org.
  3. ^ "Kalchbrenner, N., & Blunsom, P. (2013, October). Recurrent Continuous Translation Models. In EMNLP (Vol. 3, No. 39, p. 413)" (PDF).

Well, go ahead! Excellent idea! You don't need anyone's permission! -- Robert Abitbol — Preceding unsigned comment added by 24.54.3.100 (talk) 05:18, 24 November 2020 (UTC)

Broken references

@MrOllie: Since this paragraph was removed here, this article now contains several broken references. Can these broken references be repaired? Jarble (talk) 02:38, 14 February 2019 (UTC)

Sure, I cleaned it up. - MrOllie (talk) 02:44, 14 February 2019 (UTC)

I have a question

The talk pages are meant to indicate how an article could be improved, right? This is exactly what I did yesterday in a text, and a guy named Largopiazo or Largoplazo erased it; moreover, this guy has threatened to block me. What's going on here?

I simply suggested adding to the article a paragraph saying that up to now no one has come forward to explain what happens after 2–3 million sentences have been added to the databases.

If we're not allowed to make suggestions on ways to improve an article, what are talk pages for?

-- Robert Abitbol — Preceding unsigned comment added by 2001:18C0:424:B00:4D1C:F790:BC53:BE53 (talk) 02:09, 16 April 2021 (UTC)

What's going on here is exactly what I've explained to you in detail on multiple occasions is going on here. Refer to all previous commentary if you still don't know. For the benefit of others just arriving: this is about several years of posts to talk pages (or at least this one) contrary to WP:NOTFORUM and WP:No original research. Largoplazo (talk) 02:11, 16 April 2021 (UTC)
You're not addressing my question. Please answer my question DIRECTLY and stop evading it. My question is: CAN WE MAKE SUGGESTIONS TO IMPROVE A PAGE ON A TALK PAGE, OR WILL YOU ERASE ALL SUGGESTIONS? Or let me rephrase my question: if I write a paragraph starting with the words "Here are my suggestions to improve this article", will this be a guarantee that you won't vandalize our comments? -- Robert Abitbol
1. Going on at tremendous length about how you think that what everybody in the field thinks is wrong goes way beyond "making suggestions to improve the page", especially after you've been told that none of it can trigger any changes to the article without reliable sources and a discussion as to how they override the sources already cited. 2. Using the talk page as a forum for your own original research is not allowed. 3. See WP:BLUDGEON. Largoplazo (talk) 02:20, 16 April 2021 (UTC)
Fair enough. So I'll start my comments with the words "How to improve this page" and you won't be able to say boo. --Robert Abitbol

How the Neural Machine Translation page could be improved

There are two aspects to all systems of machine translation:

1) The compilation of the dictionary
2) The translation of the word, sentence segment, or sentence entered by the user.

Unfortunately, the article deals only with the second aspect. Whether the explanation of that second aspect makes sense depends on each reader's perception; I won't get into it.

In fact, I am allowed to state my opinion in the context of improving a page, but Largoplazo will consider this an opinion, so I won't do so. There is a fine line here. You're entitled to offer opinions on how to improve a page whether Largoplazo likes it or not. But as I said, I won't go into it.

In fact, when it comes to the compilation of the dictionary in the context of NMT, all we know is this:

2–3 million bilingual sentences are injected into the databases

What happens next, nobody knows. This aspect has not been dealt with at all in the article.

It would be crucial to get information on what happens after the injection of those millions of translated sentences into the databases and how the sentences are processed, and to enter this information in the article.

Thanks in advance, Largoplazo, for letting me know if you feel this comment is not acceptable. Talk pages are meant to indicate ways to improve a page, and that is what I have just done. So please don't vandalize my text.

-- Robert Abitbol

"A bidirectional recurrent neural network, known as an encoder, is used by the neural network to encode a source sentence for a second RNN, known as a decoder, that is used to predict words in the target language." - MrOllie (talk) 11:17, 16 April 2021 (UTC)[reply]
Well said, MrOllie. I think you summed it up really well, and I strongly encourage you to enter this sentence in the main article. I don't agree with the word "predict", however. The program does not predict; it establishes rules. -- Robert Abitbol
It's already in the main article, hence the quotation marks. You did read the whole article, right? - MrOllie (talk) 12:47, 17 April 2021 (UTC)
I did, but to tell you the truth, those words don't mean much to me. Of course every MT system that has ever existed works with the encoder-decoder system. What else is new? It's like you're telling me that cars run on gas. REALLY? BIG DEAL! So this is not a demonstration. A demonstration is not made with one single sentence that states what everyone knows; a demonstration is made with examples, and so far I have never read a single decent demonstration with real examples. This is why I am saying that this article needs, first and foremost, info on what happens when millions of sentences are injected into the databases. --Robert Abitbol — Preceding unsigned comment added by 2001:18C0:424:B00:C9DD:EDF6:A37C:1481 (talk) 19:17, 30 April 2021 (UTC)
An encyclopedia article wouldn't have space to show a neural net trained on millions of sentences. That's a bit like saying our article on cars should include a full set of blueprints, including all the parts of the engine. You can examine any of the open-source projects, though; https://github.com/OpenNMT/OpenNMT-py is a decent one. - MrOllie (talk) 19:31, 30 April 2021 (UTC)
Whether the app handles millions of sentences or billions of sentences is not relevant; you're going to the extreme. No one is asking for extremely detailed explanations. We simply want an overview. Let's use your metaphor: currently it's like we're asking how the motor runs, and the answer we get is "with gas". Quite an explanation! I find it very strange that no one comes forward to give an outline of the work done by the app on the corpus. I am going to ask a world expert on MT, Jeff Allen, and then we'll know.

-- Robert Abitbol

When asked if you had read the page, the answer given was "I did but to tell you the truth those words don't mean much to me". This is a refusal to engage with the core argument. The program does not establish rules; it infers what appear to you as rules from complex statistical analysis, but in reality it does not even "read" the language, nor does it understand it. The program does not read or see language; it sees lists of 0s and 1s that are then translated for us into "language", because computers work on binary code. As such, the program does not understand language but sees patterns that it reproduces, and this is why it performs so well without relying on actual grammar rules. To engage with neural machine translation, you would need to engage with neural networks first and understand how they work. Explaining how NMT works at the granular level you seek requires providing some examples, along with code and instructions, which are often engine-specific. An encyclopedia needs to be exhaustive and "universal"; as such, relying on a specific engine to indulge you on something you are not willing to actually make an effort to engage with (see the first sentence of this paragraph) is a waste of time. — Preceding unsigned comment added by 104.163.166.231 (talk) 19:20, 2 June 2021 (UTC)
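To make that last point concrete, here is a tiny sketch of the very first step: text is mapped to integer IDs before the network ever sees it, so the model operates on numbers, not words. The vocabulary below is an invented toy assumption; real systems learn subword vocabularies from the corpus:

    vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}  # invented toy mapping

    def encode(sentence):
        """What the network actually receives: integer IDs, not words."""
        return [vocab.get(word, vocab["<unk>"]) for word in sentence.split()]

    print(encode("the cat sat"))  # [0, 1, 2]
    print(encode("the dog sat"))  # [0, 3, 2] -- an unseen word is just another ID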
I am not here to debate but to give suggestions to improve the page. Any other topic is IRRELEVANT. How about you debate the subject with yourself? -- Robert Abitbol

2023 rewrite

Hi, I just published a complete rewrite of this article. I tried to incorporate as much of the existing info as I deemed sensible and added quite a bit of additional info. What I dropped was:

  • The specific low-resource example of translating Akkadian -> English. It's present in the main article about MT as well, and I think it's more fitting there.
  • Some of the mentioned advances: “large-vocabulary NMT, applications to image captioning, subword-NMT, multilingual NMT, multi-source NMT, character-level NMT”. I think these are valuable things to add, but they deserve more than just a mention.
  • Some other details, e.g. that the European Parliament uses NMT.

Since this is basically my first edit of Wikipedia, I'd appreciate some feedback on what I did. :)

In the future, I'd like to add a bit more here, I think. E.g.:

  • Sub-word NMT
  • Training variations: Reinforcement learning, curriculum learning, unsupervised NMT, …
  • Multi-lingual and low-resource NMT

--GorgorMith (talk) 16:16, 26 December 2023 (UTC)