A paper from Waseda Uni, Japan: a pipeline to recover lost parts of an image (before/after examples are on the page). Visit the page and watch the video from the 27-second mark. Code (coming soon) <-- I certainly hope it will really be released to the public soon. Edit 27-Feb-2018: the source code has been uploaded to GitHub (as of 7 days ago).
Whoa, I would totally love this... just how much time have I spent cleaning and redrawing manhua before...
Wait, the translators of manga redraw the original?? I really thought that they just magically deleted the text and then added the translated text. I'm dumb, please forgive me.
Just a reminder: this will not be some executable that's a few megabytes big with magic capabilities. Here is a quote from the paper regarding the training process of the neural network:
Not the translator; manga scanlation is always a team effort. The cleaners and redrawers are the ones who erase the original text and put the translated text in its place.
@GekkoZockt: while it's true that the training process is very time-consuming, there are things to consider:
- The amount/length of training depends on what kind of problem you want it to solve, and how much of a perfectionist you are. They spent two months training on 8,097,967 images over 500,000 iterations. Nature can be very diverse, but suppose we only need it to work on Adachi Mitsuru's style?
- The training can be done once, and others would only be using the result.
- Distributed training to produce several "brains"? One may be tuned to b/w images, others to manhwa/manhua. Or maybe limit it further to a certain artist's style?

This is a sample of a quick TensorFlow training result: a Super Saiyan classifier. Not nearly as powerful as what Waseda Uni has done, but hopefully it's enough to illustrate my point above: not everyone needs to be a producer; most are consumers.
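To illustrate the "train once, consume many times" point with something tiny: below is a toy numpy logistic regression, not the paper's network -- all the names (`brain.npz`, `predict`, etc.) are made up for illustration. The producer pays the training cost once and publishes the weights; consumers just load them and run the cheap forward pass.

```python
import numpy as np

# Toy stand-in for the "train once, consume many times" workflow.
# This is NOT the paper's completion network, just an illustration.

rng = np.random.default_rng(0)

# --- producer side: the expensive, one-time training run ---
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # a linearly separable toy task
w, b = np.zeros(2), 0.0
for _ in range(500):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))
np.savez("brain.npz", w=w, b=b)               # publish the trained "brain"

# --- consumer side: load the published weights, inference only ---
brain = np.load("brain.npz")

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ brain["w"] + brain["b"]))) > 0.5

print(predict(np.array([1.0, 1.0])))    # clearly on the positive side
print(predict(np.array([-1.0, -1.0])))  # clearly on the negative side
```

The consumer half never touches the training data or the training loop; that asymmetry is the whole point.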
@gangbuntu I just wanted to point out that it's a neural network and needs to be trained. Of course, the majority of the points you brought up are correct. That being said, it will still take some time to train that NN on consumer-grade hardware. The completion network in particular needs a lot of training, even if you narrow down the art style. The sample size could also be a problem. The code example you brought up only does image classification? (I only read the readme, sorry.) The completion network needs to make sense of a lot more and needs a sufficient amount of data, or you'll have problems with depth of field, neighboring pixels, etc.
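FWIW, if I'm reading the paper right, the completion network doesn't see the raw image alone: the hole is filled with a constant (the training-set mean pixel) and a binary mask channel marks where it is, so the net knows which pixels to invent. A rough numpy sketch -- the function name and the mean value here are my own, not from their code:

```python
import numpy as np

# Sketch of how an inpainting net's input might be assembled, as I
# understand the paper: erase the hole with a constant fill, then
# append a binary mask channel marking the missing region.

def make_completion_input(image, hole, mean_pixel=0.5):
    """image: (H, W, 3) floats in [0,1]; hole: (H, W) bool, True = missing."""
    x = image.astype(np.float32).copy()
    x[hole] = mean_pixel                         # erase the hole's content
    mask = hole.astype(np.float32)[..., None]    # 1 where pixels are missing
    return np.concatenate([x, mask], axis=-1)    # (H, W, 4) network input

img = np.random.rand(64, 64, 3).astype(np.float32)
hole = np.zeros((64, 64), dtype=bool)
hole[20:40, 20:40] = True                        # a 20x20 missing patch

inp = make_completion_input(img, hole)
print(inp.shape)    # (64, 64, 4)
```

The mask channel is what lets a single trained model handle holes of arbitrary shape and position.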
You seem to know your stuff, but IMHO DoF relates more to live images and imitations of them; I don't see much of it employed in manga/comics. Off the top of my head I can only think of two:
- Frank Miller's 300 (the first spread page blew me away)
- some artists on Witchblade

Most mangaka have a "simplistic" style (disclaimer: "simplistic" might not be the best word for it; it's used here as a contrast to the art of Me and the Devil Blues). Just like PNG is almost always better than JPG for manga (a characteristic of most manga is sharp strokes/edges, while JPG achieves compression by stripping away exactly those), techniques that serve well for photographic images might not fit as well for manga.

* Not suggesting it will be much simpler with manga, just that they are a different beast.
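The sharp-edges point can be shown with a toy DCT computation (the DCT-II, the same transform family JPEG applies per 8-pixel block): a step edge keeps a much larger share of its energy in the high-frequency coefficients than a smooth gradient does, and those are exactly the coefficients JPEG quantizes away. Toy 8-sample signals only:

```python
import numpy as np

# Why sharp manga strokes suit PNG better than JPG: compare how much
# signal energy lands in high-frequency DCT coefficients for a sharp
# edge vs a smooth photographic-style gradient.

N = 8
n = np.arange(N)
# DCT-II basis matrix, rows = frequencies k = 0..7
C = np.cos(np.pi * np.outer(np.arange(N), n + 0.5) / N)

def hf_energy_fraction(signal):
    coeffs = C @ (signal - signal.mean())     # remove DC, then transform
    energy = coeffs ** 2
    return energy[4:].sum() / energy.sum()    # share in the top 4 frequencies

edge = np.where(n < 4, 0.0, 1.0)   # sharp stroke boundary (black|white)
ramp = n / 7.0                     # smooth gradient

print(hf_energy_fraction(edge) > hf_energy_fraction(ramp))  # True
```

A lossy codec that discards those high frequencies smears the edge into ringing artifacts, which is exactly the halo you see around linework in over-compressed JPG scans.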
There's another tool from the same three people (Satoshi Iizuka, Edgar Simo-Serra, Hiroshi Ishikawa): coloring a grayscale image. The showcase: the source code: https://github.com/satoshiiizuka/siggraph2016_colorization I don't think this one will have much use in manga/comics though. I mean, hair color in anime/manga, Wolverine's suit, ...
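If anyone wants to feed their own scans to it, the input is a grayscale image; a common way to extract luminance from a color scan is the Rec. 601 weighting sketched below. This is a generic conversion, not taken from the repo -- its actual preprocessing may differ, so check its README before relying on this:

```python
import numpy as np

# Generic RGB -> luminance conversion (Rec. 601 weights). The human eye
# is most sensitive to green, hence its large weight; the three weights
# sum to 1, so values stay in [0, 1].

def to_luminance(rgb):
    """rgb: (H, W, 3) floats in [0,1] -> (H, W) luminance in [0,1]."""
    return rgb @ np.array([0.299, 0.587, 0.114])

img = np.random.rand(4, 4, 3)
gray = to_luminance(img)
print(gray.shape)   # (4, 4)
```

The colorization net then has to predict the missing chrominance from that single channel, which is why genuinely ambiguous cases (anime hair color, costume colors) are a coin flip for it.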