1
19

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 13 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Unlike photos, upscaling digital art with a well-trained algorithm will likely have little to no undesirable effect. Why? The drawing originated as a series of brush strokes, fill areas, gradients, etc., which could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.) Suppose I gave you a low-res image of the flag of South Korea 🇰🇷 and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw it with vector shapes that use the same colors, recreate every stroke and arc in the image, and then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process: not adding detail, just trying to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
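
To make the "re-render the vector description" idea concrete, here is a minimal Python/Pillow sketch. It is not what waifu2x does internally; it only contrasts naive pixel enlargement with re-rendering a known shape description at the target size. The flag-like design, sizes, and filenames are invented for this illustration.

```python
# A toy illustration (not waifu2x itself): if you know the picture is really a
# set of vector shapes rendered to pixels, you can re-render those shapes at any
# resolution instead of interpolating pixels. The flag-like design and filenames
# below are invented for this sketch.
from PIL import Image, ImageDraw

def render_flag(height: int) -> Image.Image:
    """Render a simple flag-like design (white field, centered red disc) at any size."""
    img = Image.new("RGB", (height * 3 // 2, height), "white")
    draw = ImageDraw.Draw(img)
    cx, cy, r = img.width // 2, img.height // 2, height // 4
    draw.ellipse((cx - r, cy - r, cx + r, cy + r), fill=(205, 49, 58))
    return img

low = render_flag(64)                                                # the low-res "original"
naive = low.resize((low.width * 8, low.height * 8), Image.NEAREST)   # blocky pixel enlargement
smooth = low.resize((low.width * 8, low.height * 8), Image.LANCZOS)  # soft, blurry edges
ideal = render_flag(64 * 8)                                          # re-rendered from the shape description

for name, img in (("naive", naive), ("smooth", smooth), ("ideal", ideal)):
    img.save(f"flag_{name}.png")
```

A learned upscaler only sees the low-res raster, so it effectively tries to approximate the re-rendered "ideal" output from pixels alone; that works well for flat-shaded drawings with clean edges and much less well for photos full of fine detail.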

2
12
CBR 250R (by Hi Fumiyo) (files.catbox.moe)
submitted 16 hours ago by MentalEdge@sopuli.xyz to c/morphmoe@ani.social

Artist: Hi Fumiyo | pixiv | twitter | danbooru

3
41
Magnet Girls (by Rinotuna) (files.catbox.moe)
submitted 22 hours ago by MentalEdge@sopuli.xyz to c/morphmoe@ani.social

Artist: Rinotuna | pixiv | twitter | artstation | linktree | patreon | danbooru

Full quality: .jpg 1 MB (2274 × 2844)

4
31

Artist: Rinotuna | pixiv | twitter | artstation | linktree | patreon | danbooru

5
6
submitted 1 day ago* (last edited 16 hours ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 12 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

6
39

Artist: X.X.D.X.C | pixiv | danbooru

7
28
submitted 2 days ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 12 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

8
21
(by Shycocoa) (files.catbox.moe)

Artist: Shycocoa | pixiv | twitter | artstation | danbooru

9
26
submitted 3 days ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Yes, some Linux distros use blue kernel-panic screens too, but I'm tagging the post [Windows] because that's the "franchise" where the "character" debuted.

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 11 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

10
53
CSX EMD GP38-2 (by Zhvo) (files.catbox.moe)

Artist: Zhvo | pixiv | twitter | danbooru

11
23
Sakura (Random-tan Studio) (files.catbox.moe)
submitted 4 days ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 11 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

12
23
Katana (Random-tan Studio) (files.catbox.moe)
submitted 5 days ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 10 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

Edit: fixed image link. Who knew global variables in Python were this tricky?
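
For context on the Python-globals remark: a common pitfall is that assigning to a name inside a function creates a new local variable unless the name is declared `global`, so a module-level value (such as an image link) silently never updates. A minimal, hypothetical sketch (the names are invented, not the actual script):

```python
# Hypothetical reconstruction of the pitfall (names invented, not the real script):
# assigning to a name inside a function creates a *local* variable unless the name
# is declared global, so the module-level value never changes.
image_url = "placeholder.png"

def set_image_broken(new_url):
    image_url = new_url      # binds a new local name; the global stays untouched

def set_image_fixed(new_url):
    global image_url         # explicitly rebind the module-level name
    image_url = new_url

set_image_broken("https://files.catbox.moe/example.png")
print(image_url)             # still "placeholder.png"

set_image_fixed("https://files.catbox.moe/example.png")
print(image_url)             # now the new URL
```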

13
13
submitted 6 days ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 10 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

14
19
Original (by Shycocoa) (files.catbox.moe)

Artist: Shycocoa | pixiv | twitter | artstation | danbooru

Full quality: .jpg 2 MB (3322 × 2904)

15
33
F-117 Nighthawk (by Fami) (files.catbox.moe)

Artist: Fami | fediverse | pixiv | twitter | artstation | tumblr | danbooru

16
21

Artist: Hi Fumiyo | pixiv | twitter | danbooru

17
13
submitted 1 week ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 9 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

18
20
Princess Boo (by St) (files.catbox.moe)

Artist: St | pixiv | twitter | danbooru

Full quality: .jpg 1 MB (1900 × 2950)

19
23
Original (by Rinotuna) (files.catbox.moe)

Artist: Rinotuna | pixiv | twitter | artstation | linktree | patreon | danbooru

20
29
Dino-daycare (by Rinotuna) (files.catbox.moe)

Artist: Rinotuna | pixiv | twitter | artstation | linktree | patreon | danbooru

Full quality: .jpg 2 MB (2713 × 3086)

21
10
submitted 1 week ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 9 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

22
45

Artist: Niii | pixiv | danbooru

23
8
submitted 1 week ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 8 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

24
45

Artist: Rinotuna | pixiv | twitter | artstation | linktree | patreon | danbooru

Full quality: .jpg 1 MB (2668 × 3251)

25
26
submitted 1 week ago* (last edited 2 days ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 8 on Tapas

Oops, my post scheduling script ran twice today. I'll check cron syntax once more.
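
On the cron point: a job fires twice a day whenever the hour field lists two values (or the same job appears in two crontab entries). Hypothetical crontab lines, with an invented script path, showing the difference:

```
# Five fields: minute hour day-of-month month day-of-week, then the command.

# Fires twice daily, at 09:00 and 21:00, because the hour field lists two hours:
0 9,21 * * *  /home/user/post_scheduler.py

# Fires once daily, at 09:00:
0 9 * * *     /home/user/post_scheduler.py
```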

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original


MorphMoe

227 readers
53 users here now

Anthropomorphized everyday objects etc. If it exists, someone has turned it into an anime-girl-or-guy.

  1. Posts must feature "morphmoe", meaning non-sentient things turned into people.
  2. No nudity. Lewd art is fine, but mark it NSFW.
  3. If posting a more suggestive piece, or one with simply a lot of skin, consider still marking it NSFW.
  4. Include a link to the artist in post body, if you can.
  5. AI-generated content is not allowed.
  6. Positivity only. No shitting on the art, the artists, or the fans of the art/artist.
  7. Finally, all rules of the parent instance still apply, of course.

SauceNao can be used to reverse-search the creator of a piece if you do not know it.

You may also leave the post body blank or mention @saucechan@ani.social, in which case the bot will attempt to find and provide the source in a comment.

Find other anime communities which may interest you: Here

Other "moe" communities:

founded 5 months ago