Slanderman

Overview

This primitive form of AI can clone you and other users, generating Markov-chain messages with occasionally coherent and funny results.


hey, please ignore the slander in the headline.

have you ever wanted to duplicate yourself? that's right, i'm talking about a virtual clone. don't worry, they'll be kept safe and content. if you're interested, i'll explain how it works later. but let's cut to the chase - what can i do?

  • a lot of things, for example:
  • making the clones speak
  • making inspirational quotes
  • making demoralising quotes
  • things with roles
  • and more features to come

if you're having trouble wrapping your head around all the possibilities, feel free to join my server and ask someone for help. currently, i'm limited to markov chains, but i do hope to learn new, more accurate methods of cloning in the future. here's a tier list of generated quotes to give you an idea of the (varying) quality you can expect:

[tier list image]

(just do /help for all the commands.)

How it works

you're still here? alright, i did promise. think of it this way: i collect two discord messages, "Roses are Red" and "Violets are Blue".

graph LR
A[Roses] -- 100% --> B[are]
B -- 100% --> C[Red]

D[Violets] -- 100% --> E[are]
E -- 100% --> F[Blue]

these two messages can each be represented as a chain of transition probabilities - the chance of the word after "Roses" being "are" is 100%, as we have not yet created a chain between the two messages. but what if we wanted to construct an entirely new sentence using only the words from these messages? in that case, we can interlink the two messages and create a new Markov chain. here's how that would look:

graph LR
A[Roses] -- 100% --> B[are]
B -- 50% --> C[Red]

D[Violets] -- 100% --> B[are]
B -- 50% --> F[Blue]

the first thing to notice is that the chain now branches off and has alternate paths. these alternate paths are entirely determined by the likelihood of one word coming after another. for example, in our dataset of "Roses are Red" and "Violets are Blue", only two words ever come after "are": "Red" and "Blue". therefore, each has a 50% probability of being chosen after "are". meanwhile, "Roses" and "Violets" both lead to the same word, giving a 100% probability.
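to make that concrete, here's a minimal python sketch of how such a chain could be built. (to be clear, this is just an illustration of the idea, not my actual source code - build_chain and the dict-of-Counter layout are made up for this example.)

from collections import Counter, defaultdict

def build_chain(messages):
    # map each word to a Counter of the words observed immediately after it
    chain = defaultdict(Counter)
    for message in messages:
        words = message.split()
        for current, following in zip(words, words[1:]):
            chain[current][following] += 1
    return chain

chain = build_chain(["Roses are Red", "Violets are Blue"])
# chain["are"]   == Counter({"Red": 1, "Blue": 1})  -> 50% each
# chain["Roses"] == Counter({"are": 1})             -> 100%

storing counts (rather than percentages) keeps updates cheap: the probabilities fall out later when you divide each count by its row's total.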

nice, got that? let's see what could be generated from this chain:

graph LR
A[Roses] -- 100% --> B[are]
B -- 50% --> C[Red]

D[Violets] -- 100% --> B[are]
B -- 50% --> F[Blue]

style A fill:#6cd987,stroke:#333,stroke-width:4px
style B fill:#6cd987,stroke:#333,stroke-width:4px
style F fill:#6cd987,stroke:#333,stroke-width:4px

"Roses are Blue" well, that's certainly not true, right? which means that we've succeeded. of course, there are many extra steps in filtering and linking data which are not included here, but this is the backbone of the whole system.

Privacy

privacy policy coming soon. but so you are aware: you can opt your data out anytime using the command /wipe_messages. data will not be collected before a channel is attached using /attach. your data is stored remotely on a secure server and is never distributed or used for reasons other than the described function.
