Right and Wrong - page 2

 
Dominik Christian Egert #:
Like if I paint a landscape and you paint a landscape, are they both unique, and can they be considered independent works?

What if someone gets inspired by these two paintings and creates their own, is it unique?

What if another person "copies" some of your painting, copies some of my painting, and fills in the missing pieces from the third painting, is it unique?

Well, maybe it is unique, but is it inherently original? Or is it a compilation of other works?

Let's say I take three source files from random sources, put them together to form a new program, and compile it. Is the resulting binary now my work?

Let's say I then decompile that binary and distribute the resulting source file. Is it my work, my creation?

Let's break it down a little more: a NN takes input and produces output. Now take 7zip. It also takes input and produces output.

I think this comparison can be argued, because a zip archive contains a dictionary and a list of references into that dictionary. A NN is somewhat similar: it takes input and stores traces of those inputs in its parameters, which could be seen as the dictionary. The difference is that it produces a resolved output directly, instead of a list of references into a dictionary.

So it could be argued that the NN itself is part of the output, as it is required to reproduce that output.

The same goes for the zip file: the dictionary is required to reproduce the output.
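To make the "dictionary plus references" idea concrete, here is a deliberately naive sketch in Python. It is not how 7zip/LZMA actually works internally (real compressors are far more sophisticated, and some rebuild the dictionary on the fly instead of storing it), but it shows the point being argued: the references alone reproduce nothing, you need the dictionary.

```python
# A deliberately naive "dictionary + references" scheme, only to make the
# analogy concrete. Real 7zip/LZMA compression works very differently.

def compress(text: str):
    words = text.split(" ")
    dictionary = []       # unique words, in order of first appearance
    references = []       # each word replaced by an index into the dictionary
    for word in words:
        if word not in dictionary:
            dictionary.append(word)
        references.append(dictionary.index(word))
    return dictionary, references

def decompress(dictionary, references):
    # Without the dictionary, the references are just meaningless numbers.
    return " ".join(dictionary[i] for i in references)

original = "who owns the output the dictionary or the references"
dictionary, references = compress(original)
print(references)                                      # [0, 1, 2, 3, 2, 4, 5, 2, 6]
print(decompress(dictionary, references) == original)  # True, but only via the dictionary
```

Whether the dictionary is shipped inside the archive or reconstructed by the decoder, the output cannot be reproduced without it, and that is the parallel being drawn to a NN's parameters.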

So, does the content of the zip file now belong to the person who wrote the zip software, or does the dictionary belong to that person?

Carried over to AI: do the parameters of the NN belong to the coder/trainer?

I think it is very clear that this will not be the case.

So, if this analogy holds up, the result of a NN is the property of those who provided the "dictionary", and therefore the original sources are the owners of the result produced by the NN.

My personal opinion.

A totally different aspect concerns genetic algorithms: if a genetic algorithm "finds" a solution out of randomness, does randomness already contain all solutions? And if so, has there ever been "innovation", or is it just the finding of a solution within a search space?
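As a minimal illustration of the "finding within a search space" view, here is a toy genetic algorithm over 16-bit strings. The target, fitness function and parameters are arbitrary choices for the example; the point is only that whatever it "finds" was always one of the 2^16 members of a fixed, enumerable space.

```python
# Minimal genetic algorithm over 16-bit strings: the "solution" it discovers
# was always a member of a fixed search space, nothing new is invented.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]   # the "hidden" optimum

def fitness(candidate):
    # Number of bits matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))   # typically reaches the target: no new bit was "invented"
```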

Anyway, this will lead to lots of discussion across a very broad field: philosophy, legal aspects, and certainly personal approaches to questions mankind has not solved for thousands of years.

I personally think current NNs and AI are somewhere at the stage of 300 bps modems or acoustic couplers... It has just begun, and there will be much more down the road. GPT5, I would say, already shows the fundamental issues with the structures used for NNs nowadays. There will be an end to such models sooner or later.

These approaches lack the possibility of self-reflection, as they are, although very complex, just deterministic functions: one input produces one output.
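A tiny sketch of that "one input, one output" point, with a made-up two-layer network and arbitrary fixed weights. Any apparent randomness in deployed systems (sampling with a temperature, for example) is added on top of such a function rather than produced by it.

```python
# Toy illustration of determinism: with fixed weights, the same input always
# maps to the same output. Weights here are arbitrary, purely for the example.
import math

WEIGHTS_HIDDEN = [[0.4, -0.7], [0.1, 0.9]]   # 2 inputs -> 2 hidden units
WEIGHTS_OUT = [0.8, -0.3]                    # 2 hidden units -> 1 output

def tiny_network(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in WEIGHTS_HIDDEN]
    return sum(w * h for w, h in zip(WEIGHTS_OUT, hidden))

print(tiny_network([1.0, 2.0]))
print(tiny_network([1.0, 2.0]))   # identical every time: one input, one output
```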

Humans do not work like that; they are non-deterministic, and I think that is the main source of intelligence and creativity. Only God knows why we decide the way we decide.

But even this can be argued, as there is evidence that we do not consciously make decisions at all: by the time we gain conscious knowledge of how we decide, the decision has already been made for us.

There is a nice video that explains this concept, concerning our ability to make decisions, in an easy-to-understand way. I can look it up and share it if requested.

Let me shift it a bit.

What if a company trains an autoencoder on our "landscapes", and on as many as it can find, then has the autoencoder spit out all possible outcomes (all possible landscapes)? Since that can't actually happen, let's ask instead whether said company would use the autoencoder as a means to "detect" copies. I won't even address the fact that it would need to be made into law etc.; they won't have a hard time there. (I.e. the company becomes the arbiter of its own copyright.)

There is of course the gray area that, like us, the autoencoder or a genetic algorithm has from its inception a possible spectrum of outcomes. And furthermore: is the landscape beautiful because we know what it takes to paint it, or because it is beautiful?
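A rough sketch of how such a "copy detector" could work in principle: encode every known work and the new work into latent vectors, and flag anything that sits too close to the catalogue. The encoder below is only a stand-in (a fixed random projection), not a trained autoencoder, and the threshold is invented, which is exactly the arbiter problem: whoever picks the threshold decides what counts as a copy.

```python
# Stand-in "copy detector": compare latent codes of works by cosine similarity.
# The random projection is a placeholder for a trained encoder; data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.normal(size=(64, 8))        # placeholder for a trained encoder

def encode(work: np.ndarray) -> np.ndarray:
    z = work.flatten() @ PROJECTION           # "latent code" of the work
    return z / np.linalg.norm(z)

def looks_like_a_copy(new_work, known_works, threshold=0.95):
    z_new = encode(new_work)
    similarities = [float(encode(w) @ z_new) for w in known_works]
    # Whoever sets `threshold` decides what counts as a "copy".
    return max(similarities) >= threshold, round(max(similarities), 3)

known_works = [rng.normal(size=64) for _ in range(100)]        # the scraped "landscapes"
independent = rng.normal(size=64)                              # an unrelated new work
near_copy = known_works[0] + rng.normal(scale=0.01, size=64)   # a slightly altered known work

print(looks_like_a_copy(independent, known_works))   # typically (False, ...)
print(looks_like_a_copy(near_copy, known_works))     # typically (True, ...)
```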

 
Lorentzos Roussos #:

Let me shift it a bit.

What if a company trains an autoencoder on our "landscapes", and on as many as it can find, then has the autoencoder spit out all possible outcomes (all possible landscapes)? Since that can't actually happen, let's ask instead whether said company would use the autoencoder as a means to "detect" copies. I won't even address the fact that it would need to be made into law etc.; they won't have a hard time there. (I.e. the company becomes the arbiter of its own copyright.)

There is of course the gray area that, like us, the autoencoder or a genetic algorithm has from its inception a possible spectrum of outcomes. And furthermore: is the landscape beautiful because we know what it takes to paint it, or because it is beautiful?


I fully agree with you.

Like the AntiCheatAI that's being created to "save" online gaming. It is able to detect a cheater by behavioral analysis and can detect/fingerprint a user even if they create a new, anonymous account. They use input data on how a human moves the mouse, sequences of keystrokes and the like to distinguish between input corrected by cheat software and purely human input.
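In spirit, such a fingerprint can be as simple as reducing the input telemetry to a feature vector and matching it against stored per-player profiles. The features, numbers and matching rule below are entirely made up; real anti-cheat systems are far more elaborate, but the sketch shows why a fresh account name does not help.

```python
# Hedged sketch of behavioral fingerprinting: hypothetical keystroke/mouse
# statistics matched against stored per-player profiles.
import math

# profile: (mean key hold time ms, mean inter-key gap ms, mean mouse speed px/s)
known_profiles = {
    "player_A": (92.0, 145.0, 610.0),
    "player_B": (61.0, 210.0, 430.0),
}

def features_from_session(key_hold_times, key_gaps, mouse_speeds):
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(key_hold_times), mean(key_gaps), mean(mouse_speeds))

def closest_profile(session_features):
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name = min(known_profiles, key=lambda n: distance(known_profiles[n], session_features))
    return name, round(distance(known_profiles[name], session_features), 2)

# A "new, anonymous account" whose telemetry happens to look like player_A:
session = features_from_session([90, 95, 93], [140, 150, 148], [600, 615, 620])
print(closest_profile(session))   # ('player_A', small distance) despite the new account name
```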

But what if I make use of my GDPR rights and ask them to remove my personal data from their systems?

They will most probably reply that the training data was anonymous. But the AI can identify me, so how is this anonymous? The input is; the output is not.

This will bring up regulatory challenges, and I doubt it is going to be solvable with the current approach to NNs.

Currently, a NN is a deterministic function, and that means one input gives one output. There will be, and there must be, a shift in technology. A new kind, a new type, a new sector needs to be established into which AI will be categorized.

Once AI begins to be non-deterministic, a new state needs to be introduced. New rules will need to be created to regulate its usage.

But we are far off topic already....
 
Dominik Christian Egert #:
Honestly, I don't think it will. It is already very questionable when it comes to art generated by AI trained on the work of existing, real people.

It will just expand the question: is the information inherently present in a neural network, or is it injected by training? And if it is injected, who owns these specific properties of that neural network? The GPU? The trainer/coder?

Or is it more like a blending?

Imagine this: you take some ingredients and mix them into some new magic medicine. Although you created something new, if one of your initial ingredients is owned by someone else, is the new juice all yours?

In this example you could argue that you bought the ingredients, but what about the data used for training an AI, has it been bought? And if not, and you use this data to train your network, then certain features and parameters will be a representation of your input.

Now if this NN produces an output and uses some of these specific parameters, will there be a copyright mark, a transparent note about the sources' licensing? Well, no.

So, who owns the code generated by an AI?

Or another example: imagine you give a class in which you reference source code from the internet, and a student (the AI) takes your teachings and creates some more code based on what you have shown in class. He will use parts of the lesson and add other parts from another source. Who owns the code? This applies especially to AI: as it is unable to "create" new code from its own creativity, there is nothing "new" coming from an AI. It is just a new arrangement of what has been given to it.

This opens a totally new discussion, more philosophical in nature, as underneath it we would need to answer fundamentals like what creativity is and where its source lies.

I think it could be interesting to look at patents in this regard, as you cannot patent anything that is not new, innovative or groundbreaking.

I don't think AI will solve this, as the problem is inherent in its own nature.

For AI based solely on genetic algorithms, I think the discussion would be a totally different one, as it could be argued that there is no pre-treated input and therefore the features and parameters of the neural network are more of a random nature. In this regard, an AI-produced output would at least not be based on anyone's work other than that of the coder/creator of the NN. And I personally would consider the creator the holder of the IP produced by the NN.


Thanks for opening this interesting topic. Because of my poor English I cannot engage in complicated topics. Sry 

 
Dominik Christian Egert #:

I fully agree with you.

Like the AntiCheatAI that's being created to "save" online gaming. It is able to detect a cheater by behavioral analysis and can detect/fingerprint a user even if they create a new, anonymous account. They use input data on how a human moves the mouse, sequences of keystrokes and the like to distinguish between input corrected by cheat software and purely human input.

But what if I make use of my GDPR rights and ask them to remove my personal data from their systems?

They will most probably reply that the training data was anonymous. But the AI can identify me, so how is this anonymous? The input is; the output is not.

This will bring up regulatory challenges, and I doubt it is going to be solvable with the current approach to NNs.

Currently, a NN is a deterministic function, and that means one input gives one output. There will be, and there must be, a shift in technology. A new kind, a new type, a new sector needs to be established into which AI will be categorized.

Once AI begins to be non-deterministic, a new state needs to be introduced. New rules will need to be created to regulate its usage.

But we are far off topic already....

Well, they will ban you, as they are a private company. But when it comes to training, meaning your data being used to train the anti-cheat, your name really is not needed. The real question, which extends to other aspects of society (more serious than gaming), is the portion of the population that will ask "Well, what are you afraid of? Do you have something to hide?", or in this specific case, "What are you afraid of? Are you cheating?"

That's true, yeah: in an application other than gaming, where the AI can meet you or see you outside, even if the name is not stored it will know your behavior. So the GDPR will also change, and maybe we will even have a registry of models and an anti-rogue-model department. Meaning, if you train a model you must register it with the bureau. Very good observation there, that the AI can identify you by your traits and not by your name or ID.

What is your opinion on AI when it comes to "information-controlling societies", like where the EU is headed, for example? In a slightly apocalyptic scenario, won't the "free range" AI from 'Murica, however dangerous at first, have a massive advantage over the AIs from Europe, for instance, with their restricted access to what they can learn?

 
Lorentzos Roussos #:

Well, they will ban you, as they are a private company. But when it comes to training, meaning your data being used to train the anti-cheat, your name really is not needed. The real question, which extends to other aspects of society (more serious than gaming), is the portion of the population that will ask "Well, what are you afraid of? Do you have something to hide?", or in this specific case, "What are you afraid of? Are you cheating?"

That's true, yeah: in an application other than gaming, where the AI can meet you or see you outside, even if the name is not stored it will know your behavior. So the GDPR will also change, and maybe we will even have a registry of models and an anti-rogue-model department. Meaning, if you train a model you must register it with the bureau. Very good observation there, that the AI can identify you by your traits and not by your name or ID.

What is your opinion on AI when it comes to "information-controlling societies", like where the EU is headed, for example? In a slightly apocalyptic scenario, won't the "free range" AI from 'Murica, however dangerous at first, have a massive advantage over the AIs from Europe, for instance, with their restricted access to what they can learn?

I would say banning will be problematic. Currently the relationship is roughly as follows, as an example: the game creator contracts the AntiCheatAI creator. I, the customer, bought a product and am using it. I exercise my right to have my optional data, which is not required for contractual fulfilment, deleted. The problem: they cannot do it.

I strongly disagree with the question "Do you have something to hide?" No, that is not the point here. And I do have rights to my personal data. The question undermines me as a sovereign entity over myself and therefore offends my natural right.

Concerning the EU, I disagree with their plans and ideas to burden their inhabitants with more governmental control. Again, this is just more eyewash for public perception, just like airport security measures.

It has been proven that whoever knows his way will find a way. Any attempt to gain control will only be a "fake" attempt to calm the broad public. I am allowed to take matches onto an airplane, but a lighter is not allowed, although I could do much more harm with matches than with a lighter. In fact, you are allowed to take everything required to build a bomb onto an airplane, but you are not allowed to bring water in a bottle exceeding 100 ml.

The same goes for chat control in the EU. Scanning all chats with an AI will not prevent any crimes. Any serious criminal will use Briar (a chat app) to communicate and stay hidden from any authorities. It is again just an attempt to make you feel good. Apart from undermining people's rights, it won't have any effect.

Anyway, the AI that has been used to scan chats has shown over 80% false signaling.
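A quick back-of-the-envelope calculation shows why mass scanning produces mostly false alarms even with a detector that looks good on paper. The numbers below are made up for illustration, not the figures of any deployed system.

```python
# Base-rate arithmetic: with a tiny prevalence of real cases, even a small
# false-positive rate swamps the true hits. All numbers are illustrative.
messages_per_day    = 1_000_000_000   # messages scanned
prevalence          = 1e-6            # fraction that is actually criminal
true_positive_rate  = 0.90            # detector catches 90% of real cases
false_positive_rate = 0.001           # flags 0.1% of innocent messages

criminal = messages_per_day * prevalence
innocent = messages_per_day - criminal

true_flags  = criminal * true_positive_rate
false_flags = innocent * false_positive_rate

share_false = false_flags / (true_flags + false_flags)
print(f"flags per day: {true_flags + false_flags:,.0f}")
print(f"share of flags that are false alarms: {share_false:.1%}")  # ~99.9%
```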

Information-controlling societies are doomed to fail in the long run. We have seen this in the past already, with the Roman Empire as a prominent example: Caesar packed the Senate to a size at which it was unable to make decisions.

Look at China, they are failing as well. Although the central government tries its best to keep information controlled, it fails at doing so. And it won't get better for them by collecting even more data through apps like TikTok. On the contrary, they lose more and more control over their citizens. Look at their demographics: women are leaving the country to find a man abroad. Why? Because information suppression isn't working.

I think the current hype around AI is giving people a false impression. It is not the solution to all problems, just like the internet is not the solution to communication problems. Bullying is more present than ever before.

But I do believe AI has already taken over mankind. Just take a look at people outside, on the streets. Show me someone without a smartphone and without a social media account.

Social media AIs are built to harvest your time, to make you think and believe whatever catches most of your attention. It is profit-driven, and it works great. People are manipulated into spending as much time online as possible.

People are addicted to attention, likes and comments. That is the result of good AIs doing their work.

There is a good documentary on Netflix about that; I can share it if requested. Eye-opening.


 
Dominik Christian Egert #:
I would say banning will be problematic. Currently the relationship is roughly as follows, as an example: the game creator contracts the AntiCheatAI creator. I, the customer, bought a product and am using it. I exercise my right to have my optional data, which is not required for contractual fulfilment, deleted. The problem: they cannot do it.

I strongly disagree with the question "Do you have something to hide?" No, that is not the point here. And I do have rights to my personal data. The question undermines me as a sovereign entity over myself and therefore offends my natural right.

Concerning the EU, I disagree with their plans and ideas to burden their inhabitants with more governmental control. Again, this is just more eyewash for public perception, just like airport security measures.

It has been proven that whoever knows his way will find a way. Any attempt to gain control will only be a "fake" attempt to calm the broad public. I am allowed to take matches onto an airplane, but a lighter is not allowed, although I could do much more harm with matches than with a lighter. In fact, you are allowed to take everything required to build a bomb onto an airplane, but you are not allowed to bring water in a bottle exceeding 100 ml.

The same goes for chat control in the EU. Scanning all chats with an AI will not prevent any crimes. Any serious criminal will use Briar (a chat app) to communicate and stay hidden from any authorities. It is again just an attempt to make you feel good. Apart from undermining people's rights, it won't have any effect.

Anyway, the AI that has been used to scan chats has shown over 80% false signaling.

Information-controlling societies are doomed to fail in the long run. We have seen this in the past already, with the Roman Empire as a prominent example: Caesar packed the Senate to a size at which it was unable to make decisions.

Look at China, they are failing as well. Although the central government tries its best to keep information controlled, it fails at doing so. And it won't get better for them by collecting even more data through apps like TikTok. On the contrary, they lose more and more control over their citizens. Look at their demographics: women are leaving the country to find a man abroad. Why? Because information suppression isn't working.

I think the current hype around AI is giving people a false impression. It is not the solution to all problems, just like the internet is not the solution to communication problems. Bullying is more present than ever before.

But I do believe AI has already taken over mankind. Just take a look at people outside, on the streets. Show me someone without a smartphone and without a social media account.

Social media AIs are built to harvest your time, to make you think and believe whatever catches most of your attention. It is profit-driven, and it works great. People are manipulated into spending as much time online as possible.

People are addicted to attention, likes and comments. That is the result of good AIs doing their work.

There is a good documentary on Netflix about that; I can share it if requested. Eye-opening.


Yeah, I agree. To relate to the points made earlier: in 'Murica you could end up with the opposite of the Matrix, where the machines liberate humans from humans; in the EU, the straight-up Matrix.

Allow me to ask this:

Do you believe the differences in popular content between TikTok in China and TikTok in the West have to do with what the algorithm promotes, or with the drives of the individuals?

(For instance, if showing my beautiful body is "celebrated", I want social acceptance, so I show my hinnie on TikTok; on the other side of the world more sapio-centric traits are "celebrated", and they show their creations.) And what do you think would happen if these two user bases merged into one (given the massive population of that area, it would weigh in a lot in the "algorithm")?

I've seen that documentary btw, it is indeed interesting.

Another question, similar to the above, but this time about the "scientific" YouTube videos that have sprung up (mostly about space for now) that are straight-up fake info and fake science.

 
Lorentzos Roussos #:

Yeah, I agree. To relate to the points made earlier: in 'Murica you could end up with the opposite of the Matrix, where the machines liberate humans from humans; in the EU, the straight-up Matrix.

Allow me to ask this:

Do you believe the differences in popular content between TikTok in China and TikTok in the West have to do with what the algorithm promotes, or with the drives of the individuals?

(For instance, if showing my beautiful body is "celebrated", I want social acceptance, so I show my hinnie on TikTok; on the other side of the world more sapio-centric traits are "celebrated", and they show their creations.) And what do you think would happen if these two user bases merged into one (given the massive population of that area, it would weigh in a lot in the "algorithm")?


Isn't that like different languages supported by ChatGPT?

I think it could be seen that way, so the distinction could be made within one (larger) AI model, I suppose.

After all, the input selects which features of the network are considered, so the output will be adjusted accordingly, I would assume.

Having them separated would mean splitting the world into two or more chunks, making it static. I assume the underlying AI model is universal, so that shifting trends can be adopted and previous "experience" carried over.

At least that would make sense, and since the Chinese are not dumb people, I could imagine that is their approach.
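Purely speculative, but the "one universal model" idea can be sketched like this: make the region (or language) just another input feature, so a single set of parameters serves both user bases and the learned "experience" carries over. Every feature, weight and item below is invented.

```python
# One shared scoring function for both regions: the region is part of the input,
# so the same weights produce different rankings for "west" and "east".
REGION = {"west": [1.0, 0.0], "east": [0.0, 1.0]}   # one-hot region feature

# shared weights over [item_dance, item_science, region_west, region_east, item x region interactions]
WEIGHTS = [0.2, 0.3, 0.0, 0.0, 0.9, -0.1, -0.2, 0.8]

def score(item, region):
    item_feats = [item["dance"], item["science"]]
    region_feats = REGION[region]
    interactions = [i * r for i in item_feats for r in region_feats]
    features = item_feats + region_feats + interactions
    return sum(w * f for w, f in zip(WEIGHTS, features))

items = [{"name": "dance clip", "dance": 1.0, "science": 0.0},
         {"name": "science clip", "dance": 0.0, "science": 1.0}]

for region in ("west", "east"):
    ranked = sorted(items, key=lambda it: score(it, region), reverse=True)
    print(region, "->", [it["name"] for it in ranked])
    # west -> dance first, east -> science first, from the same parameter set
```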
 
Lorentzos Roussos #:

Yeah, I agree. To relate to the points made earlier: in 'Murica you could end up with the opposite of the Matrix, where the machines liberate humans from humans; in the EU, the straight-up Matrix.

Allow me to ask this:

Do you believe the differences in popular content between TikTok in China and TikTok in the West have to do with what the algorithm promotes, or with the drives of the individuals?

Another question, similar to the above, but this time about the "scientific" YouTube videos that have sprung up (mostly about space for now) that are straight-up fake info and fake science.

I think neither: the algorithm is optimized to harvest as much of your time as possible. It doesn't care whether you like the content; it might even show you offensive content if that increases your time spent.

I guess this also answers your question about fake videos.
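That objective is easy to caricature in code. The sketch below is not any platform's actual algorithm, and the categories and numbers are invented; it only shows what "optimized for time spent, indifferent to liking" looks like as a selection rule.

```python
# Toy engagement maximizer: pick whatever holds attention longest, with no
# notion of whether the viewer "likes" it. Categories and numbers are made up.
import random

avg_watch_time = {"calm science": 35.0, "cute animals": 50.0, "outrage bait": 80.0}
plays = {name: 1 for name in avg_watch_time}

def pick_next(epsilon=0.1):
    # Mostly exploit the category with the highest average watch time,
    # occasionally explore something else.
    if random.random() < epsilon:
        return random.choice(list(avg_watch_time))
    return max(avg_watch_time, key=avg_watch_time.get)

def record(category, seconds_watched):
    # incremental mean update with the newly observed watch time
    plays[category] += 1
    avg_watch_time[category] += (seconds_watched - avg_watch_time[category]) / plays[category]

for _ in range(10):
    chosen = pick_next()
    watched = random.gauss(avg_watch_time[chosen], 5)   # simulated viewer reaction
    record(chosen, watched)
    print("serving:", chosen)   # "outrage bait" dominates as long as it holds attention
```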
 
Dominik Christian Egert #:

Isn't that like different languages supported by ChatGPT?

I think it could be seen that way, so the distinction could be made within one (larger) AI model, I suppose.

After all, the input selects which features of the network are considered, so the output will be adjusted accordingly, I would assume.

Having them separated would mean splitting the world into two or more chunks, making it static. I assume the underlying AI model is universal, so that shifting trends can be adopted and previous "experience" carried over.

At least that would make sense, and since the Chinese are not dumb people, I could imagine that is their approach.

Yeah, I get what you mean, correct. But let's say, to reference your previous example, that the "control" is automated. There is definitely an inner selection algorithm, obviously, for social media.

If that were ported to "the anti-cheat", and the behavior of a cheater happened to partially coincide with what you did in your first hours in the game, the game would eject you.

I think neither: the algorithm is optimized to harvest as much of your time as possible. It doesn't care whether you like the content; it might even show you offensive content if that increases your time spent.

I guess this also answers your question about fake videos.

Showing you offensive content actually keeps you more engaged, I would assume.
 
Lorentzos Roussos #:

Yeah, I get what you mean, correct. But let's say, to reference your previous example, that the "control" is automated. There is definitely an inner selection algorithm, obviously, for social media.

If that were ported to "the anti-cheat", and the behavior of a cheater happened to partially coincide with what you did in your first hours in the game, the game would eject you.

Showing you offensive content actually keeps you more engaged, I would assume.
Shouldn't the algorithm be able to make different decisions based on time played, "knowing" beginners from cheaters?
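As a hedged sketch of what that could look like: feed hours played into the decision, so that beginner-looking behavior needs stronger evidence before anyone is ejected. The score, thresholds and policy below are hypothetical, not how any real anti-cheat decides.

```python
# Hypothetical decision rule: condition the verdict on account tenure so that
# beginner behavior that partially coincides with cheater patterns is reviewed
# rather than punished outright.

def decide(cheat_score: float, hours_played: float) -> str:
    """cheat_score in [0, 1] is assumed to come from some behavioral model."""
    # Require stronger evidence for new accounts, where beginner behavior
    # can partially overlap with cheater-like patterns.
    threshold = 0.98 if hours_played < 20 else 0.90
    if cheat_score >= threshold:
        return "eject"
    if cheat_score >= threshold - 0.10:
        return "flag for human review"
    return "keep playing"

print(decide(cheat_score=0.92, hours_played=3))     # flag for human review (new player)
print(decide(cheat_score=0.92, hours_played=400))   # eject (experienced account)
```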

Like when you ask ChatGPT: "Please reformulate, I don't understand."

Or the distinction between different inputs and outputs, like the example of the western hemisphere having different appreciations than the eastern hemisphere, or asking ChatGPT for Python code versus for lyrics. Isn't that comparable?