Welcome to Debate Club! Please be aware that this is a space for respectful debate, and that your ideas will be challenged here. Please remember to critique the argument, not the author.

Question about research study

vm007
vm007 Posts: 241 Member
Hi,

So research studies are done which take numerous things into account to determine something. After the data is collected, researchers study and follow people for a few years and whatnot before drawing conclusions. Good so far?

My question is: how come they don't just feed all that data into a computer model and let the model run it forward, say over 5 years? Depending on the computer, the results would be available within a few days or months instead of 5 years of manually following up with people.

I am not the first one to think of something like this, and I'm sure they don't do it for a reason, so I am just wondering what those reasons are.

Replies

  • JeromeBarry1
    JeromeBarry1 Posts: 10,182 Member
    Rent Watson and try it.
  • cdjs77
    cdjs77 Posts: 176 Member
    I'm a grad student in Statistics and Quantitative Economics with a concentration on Deep Learning, so I would like to answer, but I'm not really sure what you are asking.

    Are you asking why they conduct years-long studies to confirm hypotheses instead of just running the numbers through a computer? I don't see how a computer could do this: researchers need to collect data from people over those years in order to actually confirm the hypothesis, and computer simulations are not necessarily considered valid proof for many problems.

    I think I need an example of the kind of study you are thinking about, though, because your premise isn't very clear.
  • nvmomketo
    nvmomketo Posts: 12,019 Member
    Sometimes one year of data is not enough to extrapolate five years out. Some health issues take years to appear, or years to reverse.
  • vm007
    vm007 Posts: 241 Member
    cdjs77 wrote: »
    I'm a grad student in Statistics and Quantitative Economics with a concentration on Deep Learning, so I would like to answer, but I'm not really sure what you are asking.

    Are you asking why they conduct years-long studies to confirm hypotheses instead of just running the numbers through a computer? I don't see how a computer could do this: researchers need to collect data from people over those years in order to actually confirm the hypothesis, and computer simulations are not necessarily considered valid proof for many problems.

    I think I need an example of the kind of study you are thinking about, though, because your premise isn't very clear.

    Let's say, for example, we want to see how product A influences people. We know the ingredients of product A, and we know the age group, gender, habits, etc. We follow people for 2-3 weeks and see how they respond over that time. We punch that data into a model, design the model so it takes some variables into account here and there, and let it rip.

    There are deviations that may happen in real life, but we could have two models: one that takes those into account, and a second that stays intact as if all those people literally followed everything to a dot. Wouldn't those give results quicker than doing a 5-6 year study? And cheaper, perhaps?
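
    Something like this toy sketch is what I mean (every number and effect size here is just made up to show the idea):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, weeks = 200, 5 * 52              # project 5 years forward

    # Invented numbers standing in for whatever the 2-3 week follow-up measured.
    assumed_weekly_effect = -0.05              # e.g. kg change per week on product A
    natural_drift = rng.normal(0.0, 0.1, size=(n_people, weeks))

    # Model 1: everyone follows the protocol to a dot.
    ideal = assumed_weekly_effect * np.ones((n_people, weeks))

    # Model 2: real-life deviations -- each week some people simply don't comply.
    compliance = rng.random((n_people, weeks)) < 0.7
    real_world = assumed_weekly_effect * compliance

    for name, scenario in [("perfect adherence", ideal), ("with deviations", real_world)]:
        outcome = (scenario + natural_drift).sum(axis=1).mean()
        print(f"{name}: projected 5-year change = {outcome:+.1f} kg on average")
    ```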
  • CSARdiver
    CSARdiver Posts: 6,252 Member
    Depends on your design of experiment. People are continually attempting to use data to identify trends and predict the future. The problem is typically that some unknown variable was at play that was never considered, or that what was thought to be a primary influencer turned out to be inconsequential.

    Any data set will be limited and become a snapshot in time, so by doing this you introduce bias. Have you accounted for your bias in this experiment?

    This is further complicated with biological organisms, which exhibit free will and essentially unlimited behavioral variability. It's next to impossible to predict behavior.
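
    A rough sketch of the unknown-variable problem (all data below is simulated, not from any real study): if a hidden factor drives both the behavior being studied and the outcome, a model that never measured that factor will confidently report the wrong effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # A hidden variable the study never measured (say, overall health-consciousness).
    hidden = rng.normal(size=n)
    # The "treatment" people choose is partly driven by the hidden variable...
    x = 0.8 * hidden + rng.normal(size=n)
    # ...but the outcome depends only on the hidden variable, not on x at all.
    y = 1.5 * hidden + rng.normal(size=n)

    def ols(design, target):
        """Ordinary least squares; returns the fitted coefficients."""
        coefs, *_ = np.linalg.lstsq(design, target, rcond=None)
        return coefs

    naive = ols(np.column_stack([np.ones(n), x]), y)          # hidden variable omitted
    full = ols(np.column_stack([np.ones(n), x, hidden]), y)   # hidden variable included

    print("apparent effect of x (hidden factor omitted):", round(naive[1], 2))   # ~0.7
    print("apparent effect of x (hidden factor included):", round(full[1], 2))   # ~0.0
    ```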
  • SephiraAllen
    SephiraAllen Posts: 78 Member
    Generally speaking, in order to create an accurate computer model, you have to program it with real-life data (which, unfortunately, takes time to gather). So at some point, once they have followed enough people for 5+ years, they can input those results and use that data to create a program that could be used for future studies of a similar type.

    For example: hurricane forecast models. They have programmed years of tracking data into the system, so that when a new storm starts to form, they can input its current track and see (using the computer model) the paths that similar hurricanes have taken, which gives them a general idea of where the current storm may go. But those forecasts are based on many years of gathered data (and we still see how far off they are sometimes).

    Basically, if they didn't have any real-life data to base the results on it would just be random guessing. So it wouldn't be a very accurate study.
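
    A minimal sketch of this point (made-up numbers, nothing like a real hurricane model): the "forecast" below just looks up storms already on record, so without years of gathered tracks it has nothing to work with.

    ```python
    import numpy as np

    # Hypothetical history: starting longitude of past storms and where each one
    # ended up 48 hours later (every value invented for illustration).
    past_start = np.array([-45.0, -47.0, -44.0, -60.0, -61.0])
    past_after_48h = np.array([-52.0, -55.0, -51.0, -72.0, -70.0])

    def forecast_48h(current_lon, k=3):
        """Average what the k most similar storms on record did next."""
        if past_start.size == 0:
            raise ValueError("no historical tracks: the model can't forecast anything")
        nearest = np.argsort(np.abs(past_start - current_lon))[:k]
        return past_after_48h[nearest].mean()

    # With data, the forecast is just a summary of what similar past storms did,
    # so it can only ever be as good as the history it was built from.
    print(forecast_48h(-46.0))
    ```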
  • L1zardQueen
    L1zardQueen Posts: 8,754 Member
    SephiraAllen wrote: »
    Generally speaking, in order to create an accurate computer model, you have to program it with real-life data (which, unfortunately, takes time to gather). So at some point, once they have followed enough people for 5+ years, they can input those results and use that data to create a program that could be used for future studies of a similar type.

    For example: hurricane forecast models. They have programmed years of tracking data into the system, so that when a new storm starts to form, they can input its current track and see (using the computer model) the paths that similar hurricanes have taken, which gives them a general idea of where the current storm may go. But those forecasts are based on many years of gathered data (and we still see how far off they are sometimes).

    Basically, if they didn't have any real-life data to base the results on it would just be random guessing. So it wouldn't be a very accurate study.

    Yes! Predicting earthquakes would be handy. Get on it <3
  • vm007
    vm007 Posts: 241 Member
    Wouldn't the model account for these "behaviors" or "margin of error" or "human mentality" or "placebo" effects?

    I guess we would need a study for the computer model itself, to track all those things down first and then use it in the future, lol.

    I'm starting to see the reason. Perhaps a learning algorithm that learns human beings and their behaviors over time, then realizes we are the biggest threat to our own well-being and enslaves us to protect us from ourselves.

    Ok nvm this morphed into something else.
  • CSARdiver
    CSARdiver Posts: 6,252 Member
    vm007 wrote: »
    Wouldn't the model account for these "behaviors" or "margin of error" or "human mentality" or "placebo" effects?

    I guess we would need a study for the computer model itself, to track all those things down first and then use it in the future, lol.

    I'm starting to see the reason. Perhaps a learning algorithm that learns human beings and their behaviors over time, then realizes we are the biggest threat to our own well-being and enslaves us to protect us from ourselves.

    Ok nvm this morphed into something else.

    There's a limitation in projection. I can predict tomorrow with a much higher degree of accuracy than I can predict 5 years out; that range of uncertainty expands exponentially over those 5 years.

    Another confounding factor is the sensitivity and bias within the initial conditions, so any errors within the initial dataset will also compound exponentially.

    Enslavement and predictability limit adaptability. One man's biggest threat is another man's greatest advantage.
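
    A quick sketch of how errors in the initial conditions compound (a generic toy process, not any particular health model): two runs that start 0.0001 apart agree at first and have drifted completely apart within a few weeks of simulated days.

    ```python
    def step(x, r=3.9):
        """One step of a simple nonlinear toy process (the logistic map)."""
        return r * x * (1 - x)

    true_state = 0.2000        # the "real" initial condition
    model_state = 0.2001       # the model's slightly mismeasured version of it

    for day in range(1, 31):
        true_state = step(true_state)
        model_state = step(model_state)
        if day in (1, 5, 10, 20, 30):
            print(f"day {day:2d}: prediction error = {abs(model_state - true_state):.4f}")
    # The error starts negligible and, on average, grows until the projection is useless.
    ```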
  • cdjs77
    cdjs77 Posts: 176 Member
    CSARdiver wrote: »
    There's a limitation in projection. I can predict tomorrow with a much higher degree of accuracy than I can predict 5 years out; that range of uncertainty expands exponentially over those 5 years.

    Another confounding factor is the sensitivity and bias within the initial conditions, so any errors within the initial dataset will also compound exponentially.

    Enslavement and predictability limit adaptability. One man's biggest threat is another man's greatest advantage.

    This is the main reason. The farther forward in time you go, the less accurate predictions get, much like the weather report, stock predictions, etc. The easiest example of this would be something like an average temperature. The closer you are to the time you are predicting, the less the actual result will vary from your prediction. If it's summer and I know that it has been on average 25 degrees Celsius for the past few days, my prediction that it will be 25 degrees tomorrow will be more accurate than the prediction that it will be 25 degrees in two weeks or two months or two years. In order to make the prediction more accurate further in the future you need more data (will it rain, will it be a different season, is there a cold front, etc). The farther you go, the more data you need and at some point it becomes exponential.

    As for human behavior, this adds another difficulty. People don't always behave the way we expect them to in every situation. For example, if you ask someone every day what they would like for breakfast that day, many people will choose the same thing every day with little variation. However, if you ask someone to choose at the beginning of the month what they will have each day for breakfast that entire month, they will often choose many different breakfasts. This is called naive diversification, and human behavior and economics are chock-full of these weird inconsistencies, which are often hard to predict.
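
    To put rough numbers on the temperature example (a toy random-walk simulation, not a real weather model): if the only forecast available is "same as today", the typical miss grows steadily with the horizon.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate many possible futures where the daily temperature drifts randomly
    # around today's 25 degrees (toy dynamics chosen purely for illustration).
    today, horizon = 25.0, 60
    futures = today + np.cumsum(rng.normal(0.0, 1.0, size=(10_000, horizon)), axis=1)

    # The naive forecast for every future day is simply "same as today".
    miss = np.abs(futures - today)
    for h in (1, 7, 30, 60):
        print(f"{h:2d} days ahead: typical error ~ {miss[:, h - 1].mean():.1f} degrees")
    # Longer-range predictions need extra information (season, fronts, etc.)
    # just to get back to the accuracy of the one-day forecast.
    ```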


    vm007 wrote: »
    I guess we would need a study for the computer model itself, to track all those things down first and then use it in the future, lol.

    I'm starting to see the reason. Perhaps a learning algorithm that learns human beings and their behaviors over time, then realizes we are the biggest threat to our own well-being and enslaves us to protect us from ourselves.

    A lot of deep learning research is doing this now, but it's pretty difficult and pretty complicated. You need an inordinate amount of data and due to the time and complexity of such projects, each algorithm is usually only focused on a specific task, like predicting economic cycles, analyzing images, driving a car, etc. You also need to train the algorithm and depending on the amount and availability of the data, this can take a long time. One deep learning program in my department has been training its algorithm for months.

    The other problem is that these algorithms are a bit of a 'black box': you can't really see what the learning algorithm does, you just see the outcome. For example, the computer will tell you whether the picture you showed it is a dog or not, but you won't be able to see why it thinks that (which is confusing if it misidentifies the picture). If the goal of your study is to understand why you get a certain outcome, most learning algorithms are probably not useful.
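
    As a tiny sketch of the black-box point (toy data, with scikit-learn's MLPClassifier standing in for a much larger deep model): the trained network hands back a score, and nothing in that output says why.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Toy "pictures": 20 made-up numeric features per example, labelled dog / not dog
    # by a hidden rule the network has to learn on its own.
    X = rng.normal(size=(500, 20))
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X, y)

    new_picture = rng.normal(size=(1, 20))
    prob_dog = model.predict_proba(new_picture)[0, 1]

    # All we get back is a number; the learned weights in model.coefs_ can be
    # printed, but they don't read as reasons for the decision.
    print(f"model says: {prob_dog:.0%} dog")
    ```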
  • Leslierussell4134
    Leslierussell4134 Posts: 376 Member
    CSARdiver wrote: »
    Depends on your design of experiment. People are continually attempting to use data to identify trends and predict the future. The problem is typically that some unknown variable was at play that was never considered, or that what was thought to be a primary influencer turned out to be inconsequential.

    Any data set will be limited and become a snapshot in time, so by doing this you introduce bias. Have you accounted for your bias in this experiment?

    This is further complicated with biological organisms, which exhibit free will and essentially unlimited behavioral variability. It's next to impossible to predict behavior.

    +1
  • vm007
    vm007 Posts: 241 Member
    edited July 2018
    cdjs77 wrote: »
    CSARdiver wrote: »
    There's a limitation in projection. I can predict tomorrow with a much higher degree of accuracy than I can predict 5 years out; that range of uncertainty expands exponentially over those 5 years.

    Another confounding factor is the sensitivity and bias within the initial conditions, so any errors within the initial dataset will also compound exponentially.

    Enslavement and predictability limit adaptability. One man's biggest threat is another man's greatest advantage.

    This is the main reason. The farther forward in time you go, the less accurate predictions get, much like the weather report, stock predictions, etc. The easiest example of this would be something like an average temperature. The closer you are to the time you are predicting, the less the actual result will vary from your prediction. If it's summer and I know that it has been on average 25 degrees Celsius for the past few days, my prediction that it will be 25 degrees tomorrow will be more accurate than the prediction that it will be 25 degrees in two weeks or two months or two years. In order to make the prediction more accurate further in the future you need more data (will it rain, will it be a different season, is there a cold front, etc). The farther you go, the more data you need and at some point it becomes exponential.

    As for human behavior, this adds another difficulty. People don't always behave the way we expect them to in every situation. For example, if you ask someone every day what they would like for breakfast that day, many people will choose the same thing every day with little variation. However, if you ask someone to choose at the beginning of the month what they will have each day for breakfast that entire month, they will often choose many different breakfasts. This is called naive diversification, and human behavior and economics are chock-full of these weird inconsistencies, which are often hard to predict.


    vm007 wrote: »
    I guess we would need a study for the computer model itself, to track all those things down first and then use it in the future, lol.

    I'm starting to see the reason. Perhaps a learning algorithm that learns human beings and their behaviors over time, then realizes we are the biggest threat to our own well-being and enslaves us to protect us from ourselves.

    A lot of deep learning research is doing this now, but it's pretty difficult and pretty complicated. You need an inordinate amount of data and due to the time and complexity of such projects, each algorithm is usually only focused on a specific task, like predicting economic cycles, analyzing images, driving a car, etc. You also need to train the algorithm and depending on the amount and availability of the data, this can take a long time. One deep learning program in my department has been training its algorithm for months.

    The other problem is that these algorithms are a bit of a 'black box': you can't really see what the learning algorithm does, you just see the outcome. For example, the computer will tell you whether the picture you showed it is a dog or not, but you won't be able to see why it thinks that (which is confusing if it misidentifies the picture). If the goal of your study is to understand why you get a certain outcome, most learning algorithms are probably not useful.

    Thank you. No wonder those Deep learning cards are insanely expensive and they aren't even good at gaming. HAHA I joke
  • VUA21
    VUA21 Posts: 2,072 Member
    vm007 wrote: »
    cdjs77 wrote: »
    I'm a grad student in Statistics and Quantitative Economics with a concentration on Deep Learning, so I would like to answer, but I'm not really sure what you are asking.

    Are you asking why they conduct years-long studies to confirm hypotheses instead of just running the numbers through a computer? I don't see how a computer could do this: researchers need to collect data from people over those years in order to actually confirm the hypothesis, and computer simulations are not necessarily considered valid proof for many problems.

    I think I need an example of the kind of study you are thinking about, though, because your premise isn't very clear.

    Let's say, for example, we want to see how product A influences people. We know the ingredients of product A, and we know the age group, gender, habits, etc. We follow people for 2-3 weeks and see how they respond over that time. We punch that data into a model, design the model so it takes some variables into account here and there, and let it rip.

    There are deviations that may happen in real life, but we could have two models: one that takes those into account, and a second that stays intact as if all those people literally followed everything to a dot. Wouldn't those give results quicker than doing a 5-6 year study? And cheaper, perhaps?

    The problem with medicine is that most side effects don't show up quickly. Liver failure can take years to appear, and a 2-month study would miss that. As for rushed studies and possible side effects, look up "Thalidomide": the US wouldn't allow it for pregnant women because no long-term studies had been done, while other countries allowed it... Look up the resulting side effects. Very sad.