
Football footage overlay

  1. Jan 3, 2010 #1

    DaveC426913

    User Avatar
    Gold Member

    Watching the news. The 10 minute "null zone" (what some apparently call the "sports news") is on. The computer effects they do by overlaying the football field with downs and blue lines and sandtraps or whatever have caught my eye. It's kind of cool technology.

    I just saw something on the field that flummoxed me. As the camera panned, following all the players running, it passed by something which must have been digitally added. Whatever it was was not "fixed" wrt the ground, though it looked like it was meant to be. (Imagine some cartoony superimposed thing in America's Funniest Home Videos).

    It looked like the love-child of The Imperial Probe Droid and a plucked turkey, with wings and legs extended. If it were projected on the field, it would be about the size of a man. It could not have had any useful function, since it didn't have any moving parts or identifiable markings or features.

    What could it have been? Why would they stick something in that had no purpose?
     
  3. Jan 4, 2010 #2

    russ_watters

    User Avatar

    Staff: Mentor

    Not a football fan, eh? What you describe (and might I say, that's a vivid image you provided!) sounds like the skycam: http://en.wikipedia.org/wiki/Skycam

    Also used for a very long zip-line shot in Spiderman 2 when Spidey is fighting the baddie on the El in Chicago....

    Note: there is a lot of stuff added to sports digitally and in real time now, particularly advertisements. Baseball fields have green screens behind the plate for digital advertisements; NASCAR just uses the infield grass as its green screen. Often the cut-scenes where they put out-of-town scores up during breaks in football games are on digitally rendered jumbotrons that look real but aren't. Oh, and Tom Brady's hair and Tony Romo's dimples aren't real either.
     
    Last edited: Jan 4, 2010
  4. Jan 4, 2010 #3

    minger

    User Avatar
    Science Advisor

    I think what you're referring to is the ads and first-down markers that they've been placing for the last few years. From what I understand, it takes special cameras that need to be calibrated before each game; i.e., the camera "knows" where it's pointing, so that the digital effects can be added to the field with the correct perspective and orientation.
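    To make that concrete, here's a toy sketch of the field-to-pixel mapping that such calibration makes possible. This is just my illustration of the general idea, not the actual 1st & Ten implementation, and the homography numbers are made up:

```python
import numpy as np

# Hypothetical calibration result for one camera: a 3x3 homography that maps
# points on the (flat) field plane, in yards, to pixel coordinates. The real
# system recomputes this continuously from the camera's pan/tilt/zoom encoders.
H = np.array([[12.0, 1.5, 300.0],
              [0.4, 9.0, 150.0],
              [0.0, 0.002, 1.0]])

def field_to_pixel(x_yards, y_yards):
    """Project a point on the field plane into the camera image."""
    p = H @ np.array([x_yards, y_yards, 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide out the homogeneous coordinate

# "Draw" a virtual first-down line at the 30-yard line by projecting sample
# points along the field's 53.3-yard width; the graphics are then keyed in
# under the players so the line appears painted on the grass.
line_pixels = [field_to_pixel(30.0, y) for y in np.linspace(0.0, 53.3, 20)]
print(line_pixels[:3])
```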

    Actually here is some more information:
    http://en.wikipedia.org/wiki/1st_&_Ten_(graphics_system) [Broken]

    Luckily [/sarcasm] they've expanded it to include "on-field" advertisements.
     
    Last edited by a moderator: May 4, 2017
  5. Jan 4, 2010 #4

    DaveC426913

    User Avatar
    Gold Member

    Yes. While I may not follow sports, I am familiar with the technology.



    Right. That's what it was all right.

    One of my working hypotheses was a camera, but it had no visible supports, so I dismissed it, not knowing they now hang them from zip lines. (Must've stolen skycam technology from David Letterman :tongue2:).

    Thanks. Now I can go back to watching ten minutes of Discovery Channel during the null zone...
     
  6. Jan 4, 2010 #5
    Good word!
     
  7. Jan 4, 2010 #6

    mheslep

    User Avatar
    Gold Member

    Maybe not. I believe one or more of the networks have just started using some state-of-the-art virtual-camera technology out of Carnegie Mellon. That is, they position cameras around the field in the usual manner, but are now able to generate a camera-quality image as if it were generated from some other, virtual camera, compositing an image using data from one or more cameras and possibly depth-of-field sensors.
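    Very roughly, and purely as my own sketch of one ingredient (the camera positions are made-up numbers, and this skips the genuinely hard part of synthesizing the pixels for the new view), the virtual-camera idea looks like this:

```python
import numpy as np

# Two real, calibrated cameras (positions in meters, made-up numbers), both
# aimed at a point near midfield.
cam_a_pos = np.array([0.0, 50.0, 20.0])
cam_b_pos = np.array([60.0, 50.0, 20.0])
target = np.array([30.0, 26.6, 0.0])

def virtual_pose(t):
    """Pose of a virtual camera sliding between the two real ones (t in [0, 1])."""
    pos = (1.0 - t) * cam_a_pos + t * cam_b_pos  # linear path between cameras
    forward = target - pos
    forward /= np.linalg.norm(forward)  # unit viewing direction
    return pos, forward

# Sample a few intermediate viewpoints; rendering believable pixels for each
# pose, by compositing the real images (plus depth estimates), is the part
# the CMU system would have to solve.
for t in (0.0, 0.5, 1.0):
    pos, fwd = virtual_pose(t)
    print(f"t={t:.1f}  position={pos}  looking along={np.round(fwd, 3)}")
```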
     
  8. Jan 4, 2010 #7

    russ_watters

    User Avatar

    Staff: Mentor

    That would surprise me greatly. I've seen them do quick cut-scene morphing rotations from one angle to another, but generating the skycam shot and making it believable would be next to impossible since there would be a lot of missing data.
     
  9. Jan 4, 2010 #8

    mheslep

    User Avatar
    Gold Member

    Yes, certainly there are limitations to how far off the camera axis they can go. I'm assuming the OP's description might fit a composite shot. He didn't say it was overhead, only "as the camera panned, following all the players running".
     
  10. Jan 4, 2010 #9

    russ_watters

    User Avatar

    Staff: Mentor

    The shot I described is always a freeze-frame. What you describe would almost certainly be impossible in real time, if it's possible at all. I can't imagine there is enough data to generate it, since the skycam is at a different angle and much closer (so a vastly different perspective) than the cameras around the stadium. I've certainly never seen anything like it. But just so I'm clear on what you are talking about, do you have a link that explains/shows the type of shot you mean?
     
    Last edited: Jan 4, 2010
  11. Jan 4, 2010 #10

    russ_watters

    User Avatar

    Staff: Mentor

    mheslep, is this what you are referring to? http://www.sciencedaily.com/releases/2001/01/010124075009.htm
    I don't remember seeing it then and am pretty sure it isn't in use today.

    In any case, it looks to me like it does a flyby by sequencing frames from multiple cameras. It doesn't look to me like it generates any synthetic views. [edit] Actually, toward the end of the article, they say they can generate new perspectives. I'm not clear whether that was actually implemented in the Super Bowl or not.
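    My reading of that sequencing approach, as a toy sketch (the frame dictionary below is a stand-in of my own, not Eye Vision's actual data or code):

```python
# Stand-in for the captured data: one frame per camera, all exposed at the
# same instant, keyed by each camera's angle around the stadium (degrees).
frames_at_t = {angle: f"frame_from_cam_at_{angle:03d}_deg"
               for angle in range(0, 180, 6)}

def flyby(frames, start_deg, end_deg):
    """Fake an orbiting camera by playing real frames in angular order."""
    angles = sorted(a for a in frames if start_deg <= a <= end_deg)
    return [frames[a] for a in angles]

# Sweep the frozen moment from 30 to 60 degrees: no synthetic views at all,
# just a sequence of real frames from neighboring cameras.
print(flyby(frames_at_t, 30, 60))
```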
     
    Last edited: Jan 5, 2010
  12. Jan 4, 2010 #11

    mheslep

    User Avatar
    Gold Member

    http://www.ri.cmu.edu/events/sb35/eyevision_best_of.mpg [Broken]
    Edit: though in these examples I believe they only demonstrate camera-to-camera tracking, not new-perspective generation, although the technology claims that ability:
    Takeo Kanade is the originator. His feature-tracking algorithms are fundamental in computer vision; I've used his work many times.
    http://www.ri.cmu.edu/events/sb35/tksuperbowl.html [Broken]
     
    Last edited by a moderator: May 4, 2017
  13. Jan 5, 2010 #12
    Ah yes. His algorithms are used extensively in our lab as well for insect navigation.
     
    Last edited by a moderator: May 4, 2017
  14. Jan 5, 2010 #13

    mheslep

    User Avatar
    Gold Member

    Yep, Kanade, that's him.
     
  15. Jan 5, 2010 #14

    russ_watters

    User Avatar

    Staff: Mentor

    ...and now that I think of it, what I described in post #7 may actually be the same technology.
     
  16. Jan 5, 2010 #15

    mheslep

    User Avatar
    Gold Member

    Dragonflies, in particular? I just saw a Jasons presentation by a researcher tracking dragonfly motion through space by tracking features on the insect. Spectacular. He discovered some aerodynamic and maneuvering features of the little killers that nobody knew about. Don't recall where the guy was from.
     
  17. Jan 5, 2010 #16

    mheslep

    User Avatar
    Gold Member

    No doubt.
     
  18. Jan 5, 2010 #17
    No, it's for control algorithms, not feature tracking. It's for optic flow measurement.
     
  19. Jan 5, 2010 #18

    minger

    User Avatar
    Science Advisor

    As an AVID football fan, I'm familiar with both of these. The image-skew thing is done by ESPN. It looks really cheesy as an attempt to do what that other guy did.

    As for that other technology, it was implemented in at least one, maybe two, Super Bowls, probably some eight years ago. It was actually pretty cool. From what I understand, they had some 40-odd cameras lined up around the stadium.

    The effect was quite a bit "smoother" than that example mpeg posted. However, it was still quite noticeable that it was merely transitioning between cameras.
     
  20. Jan 5, 2010 #19

    mheslep

    User Avatar
    Gold Member

    Or, the operator commanded a pan that was an abrupt angle change with nothing in between.
     
  21. Jan 5, 2010 #20

    mheslep

    User Avatar
    Gold Member

    That's what I meant. Optical flow requires identifying the same 'point' (feature) as it moves through space, across successive images taken over time.
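    That per-feature tracking step is exactly what the pyramidal Lucas-Kanade tracker does (Kanade again). A minimal sketch with OpenCV, where the video filename and the tracker parameters are placeholders I made up:

```python
import cv2

cap = cv2.VideoCapture("flight.avi")  # placeholder clip name
ok, frame = cap.read()
assert ok, "could not read the first frame"
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Pick corner-like features worth tracking (Shi-Tomasi "good features").
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                              qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # For each feature, find where it moved to in the new frame; the
    # per-feature displacement between frames is the optical flow.
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status == 1]
    good_old = pts[status == 1]
    flow = good_new - good_old
    print("mean flow (pixels/frame):", flow.mean(axis=0))
    # Keep only the features that were successfully tracked.
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
```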
     