
The Numerical Relativity Data Analysis Meeting

Patrick Brady, University of Wisconsin, Milwaukee patrick-at-gravity.phys.uwm.edu

The Numerical Relativity and Data Analysis workshop that was held at MIT on 6-7 November 2006 attracted 67 participants from both the source modeling and data analysis communities. The meeting was structured to encourage significant discussion by having only 4 speakers on the first day and 3 speakers on the second. This meeting had a rather narrow focus, dealing primarily with binary black holes; the organizers hope that future meetings will address other important sources. Based on the hallway conversations, the meeting appears to have succeeded in bringing together researchers from both communities. All the talks, some rough notes from the discussions, and the list of participants are posted on the meeting web site at http://www.lsc-group.phys.uwm.edu/events/nrda/

The meeting opened with a status report, by Ulrich Sperhake, on numerical simulations of binary black holes. The talk took a broad view, reporting on results from various groups and on the technical status of dynamical simulations, and touched on issues of initial data and boundary conditions. The discussion that followed included comments by members of the numerical relativity community about boundary conditions, waveform extraction methods and radii, and convergence testing to understand the accuracy of the simulations. Data analysts asked a number of questions about the accuracy of the current simulations; some of the numerical relativists turned the question around and asked how accurate the simulations need to be. These discussions continued through the coffee break and led very naturally into the second talk.
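For readers outside numerical relativity, here is a minimal sketch of a standard convergence test, in illustrative notation not drawn from the talk itself. If a quantity $f_h$ (for example, an extracted waveform phase) is computed at grid spacings $h$, $h/2$ and $h/4$, and its truncation error scales as $h^p$, then

\begin{displaymath}
Q \equiv \frac{f_{h} - f_{h/2}}{f_{h/2} - f_{h/4}} = 2^{p},
\qquad
p = \log_{2} Q,
\qquad
f_{h/4} - f \approx \frac{f_{h/2} - f_{h/4}}{2^{p} - 1},
\end{displaymath}

so the measured convergence factor $Q$ yields the convergence order $p$, and Richardson extrapolation gives an estimate of the remaining error at the finest resolution relative to the continuum value $f$.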

Duncan Brown summarized the current status of searches for binary black holes using data from gravitational-wave detectors. In his talk, he emphasized that the mismatch, the fractional loss of signal-to-noise ratio, is the correct measure of accuracy when discussing simulations of gravitational waveforms for use in searches. This point was immediately picked up by those present; several groups had already started to use the mismatch to understand the accuracy of their simulations. Brown further explained that sophisticated data analysis pipelines have been developed to deal with the non-Gaussian nature of the gravitational-wave detector noise. He used this to emphasize that a good match is necessary, but not sufficient, for a matched-filtering search for signals. Finally, Brown pointed out that a standard format should be developed for publishing data from the numerical relativity community for use in searches for gravitational waves. Pablo Laguna reported that there is already a collaboration (NRwaves) among numerical relativists to collect their waveforms for the sake of making comparisons between the results.
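For reference, the standard definition of the match and mismatch is included here for completeness; it was not spelled out in the talk summary. For waveforms $h_1$ and $h_2$ with Fourier transforms $\tilde h_1$, $\tilde h_2$ and detector noise power spectral density $S_n(f)$,

\begin{displaymath}
\langle h_1 | h_2 \rangle = 4\,{\rm Re} \int_0^{\infty}
\frac{\tilde h_1(f)\, \tilde h_2^{*}(f)}{S_n(f)}\, df ,
\qquad
{\rm mismatch} = 1 - \max_{t_0,\,\phi_0}
\frac{\langle h_1 | h_2 \rangle}
{\sqrt{\langle h_1 | h_1 \rangle\, \langle h_2 | h_2 \rangle}} ,
\end{displaymath}

where the maximization is over the arrival time $t_0$ and phase $\phi_0$. To leading order, the mismatch equals the fractional loss in signal-to-noise ratio; for sources distributed uniformly in volume, it corresponds to roughly three times that fractional loss in detection rate.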

The next presentation, by Mark Miller, addressed issues of numerical accuracy in simulations. Mark invited the Caltech/Cornell and Jena groups to present a summary of their investigations of accuracy in numerical simulations. He followed that with a nice discussion of how numerical relativity fits in with ongoing data analysis efforts. In this discussion, he also presented a way to think about numerical accuracy. His proposal generated considerable discussion among experts in both numerical relativity and data analysis. Broadly speaking, everybody agreed on the value of quantifying the errors in the numerical solutions, but precisely how to define the error remained unclear.

The last talk of the first day was given by Stephen Fairhurst. He discussed the different sources of measurement error that affect gravitational-wave observations. In particular, he emphasized the distinction between statistical and systematic (instrumental) errors when the true waveform is accurately known. He explained how these issues feed into current and future searches. In general, the required accuracy of a simulation will depend on the accuracy with which the instrumental response can be calibrated. Fairhurst finished by explaining that the question of required accuracy is best answered by injecting numerical waveforms into real detector data and exploring our ability to detect and measure them.

The first talk of the second day was given by Alessandra Buonanno. She discussed comparisons between approximate analytically computed waveforms and corresponding waveforms computed using numerical relativity. She explained that, for equal masses, both approaches (when taken to sufficient accuracy) give very similar waveforms up to the merger regime. Open questions remain: understanding the physical origin of the break in the numerically computed spectrum for these equal-mass systems, and exploring the effects of spin on the waveforms. Buonanno finished by highlighting the need for numerical simulations that start from initial data that are physically close enough to a real inspiral.

The workshop ended with Manuela Campanelli and Patrick Sutton summarizing ``what we heard about'' data analysis and numerical relativity, respectively. Two points resonated through their presentations and the following discussions. First, making data from numerical relativity simulations available for data analysis is highly desirable, although some effort is needed to quantify the errors on these data. Second, the meeting was useful and participants would like to meet again to discuss these issues in more detail.

