1.Place a multi-well plate with samples on a microscope. After you activate HCS Site Generator from the µManager plugins menu, a window will be displayed containing a grid representing a multi-well plate. On the right are a series of controls. Choose a Plate Format corresponding to the type of multi-well plate to be used (available options are 24-, 96-, and 384-well plates of standard dimensions, as well as a layout with four adjacent slides).
2.Calibrate the grid so that it matches the stage positions corresponding to different wells. Press the “Calibrate XY” button and move the XY stage to the center of well A1. If a joystick is not available for controlling the XY stage, use µManager’s “Stage Control” plugin.
3.Click “Select wells” and use the mouse to choose the wells that should be visited for imaging. Dragging or clicking allows wells to be selected; holding down the control key allows multiple wells to be included.
4.Specify a grid of “Imaging Sites” by entering the number of rows and columns and the spacing between images in microns. Images in this grid pattern will be acquired in each well during multi-dimensional acquisition.
5.Finally, press “Build MM List” to send the calculated positions to µManager’s XY Position List. Open the Multi-Dimensional Acquisition window, make sure “Multi Positions (XY)” is checked, and press the “Edit Position List” button to make sure the well plate positions are present in the list. Then apply all other desired acquisition settings and press “Acquire!”
6.Auto-focus settings for HCS acquisition are specified in the Multi-Dimensional Acquisition window in the same way as with any other multi-position protocol. In addition, the HCS plugin provides a “3-point” interpolative focus calculation that can be used instead of auto-focusing as a much faster alternative. 3-point focus can also be used together with regular image-based autofocus to make the latter faster and more reliable. To set up 3-point focus:
6.1.Move the XY stage to a particular well, and using live mode, find the Z drive position for best focus.
6.2.Press the “+ Mark Point” button at lower right. Repeat this procedure at two more positions, as far away as possible from the first position and from each other.
6.3.Press “Set 3-Point List” to provide µManager with a “focal plane” that will then be adhered to during multi-well imaging. The focal plane is simply a plane fitted through the three points selected in the previous steps. It is important to enable all three axes (XY and Z) in the Position List window for 3-point focus to work. The HCS plugin uses the focal plane to calculate an estimated Z position for each selected imaging site and includes it in the position list, so that whenever the XY stage moves to a new imaging site, the estimated Z position is applied as well. Typically, 3-point focus keeps images in focus at magnifications of 10× and below, eliminating the need for a much slower autofocus procedure. At higher magnifications, 3-point focus alone is usually not accurate enough, but it can bring the Z stage close to the true focus position for subsequent refinement by automated image-based autofocus.
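The interpolation itself is straightforward: a plane z = ax + by + c is fitted through the three marked points and evaluated at each imaging site. A minimal sketch of that calculation (our own illustration; the class and method names are not part of the plugin):

```java
/** Fits a plane z = a*x + b*y + c through three (x, y, z) focus points
 *  and predicts the focus height at arbitrary XY sites. */
public class FocalPlane {
    private final double a, b, c;

    public FocalPlane(double[] p1, double[] p2, double[] p3) {
        // Two in-plane vectors; their cross product is the plane normal.
        double[] u = {p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]};
        double[] v = {p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]};
        double nx = u[1] * v[2] - u[2] * v[1];
        double ny = u[2] * v[0] - u[0] * v[2];
        double nz = u[0] * v[1] - u[1] * v[0];
        if (nz == 0.0) {
            throw new IllegalArgumentException(
                "The three points must not be collinear in XY");
        }
        // Solve nx*(x - x1) + ny*(y - y1) + nz*(z - z1) = 0 for z.
        a = -nx / nz;
        b = -ny / nz;
        c = p1[2] - a * p1[0] - b * p1[1];
    }

    /** Estimated focus position (Z, in microns) at stage coordinates (x, y). */
    public double zAt(double x, double y) {
        return a * x + b * y + c;
    }
}
```

This also shows why the three points should be as far apart as possible: the wider their spread, the less a small focusing error at any one point tilts the fitted plane.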
1.Choose a computer. First, determine the necessary data size. If the maximum size of image sequences (taking into account the number of frames and region of interest size) is less than 100 GB, it may be practical to store the images in RAM and save to disk after the acquisition. For larger sizes, a RAID0 array of solid-state drives (SSDs) can be used. Ideally, the RAID controller should support the SATA ‘TRIM’ command, necessary to maintain optimal SSD performance without reformatting. For acquisition at 1.1 GB/s data rates, we used three or four SSDs (Samsung 840 Series 256 GB) in a RAID0 configuration, using a PCIe RAID controller (LSI 9240-4i) on a Windows 7 computer with 128 GB of PC1600 RAM on eight memory channels and two quad-core Intel Xeon CPUs. For maximum performance, set the RAID0-based virtual disk strip size to 1 MB, the write policy to “Always Write Back,” and the allocation unit size to 64 KB.
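As a sanity check on the 1.1 GB/s figure: assuming, for example, a full-frame 2560 × 2160 sCMOS sensor (the nominal format of the Andor Zyla mentioned in step 3) stored at 2 bytes per pixel and read out at 100 frames per second,

\[
2560 \times 2160\ \text{pixels} \times 2\ \text{bytes/pixel} \times 100\ \text{frames/s} \approx 1.1\ \text{GB/s}.
\]

Scaling this calculation by your own frame size, bit depth, and frame rate gives the sustained write speed your storage must handle.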
2.Ensure that the SSD RAID0 array can handle high data speeds (more than 1.1 GB/s) by using a disk speed-benchmarking program, such as HDTune. To accurately measure writing speed, it may help to temporarily disable the disk’s write caching and to select the longest write length possible in the benchmarking application.
3.Install the desired sCMOS camera using the Hardware Configuration Wizard and select device settings for high-speed imaging. For example, the 10-tap Andor Zyla can run at 100 frames per second when 11-bit images are selected.
4.If using µManager version 1.4.15, make sure that the option “Fast storage” is activated (Tools menu > Options). This behavior is the default since µManager release 1.4.16, and the option has been removed from the Options dialog.
5.For a test acquisition, open the Multi-Dimensional Acquisition window. Select “Time Points” and choose a number of images to acquire (the required storage size is displayed). Set the time interval to zero.
6.De-select Positions, Slices, Channels, and Autofocus. For storage in RAM, deselect “Save images.” For storage on disk, select “Save images”, choose “Image stack file”, and select a directory on the storage drive. Press the “Acquire!” button to start the acquisition.
1.Focus on a sample that has high contrast and non-repetitive features.
2.Open the Pixel Calibrator plugin from the Plugins menu, if it hasn’t already been launched by the Slide Explorer.
3.Choose a “Safe travel radius” in microns to ensure that the Pixel Calibrator does not move the XY stage so far as to crash the objective lens into the side of the stage. On most specimens, 1 mm of travel radius is enough to calibrate even low-magnification objectives.
4.Press the “Start” button to initiate pixel calibration. The Pixel Calibrator plugin acquires a first “home” image and then commands the XY stage to move along the X axis by a very small initial displacement (1 micron), doubling the displacement for each subsequent move. An image is acquired with the current hardware settings at each new position until the stage has moved by nearly a whole field of view. Cross-correlation between the last displaced image and the home image measures the displacement between them in pixels. A similar image series is acquired and analyzed along the Y axis. The XY stage is then commanded to move so that the center of the home image is imaged at each of the four corners of the field of view. Pixel displacements (jᵢ, kᵢ) are measured at each of these four corners, and together with the corresponding requested XY stage coordinates (xᵢ, yᵢ) they are used to solve a linear least squares problem for the affine transformation matrix A between camera pixel coordinates and stage coordinates. The affine relation is stored in µManager and used by the Slide Explorer to synthesize a large image made of a mosaic of individual camera images.
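Written out (in our notation, which may differ from the plugin's internal conventions), the fit solves, in homogeneous coordinates,

\[
\hat{A} \;=\; \arg\min_{A \,\in\, \mathbb{R}^{2 \times 3}} \;\sum_{i} \left\| A \begin{pmatrix} j_i \\ k_i \\ 1 \end{pmatrix} - \begin{pmatrix} x_i \\ y_i \end{pmatrix} \right\|^2,
\]

where the third column of A captures the translation and the left 2 × 2 block captures the rotation, scaling, and shear between the camera and stage axes.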
1.Press “New” to start a new Slide Explorer session, and choose a location to save the new data set. A window will open and begin by tiling a very small region (2 × 2 tiles) in a single slice, with all channels requested.
2.Expand the Slide Explorer window by dragging the bottom-right corner, and press the “-” button once to zoom out. The empty area around the center tiles will begin to fill in with new tiles as the hardware acquires new images at unexplored locations.
3.Use the mouse to drag the Slide Explorer map to a new position. Tiles are acquired, starting from the center, and then radially outward in a rectangular spiral. Only tiles in empty areas of the visible window will be acquired, to let the user control the acquisition area.
4.Press the “<” and “>” buttons to move to different Z slice levels. New tiles will be acquired at the current Z slice.
5.Press the spacebar to pause or resume acquisition.
6.When acquisition of the map view is paused, double click anywhere in the displayed map to navigate the stage back to that position. A yellow box indicates your current location on the map. Use buttons on µManager’s main window to acquire a Snap or run Live mode, or run a multi-dimensional acquisition at that location.
7.Press the control key to use mouse clicks to add positions to µManager’s position list. These positions can be used in multi-dimensional acquisition sets. For example, the user might wish to acquire a large map on a field of cells, mark several cells of interest as positions in the position list, and then acquire a time lapse across these positions to record their behavior in parallel.
Localization Microscopy Plugin
1.Imaging parameters provide the plugin with the necessary information to express the results in numbers of photons rather than arbitrary units. The “Photon Conversion factor” (number of photons per digital unit in the image) depends on various camera settings including analog gain and readout mode and can be found in the documentation for your camera. It can also be measured, as described on the µManager website [8]. Use the “read” button to deduce some imaging parameters from the selected image. Do not correct the Photon Conversion factor for the (linear) EM gain used, as the software will apply this correction automatically.
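Schematically (our notation; consult the plugin documentation for the exact convention, including how the camera baseline is handled), the conversion to photons is

\[
N_{\text{photons}} = \frac{(I - I_{\text{offset}}) \times \text{PCF}}{G_{\text{EM}}},
\]

where I is the pixel value in digital units, I_offset is the camera baseline, PCF is the Photon Conversion factor entered in the dialog, and G_EM is the linear EM gain that the software divides out automatically.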
2.Find Maxima settings: The software finds local maxima by searching in a square with a size defined in “Box Size”. It accepts only those local maxima whose intensity exceeds the average of the four corners of the box by at least the “Noise tolerance”. Pre-filtering is rarely needed. The maxima finder is very fast and can even be used on live images.
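A sketch of this acceptance test (our own code, not the plugin's):

```java
/** Sketch of the maxima acceptance test: a candidate at (x, y) passes if
 *  its intensity exceeds the mean of the four corners of its box by at
 *  least the noise tolerance. */
public class MaximaTest {
    static boolean acceptMaximum(float[][] img, int x, int y,
                                 int boxSize, float noiseTolerance) {
        int h = boxSize / 2;   // caller must keep the box inside the image
        float cornerMean = (img[y - h][x - h] + img[y - h][x + h]
                          + img[y + h][x - h] + img[y + h][x + h]) / 4f;
        return img[y][x] - cornerMean >= noiseTolerance;
    }
}
```

Because the test touches only five pixels per candidate, it is cheap enough to run on live images, as noted above.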
3.The “Fit parameters” let you specify the kind of multi-dimensional Gaussian that will serve as a model for the PSF (a sketch of the model follows this list).
3.1.“Dimensions”: choose whether the X- and Y-widths of the Gaussian vary together during fitting (1D) or can change independently (2D), and whether the resulting ellipse can rotate (3D).
3.2.“Fitter”: choose among fitting algorithms, including Simplex (Nelder-Mead), Maximum Likelihood Estimator (MLE), and Levenberg-Marquardt. Levenberg-Marquardt uses the first derivative of the fitting function with respect to the fitting parameters to pursue a best-fit result more quickly.
3.3.“Max iterations”: specify how long fitting should run for a given particle before the algorithm decides a particle is unfittable.
3.4.“Box size”: specify the width (in pixels) of the square sub-image over which the Gaussian function is to be fit to pixel intensities (usually, 6 pixels is sufficient).
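For reference, the 2D form of the model (in our notation) describes the intensity at pixel (x, y) as

\[
I(x, y) \;=\; b + A \exp\!\left( -\frac{(x - x_0)^2}{2\sigma_x^2} - \frac{(y - y_0)^2}{2\sigma_y^2} \right),
\]

with background b, amplitude A, center (x₀, y₀), and widths σ_x and σ_y. The 1D setting constrains σ_x = σ_y, and the 3D setting additionally fits a rotation angle of the elliptical cross-section.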
4.The “Filter Data...” section allows outliers to be removed from point-fitting results. Spot width and intensities can be restricted to a range of physically reasonable values. Because single fluorescent particles can blink during an imaging sequence, you can specify the number of frames during which a particle can be missing before a track is declared “ended.”
5.Press the Fit button to run the spot finding and fitting algorithms. On machines with multiple cores, algorithms run in parallel to maximize efficiency (set the ImageJ setting “Parallel threads”, available through the ImageJ menu Edit > Options > Memory & Threads, to the number of available cores). Speeds of 20,000 spots per second are easily obtained on recent hardware.
6.To track motion of a single particle, draw an ROI around the particle with the mouse, and then click the Track button. The plugin will compute a time series for that particle and add it to the list of fitting series in the Data window (described below). Pressing the MTrack button will result in a tracking series for each particle detected in the first image of the time series. At present, tracking is not very sophisticated; it merely attempts a Gaussian fit on successive frames and can get confused by nearby particles.
7.The Data button opens a secondary Data window. This window contains a list of fitting series resulting from image sets, and a set of powerful tools for manipulating these data sets:
7.1.The General panel provides for saving and loading spot tracking data sets from files. Binary saving writes spots in the Tagged Spot Format (tsf) [9], which offers high-performance storage.
7.2.The two-color section carries out matching of spots between two overlaid channels (as from a DualView imager, or two cameras), and provides algorithms for computing a coordinate mapping between the two channels: Affine mapping provides a strictly linear mapping between the two data sets; LWM (local weighted mean [10]) allows for local warping due to chromatic aberrations. The “2C Correct” button allows data sets from one channel to be re-mapped to the coordinate system of the other channel. The resulting re-mapped pair of images can be used for directly measuring the distance between two particles of different colors.
7.3.The Tracks section allows plotting of positions and intensities of particles across time points. The “Math” button provides a way to do arithmetic operations between track series, and the “Straighten” button uses Principal Component Analysis to rotate a track (as from the movement of a molecular motor along an extended filament) along the X axis for further analysis.
7.4.The Localization Microscopy Panel provides for the rendering of PALM/STORM-style images. The “Z Calibration” button assumes the use of an astigmatic lens introduced in the imaging path to measure Z position by the shape of the fit Gaussian [11]. The Drift Correction button removes an overall displacement of the particles over time, to correct for drift in the XY Stage. Spots in the image can be rendered as individual pixels or simulated Gaussian probability density. Displayed spots can be filtered by width and intensity.
7.5.The computed results from spots and tracks can be displayed in ImageJ’s spreadsheet window. Selecting individual rows from particle-fitting results will highlight the corresponding spot in the image data set, allowing easy inspection and debugging of fitting results.
1.Connect the camera and the sequencing controller to the computer and use µManager’s Hardware Configuration Wizard to generate a configuration setup that includes both devices. If the sequencer controls a third device (such as a piezo Z stage or laser combiner), ensure that these are also connected with the appropriate cables.
2.Use µManager’s Snap and Live modes to confirm that the camera can acquire images. Open the Device/Property browser and confirm that the controller’s properties are available and can be set to different values. For example, the Arduino microcontroller device adapter exposes a set of “TTL output” properties that can be set “On” and “Off”. If the Arduino is controlling an illuminator, then changing these properties should result in turning the different illumination channels on and off.
3.Enable “Sequencing” in the sequencing device. All sequencing device adapters have a property (usually called “Sequencing” or something similar) that, when “On,” allows µManager to upload command sequences to the sequencing device. To ensure that Sequencing is “On” by default, edit (or create) the “System” configuration group so that it includes the “Sequencing” property and contains a preset named “Startup”, in which “Sequencing” is set to “On”.
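The same preset can also be created programmatically from µManager's Script Panel. A Beanshell sketch, assuming an Arduino whose switch device is labeled “Arduino-Switch” and whose sequencing property is named “Sequence” (both names are examples; check the Device/Property Browser for your hardware's actual names):

```java
// Beanshell, run from µManager's Script Panel; "mmc" is the CMMCore object.
if (!mmc.isGroupDefined("System")) {
    mmc.defineConfigGroup("System");
}
// System/Startup presets are applied automatically when µManager starts.
mmc.defineConfig("System", "Startup", "Arduino-Switch", "Sequence", "On");
mmc.setConfig("System", "Startup");   // apply the preset immediately
```

Remember to save the updated configuration file afterwards so the preset persists across sessions.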
4.µManager is now ready to run a hardware-triggered MDA sequence. µManager’s MDA engine creates a triggered or partially triggered acquisition by identifying the channel-switching and slice-switching events that can be executed in hardware, and then generates the corresponding trigger sequences (a sketch of this interleaving follows the list below). For example, an acquisition could require two channels (call them A and B) and a 3-slice Z stack (1, 2, and 3): in this case, the acquisition engine will produce a 6-event trigger sequence (A1, B1, A2, B2, A3, B3). If non-triggerable changes are required, the MDA engine falls back to software-controlled operation. Hardware triggering is enabled when these conditions are satisfied:
4.1.For multiple channels, check “Channels”, select a “Channel group”, and add two or more channels to the list of channels. For hardware triggering, the property changes necessary to switch between channels must all be triggerable properties, and the exposure time for each channel must be the same.
4.2.To include a Z stack, check the “Z-stacks (slices)” box and enter a Start, End, and Step Size in microns. For hardware triggering, µManager’s current default Z stage must be a triggerable piezo stage.
4.3.Activate Time Points, Multiple Positions, and Autofocus as needed. Choose an Acquisition Order “Slices, Channels” for channels to be switched at each Z slice, or “Channels, Slices” for a Z stack to be obtained for each channel. Multiple Positions and Autofocus are not hardware-triggered (but a sub-sequence of Z slices or channels at each position can be hardware-triggered).
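A minimal sketch of the interleaving described in the example above (our own illustration, not the MDA engine's actual code):

```java
import java.util.ArrayList;
import java.util.List;

/** Builds the interleaved trigger event list for a channels-fastest
 *  acquisition order, reproducing the A1, B1, ..., B3 example above. */
public class TriggerSequence {
    public static List<String> build(String[] channels, int numSlices) {
        List<String> events = new ArrayList<String>();
        for (int slice = 1; slice <= numSlices; slice++) {
            for (String channel : channels) {
                events.add(channel + slice);   // e.g. "A1", "B1", ...
            }
        }
        return events;
    }

    public static void main(String[] args) {
        // Prints [A1, B1, A2, B2, A3, B3]
        System.out.println(build(new String[] {"A", "B"}, 3));
    }
}
```

Swapping the two loops yields the “Channels, Slices” ordering, in which a complete Z stack is triggered for each channel in turn.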
5.Click “Acquire!” We expect one image plane (a monochrome image for a particular channel and slice) acquired at every camera exposure. As each image arrives from the camera, the acquisition engine records metadata with the appropriate Channel, Slice, and Frame tags. The multi-dimensional image viewer then uses those tags to display the image planes correctly.
1.After addition of two or more physical cameras to your configuration (or two virtual “Demo Cameras” for testing), add the Multi Camera virtual camera device (from the “Utilities” device group) to your current configuration using µManager’s Hardware Configuration Wizard. The Multi Camera virtual device’s purpose is to broadcast the commands it receives to multiple cameras, and to pass any acquired images from its child cameras back to the application.
2.In the main µManager window, edit the System configuration group, and add the Core-Camera property and the Camera1/Camera2/Camera3 properties from the Multi Camera device. Then edit the System-Startup configuration preset, and set Core-Camera to “Multi Camera” and the Camera properties to the appropriate physical cameras. It is also convenient to add an extra “Camera” configuration group, with the sole property “Core-Camera”. This will produce a drop-down menu on the main window that allows you to switch between imaging with either of the individual physical cameras and Multi Camera mode.
3.Now the exposure and binning settings will be propagated from Multi Camera to the physical cameras whenever they are set from the user interface. Focus on a sample with particles that can be imaged in two channels (for example, TetraSpeck beads from Invitrogen or “Full Spectrum” beads from Bangs Labs). Turn on live mode to observe how the two cameras are overlaid as a pair of channels.
4.It may be that the image from one camera is mirrored, or physical constraints prevent a camera from being rotated by 90 degrees. In this case, open the “Image Flipper” Plugin, turn on Live mode, and experiment with rotations and flipping until the images are properly aligned. Once the orientation is correct, perform fine adjustments to the two cameras to get their images precisely registered.
5.Open the Multi-Dimensional Acquisition window, and set up a two-channel acquisition (for example, Cy3 and Cy5). Create a 10-frame sequence with no time interval between frames, and press the “Acquire!” button. Multi-channel, multi-camera acquisitions produce an image set with several synthetic channels. These synthetic channels will have names combining the name of the camera with the name of the physical channel, e.g.: “Camera1-Cy3”, “Camera2-Cy3”, “Camera1-Cy5”, “Camera2-Cy5”.
6.To get perfect (sub-pixel) registration between the images on the two cameras, image registration software will almost invariably be necessary. To facilitate such registration, download and install the ImageJ plugin “GridAligner” [12], which uses a calibration image to automatically deduce the optimal affine transform and applies that transform to other experimental image sets.
1.After the hardware device has been installed, use the Hardware Configuration Wizard to add the phototargeting device to µManager’s configuration settings. A list of supported devices can be found on the µManager website.
2.Place a slide for testing and calibrating the phototargeting device on the microscope stage. Lower-power devices intended for photobleaching or photoswitching fluorescent dyes can be calibrated using a uniformly autofluorescent plastic slide such as those from Chroma [13]. High-power devices used for photo-ablation can be calibrated using a metal-coated mirror slide, which is ablated at each point visited by the illumination, producing a hole visible under brightfield illumination.
3.Start the Projector plugin. The plugin window contains three tabs for phototargeting control: “Point and Shoot”, “ROIs”, and “Setup.” Choose the Setup tab first, and click the “On” and “Off” buttons. The illumination should turn on and off, and may be visibly emitted at the objective. Galvo mirror-based devices will exhibit a single illumination spot, while SLMs will illuminate a large region. Once the illumination is on, enable µManager’s live mode and ensure that the illumination is visible on the camera. Press the “Center” button to illuminate a small region at the center of the phototargeter’s two-dimensional range. Adjust the mounting of the phototargeting hardware to roughly center the illumination pattern on the camera image. To avoid over-exposing the camera with fluorescence from a test slide, it may be necessary to reduce illumination power, reduce camera exposure time, or switch to a dichroic where only a small part of the emitted light is transmitted.
4.Adjust the optics to tighten the focus of the spot on the test slide. Plastic fluorescent slides, due to their thickness, will exhibit a “halo” around the intended pattern, even when it is in focus. Focusing the spot just beyond the surface of the test slide usually minimizes the halo pattern.
5.Once the illumination pattern has been properly aligned, the camera’s pixel coordinates need to be mapped to the phototargeter’s coordinate system, which consists of galvo mirror angles or SLM pixel coordinates. Press the “Calibrate” button in the Projector Plugin’s “Setup” tab. The calibration procedure starts by illuminating a few test spots sequentially near the center of the phototargeting range. Each spot is imaged, and its position is automatically measured by the plugin. A successful measurement will be indicated by a small “+” overlaid on the spot. This initial test generates a rough linear estimate of the relation between camera pixels and phototargeting coordinates. Next, a series of spots in a square lattice are illuminated one by one across the camera field of view and automatically located. Following these measurements, affine transformation matrices are computed for each cell in the lattice, and stored as a detailed non-linear coordinate map between the camera and the phototargeting device.
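In other words (in our notation), within each lattice cell c the map from camera pixel coordinates (u, v) to phototargeter coordinates (p, q) is approximated by that cell's own affine transform,

\[
\begin{pmatrix} p \\ q \end{pmatrix} = A_c \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}, \qquad A_c \in \mathbb{R}^{2 \times 3},
\]

and the collection of per-cell matrices forms the stored non-linear map: a target point is transformed with the matrix of the cell it falls in.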
6.We can now test the results of the calibration procedure. Enable µManager’s Live mode, and switch to the “Point and Shoot” tab, and press the Point and Shoot “On” button. If prompted, enter an “illumination interval” for the spot. Holding down the Control button, click somewhere on the live image. An illumination spot should appear at the mouse click for the interval requested. Try clicking in a number of places to confirm that the calibration is correct; if there appear to be errors, then try improving the illumination and repeat the previous calibration step.
7.Point and Shoot mode can be very useful for many experiments, such as FRAP. On a live specimen, switch to a dichroic cube that allows simultaneous imaging illumination and phototargeting illumination. Using the Multi-Dimensional Acquisition dialog, start a fast acquisition (zero interval between frames). Now “Point and Shoot” can be triggered simultaneously with imaging, and the immediate response to phototargeting can be observed and recorded.
8.Using the calibration slide, switch to the “ROI(s)” tab. Choose the “polygon ROI” tool in the ImageJ toolbar window, and then draw an ROI on the live image. Next, in the Projector Controls, click the “Set ROIs” button to send the ROI to the phototargeting device. When transmission succeeds, the message “One ROI submitted” should appear. Set “Loop” to 1 “times” and “Spot dwell time” to 500 ms. Pressing “Run ROIs now!” will result in illuminating the ROI (galvo-based devices will raster across the ROI, while SLMs will turn on all pixels inside the ROI). For multiple ROIs, use ImageJ’s “ROI Manager.”
9.After ROIs have been submitted, you can attach phototargeting events to a multi-dimensional acquisition. Check the “Run ROIs in Multi-Dimensional Acquisition” checkbox, and specify the frame (time point) at which phototargeting should occur. You can also require that ROIs are repeatedly illuminated in later frames.
1.µManager has a built-in trouble reporting mechanism (µManager menu Help > Report a problem...). Use this to send complete documentation of the issue to the µManager developers. They will respond by email and help figure out the problem.
2.The µManager mailing list [14] has about 1000 subscribers, some of whom may have experienced the same problem and are willing to help. The mailing list can be found at: https://micro-manager.org/wiki/Micro-Manager%20Community