type=page
status=published
title=Administering GlassFish Server Instances
next=named-configurations.html
prev=clusters.html
~~~~~~
Administering GlassFish Server Instances
========================================
[[GSHAG00006]][[gkrbv]]
[[administering-glassfish-server-instances]]
5 Administering GlassFish Server Instances
------------------------------------------
A GlassFish Server instance is a single Virtual Machine for the Java
platform (Java Virtual Machine or JVM machine) on a single node in which
GlassFish Server is running. A node defines the host where the GlassFish
Server instance resides. The JVM machine must be compatible with the
Java Platform, Enterprise Edition (Java EE).
GlassFish Server instances form the basis of an application deployment.
An instance is a building block in the clustering, load balancing, and
session persistence features of GlassFish Server. Each instance belongs
to a single domain and has its own directory structure, configuration,
and deployed applications. Every instance contains a reference to a node
that defines the host where the instance resides.
The following topics are addressed here:
* link:#gkrbn[Types of GlassFish Server Instances]
* link:#gkqal[Administering GlassFish Server Instances Centrally]
* link:#gkqdw[Administering GlassFish Server Instances Locally]
* link:#gkrdd[Resynchronizing GlassFish Server Instances and the DAS]
* link:#gkqcr[Migrating EJB Timers]
[[gkrbn]][[GSHAG00186]][[types-of-glassfish-server-instances]]
Types of GlassFish Server Instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Each GlassFish Server instance is one of the following types of
instance:
Standalone instance::
A standalone instance does not share its configuration with any other
instances or clusters. A standalone instance is created if either of
the following conditions is met: +
* No configuration or cluster is specified in the command to create
the instance.
* A configuration that is not referenced by any other instances or
clusters is specified in the command to create the instance. +
When no configuration or cluster is specified, a copy of the
`default-config` configuration is created for the instance. The name
of this configuration is instance-name`-config`, where instance-name
represents the name of an unclustered server instance.
Shared instance::
A shared instance shares its configuration with other instances or
clusters. A shared instance is created if a configuration that is
referenced by other instances or clusters is specified in the command
to create the instance.
Clustered instance::
A clustered instance inherits its configuration from the cluster to
which the instance belongs and shares its configuration with other
instances in the cluster. A clustered instance is created if a cluster
is specified in the command to create the instance. +
Any instance that is not part of a cluster is considered an
unclustered server instance. Therefore, standalone instances and
shared instances are unclustered server instances.
[[gkqal]][[GSHAG00187]][[administering-glassfish-server-instances-centrally]]
Administering GlassFish Server Instances Centrally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Centralized administration requires the Distributed Component Object
Model (DCOM) remote protocol or secure shell (SSH) to be set up. If DCOM
or SSH is set up, you can administer clustered instances without the
need to log in to hosts where remote instances reside. For information
about setting up DCOM and SSH, see link:ssh-setup.html#gkshg[Enabling
Centralized Administration of GlassFish Server Instances].
Administering GlassFish Server instances centrally involves the
following tasks:
* link:#gkqch[To Create an Instance Centrally]
* link:#gkrcb[To List All Instances in a Domain]
* link:#gkqcw[To Delete an Instance Centrally]
* link:#gkqcj[To Start a Cluster]
* link:#gkqcl[To Stop a Cluster]
* link:#gkqaw[To Start an Individual Instance Centrally]
* link:#gkqaj[To Stop an Individual Instance Centrally]
* link:#gkqcc[To Restart an Individual Instance Centrally]
[[gkqch]][[GSHAG00106]][[to-create-an-instance-centrally]]
To Create an Instance Centrally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `create-instance` subcommand in remote mode to create a
GlassFish Server instance centrally. Creating an instance adds the
instance to the DAS configuration and creates the instance's files on
the host where the instance resides.
If the instance is a clustered instance that is managed by GMS, system
properties for the instance that relate to GMS must be configured
correctly. To avoid the need to restart the DAS and the instance,
configure an instance's system properties that relate to GMS when you
create the instance. If you change GMS-related system properties for an
existing instance, the DAS and the instance must be restarted to apply
the changes. For information about GMS, see
link:clusters.html#gjfnl[Group Management Service].
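For example, one possible way to change a GMS-related system property on
an existing clustered instance is the `create-system-properties`
subcommand, where the property name follows the form that is shown later
in this task:
[source,oac_no_warn]
----
asadmin> create-system-properties --target instance-name
GMS-BIND-INTERFACE-ADDRESS-cluster-name=bind-address
----
After running such a command, restart the DAS and the instance to apply
the change.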
[[GSHAG380]]
Before You Begin
Ensure that the following prerequisites are met:
* The node where the instance is to reside exists.
* The node where the instance is to reside is either enabled for remote
communication or represents the host on which the DAS is running. For
information about how to create a node that is enabled for remote
communication, see the following sections:
** link:nodes.html#CHDIGBJB[To Create a `DCOM` Node]
** link:nodes.html#gkrnf[To Create an `SSH` Node]
* The user of the DAS can use DCOM or SSH to log in to the host for the
node where the instance is to reside.
If any of these prerequisites is not met, create the instance locally as
explained in link:#gkqbl[To Create an Instance Locally].
If you are adding the instance to a cluster, ensure that the cluster to
which you are adding the instance exists. For information about how to
create a cluster, see link:clusters.html#gkqdm[To Create a Cluster].
If the instance is to reference an existing named configuration, ensure
that the configuration exists. For more information, see
link:named-configurations.html#abdjr[To Create a Named Configuration].
The instance might be a clustered instance that is managed by GMS and
resides on a node that represents a multihome host. In this situation,
ensure that you have the Internet Protocol (IP) address of the network
interface to which GMS binds.
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Run the `create-instance` subcommand. +
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Only the options that are required to complete this task are provided in
this step. For information about all the options for configuring the
instance, see the link:../reference-manual/create-instance.html#GSRFM00033[`create-instance`(1)] help page.
|=======================================================================
* If you are creating a standalone instance, do not specify a cluster. +
If the instance is to reference an existing configuration, specify a
configuration that no other cluster or instance references. +
[source,oac_no_warn]
----
asadmin> create-instance --node node-name
[--config configuration-name] instance-name
----
node-name::
The node on which the instance is to reside.
configuration-name::
The name of the existing named configuration that the instance will
reference. +
If you do not require the instance to reference an existing
configuration, omit this option. A copy of the `default-config`
configuration is created for the instance. The name of this
configuration is instance-name`-config`, where instance-name is the
name of the server instance.
instance-name::
Your choice of name for the instance that you are creating.
* If you are creating a shared instance, specify the configuration that
the instance will share with other clusters or instances. +
Do not specify a cluster. +
[source,oac_no_warn]
----
asadmin> create-instance --node node-name
--config configuration-name instance-name
----
node-name::
The node on which the instance is to reside.
configuration-name::
The name of the existing named configuration that the instance will
reference.
instance-name::
Your choice of name for the instance that you are creating.
* If you are creating a clustered instance, specify the cluster to which
the instance will belong. +
If the instance is managed by GMS and resides on a node that represents
a multihome host, specify the `GMS-BIND-INTERFACE-ADDRESS-`cluster-name
system property. +
[source,oac_no_warn]
----
asadmin> create-instance --cluster cluster-name --node node-name
[--systemproperties GMS-BIND-INTERFACE-ADDRESS-cluster-name=bind-address] instance-name
----
cluster-name::
The name of the cluster to which you are adding the instance.
node-name::
The node on which the instance is to reside.
bind-address::
The IP address of the network interface to which GMS binds. Specify
this option only if the instance is managed by GMS and resides on a
node that represents a multihome host.
instance-name::
Your choice of name for the instance that you are creating.
[[GSHAG00039]][[gkqmv]]
Example 5-1 Creating a Clustered Instance Centrally
This example adds the instance `pmd-i1` to the cluster `pmdclust` in the
domain `domain1`. The instance resides on the node `sj01`, which
represents the host `sj01.example.com`.
[source,oac_no_warn]
----
asadmin> create-instance --cluster pmdclust --node sj01 pmd-i1
Port Assignments for server instance pmd-i1:
JMX_SYSTEM_CONNECTOR_PORT=28686
JMS_PROVIDER_PORT=27676
HTTP_LISTENER_PORT=28080
ASADMIN_LISTENER_PORT=24848
IIOP_SSL_LISTENER_PORT=23820
IIOP_LISTENER_PORT=23700
HTTP_SSL_LISTENER_PORT=28181
IIOP_SSL_MUTUALAUTH_PORT=23920
The instance, pmd-i1, was created on host sj01.example.com
Command create-instance executed successfully.
----
[[GSHAG381]]
See Also
* link:nodes.html#CHDIGBJB[To Create a `DCOM` Node]
* link:nodes.html#gkrnf[To Create an `SSH` Node]
* link:#gkqbl[To Create an Instance Locally]
* link:../reference-manual/create-instance.html#GSRFM00033[`create-instance`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help create-instance` at the command line.
[[GSHAG382]]
Next Steps
After creating an instance, you can start the instance as explained in
the following sections:
* link:#gkqaw[To Start an Individual Instance Centrally]
* link:#gkqak[To Start an Individual Instance Locally]
[[gkrcb]][[GSHAG00107]][[to-list-all-instances-in-a-domain]]
To List All Instances in a Domain
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `list-instances` subcommand in remote mode to obtain information
about existing instances in a domain.
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Run the link:../reference-manual/list-instances.html#GSRFM00170[`list-instances`] subcommand. +
[source,oac_no_warn]
----
asadmin> list-instances
----
[[GSHAG00040]][[gksfe]]
Example 5-2 Listing Basic Information About All GlassFish Server
Instances in a Domain
This example lists the name and status of all GlassFish Server instances
in the current domain.
[source,oac_no_warn]
----
asadmin> list-instances
pmd-i2 running
yml-i2 running
pmd-i1 running
yml-i1 running
pmdsa1 not running
Command list-instances executed successfully.
----
[[GSHAG00041]][[gkabz]]
Example 5-3 Listing Detailed Information About All GlassFish Server
Instances in a Domain
This example lists detailed information about all GlassFish Server
instances in the current domain.
[source,oac_no_warn]
----
asadmin> list-instances --long=true
NAME HOST PORT PID CLUSTER STATE
pmd-i1 sj01.example.com 24848 31310 pmdcluster running
yml-i1 sj01.example.com 24849 25355 ymlcluster running
pmdsa1 localhost 24848 -1 --- not running
pmd-i2 sj02.example.com 24848 22498 pmdcluster running
yml-i2 sj02.example.com 24849 20476 ymlcluster running
ymlsa1 localhost 24849 -1 --- not running
Command list-instances executed successfully.
----
[[GSHAG383]]
See Also
link:../reference-manual/list-instances.html#GSRFM00170[`list-instances`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help list-instances` at the command line.
[[gkqcw]][[GSHAG00108]][[to-delete-an-instance-centrally]]
To Delete an Instance Centrally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `delete-instance` subcommand in remote mode to delete a
GlassFish Server instance centrally.
[width="100%",cols="<100%",]
|=======================================================================
a|
Caution:
If you are using a Java Message Service (JMS) cluster with a master
broker, do not delete the instance that is associated with the master
broker. If this instance must be deleted, use the
link:../reference-manual/change-master-broker.html#GSRFM00005[`change-master-broker`] subcommand to assign the master
broker to a different instance.
|=======================================================================
Deleting an instance involves the following:
* Removing the instance from the configuration of the DAS
* Deleting the instance's files from the file system
[[GSHAG384]]
Before You Begin
Ensure that the instance that you are deleting is not running. For
information about how to stop an instance, see the following sections:
* link:#gkqaj[To Stop an Individual Instance Centrally]
* link:#gkqci[To Stop an Individual Instance Locally]
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Confirm that the instance is not running. +
[source,oac_no_warn]
----
asadmin> list-instances instance-name
----
instance-name::
The name of the instance that you are deleting.
3. Run the link:../reference-manual/delete-instance.html#GSRFM00085[`delete-instance`] subcommand. +
[source,oac_no_warn]
----
asadmin> delete-instance instance-name
----
instance-name::
The name of the instance that you are deleting.
[[GSHAG00042]][[gkqms]]
Example 5-4 Deleting an Instance Centrally
This example confirms that the instance `pmd-i1` is not running and
deletes the instance.
[source,oac_no_warn]
----
asadmin> list-instances pmd-i1
pmd-i1 not running
Command list-instances executed successfully.
asadmin> delete-instance pmd-i1
Command _delete-instance-filesystem executed successfully.
The instance, pmd-i1, was deleted from host sj01.example.com
Command delete-instance executed successfully.
----
[[GSHAG385]]
See Also
* link:#gkqaj[To Stop an Individual Instance Centrally]
* link:#gkqci[To Stop an Individual Instance Locally]
* link:../reference-manual/change-master-broker.html#GSRFM00005[`change-master-broker`(1)]
* link:../reference-manual/delete-instance.html#GSRFM00085[`delete-instance`(1)]
* link:../reference-manual/list-instances.html#GSRFM00170[`list-instances`(1)]
You can also view the full syntax and options of the subcommands by
typing the following commands at the command line:
* `asadmin help delete-instance`
* `asadmin help list-instances`
[[gkqcj]][[GSHAG00109]][[to-start-a-cluster]]
To Start a Cluster
^^^^^^^^^^^^^^^^^^
Use the `start-cluster` subcommand in remote mode to start a cluster.
Starting a cluster starts all instances in the cluster that are not
already running.
[[GSHAG386]]
Before You Begin
Ensure that the following prerequisites are met:
* Each node where an instance in the cluster resides is either enabled
for remote communication or represents the host on which the DAS is
running.
* The user of the DAS can use DCOM or SSH to log in to the host for any
node where instances in the cluster reside.
If any of these prerequisites is not met, start the cluster by starting
each instance locally as explained in link:#gkqak[To Start an Individual
Instance Locally].
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Run the link:../reference-manual/start-cluster.html#GSRFM00233[`start-cluster`] subcommand. +
[source,oac_no_warn]
----
asadmin> start-cluster cluster-name
----
cluster-name::
The name of the cluster that you are starting.
[[GSHAG00043]][[gkqml]]
Example 5-5 Starting a Cluster
This example starts the cluster `pmdcluster`.
[source,oac_no_warn]
----
asadmin> start-cluster pmdcluster
Command start-cluster executed successfully.
----
[[GSHAG387]]
See Also
* link:#gkqak[To Start an Individual Instance Locally]
* link:../reference-manual/start-cluster.html#GSRFM00233[`start-cluster`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help start-cluster` at the command line.
[[GSHAG388]]
Next Steps
After starting a cluster, you can deploy applications to the cluster.
For more information, see link:../application-deployment-guide/toc.html#GSDPG[GlassFish Server Open Source
Edition Application Deployment Guide].
[[gkqcl]][[GSHAG00110]][[to-stop-a-cluster]]
To Stop a Cluster
^^^^^^^^^^^^^^^^^
Use the `stop-cluster` subcommand in remote mode to stop a cluster.
Stopping a cluster stops all running instances in the cluster.
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Run the link:../reference-manual/stop-cluster.html#GSRFM00238[`stop-cluster`] subcommand. +
[source,oac_no_warn]
----
asadmin> stop-cluster cluster-name
----
cluster-name::
The name of the cluster that you are stopping.
[[GSHAG00044]][[gkqmn]]
Example 5-6 Stopping a Cluster
This example stops the cluster `pmdcluster`.
[source,oac_no_warn]
----
asadmin> stop-cluster pmdcluster
Command stop-cluster executed successfully.
----
[[GSHAG389]]
See Also
link:../reference-manual/stop-cluster.html#GSRFM00238[`stop-cluster`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help stop-cluster` at the command line.
[[GSHAG390]]
Troubleshooting
If instances in the cluster have become unresponsive and fail to stop,
run the subcommand again with the `--kill` option set to `true`. When
this option is `true`, the subcommand uses functionality of the
operating system to kill the process for each running instance in the
cluster.
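For example, the following command forcibly stops the member instances of
the cluster `pmdcluster` from Example 5-6:
[source,oac_no_warn]
----
asadmin> stop-cluster --kill=true pmdcluster
----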
[[gkqaw]][[GSHAG00111]][[to-start-an-individual-instance-centrally]]
To Start an Individual Instance Centrally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `start-instance` subcommand in remote mode to start an
individual instance centrally.
[[GSHAG391]]
Before You Begin
Ensure that the following prerequisites are met:
* The node where the instance resides is either enabled for remote
communication or represents the host on which the DAS is running.
* The user of the DAS can use DCOM or SSH to log in to the host for the
node where the instance resides.
If any of these prerequisites is not met, start the instance locally as
explained in link:#gkqak[To Start an Individual Instance Locally].
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Run the `start-instance` subcommand. +
[source,oac_no_warn]
----
asadmin> start-instance instance-name
----
::
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Only the options that are required to complete this task are provided in
this step. For information about all the options for controlling the
behavior of the instance, see the link:../reference-manual/start-instance.html#GSRFM00236[`start-instance`(1)]
help page.
|=======================================================================
instance-name::
The name of the instance that you are starting.
[[GSHAG00045]][[gkqoa]]
Example 5-7 Starting an Individual Instance Centrally
This example starts the instance `pmd-i2`, which resides on the node
`sj02`. This node represents the host `sj02.example.com`. The
configuration of the instance on this node already matched the
configuration of the instance in the DAS when the instance was started.
[source,oac_no_warn]
----
asadmin> start-instance pmd-i2
CLI801 Instance is already synchronized
Waiting for pmd-i2 to start ............
Successfully started the instance: pmd-i2
instance Location: /export/glassfish3/glassfish/nodes/sj02/pmd-i2
Log File: /export/glassfish3/glassfish/nodes/sj02/pmd-i2/logs/server.log
Admin Port: 24851
Command start-local-instance executed successfully.
The instance, pmd-i2, was started on host sj02.example.com
Command start-instance executed successfully.
----
[[GSHAG392]]
See Also
link:../reference-manual/start-instance.html#GSRFM00236[`start-instance`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help start-instance` at the command line.
[[GSHAG393]]
Next Steps
After starting an instance, you can deploy applications to the instance.
For more information, see the link:../application-deployment-guide/toc.html#GSDPG[GlassFish Server Open Source
Edition Application Deployment Guide].
[[gkqaj]][[GSHAG00112]][[to-stop-an-individual-instance-centrally]]
To Stop an Individual Instance Centrally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `stop-instance` subcommand in remote mode to stop an individual
instance centrally.
When an instance is stopped, the instance stops accepting new requests
and waits for all outstanding requests to be completed.
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Run the link:../reference-manual/stop-instance.html#GSRFM00241[`stop-instance`] subcommand.
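The basic form of this command, consistent with Example 5-8, is as
follows:
[source,oac_no_warn]
----
asadmin> stop-instance instance-name
----
instance-name::
The name of the instance that you are stopping.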
[[GSHAG00046]][[gkqpy]]
Example 5-8 Stopping an Individual Instance Centrally
This example stops the instance `pmd-i2`.
[source,oac_no_warn]
----
asadmin> stop-instance pmd-i2
The instance, pmd-i2, is stopped.
Command stop-instance executed successfully.
----
[[GSHAG394]]
See Also
link:../reference-manual/stop-instance.html#GSRFM00241[`stop-instance`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help stop-instance` at the command line.
[[GSHAG395]]
Troubleshooting
If the instance has become unresponsive and fails to stop, run the
subcommand again with the `--kill` option set to `true`. When this
option is `true`, the subcommand uses functionality of the operating
system to kill the instance process.
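For example, the following command forcibly stops the instance `pmd-i2`
from Example 5-8:
[source,oac_no_warn]
----
asadmin> stop-instance --kill=true pmd-i2
----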
[[gkqcc]][[GSHAG00113]][[to-restart-an-individual-instance-centrally]]
To Restart an Individual Instance Centrally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `restart-instance` subcommand in remote mode to restart an
individual instance centrally.
When this subcommand restarts an instance, the DAS synchronizes the
instance with changes since the last synchronization as described in
link:#gksbo[Default Synchronization for Files and Directories].
If you require different synchronization behavior, stop and start the
instance as explained in link:#gksak[To Resynchronize an Instance and
the DAS Online].
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Run the link:../reference-manual/restart-instance.html#GSRFM00219[`restart-instance`] subcommand. +
[source,oac_no_warn]
----
asadmin> restart-instance instance-name
----
instance-name::
The name of the instance that you are restarting.
[[GSHAG00047]][[gkqqt]]
Example 5-9 Restarting an Individual Instance Centrally
This example restarts the instance `pmd-i2`.
[source,oac_no_warn]
----
asadmin> restart-instance pmd-i2
pmd-i2 was restarted.
Command restart-instance executed successfully.
----
[[GSHAG396]]
See Also
* link:#gkqaj[To Stop an Individual Instance Centrally]
* link:#gkqaw[To Start an Individual Instance Centrally]
* link:../reference-manual/restart-instance.html#GSRFM00219[`restart-instance`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help restart-instance` at the command line.
[[GSHAG397]]
Troubleshooting
If the instance has become unresponsive and fails to stop, run the
subcommand again with the `--kill` option set to `true`. When this
option is `true`, the subcommand uses functionality of the operating
system to kill the instance process before restarting the instance.
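For example, based on the preceding description, the following command
kills the process for the instance `pmd-i2` before restarting the
instance:
[source,oac_no_warn]
----
asadmin> restart-instance --kill=true pmd-i2
----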
[[gkqdw]][[GSHAG00188]][[administering-glassfish-server-instances-locally]]
Administering GlassFish Server Instances Locally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Local administration does not require DCOM or SSH to be set up. If
neither DCOM nor SSH is set up, you must log in to each host where
remote instances reside and administer the instances individually.
Administering GlassFish Server instances locally involves the following
tasks:
* link:#gkqbl[To Create an Instance Locally]
* link:#gkqed[To Delete an Instance Locally]
* link:#gkqak[To Start an Individual Instance Locally]
* link:#gkqci[To Stop an Individual Instance Locally]
* link:#gkqef[To Restart an Individual Instance Locally]
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Even if neither DCOM nor SSH is set up, you can obtain information about
instances in a domain without logging in to each host where remote
instances reside. For instructions, see link:#gkrcb[To List All
Instances in a Domain].
|=======================================================================
[[gkqbl]][[GSHAG00114]][[to-create-an-instance-locally]]
To Create an Instance Locally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `create-local-instance` subcommand in remote mode to create a
GlassFish Server instance locally. Creating an instance adds the
instance to the DAS configuration and creates the instance's files on
the host where the instance resides.
If the instance is a clustered instance that is managed by GMS, system
properties for the instance that relate to GMS must be configured
correctly. To avoid the need to restart the DAS and the instance,
configure an instance's system properties that relate to GMS when you
create the instance. If you change GMS-related system properties for an
existing instance, the DAS and the instance must be restarted to apply
the changes. For information about GMS, see
link:clusters.html#gjfnl[Group Management Service].
[[GSHAG398]]
Before You Begin
If you plan to specify the node on which the instance is to reside,
ensure that the node exists.
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
If you create the instance on a host for which no nodes are defined, you
can create the instance without creating a node beforehand. In this
situation, GlassFish Server creates a `CONFIG` node for you. The name of
the node is the unqualified name of the host.
|=======================================================================
For information about how to create a node, see the following sections:
* link:nodes.html#CHDIGBJB[To Create a `DCOM` Node]
* link:nodes.html#gkrnf[To Create an `SSH` Node]
* link:nodes.html#gkrll[To Create a `CONFIG` Node]
If you are adding the instance to a cluster, ensure that the cluster to
which you are adding the instance exists. For information about how to
create a cluster, see link:clusters.html#gkqdm[To Create a Cluster].
If the instance is to reference an existing named configuration, ensure
that the configuration exists. For more information, see
link:named-configurations.html#abdjr[To Create a Named Configuration].
The instance might be a clustered instance that is managed by GMS and
resides on a node that represents a multihome host. In this situation,
ensure that you have the Internet Protocol (IP) address of the network
interface to which GMS binds.
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Log in to the host that is represented by the node where the
instance is to reside.
3. Run the `create-local-instance` subcommand. +
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Only the options that are required to complete this task are provided in
this step. For information about all the options for configuring the
instance, see the link:../reference-manual/create-local-instance.html#GSRFM00044[`create-local-instance`(1)] help
page.
|=======================================================================
* If you are creating a standalone instance, do not specify a cluster. +
If the instance is to reference an existing configuration, specify a
configuration that no other cluster or instance references. +
[source,oac_no_warn]
----
$ asadmin --host das-host [--port admin-port]
create-local-instance [--node node-name] [--config configuration-name] instance-name
----
das-host::
The name of the host where the DAS is running.
admin-port::
The HTTP or HTTPS port on which the DAS listens for administration
requests. If the DAS listens on the default port for administration
requests, you may omit this option.
node-name::
The node on which the instance is to reside. +
If you are creating the instance on a host for which fewer than two
nodes are defined, you may omit this option. +
If no nodes are defined for the host, GlassFish Server creates a
`CONFIG` node for you. The name of the node is the unqualified name of
the host. +
If one node is defined for the host, the instance is created on that
node.
configuration-name::
The name of the existing named configuration that the instance will
reference. +
If you do not require the instance to reference an existing
configuration, omit this option. A copy of the `default-config`
configuration is created for the instance. The name of this
configuration is instance-name`-config`, where instance-name is the
name of the server instance.
instance-name::
Your choice of name for the instance that you are creating.
* If you are creating a shared instance, specify the configuration that
the instance will share with other clusters or instances. +
Do not specify a cluster. +
[source,oac_no_warn]
----
$ asadmin --host das-host [--port admin-port]
create-local-instance [--node node-name] --config configuration-name instance-name
----
das-host::
The name of the host where the DAS is running.
admin-port::
The HTTP or HTTPS port on which the DAS listens for administration
requests. If the DAS listens on the default port for administration
requests, you may omit this option.
node-name::
The node on which the instance is to reside. +
If you are creating the instance on a host for which fewer than two
nodes are defined, you may omit this option. +
If no nodes are defined for the host, GlassFish Server creates a
`CONFIG` node for you. The name of the node is the unqualified name of
the host. +
If one node is defined for the host, the instance is created on that
node.
configuration-name::
The name of the existing named configuration that the instance will
reference.
instance-name::
Your choice of name for the instance that you are creating.
* If you are creating a clustered instance, specify the cluster to which
the instance will belong. +
If the instance is managed by GMS and resides on a node that represents
a multihome host, specify the `GMS-BIND-INTERFACE-ADDRESS-`cluster-name
system property. +
[source,oac_no_warn]
----
$ asadmin --host das-host [--port admin-port]
create-local-instance --cluster cluster-name [--node node-name]
[--systemproperties GMS-BIND-INTERFACE-ADDRESS-cluster-name=bind-address] instance-name
----
das-host::
The name of the host where the DAS is running.
admin-port::
The HTTP or HTTPS port on which the DAS listens for administration
requests. If the DAS listens on the default port for administration
requests, you may omit this option.
cluster-name::
The name of the cluster to which you are adding the instance.
node-name::
The node on which the instance is to reside. +
If you are creating the instance on a host for which fewer than two
nodes are defined, you may omit this option. +
If no nodes are defined for the host, GlassFish Server creates a
`CONFIG` node for you. The name of the node is the unqualified name of
the host. +
If one node is defined for the host, the instance is created on that
node.
bind-address::
The IP address of the network interface to which GMS binds. Specify
this option only if the instance is managed by GMS and resides on a
node that represents a multihome host.
instance-name::
Your choice of name for the instance that you are creating.
[[GSHAG00048]][[gktfa]]
Example 5-10 Creating a Clustered Instance Locally Without Specifying a
Node
This example adds the instance `kui-i1` to the cluster `kuicluster`
locally. The `CONFIG` node `xk01` is created automatically to represent
the host `xk01.example.com`, on which this example is run. The DAS is
running on the host `dashost.example.com` and listens for administration
requests on the default port.
The commands to list the nodes in the domain are included in this
example only to demonstrate the creation of the node `xk01`. These
commands are not required to create the instance.
[source,oac_no_warn]
----
$ asadmin --host dashost.example.com list-nodes --long
NODE NAME TYPE NODE HOST INSTALL DIRECTORY REFERENCED BY
localhost-domain1 CONFIG localhost /export/glassfish3
Command list-nodes executed successfully.
$ asadmin --host dashost.example.com
create-local-instance --cluster kuicluster kui-i1
Rendezvoused with DAS on dashost.example.com:4848.
Port Assignments for server instance kui-i1:
JMX_SYSTEM_CONNECTOR_PORT=28687
JMS_PROVIDER_PORT=27677
HTTP_LISTENER_PORT=28081
ASADMIN_LISTENER_PORT=24849
JAVA_DEBUGGER_PORT=29009
IIOP_SSL_LISTENER_PORT=23820
IIOP_LISTENER_PORT=23700
OSGI_SHELL_TELNET_PORT=26666
HTTP_SSL_LISTENER_PORT=28182
IIOP_SSL_MUTUALAUTH_PORT=23920
Command create-local-instance executed successfully.
$ asadmin --host dashost.example.com list-nodes --long
NODE NAME TYPE NODE HOST INSTALL DIRECTORY REFERENCED BY
localhost-domain1 CONFIG localhost /export/glassfish3
xk01 CONFIG xk01.example.com /export/glassfish3 kui-i1
Command list-nodes executed successfully.
----
[[GSHAG00049]][[gkqps]]
Example 5-11 Creating a Clustered Instance Locally
This example adds the instance `yml-i1` to the cluster `ymlcluster`
locally. The instance resides on the node `sj01`. The DAS is running on
the host `das1.example.com` and listens for administration requests on
the default port.
[source,oac_no_warn]
----
$ asadmin --host das1.example.com
create-local-instance --cluster ymlcluster --node sj01 yml-i1
Rendezvoused with DAS on das1.example.com:4848.
Port Assignments for server instance yml-i1:
JMX_SYSTEM_CONNECTOR_PORT=28687
JMS_PROVIDER_PORT=27677
HTTP_LISTENER_PORT=28081
ASADMIN_LISTENER_PORT=24849
JAVA_DEBUGGER_PORT=29009
IIOP_SSL_LISTENER_PORT=23820
IIOP_LISTENER_PORT=23700
OSGI_SHELL_TELNET_PORT=26666
HTTP_SSL_LISTENER_PORT=28182
IIOP_SSL_MUTUALAUTH_PORT=23920
Command create-local-instance executed successfully.
----
[[GSHAG399]]
See Also
* link:nodes.html#CHDIGBJB[To Create a `DCOM` Node]
* link:nodes.html#gkrnf[To Create an `SSH` Node]
* link:nodes.html#gkrll[To Create a `CONFIG` Node]
* link:../reference-manual/create-local-instance.html#GSRFM00044[`create-local-instance`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help create-local-instance` at the command line.
[[GSHAG400]]
Next Steps
After creating an instance, you can start the instance as explained in
the following sections:
* link:#gkqaw[To Start an Individual Instance Centrally]
* link:#gkqak[To Start an Individual Instance Locally]
[[gkqed]][[GSHAG00115]][[to-delete-an-instance-locally]]
To Delete an Instance Locally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `delete-local-instance` subcommand in remote mode to delete a
GlassFish Server instance locally.
[width="100%",cols="<100%",]
|=======================================================================
a|
Caution:
If you are using a Java Message Service (JMS) cluster with a master
broker, do not delete the instance that is associated with the master
broker. If this instance must be deleted, use the
link:../reference-manual/change-master-broker.html#GSRFM00005[`change-master-broker`] subcommand to assign the master
broker to a different instance.
|=======================================================================
Deleting an instance involves the following:
* Removing the instance from the configuration of the DAS
* Deleting the instance's files from the file system
[[GSHAG401]]
Before You Begin
Ensure that the instance that you are deleting is not running. For
information about how to stop an instance, see the following sections:
* link:#gkqaj[To Stop an Individual Instance Centrally]
* link:#gkqci[To Stop an Individual Instance Locally]
1. Ensure that the DAS is running. +
Remote subcommands require a running server.
2. Log in to the host that is represented by the node where the
instance resides.
3. Confirm that the instance is not running. +
[source,oac_no_warn]
----
$ asadmin --host das-host [--port admin-port]
list-instances instance-name
----
das-host::
The name of the host where the DAS is running.
admin-port::
The HTTP or HTTPS port on which the DAS listens for administration
requests. If the DAS listens on the default port for administration
requests, you may omit this option.
instance-name::
The name of the instance that you are deleting.
4. Run the link:../reference-manual/delete-local-instance.html#GSRFM00096[`delete-local-instance`] subcommand. +
[source,oac_no_warn]
----
$ asadmin --host das-host [--port admin-port]
delete-local-instance [--node node-name] instance-name
----
das-host::
The name of the host where the DAS is running.
admin-port::
The HTTP or HTTPS port on which the DAS listens for administration
requests. If the DAS listens on the default port for administration
requests, you may omit this option.
node-name::
The node on which the instance resides. If only one node is defined
for the GlassFish Server installation that you are running on the
node's host, you may omit this option.
instance-name::
The name of the instance that you are deleting.
[[GSHAG00050]][[gkqqu]]
Example 5-12 Deleting an Instance Locally
This example confirms that the instance `yml-i1` is not running and
deletes the instance.
[source,oac_no_warn]
----
$ asadmin --host das1.example.com list-instances yml-i1
yml-i1 not running
Command list-instances executed successfully.
$ asadmin --host das1.example.com delete-local-instance --node sj01 yml-i1
Command delete-local-instance executed successfully.
----
[[GSHAG402]]
See Also
* link:#gkqaj[To Stop an Individual Instance Centrally]
* link:#gkqci[To Stop an Individual Instance Locally]
* link:../reference-manual/change-master-broker.html#GSRFM00005[`change-master-broker`(1)]
* link:../reference-manual/delete-local-instance.html#GSRFM00096[`delete-local-instance`(1)]
* link:../reference-manual/list-instances.html#GSRFM00170[`list-instances`(1)]
You can also view the full syntax and options of the subcommands by
typing the following commands at the command line:
* `asadmin help delete-local-instance`
* `asadmin help list-instances`
[[gkqak]][[GSHAG00116]][[to-start-an-individual-instance-locally]]
To Start an Individual Instance Locally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `start-local-instance` subcommand in local mode to start an
individual instance locally.
1. Log in to the host that is represented by the node where the
instance resides.
2. Run the `start-local-instance` subcommand. +
[source,oac_no_warn]
----
$ asadmin start-local-instance [--node node-name] instance-name
----
::
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Only the options that are required to complete this task are provided in
this step. For information about all the options for controlling the
behavior of the instance, see the
link:../reference-manual/start-local-instance.html#GSRFM00237[`start-local-instance`(1)] help page.
|=======================================================================
node-name::
The node on which the instance resides. If only one node is defined
for the GlassFish Server installation that you are running on the
node's host, you may omit this option.
instance-name::
The name of the instance that you are starting.
[[GSHAG00051]][[gkqpu]]
Example 5-13 Starting an Individual Instance Locally
This example starts the instance `yml-i1` locally. The instance resides
on the node `sj01`.
[source,oac_no_warn]
----
$ asadmin start-local-instance --node sj01 yml-i1
Waiting for yml-i1 to start ...............
Successfully started the instance: yml-i1
instance Location: /export/glassfish3/glassfish/nodes/sj01/yml-i1
Log File: /export/glassfish3/glassfish/nodes/sj01/yml-i1/logs/server.log
Admin Port: 24849
Command start-local-instance executed successfully.
----
[[GSHAG403]]
See Also
link:../reference-manual/start-local-instance.html#GSRFM00237[`start-local-instance`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help start-local-instance` at the command line.
[[GSHAG404]]
Next Steps
After starting an instance, you can deploy applications to the instance.
For more information, see the link:../application-deployment-guide/toc.html#GSDPG[GlassFish Server Open Source
Edition Application Deployment Guide].
[[gkqci]][[GSHAG00117]][[to-stop-an-individual-instance-locally]]
To Stop an Individual Instance Locally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `stop-local-instance` subcommand in local mode to stop an
individual instance locally.
When an instance is stopped, the instance stops accepting new requests
and waits for all outstanding requests to be completed.
1. Log in to the host that is represented by the node where the
instance resides.
2. Run the link:../reference-manual/stop-local-instance.html#GSRFM00242[`stop-local-instance`] subcommand. +
[source,oac_no_warn]
----
$ asadmin stop-local-instance [--node node-name] instance-name
----
node-name::
The node on which the instance resides. If only one node is defined
for the GlassFish Server installation that you are running on the
node's host, you may omit this option.
instance-name::
The name of the instance that you are stopping.
[[GSHAG00052]][[gkqoo]]
Example 5-14 Stopping an Individual Instance Locally
This example stops the instance `yml-i1` locally. The instance resides
on the node `sj01`.
[source,oac_no_warn]
----
$ asadmin stop-local-instance --node sj01 yml-i1
Waiting for the instance to stop ....
Command stop-local-instance executed successfully.
----
[[GSHAG405]]
See Also
link:../reference-manual/stop-local-instance.html#GSRFM00242[`stop-local-instance`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help stop-local-instance` at the command line.
[[GSHAG406]]
Troubleshooting
If the instance has become unresponsive and fails to stop, run the
subcommand again with the `--kill` option set to `true`. When this
option is `true`, the subcommand uses functionality of the operating
system to kill the instance process.
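For example, the following command forcibly stops the instance `yml-i1`
on the node `sj01` from Example 5-14:
[source,oac_no_warn]
----
$ asadmin stop-local-instance --kill=true --node sj01 yml-i1
----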
[[gkqef]][[GSHAG00118]][[to-restart-an-individual-instance-locally]]
To Restart an Individual Instance Locally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use the `restart-local-instance` subcommand in local mode to restart an
individual instance locally.
When this subcommand restarts an instance, the DAS synchronizes the
instance with changes since the last synchronization as described in
link:#gksbo[Default Synchronization for Files and Directories].
If you require different synchronization behavior, stop and start the
instance as explained in link:#gksak[To Resynchronize an Instance and
the DAS Online].
1. Log in to the host that is represented by the node where the
instance resides.
2. Run the `restart-local-instance` subcommand. +
[source,oac_no_warn]
----
$ asadmin restart-local-instance [--node node-name] instance-name
----
node-name::
The node on which the instance resides. If only one node is defined
for the GlassFish Server installation that you are running on the
node's host, you may omit this option.
instance-name::
The name of the instance that you are restarting.
[[GSHAG00053]][[gkqnt]]
Example 5-15 Restarting an Individual Instance Locally
This example restarts the instance `yml-i1` locally. The instance
resides on the node `sj01`.
[source,oac_no_warn]
----
$ asadmin restart-local-instance --node sj01 yml-i1
Command restart-local-instance executed successfully.
----
[[GSHAG407]]
See Also
link:../reference-manual/restart-local-instance.html#GSRFM00220[`restart-local-instance`(1)]
You can also view the full syntax and options of the subcommand by
typing `asadmin help restart-local-instance` at the command line.
[[GSHAG408]]
Troubleshooting
If the instance has become unresponsive and fails to stop, run the
subcommand again with the `--kill` option set to `true`. When this
option is `true`, the subcommand uses functionality of the operating
system to kill the instance process before restarting the instance.
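For example, based on the preceding description, the following command
kills the process for the instance `yml-i1` on the node `sj01` before
restarting the instance:
[source,oac_no_warn]
----
$ asadmin restart-local-instance --kill=true --node sj01 yml-i1
----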
[[gkrdd]][[GSHAG00189]][[resynchronizing-glassfish-server-instances-and-the-das]]
Resynchronizing GlassFish Server Instances and the DAS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configuration data for a GlassFish Server instance is stored as follows:
* In the repository of the domain administration server (DAS)
* In a cache on the host that is local to the instance
The configuration data in these locations must be synchronized. The
cache is synchronized in the following circumstances:
* Whenever an `asadmin` subcommand is run. For more information, see
"link:../administration-guide/overview.html#GSADG00697[Impact of Configuration Changes]" in GlassFish Server
Open Source Edition Administration Guide.
* When a user uses the administration tools to start or restart an
instance.
[[gksbo]][[GSHAG00267]][[default-synchronization-for-files-and-directories]]
Default Synchronization for Files and Directories
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The `--sync` option of the subcommands for starting an instance controls
the type of synchronization between the DAS and the instance's files
when the instance is started. You can use this option to override the
default synchronization behavior for the files and directories of an
instance. For more information, see link:#gksak[To Resynchronize an
Instance and the DAS Online].
On the DAS, the files and directories of an instance are stored in the
domain-dir directory, where domain-dir is the directory in which a
domain's configuration is stored. The default synchronization behavior
for the files and directories of an instance is as follows:
`applications`::
This directory contains a subdirectory for each application that is
deployed to the instance. +
By default, only a change to an application's top-level directory
within the application directory causes the DAS to synchronize that
application's directory. When the DAS resynchronizes the
`applications` directory, all the application's files and all
generated content that is related to the application are copied to the
instance. +
If a file below a top-level subdirectory is changed without a change
to a file in the top-level subdirectory, full synchronization is
required. In normal operation, files below the top-level
subdirectories of these directories are not changed and such files
should not be changed by users. If an application is deployed and
undeployed, full synchronization is not necessary to update the
instance with the change.
`config`::
This directory contains configuration files for the entire domain. +
By default, the DAS resynchronizes files that have been modified since
the last resynchronization only if the `domain.xml` file in this
directory has been modified. +
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
If you add a file to the `config` directory of an instance, the file
is deleted when the instance is resynchronized with the DAS. However,
any file that you add to the `config` directory of the DAS is not
deleted when instances and the DAS are resynchronized. By default, any
file that you add to the `config` directory of the DAS is not
resynchronized. If you require any additional configuration files to
be resynchronized, you must specify the files explicitly. For more
information, see link:#gksaz[To Resynchronize Additional Configuration
Files].
|=======================================================================
`config/`config-name::
This directory contains files that are to be shared by all instances
that reference the named configuration config-name. A config-name
directory exists for each named configuration in the configuration of
the DAS. +
Because the config-name directory contains the subdirectories `lib`
and `docroot`, this directory might be very large. Therefore, by
default, only a change to a file or a top-level subdirectory of
config-name causes the DAS to resynchronize the config-name directory.
`config/domain.xml`::
This file contains the DAS configuration for the domain to which the
instance belongs. +
By default, the DAS resynchronizes this file if it has been modified
since the last resynchronization. +
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
A change to the `config/domain.xml` file is required to cause the DAS
to resynchronize an instance's files. If the `config/domain.xml` file
has not changed since the last resynchronization, none of the
instance's files is resynchronized, even if some of these files are
out of date in the cache.
|=======================================================================
`docroot`::
This directory is the HTTP document root directory. By default, all
instances in a domain use the same document root directory. To enable
instances to use a different document root directory, a virtual server
must be created in which the `docroot` property is set. For more
information, see the link:../reference-manual/create-virtual-server.html#GSRFM00062[`create-virtual-server`(1)] help
page. +
The `docroot` directory might be very large. Therefore, by default,
only a change to a file or a subdirectory in the top level of the
`docroot` directory causes the DAS to resynchronize the `docroot`
directory. The DAS checks files in the top level of the `docroot`
directory to ensure that changes to the `index.html` file are
detected. +
When the DAS resynchronizes the `docroot` directory, all modified
files and subdirectories at any level are copied to the instance. +
If a file below a top-level subdirectory is changed without a change
to a file in the top-level subdirectory, full synchronization is
required.
`generated`::
This directory contains generated files for Java EE applications and
modules, for example, EJB stubs, compiled JSP classes, and security
policy files. Do not modify the contents of this directory. +
This directory is resynchronized when the `applications` directory is
resynchronized. Therefore, only directories for applications that are
deployed to the instance are resynchronized.
`java-web-start`::
This directory is not resynchronized. It is created and populated as
required on each instance.
`lib`::
`lib/classes`::
These directories contain common Java class files or JAR archives and
ZIP archives for use by applications that are deployed to the entire
domain. Typically, these directories contain common JDBC drivers and
other utility libraries that are shared by all applications in the
domain. +
The contents of these directories are loaded by the common class
loader. For more information, see "link:../application-development-guide/class-loaders.html#GSDVG00342[Using the Common
Class Loader]" in GlassFish Server Open Source Edition Application
Development Guide. The class loader loads the contents of these
directories in the following order: +
1. `lib/classes`
2. `lib/*.jar`
3. `lib/*.zip` +
The `lib` directory also contains the following subdirectories: +
`applibs`;;
This directory contains application-specific Java class files or JAR
archives and ZIP archives for use by applications that are deployed
to the entire domain.
`ext`;;
This directory contains optional packages in JAR archives and ZIP
archives for use by applications that are deployed to the entire
domain. These archive files are loaded by using the Java extension
mechanism. For more information, see
http://download.oracle.com/javase/6/docs/technotes/guides/extensions/extensions.html[Optional
Packages - An Overview]
(`http://docs.oracle.com/javase/7/docs/technotes/guides/extensions/extensions.html`). +
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Optional packages were formerly known as standard extensions or
extensions.
|=======================================================================
The `lib` directory and its subdirectories typically contain only a
small number of files. Therefore, by default, a change to any file in
these directories causes the DAS to resynchronize the file that
changed.
[[gksak]][[GSHAG00119]][[to-resynchronize-an-instance-and-the-das-online]]
To Resynchronize an Instance and the DAS Online
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Resynchronizing an instance and the DAS updates the instance with
changes to the instance's configuration files on the DAS. An instance is
resynchronized with the DAS when the instance is started or restarted.
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Resynchronization of an instance is only required if the instance is
stopped. A running instance does not require resynchronization.
|=======================================================================
1. Ensure that the DAS is running.
2. Determine whether the instance is stopped. +
[source,oac_no_warn]
----
asadmin> list-instances instance-name
----
instance-name::
The name of the instance that you are resynchronizing with the DAS. +
If the instance is stopped, the `list-instances` subcommand indicates
that the instance is not running.
3. If the instance is stopped, start the instance. +
If the instance is running, no further action is required.
* If DCOM or SSH is set up, start the instance centrally. +
If you require full synchronization, set the `--sync` option of the
`start-instance` subcommand to `full`. If default synchronization is
sufficient, omit this option. +
[source,oac_no_warn]
----
asadmin> start-instance [--sync full] instance-name
----
::
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Only the options that are required to complete this task are provided in
this step. For information about all the options for controlling the
behavior of the instance, see the link:../reference-manual/start-instance.html#GSRFM00236[`start-instance`(1)]
help page.
|=======================================================================
instance-name::
The name of the instance that you are starting.
* If neither DCOM nor SSH is set up, start the instance locally from the
host where the instance resides. +
If you require full synchronization, set the `--sync` option of the
`start-local-instance` subcommand to `full`. If default synchronization
is sufficient, omit this option. +
[source,oac_no_warn]
----
$ asadmin start-local-instance [--node node-name] [--sync full] instance-name
----
::
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Only the options that are required to complete this task are provided in
this step. For information about all the options for controlling the
behavior of the instance, see the
link:../reference-manual/start-local-instance.html#GSRFM00237[`start-local-instance`(1)] help page.
|=======================================================================
node-name::
The node on which the instance resides. If only one node is defined
for the GlassFish Server installation that you are running on the
node's host, you may omit this option.
instance-name::
The name of the instance that you are starting.
[[GSHAG00054]][[gksfu]]
Example 5-16 Resynchronizing an Instance and the DAS Online
This example determines that the instance `yml-i1` is stopped and fully
resynchronizes the instance with the DAS. Because neither DCOM nor SSH
is set up, the instance is started locally on the host where the
instance resides. In this example, multiple nodes are defined for the
GlassFish Server installation that is running on the node's host.
To determine whether the instance is stopped, the following command is
run in multimode on the DAS host:
[source,oac_no_warn]
----
asadmin> list-instances yml-i1
yml-i1 not running
Command list-instances executed successfully.
----
To start the instance, the following command is run in single mode on
the host where the instance resides:
[source,oac_no_warn]
----
$ asadmin start-local-instance --node sj01 --sync full yml-i1
Removing all cached state for instance yml-i1.
Waiting for yml-i1 to start ...............
Successfully started the instance: yml-i1
instance Location: /export/glassfish3/glassfish/nodes/sj01/yml-i1
Log File: /export/glassfish3/glassfish/nodes/sj01/yml-i1/logs/server.log
Admin Port: 24849
Command start-local-instance executed successfully.
----
[[GSHAG409]]
See Also
* link:../reference-manual/list-instances.html#GSRFM00170[`list-instances`(1)]
* link:../reference-manual/start-instance.html#GSRFM00236[`start-instance`(1)]
* link:../reference-manual/start-local-instance.html#GSRFM00237[`start-local-instance`(1)]
You can also view the full syntax and options of the subcommands by
typing the following commands at the command line.
`asadmin help list-instances`
`asadmin help start-instance`
`asadmin help start-local-instance`
[[gksav]][[GSHAG00120]][[to-resynchronize-library-files]]
To Resynchronize Library Files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To ensure that library files are resynchronized correctly, you must
ensure that each library file is placed in the correct directory for the
type of file.
1. Place each library file in the correct location for the type of
library file as shown in the following table. +
[width="100%",cols="<53%,<47%",options="header",]
|=======================================================================
|Type of Library Files |Location
|Common JAR archives and ZIP archives for all applications in a domain.
|domain-dir`/lib`
|Common Java class files for all applications in a domain.
|domain-dir`/lib/classes`
|Application-specific libraries. |domain-dir`/lib/applibs`
|Optional packages for all applications in a domain.
|domain-dir`/lib/ext`
|Library files for all applications that are deployed to a specific
cluster or standalone instance. |domain-dir`/config/`config-name`/lib`
|Optional packages for all applications that are deployed to a specific
cluster or standalone instance.
|domain-dir`/config/`config-name`/lib/ext`
|=======================================================================
domain-dir::
The directory in which the domain's configuration is stored.
config-name::
For a standalone instance: the named configuration that the instance
references. +
For a clustered instance: the named configuration that the cluster to
which the instance belongs references.
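For example, a JAR file that should be available to all applications
deployed to a cluster whose instances reference the `cluster1-config`
configuration might be placed as follows (the configuration name, domain
directory, and JAR file name are hypothetical): +
[source,oac_no_warn]
----
$ cp commons-coll.jar /export/glassfish3/glassfish/domains/domain1/config/cluster1-config/lib/
----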
2. When you deploy an application that depends on these library files,
use the `--libraries` option of the deploy subcommand to specify these
dependencies. +
For library files in the domain-dir`/lib/applibs` directory, only the JAR
file name is required, for example: +
[source,oac_no_warn]
----
asadmin> deploy --libraries commons-coll.jar,X1.jar app.ear
----
For other types of library files, the full path to each library file is required.
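For example, a library in a configuration-specific `lib` directory might
be specified by its full path as follows (the paths and file names are
hypothetical): +
[source,oac_no_warn]
----
asadmin> deploy --libraries /export/glassfish3/glassfish/domains/domain1/config/cluster1-config/lib/commons-coll.jar app.ear
----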
[[GSHAG411]]
See Also
link:../reference-manual/deploy.html#GSRFM00114[`deploy`(1)]
You can also view the full syntax and options of the subcommand by
typing the command `asadmin help deploy` at the command line.
[[gksco]][[GSHAG00121]][[to-resynchronize-custom-configuration-files-for-an-instance]]
To Resynchronize Custom Configuration Files for an Instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configuration files in the domain-dir`/config` directory that are
resynchronized are resynchronized for the entire domain. If you create a
custom configuration file for an instance or a cluster, the custom file
is resynchronized only for the instance or cluster.
1. Place the custom configuration file in the
domain-dir`/config/`config-name directory. +
domain-dir::
The directory in which the domain's configuration is stored.
config-name::
The named configuration that the instance references.
2. If the instance locates the file through an option of the Java
application launcher, update the option.
1. Delete the option. +
[source,oac_no_warn]
----
asadmin> delete-jvm-options --target instance-name
option-name=current-value
----
instance-name::
The name of the instance for which the custom configuration file is
created.
option-name::
The name of the option for locating the file.
current-value::
The current value of the option for locating the file.
2. Re-create the option that you deleted in the previous step. +
[source,oac_no_warn]
----
asadmin> create-jvm-options --target instance-name
option-name=new-value
----
instance-name::
The name of the instance for which the custom configuration file is
created.
option-name::
The name of the option for locating the file.
new-value::
The new value of the option for locating the file.
[[GSHAG00055]][[gksfr]]
Example 5-17 Updating the Option for Locating a Configuration File
This example updates the option for locating the `server.policy` file to
specify a custom file for the instance `pmd`.
[source,oac_no_warn]
----
asadmin> delete-jvm-options --target pmd
-Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy
Deleted 1 option(s)
Command delete-jvm-options executed successfully.
asadmin> create-jvm-options --target pmd
-Djava.security.policy=${com.sun.aas.instanceRoot}/config/pmd-config/server.policy
Created 1 option(s)
Command create-jvm-options executed successfully.
----
[[GSHAG412]]
See Also
* link:../reference-manual/create-jvm-options.html#GSRFM00042[`create-jvm-options`(1)]
* link:../reference-manual/delete-jvm-options.html#GSRFM00094[`delete-jvm-options`(1)]
You can also view the full syntax and options of the subcommands by
typing the following commands at the command line.
`asadmin help create-jvm-options`
`asadmin help delete-jvm-options`
[[gkscp]][[GSHAG00122]][[to-resynchronize-users-changes-to-files]]
To Resynchronize Users' Changes to Files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A change to the `config/domain.xml` file is required to cause the DAS to
resynchronize instances' files. If other files in the domain directory
are changed without a change to the `config/domain.xml` file, instances
are not resynchronized with these changes.
The following changes are examples of changes to the domain directory
without a change to the `config/domain.xml` file:
* Adding files to the `lib` directory
* Adding certificates to the key store by using the `keytool` command
1. Change the last modified time of the `config/domain.xml` file. +
Exactly how to change the last modified time depends on the operating
system. For example, on UNIX and Linux systems, you can use the
http://www.oracle.com/pls/topic/lookup?ctx=E18752&id=REFMAN1touch-1[`touch`(1)]
command.
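For example, the following command updates the last modified time of the
`config/domain.xml` file, assuming a domain directory of
`/export/glassfish3/glassfish/domains/domain1` (a hypothetical path): +
[source,oac_no_warn]
----
$ touch /export/glassfish3/glassfish/domains/domain1/config/domain.xml
----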
2. Resynchronize each instance in the domain with the DAS. +
For instructions, see link:#gksak[To Resynchronize an Instance and the
DAS Online].
[[GSHAG413]]
See Also
* link:#gksak[To Resynchronize an Instance and the DAS Online]
* http://www.oracle.com/pls/topic/lookup?ctx=E18752&id=REFMAN1touch-1[`touch`(1)]
[[gksaz]][[GSHAG00123]][[to-resynchronize-additional-configuration-files]]
To Resynchronize Additional Configuration Files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, GlassFish Server synchronizes only the following
configuration files:
* `admin-keyfile`
* `cacerts.jks`
* `default-web.xml`
* `domain.xml`
* `domain-passwords`
* `keyfile`
* `keystore.jks`
* `server.policy`
* `sun-acc.xml`
* `wss-server-config-1.0.xml`
* `wss-server-config-2.0.xml`
If you require instances in a domain to be resynchronized with
additional configuration files for the domain, you can specify a list of
files to resynchronize.
[width="100%",cols="<100%",]
|=======================================================================
a|
Caution:
If you specify a list of files to resynchronize, you must specify all
the files that the instances require, including the files that GlassFish
Server resynchronizes by default. Any file in the instance's cache that
is not in the list is deleted when the instance is resynchronized with
the DAS.
|=======================================================================
In the `config` directory of the domain, create a plain text file named
`config-files` that lists the files to resynchronize.
In the `config-files` file, list each file name on a separate line.
[[GSHAG00056]][[gksgl]]
Example 5-18 `config-files` File
This example shows the content of a `config-files` file. This file
specifies that the `some-other-info` file is to be resynchronized in
addition to the files that GlassFish Server resynchronizes by default:
[source,oac_no_warn]
----
admin-keyfile
cacerts.jks
default-web.xml
domain.xml
domain-passwords
keyfile
keystore.jks
server.policy
sun-acc.xml
wss-server-config-1.0.xml
wss-server-config-2.0.xml
some-other-info
----
[[gksdj]][[GSHAG00124]][[to-prevent-deletion-of-application-generated-files]]
To Prevent Deletion of Application-Generated Files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When the DAS resynchronizes an instance's files, the DAS deletes from
the instance's cache any files that are not listed for
resynchronization. If an application creates files in a directory that
the DAS resynchronizes, these files are deleted when the instance is
resynchronized with the DAS.
Put the files in a subdirectory of the domain directory that GlassFish
Server does not define, for example,
`/export/glassfish3/glassfish/domains/domain1/myapp/myfile`.
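A minimal sketch of creating such a directory, using the hypothetical
path from the example above: +
[source,oac_no_warn]
----
$ mkdir -p /export/glassfish3/glassfish/domains/domain1/myapp
----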
[[gksdy]][[GSHAG00125]][[to-resynchronize-an-instance-and-the-das-offline]]
To Resynchronize an Instance and the DAS Offline
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Resynchronizing an instance and the DAS offline updates the instance's
cache without the need for the instance to be able to communicate with
the DAS. Offline resynchronization is typically required for the
following reasons:
* To reestablish the instance after an upgrade
* To synchronize the instance manually with the DAS when the instance
cannot contact the DAS
1. Ensure that the DAS is running.
2. [[gktio]]
Export the configuration data that you are resynchronizing to an archive
file.
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Only the options that are required to complete this task are provided in
this step. For information about all the options for exporting the
configuration data, see the link:../reference-manual/export-sync-bundle.html#GSRFM00134[`export-sync-bundle`(1)]
help page.
|=======================================================================
How to export the data depends on the host from which you run the
`export-sync-bundle` subcommand.
* From the DAS host, run the `export-sync-bundle` subcommand as follows: +
[source,oac_no_warn]
----
asadmin> export-sync-bundle --target target
----
target::
The cluster or standalone instance for which to export configuration
data. +
Do not specify a clustered instance. If you specify a clustered
instance, an error occurs. To export configuration data for a
clustered instance, specify the name of the cluster of which the
instance is a member, not the instance. +
The file is created on the DAS host.
* From the host where the instance resides, run the `export-sync-bundle`
subcommand as follows: +
[source,oac_no_warn]
----
$ asadmin --host das-host [--port admin-port]
export-sync-bundle [--retrieve=true] --target target
----
das-host::
The name of the host where the DAS is running.
admin-port::
The HTTP or HTTPS port on which the DAS listens for administration
requests. If the DAS listens on the default port for administration
requests, you may omit this option.
target::
The cluster or standalone instance for which to export configuration
data. +
Do not specify a clustered instance. If you specify a clustered
instance, an error occurs. To export configuration data for a
clustered instance, specify the name of the cluster of which the
instance is a member, not the instance. +
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
To create the archive file on the host where the instance resides, set
the `--retrieve` option to `true`. If you omit this option, the archive
file is created on the DAS host.
|=======================================================================
3. If necessary, copy the archive file that you created in
Step link:#gktio[2] from the DAS host to the host where the instance
resides.
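For example, you might copy the archive with a standard file-transfer
tool such as `scp` (the file name, user name, host name, and destination
directory are hypothetical): +
[source,oac_no_warn]
----
$ scp ymlcluster-sync-bundle.zip gfadmin@instancehost.example.com:/export/glassfish3/
----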
4. From the host where the instance resides, import the instance's
configuration data from the archive file that you created in
Step link:#gktio[2]. +
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
Only the options that are required to complete this task are provided in
this step. For information about all the options for importing the
configuration data, see the link:../reference-manual/import-sync-bundle.html#GSRFM00142[`import-sync-bundle`(1)]
help page.
|=======================================================================
[source,oac_no_warn]
----
$ asadmin import-sync-bundle [--node node-name] --instance instance-name archive-file
----
node-name::
The node on which the instance resides. If you omit this option, the
subcommand determines the node from the DAS configuration in the
archive file.
instance-name::
The name of the instance that you are resynchronizing.
archive-file::
The name, including the path, of the archive file to import.
[[GSHAG00057]][[gksgg]]
Example 5-19 Resynchronizing an Instance and the DAS Offline
This example resynchronizes the clustered instance `yml-i1` and the DAS
offline. The instance is a member of the cluster `ymlcluster`. The
archive file that contains the instance's configuration data is created
on the host where the instance resides.
[source,oac_no_warn]
----
$ asadmin --host dashost.example.com
export-sync-bundle --retrieve=true --target ymlcluster
Command export-sync-bundle executed successfully.
$ asadmin import-sync-bundle --node sj01
--instance yml-i1 ymlcluster-sync-bundle.zip
Command import-sync-bundle executed successfully.
----
[[GSHAG414]]
See Also
* link:../reference-manual/export-sync-bundle.html#GSRFM00134[`export-sync-bundle`(1)]
* link:../reference-manual/import-sync-bundle.html#GSRFM00142[`import-sync-bundle`(1)]
You can also view the full syntax and options of the subcommands by
typing the following commands at the command line.
`asadmin help export-sync-bundle`
`asadmin help import-sync-bundle`
[[gkqcr]][[GSHAG00190]][[migrating-ejb-timers]]
Migrating EJB Timers
~~~~~~~~~~~~~~~~~~~~
If a GlassFish Server server instance stops or fails abnormally, it may
be desirable to migrate the EJB timers defined for that stopped server
instance to another running server instance.
Automatic timer migration is enabled by default for clustered server
instances that are stopped normally. Automatic timer migration can also
be enabled to handle clustered server instance crashes. In addition,
timers can be migrated manually for stopped or crashed server instances.
* link:#gkvwo[To Enable Automatic EJB Timer Migration for Failed
Clustered Instances]
* link:#abdji[To Migrate EJB Timers Manually]
[[gkvwo]][[GSHAG00126]][[to-enable-automatic-ejb-timer-migration-for-failed-clustered-instances]]
To Enable Automatic EJB Timer Migration for Failed Clustered Instances
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatic migration of EJB timers is enabled by default for clustered
server instances that are stopped normally. If the Group Management
Service (GMS) is enabled and a clustered instance is stopped normally,
no further action is required for timer migration to occur. The
procedure in this section is only necessary if you want to enable
automatic timer migration for clustered server instances that have
stopped abnormally.
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
If the GMS is enabled, the default automatic timer migration cannot be
disabled. To disable automatic timer migration, you must first disable
the GMS. For information about the GMS, see
link:clusters.html#gjfnl[Group Management Service].
|=======================================================================
[[GSHAG415]]
Before You Begin
Automatic EJB timer migration can only be configured for clustered
server instances. Automatic timer migration is not possible for
standalone server instances.
Enable delegated transaction recovery for the cluster.
This enables automatic timer migration for failed server instances in
the cluster.
For instructions on enabling delegated transaction recovery, see
"link:../administration-guide/transactions.html#GSADG00022[Administering Transactions]" in GlassFish Server Open
Source Edition Administration Guide.
[[abdji]][[GSHAG00127]][[to-migrate-ejb-timers-manually]]
To Migrate EJB Timers Manually
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
EJB timers can be migrated manually from a stopped source instance to a
specified target instance in the same cluster if GMS notification is not
enabled. If no target instance is specified, the DAS will attempt to
find a suitable server instance. A migration notification will then be
sent to the selected target server instance.
Note the following restrictions:
* If the source instance is part of a cluster, then the target instance
must also be part of that same cluster.
* It is not possible to migrate timers from a standalone instance to a
clustered instance, or from one cluster to another cluster.
* It is not possible to migrate timers from one standalone instance to
another standalone instance.
* All EJB timers defined for a given instance are migrated with this
procedure. It is not possible to migrate individual timers.
[[GSHAG416]]
Before You Begin
The server instance from which the EJB timers are to be migrated should
not be active during the migration process.
1. Verify that the source clustered server instance from which the EJB
timers are to be migrated is not currently running. +
[source,oac_no_warn]
----
asadmin> list-instances source-instance
----
2. Stop the instance from which the timers are to be migrated, if that
instance is still running. +
[source,oac_no_warn]
----
asadmin> stop-instance source-instance
----
::
[width="100%",cols="<100%",]
|=======================================================================
a|
Note:
The target instance to which the timers will be migrated should be
running.
|=======================================================================
3. List the currently defined EJB timers on the source instance, if
desired. +
[source,oac_no_warn]
----
asadmin> list-timers source-cluster
----
4. Migrate the timers from the stopped source instance to the target
instance. +
[source,oac_no_warn]
----
asadmin> migrate-timers --target target-instance source-instance
----
[[GSHAG00058]][[gkmgw]]
Example 5-20 Migrating an EJB Timer
The following example shows how to migrate timers from a clustered source
instance named `football` to a clustered target instance named `soccer`.
[source,oac_no_warn]
----
asadmin> migrate-timers --target soccer football
----
[[GSHAG417]]
See Also
link:../reference-manual/list-timers.html#GSRFM00205[`list-timers`(1)],
link:../reference-manual/migrate-timers.html#GSRFM00211[`migrate-timers`(1)],
link:../reference-manual/list-instances.html#GSRFM00170[`list-instances`(1)],
link:../reference-manual/stop-instance.html#GSRFM00241[`stop-instance`(1)]