8 changes: 4 additions & 4 deletions src/content/docs/about/authoring.md
@@ -28,7 +28,7 @@ Start a new line with between 2 and 6 `#` symbols, followed by a single space, a
## Example second-level heading
```

The number of `#` symbols corresponds to the heading level in the document hierarchy. **The first heading level is reserved for the page title** (available in the page [frontmatter](#frontmatter)). Therefore the first _authored_ heading on every page should be a second level heading (`##`).
The number of `#` symbols corresponds to the heading level in the document hierarchy. **The first heading level is reserved for the page title** (available in the page [frontmatter](#frontmatter)). Therefore the first *authored* heading on every page should be a second level heading (`##`).

:::note[Second level heading requirement]
Authored headings should start at the second level (`##`) on every page, since the first level (`#`) is reserved for the page title which is machine-generated.
@@ -112,16 +112,16 @@ A section heading's `id` is usually the same text string as the heading itself,

### Emphasizing text

Wrap text to be emphasized with `_ ` for italics, `**` for bold, and `~~` for strikethrough.
Wrap text to be emphasized with `*` for italics, `**` for bold, and `~~` for strikethrough.

```md
<!-- example.md -->

_Italicized_ text
*Italicized* text

**Bold** text

**_Bold and italicized_** text
***Bold and italicized*** text

~~Strikethrough~~ text
```
6 changes: 3 additions & 3 deletions src/content/docs/administration/backup.md
@@ -69,7 +69,7 @@ ArchivesSpace provides simple scripts for Windows and Unix-like systems for back

### When using the embedded demo database

Note: _NEVER use the demo database in production._. You can run:
Note: *NEVER use the demo database in production.* You can run:

```shell
scripts/backup.sh --output /path/to/backup-yyyymmdd.zip
@@ -104,8 +104,8 @@ your `mysqldump` backup into an empty database. If you are using the
`scripts/backup.sh` script (described above), this dump file is named
`mysqldump.sql` in your backup `.zip` file.

To load a MySQL dump file, follow the directions in _Set up your MySQL
database_ to create an empty database with the appropriate
To load a MySQL dump file, follow the directions in *Set up your MySQL
database* to create an empty database with the appropriate
permissions. Then, populate the database from your backup file using
the MySQL client:
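For reference, a typical restore invocation looks like the sketch below — the host, credentials, and database name are illustrative only; substitute the values from your own MySQL setup:

```shell
# Load the dump produced by scripts/backup.sh into an empty database.
# Host, user, password, and database name are examples only.
mysql --host=127.0.0.1 --user=as --password=as123 archivesspace < mysqldump.sql
```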

20 changes: 10 additions & 10 deletions src/content/docs/administration/upgrading_1_5_0.md
@@ -32,29 +32,29 @@ Converting the container data model in version 1.4.2 and earlier versions of Arc

## Frequently Asked Questions

_How will my data be converted to the new model?_
*How will my data be converted to the new model?*

When your installation is upgraded to 1.5.0, the conversion will happen as part of the upgrade process.

_Can I continue to use the current model for containers and not convert to the new model?_
*Can I continue to use the current model for containers and not convert to the new model?*

Because it is such a substantial improvement (see the [new features list](#new-features-in-150) below), the new model is required for everyone using ArchivesSpace 1.5.0 and higher. The only way to continue using the current model is to never upgrade beyond 1.4.2.

_What if I’m already using the container management plugin made available to the community by Yale University?_
*What if I’m already using the container management plugin made available to the community by Yale University?*

Conversion of data created using the Yale container management plugin, or a local adaptation of the plugin, will also happen as part of the process of upgrading to 1.5.0. Some steps will be skipped when they are not needed. At the end of the process, the new container data model will be integrated into your ArchivesSpace and will not need to be loaded or maintained as a plugin.

Those currently running the container management plugin will need to remove the container management plugin from the list in your config file prior to starting the conversion or a validation name error will occur.

_I haven’t moved from Archivists’ Toolkit or Archon yet and am planning to use the associated migration tool. Can I migrate directly to 1.5.0?_
*I haven’t moved from Archivists’ Toolkit or Archon yet and am planning to use the associated migration tool. Can I migrate directly to 1.5.0?*

No, you must migrate to 1.4.2 or earlier versions and then upgrade your installation to 1.5.0 according to the instructions provided here.

_What changes are being made to the previous model for containers?_
*What changes are being made to the previous model for containers?*

The biggest change is the new concept of top containers. A top container is the highest level container in which a particular instance is stored. Top containers are in some ways analogous to the current Container 1, but broken out from the entire container record (child and grandparent container records). As such, top containers enable more efficient recording and updating of the highest level containers in your collection.

_How does ArchivesSpace determine what is a top container?_
*How does ArchivesSpace determine what is a top container?*

During the conversion, ArchivesSpace will find all the Container 1s in your current ArchivesSpace database. It will then evaluate them as follows:

@@ -64,7 +64,7 @@

## Preparation

_What can I do to prepare my ArchivesSpace data for a smoother conversion to top containers?_
*What can I do to prepare my ArchivesSpace data for a smoother conversion to top containers?*

- If your Container 1s have unique barcodes, you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors.
- If your Container 1s do not have barcodes, but have a nonduplicative container identifier sequence within each accession or resource (e.g. Box 1, Box 2, Box 3), or the identifiers are only reused within an accession or resource for different types of containers (for example, you have a Box 1 through 10 and an Oversize Box 1 through 3), you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors.
@@ -74,7 +74,7 @@ You do not need to make any changes to Container 2 fields or Container 3 fields.

If you use the current Container Extent fields, these will no longer be available in 1.5.0. Any data in these fields will be migrated to a new Extent sub-record during the conversion. You can evaluate whether this data should remain in an extent record or if it belongs in a container profile or other fields and then move it accordingly after the conversion is complete.

_I have EADs I still need to import into ArchivesSpace. How can I get them ready for this new model?_
*I have EADs I still need to import into ArchivesSpace. How can I get them ready for this new model?*

If you have a box and folder associated with a component (or any other hierarchical relationship of containers), you will need to add identifiers to the container element so that the EAD importer knows which is the top container. If you previously used Archivists' Toolkit to create EAD, your containers probably already have container identifiers. If your container elements do not have identifiers already, Yale University has made available an [XSLT transformation file](https://github.com/YaleArchivesSpace/xslt-files/blob/master/EAD_add_IDs_to_containers.xsl) to add them. You will need to run it before importing the EAD file into ArchivesSpace.
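If you have not used XSLT before, the stylesheet can be applied from the command line. The invocation below is a sketch assuming `xsltproc` is installed and the stylesheet has been downloaded locally (a Java-based processor such as Saxon works similarly); the input and output file names are illustrative:

```shell
# Add container identifiers to an EAD file before importing it.
# Input/output file names are examples only.
xsltproc EAD_add_IDs_to_containers.xsl finding-aid.xml > finding-aid-with-ids.xml
```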

@@ -112,7 +112,7 @@ Because this is a major change in the data model for this portion of the applica
- When the error report shows no errors, or when you are satisfied with the remaining errors, your production instance is ready to be upgraded.
- When the final upgrade/conversion is complete, you can move ArchivesSpace version 1.5.0 into production.

_What are some common errors or anomalies that will be flagged in the conversion?_
*What are some common errors or anomalies that will be flagged in the conversion?*

- A container with a barcode has different indicators or types in different records.
- A container with a particular type and indicator sometimes has a barcode and sometimes doesn’t.
@@ -122,7 +122,7 @@ _What are some common errors or anomalies that will be flagged in the conversion

The conversion process can resolve some of these errors for you by supplying or deleting values as it deems appropriate, but for the most control over the process you will most likely want to resolve such issues yourself in your ArchivesSpace database before converting to the new container model.

_Are there any known conversion issues?_
*Are there any known conversion issues?*

Due to a change in the ArchivesSpace EAD importer in 2015, some EADs with hierarchical containers not designated by a @parent attribute were turned into multiple instance records. This has since been corrected in the application, but we are working on a plugin (now available at [Instance Joiner Plugin](https://github.com/archivesspace-plugins/instance_joiner)) that will enable you to turn these back into single instances so that subcontainers are not mistakenly turned into top containers.

8 changes: 4 additions & 4 deletions src/content/docs/architecture/backend.md
@@ -132,7 +132,7 @@ take the values supplied by the JSONModel object it is passed and
assume that everything that needs to be there is there, and that
validation has already happened.

The remaining two aspects _are_ enforced by the model layer, but
The remaining two aspects *are* enforced by the model layer, but
generally don't pertain to just a single record type. For example, an
accession may be linked to zero or more subjects, but so can several
other record types, so it doesn't make sense for the `Accession` model
@@ -188,11 +188,11 @@ then manipulate the result to implement the desired behaviour.
### Nested records

Some record types, like accessions, digital objects, and subjects, are
_top-level records_, in the sense that they are created independently
*top-level records*, in the sense that they are created independently
of any other record and are addressable via their own URI. However,
there are a number of records that can't exist in isolation, and only
exist in the context of another record. When one record can contain
instances of another record, we call them _nested records_.
instances of another record, we call them *nested records*.
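As a rough sketch (field names illustrative, simplified from the JSONModel schemas), a nested record travels inside its parent's JSON representation rather than having a URI of its own:

```json
{
  "uri": "/repositories/2/accessions/1",
  "title": "Example papers",
  "dates": [
    { "label": "creation", "date_type": "single", "begin": "1925" }
  ]
}
```

Here the top-level accession is addressable at its own URI, while the nested date record exists only within it.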

To give an example, the `date` record type is nested within an
`accession` record (among others). When the model layer is asked to
@@ -394,7 +394,7 @@ records in the system. The major actors in the permissions model are:
- Records -- A unit of information in the system. Some records are
global (existing outside of any given repository), while some are
repository-scoped (belonging to a single repository).
- Groups -- A set of users _within_ a repository. Each group is
- Groups -- A set of users *within* a repository. Each group is
assigned zero or more permissions, which it confers upon its
members.
- Permissions -- An action that a user can perform. For example, A
40 changes: 20 additions & 20 deletions src/content/docs/development/dev.md
@@ -49,12 +49,12 @@ If using Docker & Docker Compose install them following the official documentati
- [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/)
- [https://docs.docker.com/compose/install/](https://docs.docker.com/compose/install/)

_Do not use system packages or any other unofficial source as these have been found to be inconsistent with standard Docker._
*Do not use system packages or any other unofficial source as these have been found to be inconsistent with standard Docker.*

The recommended way of developing ArchivesSpace is to fork the repository and clone it locally.

_Note: all commands in the following instructions assume you are in the root directory of your local fork
unless otherwise specified._
*Note: all commands in the following instructions assume you are in the root directory of your local fork
unless otherwise specified.*

**Quickstart**

@@ -94,9 +94,9 @@ Start by building the images. This creates a custom Solr image that includes Arc
docker-compose -f docker-compose-dev.yml build
```

_Note: you only need to run the above command once. You would only need to rerun this command if a)
*Note: you only need to run the above command once. You would only need to rerun this command if a)
you delete the image and therefore need to recreate it, or b) you make a change to ArchivesSpace's Solr
configuration and therefore need to rebuild the image to include the updated configuration._
configuration and therefore need to rebuild the image to include the updated configuration.*

Run MySQL and Solr in the background:

@@ -162,13 +162,13 @@ dependencies--JRuby, Gems, etc. This one command creates a fully
self-contained development environment where everything is downloaded
within the ArchivesSpace project `build` directory.

_It is not necessary and generally incorrect to manually install JRuby
*It is not necessary and generally incorrect to manually install JRuby
& bundler etc. for ArchivesSpace (whether with a version manager or
otherwise)._
otherwise).*

_The self-contained ArchivesSpace development environment typically does
*The self-contained ArchivesSpace development environment typically does
not interact with other J/Ruby environments you may have on your system
(such as those managed by rbenv or similar)._
(such as those managed by rbenv or similar).*

This is the starting point for all ArchivesSpace development. You may need
to re-run this command after fetching updates, or when making changes to
@@ -248,8 +248,8 @@ mysql -h 127.0.0.1 -u as -pas123 -e "DROP DATABASE archivesspace"
mysql -h 127.0.0.1 -u as -pas123 -e "CREATE DATABASE IF NOT EXISTS archivesspace DEFAULT CHARACTER SET utf8mb4"
```

_Note: you can skip the above step if MySQL was just started for the first time or any time you
have an empty ArchivesSpace (one where `db:migrate` has not been run)._
*Note: you can skip the above step if MySQL was just started for the first time or any time you
have an empty ArchivesSpace (one where `db:migrate` has not been run).*

Assuming you have MySQL running and an empty `archivesspace` database available you can proceed
to restore:
@@ -259,8 +259,8 @@ gzip -dc ./build/mysql_db_fixtures/blank.sql.gz | mysql --host=127.0.0.1 --port=
./build/run db:migrate
```

_Note: The above instructions should work out-of-the-box. If you want to use your own database
and / or have configured MySQL differently then adjust the commands as needed._
*Note: The above instructions should work out-of-the-box. If you want to use your own database
and / or have configured MySQL differently then adjust the commands as needed.*

After the restore `./build/run db:migrate` is run to catch any migration updates. You can now
proceed to run the application dev servers, as described below, with data already
@@ -278,9 +278,9 @@ Will wipe out any existing Solr state. This is not required when setting
up for the first time, but is often required after a database reset (such as
after running the `./build/run db:nuke` task).

_More specifically what this does is submit a delete all request to Solr and empty
*More specifically what this does is submit a delete all request to Solr and empty
out the contents of the `./build/dev/indexer*_state` directories, which is described
below._
below.*

### Run the development servers

@@ -321,7 +321,7 @@ servers directly via build tasks:
These should be run in separate terminal sessions; they do not need to be run
in any particular order, nor are all of them required.

_An example use case for running a server directly is to use the pry debugger._
*An example use case for running a server directly is to use the pry debugger.*

**Advanced: debugging with pry**

@@ -359,8 +359,8 @@ Running the development servers will create directories in `./build/dev`:

./build/run db:nuke

_Note: the folders will be created as they are needed, so they may not all be present
at all times._
*Note: the folders will be created as they are needed, so they may not all be present
at all times.*

## Running the tests

@@ -378,9 +378,9 @@ You can also run a single spec file with:

./build/run backend:test -Dspec="myfile_spec.rb"

_By default the tests are configured to run using a separate MySQL & Solr from the
*By default the tests are configured to run using a separate MySQL & Solr from the
development servers. This means that the development and test environments will not
interfere with each other._
interfere with each other.*

```bash
# run the backend / api tests