
Security: Multiple issues in Red Hat OpenShift Container Storage
Name: Multiple issues in Red Hat OpenShift Container Storage
ID: RHSA-2021:2041-01
Distribution: Red Hat
Platforms: Red Hat OpenShift Container Storage
Date: Wed, May 19, 2021, 23:48
References: https://access.redhat.com/security/cve/CVE-2021-3114
https://access.redhat.com/security/cve/CVE-2020-28362
https://access.redhat.com/security/cve/CVE-2020-26160
https://access.redhat.com/security/cve/CVE-2020-7608
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2020-8565
https://access.redhat.com/security/cve/CVE-2021-3139
https://access.redhat.com/security/cve/CVE-2021-3528
https://access.redhat.com/security/cve/CVE-2020-26289
https://access.redhat.com/security/cve/CVE-2020-25678
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2021-3449
Applications: Red Hat OpenShift Container Storage

Original message

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
Red Hat Security Advisory

Synopsis: Moderate: Red Hat OpenShift Container Storage 4.7.0
security, bug fix, and enhancement update
Advisory ID: RHSA-2021:2041-01
Product: Red Hat OpenShift Container Storage
Advisory URL: https://access.redhat.com/errata/RHSA-2021:2041
Issue date: 2021-05-19
CVE Names: CVE-2020-7608 CVE-2020-7774 CVE-2020-8565
CVE-2020-25678 CVE-2020-26160 CVE-2020-26289
CVE-2020-28362 CVE-2021-3114 CVE-2021-3139
CVE-2021-3449 CVE-2021-3450 CVE-2021-3528
CVE-2021-20305
=====================================================================

1. Summary:

Updated images which include numerous security fixes, bug fixes, and
enhancements are now available for Red Hat OpenShift Container Storage
4.7.0 on Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Storage is software-defined storage integrated
with and optimized for the Red Hat OpenShift Container Platform. It provides
highly scalable, production-grade persistent storage for stateful
applications running in the Red Hat OpenShift Container Platform. In
addition to persistent storage, Red Hat OpenShift Container Storage
provisions a multicloud data management service with an S3-compatible API.
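
As an illustration of that S3-compatible API: the short Go sketch below
lists buckets through the Multicloud Object Gateway endpoint using the AWS
SDK for Go. The endpoint URL and credentials are placeholders, not values
from this advisory; in a real cluster they come from the route exposed for
the gateway and from the secret created for an Object Bucket Claim or
NooBaa account.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Placeholder endpoint and credentials (see the note above).
	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String("https://s3-openshift-storage.apps.example.com"),
		Region:           aws.String("us-east-1"),
		Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
		S3ForcePathStyle: aws.Bool(true), // path-style requests suit S3-compatible gateways
	})
	if err != nil {
		log.Fatal(err)
	}

	out, err := s3.New(sess).ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		fmt.Println(aws.StringValue(b.Name))
	}
}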

Security Fix(es):

* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

* kubernetes: Incomplete fix for CVE-2019-11250 allows for token leak in
logs when logLevel >= 9 (CVE-2020-8565)

* jwt-go: access restriction bypass vulnerability (CVE-2020-26160); see the
sketch after this list

* nodejs-date-and-time: ReDoS in parsing via date.compile (CVE-2020-26289)

* golang: math/big: panic during recursive division of very large numbers
(CVE-2020-28362)

* golang: crypto/elliptic: incorrect operations on the P-224 curve
(CVE-2021-3114)

* NooBaa: noobaa-operator leaking RPC AuthToken into log files
(CVE-2021-3528)

* nodejs-yargs-parser: prototype pollution vulnerability (CVE-2020-7608)
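
To make the jwt-go issue concrete: in essence, CVE-2020-26160 involved
audience ("aud") claim verification that handled the claim only in its
single-string form, although RFC 7519 also permits an array of strings, so
a token carrying an array-valued "aud" could bypass the intended
restriction. The following minimal Go sketch shows the defensive pattern
against a plain decoded claims map; it is an illustration written for this
text, not code from any affected library, and the function name and
audience values are hypothetical.

package main

import "fmt"

// verifyAudience checks the "aud" claim of a decoded JWT claims map against
// an expected audience. Per RFC 7519 the claim may be either a single
// string or an array of strings; handling only the string form is the
// class of gap behind CVE-2020-26160.
func verifyAudience(claims map[string]interface{}, expected string) bool {
	switch aud := claims["aud"].(type) {
	case string:
		return aud == expected
	case []interface{}: // JSON arrays decode to []interface{}
		for _, v := range aud {
			if s, ok := v.(string); ok && s == expected {
				return true
			}
		}
	}
	// Missing or unexpected type: fail closed rather than open.
	return false
}

func main() {
	claims := map[string]interface{}{
		"aud": []interface{}{"service-a", "service-b"},
	}
	fmt.Println(verifyAudience(claims, "service-a")) // true
	fmt.Println(verifyAudience(claims, "service-c")) // false
}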

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.

Bug Fix(es):

This update includes various bug fixes and enhancements. Space precludes
documenting all of these changes in this advisory. Users are directed to
the Red Hat OpenShift Container Storage Release Notes for information on
the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.7/html-single/4.7_release_notes/index

All Red Hat OpenShift Container Storage users are advised to upgrade to
these updated images.

3. Solution:

Before applying this update, make sure all previously released errata
relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258

4. Bugs fixed (https://bugzilla.redhat.com/):

1803849 - [RFE] Include per volume encryption with Vault integration in RHCS
4.1
1814681 - [RFE] use topologySpreadConstraints to evenly spread OSDs across
hosts
1840004 - CVE-2020-7608 nodejs-yargs-parser: prototype pollution vulnerability
1850089 - OBC CRD is outdated and leads to missing columns in get queries
1860594 - Toolbox pod should have toleration for OCS tainted nodes
1861104 - OCS podDisruptionBudget prevents successful OCP upgrades
1861878 - [RFE] use appropriate PDB values for OSD
1866301 - [RHOCS Usability Study][Installation] “Create storage cluster” should
be a part of the installation flow or needs to be emphasized as a crucial step.
1869406 - must-gather should include historical pod logs
1872730 - [RFE][External mode] Re-configure noobaa to use the updated RGW
endpoint from the RHCS cluster
1874367 - "Create Backing Store" page doesn't allow to select
already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1886112 - log message flood with 'Reconciling
StorageCluster","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster"'
1886416 - Uninstall 4.6: ocs-operator logging regarding noobaa-core PVC needs
change
1886638 - CVE-2020-8565 kubernetes: Incomplete fix for CVE-2019-11250 allows
for token leak in logs when logLevel >= 9
1888839 - Create public route for ceph-rgw service
1892622 - [GSS] Noobaa management dashboard reporting High number of issues
when the cluster is in healthy state
1893611 - Skip ceph commands collection attempt if must-gather helper pod is
not created
1893613 - must-gather tries to collect ceph commands in external mode when
storagecluster already deleted
1893619 - OCS must-gather: Inspect errors for cephobjectstoreUser and a few
ceph commands when storage cluster does not exist
1894412 - [RFE][External] RGW metrics should be made available even if anything
else except 9283 is provided as the monitoring-endpoint-port
1896338 - OCS upgrade from 4.6 to 4.7 build failed
1897246 - OCS - ceph historical logs collection
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of
very large numbers
1898509 - [Tracker][RHV #1899565] Deployment on RHV/oVirt storage class
ovirt-csi-sc failing
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898808 - Rook-Ceph crash collector pod should not run on non-ocs node
1900711 - [RFE] Alerting for Namespace buckets and resources
1900722 - Failed to init upgrade process on noobaa-core-0
1900749 - Namespace Resource reported as Healthy when target bucket deleted
1900760 - RPC call for Namespace resource creation allows invalid target bucket
names
1901134 - OCS - ceph historical logs collection
1902192 - [RFE][External] RGW metrics should be made available even if anything
else except 9283 is provided as the monitoring-endpoint-port
1902685 - Too strict Content-Length header check refuses valid upload requests
1902711 - Tracker for Bug #1903078 Deleting VolumeSnapshotClass makes
VolumeSnapshot not Ready
1903973 - [Azure][ROKS] Set SSD tuning (tuneFastDeviceClass) as default for OSD
devices in Azure/ROKS platform
1903975 - Add "ceph df detail" for ocs must-gather to enable support to
debug compression
1904302 - [GSS] ceph_daemon label includes references to a replaced OSD that
cause a prometheus ruleset to fail
1904929 - [GSS][RFE] Reduce debug level for logs of NooBaa Endpoint pod
1907318 - Unable to deploy & upgrade to ocs 4.7 - missing postgres image
reference
1908414 - [GSS][VMWare][ROKS] rgw pods are not showing up in OCS 4.5 - due to
pg_limit issue
1908678 - ocs-osd-removal job failed with "Invalid value" error when
using multiple ids
1909268 - OCS 4.7 UI install - All OCS operator pods respin after
storagecluster creation
1909488 - [NooBaa CLI] CLI status command looks for wrong DB PV name
1909745 - pv-pool backing store name restriction should be at 43 characters
1910705 - OBCs are stuck in a Pending state
1911131 - Bucket stats in the NB dashboard are incorrect
1911266 - Backingstore phase is ready, modecode is INITIALIZING
1911627 - CVE-2020-26289 nodejs-date-and-time: ReDoS in parsing via
date.compile
1911789 - Data deduplication does not work properly
1912421 - [RFE] noobaa cli allow the creation of BackingStores with already
existing secrets
1912894 - OCS storagecluster is in Progressing state and some noobaa pods
missing with latest 4.7 build 4.7.0-223.ci and storagecluster reflected as
4.8.0 instead of 4.7.0
1913149 - make must-gather backward compatible for versions <4.6
1913357 - ocs-operator should show error when flexible scaling and arbiter are
both enabled at the same time
1914132 - No metrics available in the Object Service Dashboard in OCS 4.7, logs
show "failed to retrieve metrics exporter servicemonitor"
1914159 - When OCS was deployed using arbiter mode, mons go into CLBO state,
ceph version = 14.2.11-95
1914215 - must-gather fails to delete the completed state compute-xx-debug pods
after successful completion
1915111 - OCS OSD selection algorithm is making some strange choices.
1915261 - Deleted MCG CRs are stuck in a 'Deleting' state
1915445 - Uninstall 4.7: Storagecluster deletion stuck on a partially created
KMS enabled OCS cluster + support TLS configuration for KMS
1915644 - update noobaa db label in must-gather to collect db pod in noobaa dir
1915698 - There is missing noobaa-core-0 pod after upgrade from OCS 4.6 to OCS
4.7
1915706 - [Azure][RBD] PV taking longer time ~ 9 minutes to get deleted
1915730 - [ocs-operator] Create public route for ceph-rgw service
1915737 - Improve ocs-operator logging during uninstall to be more verbose, to
understand reasons for failures - e.g. for Bug 1915445
1915758 - improve noobaa logging in case of uninstall - logs do not specify
clearly the resource on which deletion is stuck
1915807 - Arbiter: OCS Install failed when used label =
topology.kubernetes.io/zone instead of deprecated failureDomain label
1915851 - OCS PodDisruptionBudget redesign for OSDs to allow multiple nodes to
drain in the same failure domain
1915953 - Must-gather takes hours to complete if the OCS cluster is not fully
deployed, delay seen in ceph command collection step
1916850 - Uninstall 4.7- rook: Storagecluster deletion stuck on a partially
created KMS enabled OCS cluster(OSD creation failed)
1917253 - Restore-pvc creation fails with error "csi-vol-* has unsupported
quota"
1917815 - [IBM Z and Power] OSD pods restarting due to OOM during upgrade test
using ocs-ci
1918360 - collect timestamp for must-gather commands and also the total time
taken for must-gather to complete
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the
P-224 curve
1918925 - noobaa operator pod logs messages for other components - like
rook-ceph-mon, csi-pods, new Storageclass, etc
1918938 - ocs-operator has Error logs with "unable to deploy Prometheus
rules"
1919967 - MCG RPC calls time out and the system is unresponsive
1920202 - RGW pod did not get created when OCS was deployed using arbiter mode
1920498 - [IBM Z] OSDs are OOM killed and storage cluster goes into error
state during ocs-ci tier1 pvc expansion tests
1920507 - Creation of cephblockpool with compression failed on timeout
1921521 - Add support for VAULT_SKIP_VERIFY option in Ceph-CSI
1921540 - RBD PVC creation fails with error "invalid encryption kms
configuration: "POD_NAMESPACE" is not set"
1921609 - MongoNetworkError messages in noobaa-core logs
1921625 - 'Not Found: Secret "noobaa-root-master-key"' message
in noobaa logs and cli output when kms is configured
1922064 - uninstall on VMware LSO+ arbiter with 4 OSDs in Pending state:
Storagecluster deletion stuck, waiting for cephcluster to be deleted
1922108 - OCS 4.7 4.7.0-242.ci and beyond: osd pods are not created
1922113 - noobaa-db pod init container is crashing after OCS upgrade from OCS
4.6 to OCS 4.7
1922119 - PVC snapshot creation failing on OCP4.6-OCS 4.7 cluster
1922421 - [ROKS] OCS deployment stuck at mon pod in pending state
1922954 - [IBM Z] OCS: Failed tests because of osd deviceset restarts
1924185 - Object Service Dashboard shows alerts related to
"system-internal-storage-pool" in OCS 4.7
1924211 - 4.7.0-249.ci: RGW pod not deployed, rook logs show - failed to create
object store "must be no more than 63 characters"
1924634 - MG terminal logs show `pods "compute-x-debug" not found` even
though pods are in Running state
1924784 - RBD PVC creation fails with error "invalid encryption kms
configuration: failed to parse kms configuration"
1924792 - RBD PVC creation fails with error "invalid encryption kms
configuration: failed to parse kms configuration"
1925055 - OSD pod stuck in Init:CrashLoopBackOff following Node maintenance in
OCP upgrade from OCP 4.7 to 4.7 nightly
1925179 - MG fix [continuation from bug 1893619]: Do not attempt creating
helper pod if storagecluster/cephcluster already deleted
1925249 - KMS resources should be garbage collected when StorageCluster is
deleted
1925533 - [GSS] Unable to install Noobaa in AWS govcloud
1926182 - [RFE] Support disabling reconciliation of monitoring related
resources using a dedicated reconcile strategy flag
1926617 - osds are in Init:CrashLoopBackOff with rgw in CrashLoopBackOff on KMS
enabled cluster
1926717 - Only one NOOBAA_ROOT_SECRET_PATH key created in vault when the same
backend path is used for multiple OCS clusters
1926831 - [IBM][ROKS] Deploy RGW pods only if IBM COS is not available on
platform
1927128 - [Tracker for BZ #1937088] When add capacity is performed on an
arbiter mode cluster, ceph health reports PG_AVAILABILITY Reduced data
availability: 25 pgs inactive, 25 pgs incomplete
1927138 - must-gather skip collection of ceph in every run
1927186 - Configure pv-pool as backing store if cos creds secret not found in
IBM Cloud
1927317 - [Arbiter] Storage Cluster installation did not start because
ocs-operator was expecting 8 nodes but found 4
1927330 - Namespacestore-backed OBCs are stuck on Pending
1927338 - Uninstall OCS: Include events for major CRs to know the cause of
deletion getting stuck
1927885 - OCS 4.7: ocs operator pod in 1/1 state even when Storagecluster is in
Progressing state
1928063 - For FD: rack: actual osd pod distribution and OSD placement in rack
under ceph osd tree output do not match
1928451 - MCG CLI diagnose command doesn't work on Windows
1928471 - [Deployment blocker] Ceph OSDs do not register properly in the CRUSH
map
1928487 - MCG CLI - noobaa ui command shows wss instead of https
1928642 - [IBM Z] rook-ceph-rgw pods restart continuously with ocs version
4.6.3 due to liveness probe failure
1931191 - Backing/namespacestores are stuck on Creating with credentials errors
1931810 - LSO deployment(flexibleScaling:true): 100% PGS unknown even though
ceph osd tree placement is correct(root cause diff from bug 1928471)
1931839 - OSD in state init:CrashLoopBackOff with KMS signed certificates
1932400 - Namespacestore deletion takes 15 minutes
1933607 - Prevent reconcile of labels on all monitoring resources deployed by
ocs-operator
1933609 - Prevent reconcile of labels on all monitoring resources deployed by
rook
1933736 - Allow shrinking the cluster by removing OSDs
1934000 - Improve error logging for kv-v2 while using encryption with KMS
1934990 - Ceph health ERR post node drain on KMS encryption enabled cluster
1935342 - [RFE] Add OSD flapping alert
1936545 - [Tracker for BZ #1938669] setuid and setgid file bits are not
retained after an OCS CephFS CSI restore
1936877 - Include the RHEL8 "nodejs" CVE fixes in the OCS Multi-Cloud Object
Gateway core container image
1937070 - Storage cluster cannot be uninstalled when cluster not fully
configured
1937100 - [RGW][notification][kafka]: notification fails with error: pubsub
endpoint configuration error: unknown schema in: kafka
1937245 - csi-cephfsplugin pods CrashLoopBackoff in fresh 4.6 cluster due to
conflict with kube-rbac-proxy
1937768 - OBC with Cache BucketPolicy stuck on pending
1939026 - ServiceUnavailable when calling the CreateBucket operation (reached
max retries: 4): Reduce your request rate
1939472 - Failure domain set incorrectly to zone if flexible scaling is enabled
but there are >= 3 zones
1939617 - [Arbiter] Mons cannot be failed over in stretch mode
1940440 - noobaa migration pod is deleted on failure and logs are not available
for inspection
1940476 - Backingstore deletion hangs
1940957 - Deletion of Rejected NamespaceStore is stuck even when target bucket
and bucketclass are deleted
1941647 - OCS deployment fails when no backend path is specified for cluster
wide encryption using KMS
1941977 - rook-ceph-osd-X gets stuck in initcontainer expand-encrypted-bluefs
1942344 - No permissions in /etc/passwd causes noobaa-operator to fail
1942350 - No permissions in /etc/passwd causes noobaa-operator to fail
1942519 - MCG should not use KMS to store encryption keys if cluster wide
encryption is not enabled using KMS
1943275 - OSD pods re-spun after "add capacity" on cluster with KMS
1943596 - [Tracker for BZ #1944611][Arbiter] When zone a is powered off and
back on, 3 mon pods (zones b,c) go into CLBO after node power off and 2 OSDs
(zone a) go into CLBO after node power on
1944980 - Noobaa deployment fails when no KMS backend path is provided during
storagecluster creation
1946592 - [Arbiter] When both the rgw pod hosting nodes are down, the rgw
service is unavailable
1946837 - OCS 4.7 Arbiter Mode Cluster becomes stuck when entire zone is
shutdown
1955328 - Upgrade of noobaa DB failed when upgrading OCS 4.6 to 4.7
1955601 - CVE-2021-3528 NooBaa: noobaa-operator leaking RPC AuthToken into log
files
1957187 - Update to RHCS 4.2z1 Ceph container image at OCS 4.7.0
1957639 - Noobaa migrate job is failing when upgrading OCS 4.6.4 to 4.7 on FIPS
environment

5. References:

https://access.redhat.com/security/cve/CVE-2020-7608
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-8565
https://access.redhat.com/security/cve/CVE-2020-25678
https://access.redhat.com/security/cve/CVE-2020-26160
https://access.redhat.com/security/cve/CVE-2020-26289
https://access.redhat.com/security/cve/CVE-2020-28362
https://access.redhat.com/security/cve/CVE-2021-3114
https://access.redhat.com/security/cve/CVE-2021-3139
https://access.redhat.com/security/cve/CVE-2021-3449
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2021-3528
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYKTXntzjgjWX9erEAQhWpRAApeS59vyJLmaSJLski/0+eXbsKF6T3p2J
FTz7NRK3Jlvw1LAwiKRuliKY9/dGb+n59hBXZMte80mBp3eBtCUNlp3NuK7iLGRk
glAZjVAR1qoaqY6989GF/Limxbv5DOzmZbFGUI2cAJxX2MEkkNk++lrn1Gg7MlF+
M5xjJ5qwORTxbpSgkLLIxNnCL05/pSSmQi1u4kesNXUGa1l7XKJZafVkkAWJj5/D
RA2kKF/p59jl1irJPRAHfIeCpn2IpjgvNkv5d2nTp26yhuBnVprI2dDiirOAaMcY
GMCmBf8fFkA+kYRW/ad9WvOrhgGNH2vIXwKTkwwNY41W8HNj2HhVfZ41yz67UYv+
Gia+r8/5MJhzJZeWnzcdSKB0w6U9CfE02c2cxjLubDepEOGisoCD9wlm/Zx/TQqp
bXvrQQ9rw5tMX1vwHMu8UCdcb2MErj0nW7cmd9nHv8efWCDrRDEmqC7DcHmT4JHl
eTa+YS820EGg8DDDZVqUz4Kwxua4u4YvtL64NvH3tnKrF9wyaXLz1FLQquCOpMFc
S8TBoIIm9seWLjDKy11FOmAnciBkHPoxS2YzmZWpbE5GWLw5R2/HWsFX1ZYk38tA
sMYul3HVTx5UChweURmthpnX9mwUK7oXYRNPMG14VV5718v85Mfw3GRcIM2ovDXF
k8d8Crjhc88=
=wRRT
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce