$ ng new angular-tutorial
? Would you like to add Angular routing? No
? Which stylesheet format would you like to use? CSS
CREATE angular-tutorial/README.md (1069 bytes)
CREATE angular-tutorial/.editorconfig (274 bytes)
CREATE angular-tutorial/.gitignore (548 bytes)
CREATE angular-tutorial/angular.json (2750 bytes)
CREATE angular-tutorial/package.json (1047 bytes)
CREATE angular-tutorial/tsconfig.json (901 bytes)
CREATE angular-tutorial/tsconfig.app.json (263 bytes)
CREATE angular-tutorial/tsconfig.spec.json (273 bytes)
CREATE angular-tutorial/.vscode/extensions.json (130 bytes)
CREATE angular-tutorial/.vscode/launch.json (470 bytes)
CREATE angular-tutorial/.vscode/tasks.json (938 bytes)
CREATE angular-tutorial/src/main.ts (214 bytes)
CREATE angular-tutorial/src/favicon.ico (948 bytes)
CREATE angular-tutorial/src/index.html (301 bytes)
CREATE angular-tutorial/src/styles.css (80 bytes)
CREATE angular-tutorial/src/app/app.module.ts (314 bytes)
CREATE angular-tutorial/src/app/app.component.css (0 bytes)
CREATE angular-tutorial/src/app/app.component.html (23083 bytes)
CREATE angular-tutorial/src/app/app.component.spec.ts (922 bytes)
CREATE angular-tutorial/src/app/app.component.ts (220 bytes)
CREATE angular-tutorial/src/assets/.gitkeep (0 bytes)
✔ Packages installed successfully.
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Successfully initialized git.
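If you would rather have the initial branch named main instead of master, the two commands from the hint above can be run as follows (main is used here purely as an example name):

git config --global init.defaultBranch main
git branch -m main

The first line changes the default for repositories created from now on; the second renames the branch that was just created for this project.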
Google Cloud AutoML: a service from Google that aims to let non-experts build high-quality custom models. It automates the whole pipeline from data upload through model training, evaluation, and deployment.
AutoML in Microsoft Azure: offered as part of the Machine Learning service within Microsoft Azure. Azure's AutoML automates data preprocessing, model selection, and hyperparameter tuning, making it possible to arrive at the best model for a given business problem.
Model loaded in 3.6s (load weights from disk: 0.3s, create model: 0.2s, apply weights to model: 1.8s, apply half(): 0.6s, load VAE: 0.1s, move model to device: 0.5s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 11.2s (import torch: 2.1s, import gradio: 1.7s, import ldm: 0.4s, other imports: 2.9s, load scripts: 0.3s, load SD checkpoint: 3.6s, create ui: 0.2s).
0%| | 0/20 [00:02<?, ?it/s]
Error completing request
Arguments: ('task(6gvckovujc9xon0)', 'sailing ship', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/processing.py", line 503, in process_images
res = process_images_inner(p)
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/processing.py", line 653, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/processing.py", line 869, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 234, in launch_sampling
return func()
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 152, in forward
devices.test_for_nans(x_out, "unet")
File "/Users/t0k0sh1/Workspace/stable-diffusion-webui/modules/devices.py", line 152, in test_for_nans
raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Incidentally, this did not happen with v1.4 or v1.5, so it may be an error specific to the v2.x series.
To resolve it, change one setting: open the Settings tab at the top of the screen, then select Stable Diffusion from the menu on the left.
The exact position may depend on your version, but near the bottom there is a checkbox labeled Upcast cross attention layer to float32; turn it on.
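Alternatively, as the error message itself suggests, the same problem can be worked around by launching the WebUI with the --no-half commandline argument instead of changing the setting. A minimal sketch, assuming a standard AUTOMATIC1111 install on macOS/Linux where launch arguments are set in webui-user.sh (adjust the file and path to your own environment):

# webui-user.sh: pass --no-half to the WebUI at startup (assumed install layout)
export COMMANDLINE_ARGS="--no-half"

After saving this, restart the WebUI with ./webui.sh and run the generation again. Note that --disable-nan-check, also mentioned in the error, merely suppresses the check rather than fixing the underlying precision issue.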