The Windows 7 Starter installed on my aging Samsung NF110 netbook was unbearably slow, so I wiped it. While installing XP in its place, I also deleted the recovery partition on the 250 GB hard disk to free up extra space. That setup served me well for a while, but as Windows moved on to 8 and 8.1, holding out on XP became difficult. So I decided to go back to the original Windows 7 Starter and take the free upgrade to the newly released Windows 10.
I reinstalled Windows 7 Starter, solved the activation problem by phone activation, and then completed the Windows 10 Home installation without a hitch. Just as I'd heard, it actually feels faster than Windows 7 Starter did. While looking into ways to boot faster still, I concluded that on an old BIOS machine with an MBR partition table, the only way to get fast startup is to enable hibernation with the powercfg /h on command.
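For reference, the commands involved look like this (run from an elevated command prompt; the powercfg /a check is an optional extra, not part of the original steps):

    rem Enable hibernation; Fast Startup depends on the hibernation file.
    powercfg /h on
    rem Optional: list the sleep states now available, to confirm it took.
    powercfg /a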
All this reinstalling and repartitioning convinced me to bring back the original Samsung Recovery Solution (SRS). The biggest reason was the tedious phone activation after reinstalling Windows 7: I dreaded having to repeat the whole process if my hard-won Windows 10 ever broke. The other reason is the convenience of the vendor-specific BIOS support you only get on big-brand machines: press F4 at boot and the recovery solution comes right up. There are plenty of alternatives, such as building a Windows PE environment on a recovery USB stick, using Hiren's BootCD, or booting a Linux live CD image, but they all require having that bootable USB on hand, which is a real nuisance. And once such a recovery environment is up, you still have to run a backup/restore program like Acronis or Ghost, whose usage is fairly fiddly; that was another important factor.
Before reinstalling the Samsung Recovery Solution I had many open questions: could it be done without wiping the already-installed Windows 10, how large would the backups be, and how should the partition be sized accordingly? So I did a lot of googling.
In the end I was able to reinstall the recovery solution while keeping the installed Windows 10, and I used the newly created partition setup to build something even more useful: a Windows PE boot environment. So what began as an attempt to revive the Samsung Recovery Solution ended up as a usable Windows PE environment launched via the F4 hotkey provided by the BIOS.
The result: very satisfied, at least for now. Instead of Winclon, the program the Samsung Recovery Solution is built on, I ran backups with both Acronis and Ghost, which use less space, offer more varied backup and restore options, and are more intuitive.
The process went roughly as follows.
1. Download a Samsung Recovery Solution admin tool image from the internet.
Supposedly, if you enter an already-installed recovery solution and press some hotkey (Ctrl+F10, I think) to reach admin mode, you can create the admin tool on CD or USB. That didn't apply to me, since I had to install from scratch, so I searched the internet and downloaded a pre-made admin tool ISO image. The version originally on the machine was 4, I believe, but versions 5, 6, and 7 have since come out. Windows 8 and later are said to require version 6, but when I downloaded it, it turned out to back up and restore GPT partition tables only, so I went back and grabbed version 5.
2. Partition setup
Booting from version 5 appears to start a Windows PE environment, though only a command window and the Samsung Recovery Solution program seem to be available. Choosing "1. Disk partitioning" there turned out to mean wiping all data and reinitializing the entire hard disk, so that was a dead end...
After digging around and trying various things, I used a program called AOMEI Partition Assistant to resize the existing partitions while preserving their data, creating a 25 GB partition at the end of the disk. I gave it the volume label RECOVERY, formatted it as NTFS, and tried the recovery solution again, only to get the same red warning as in the screenshot above: "No recovery area exists."
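AOMEI was used because it can resize partitions in place without losing data; if all you need is to carve 25 GB off the tail of the last volume, the built-in diskpart can do roughly the same job. A sketch only, with an example volume letter and sizes (run as: diskpart /s script.txt):

    rem Shrink the last data volume by 25 GB and build the recovery partition there.
    select volume D
    shrink desired=25600
    create partition primary
    format fs=ntfs label=RECOVERY quick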
3. Creating the recovery environment
I had read on the internet that while a regular NTFS partition has type ID 0x07, the Samsung recovery partition uses a special ID, so I changed the ID and booted the Samsung Recovery Solution (SRS) admin tool again. Still not recognized as a recovery area. Going back through the SRS menus carefully, I found an option "2. Initialize recovery area." Executing it creates the recovery-area environment while keeping the existing partitions.
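For the record, the type-ID change attempted above needs no special tools; diskpart can do it from Windows or from a PE boot. A sketch with example disk and partition numbers (0x07 is the standard NTFS type ID):

    rem Change the type ID of an MBR partition.
    select disk 0
    select partition 4
    set id=07 override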
The recovery environment was created as shown above, and once it finished, the tool also performed an MBR fix, presumably patching part of the hard disk's MBR so that pressing F4 brings up the recovery environment.
When this completes, it asks whether you want to install Windows; answer "No" and reboot.
Pressing F4, I confirmed the SRS environment comes up.
Booting back into the regular Windows 10, I then installed the desktop SRS 5 downloaded from the Samsung Electronics site. Installation completed, but when I launched the program to run a backup, it printed the message "The recovery solution cannot be used on this version of Windows."
After several failed attempts, I decided to stick with the F4-launched SRS only and uninstalled the desktop SRS. It asked, "The partition information will all change; proceed anyway?" Thinking "surely not...", I told it to go ahead. It really did blow away the entire recovery partition... and the F4 key stopped working.
I repeated the earlier steps to rebuild the SRS recovery partition and started a backup.
When backing up the partition holding Windows 10, the 25 GB recovery partition I had created never appeared as a destination; instead, the D drive was offered as the backup path. After the backup finished, about 15 GB of backup files with the WCL extension had been created on D. Rather than a single file, the backup seems to be split into multiple 2 GB chunks.
I then booted from a USB stick with a Windows 8.1 PE image downloaded from the internet and examined the newly created recovery partition. It is formatted NTFS but has the hidden attribute set, and its partition type ID is 0x12, so it shows up as an OEM partition. That is probably why it never appeared as a backup destination.
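The type ID and hidden flag can also be checked from diskpart; detail partition on the selected partition reports them. Again a sketch with example numbers:

    rem Inspect the recovery partition's type and attributes.
    select disk 0
    select partition 4
    detail partition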
4. Swapping out the recovery environment
The partition-size issue, the time backups take, the size of the resulting files, and the storage path all left something to be desired. Watching the boot sequence closely, it looks like it simply brings up the familiar Windows 7 PE environment: the screen below appears first, followed by the SRS screen.
So the recovery environment seems to be nothing more than a PE boot whose boot.wim is built to launch WINCLON.exe. It occurred to me that, used well, the 25 GB partition could host a far more versatile PE drive, so I gave it a try.
First, I booted from the Windows 8.1 PE USB stick, removed the hidden attribute from the recovery partition, and rebooted. F4 still took me into the SRS environment normally, but the partition remained unreadable and unwritable, presumably because of its type ID. So I booted into PE again and changed the partition ID to 0x07, the normal NTFS type.
Even after this change, it still boots into SRS. This confirms that neither the partition ID nor the hidden attribute has any bearing on entering the SRS environment (the Windows 7 PE environment) via F4; the hookup is made purely through the modified MBR.
Booting from the USB PE drive and inspecting the recovery partition's layout confirms it is a typical Windows 7 PE environment with a WINCLON folder stored inside.
I found boot.wim in the SOURCES folder and renamed it boot.wim.original, then copied the wim file from the PE USB stick I had built from the downloaded image into this folder under the name boot.wim.
I also copied over the ProgramFiles and Drivers folders, which hold assorted software and drivers.
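In command form the swap looks roughly like this (R: for the recovery partition and E: for the PE USB stick are example letters; the dism line is just an optional sanity check):

    rem Keep the original image, then drop in the replacement.
    ren R:\SOURCES\boot.wim boot.wim.original
    copy E:\sources\boot.wim R:\SOURCES\boot.wim
    rem Carry over the folders the new PE expects.
    xcopy /e /i E:\ProgramFiles R:\ProgramFiles
    xcopy /e /i E:\Drivers R:\Drivers
    rem Optional: verify the copied file is a readable WIM image.
    dism /Get-WimInfo /WimFile:R:\SOURCES\boot.wim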
Even after all this, more than 20 GB of the 25 GB recovery partition is still free.
5. Booting Windows 8.1 PE
On a cold boot with F4 pressed, the original SRS environment went straight to the SRS screen; with boot.wim replaced, a boot manager screen like the one below appears instead.
Booting on from there lands in the PE environment shown in the photo below.
It includes partition-imaging backup and restore programs such as Acronis, Ghost, and Drive Snapshot. After backing up the C drive with all three, the recovery partition's 25 GB was nearly full.
So the SRS recovery environment can indeed be swapped for a Windows PE environment and remain usable.
Since Acronis, Ghost, and Drive Snapshot, the tools used in this test, are not freeware, I removed them all; the original WINCLON has to be used instead. But the WINCLON bundled with SRS 5 does not run on Windows 10, so the next step is to find out whether SRS 6 or 7 can be obtained and installed.
Anyone building a new PC or installing an OS these days runs into plenty of confusion over UEFI and GPT versus the older BIOS and MBR. In particular, since Windows 8 the default installation uses UEFI and GPT, so installing the old way, or attempting an old-style repair when something breaks, can leave the system thoroughly tangled. Windows 8's fast boot supposedly requires the hard disk to be GPT, and having already partitioned the disk the traditional way after buying it, I had no obvious way to convert. After much internet searching I converted the MBR partitions to GPT and recreated the Windows partitions as GPT requires, but whether the machine could be made bootable again after a failure remains an open question.
I had just such a case recently. My old BIOS-based Samsung NF110 netbook came with Windows 7 Starter; given its low performance, I had deleted all the partitions and run Windows XP instead. It originally shipped with Samsung Recovery Solution 4, so pressing F4 at boot would restore the machine to its factory state from a disk image kept on the recovery partition, but all of that had been completely erased.
Hearing that Windows 10 is quite fast, I decided to install it while the free upgrade from a genuine Windows 7 Starter to a genuine Windows 10 Home was on offer. Only when I tried did it sink in that, with the recovery partition gone, restoring the original Windows 7 was impossible.
I managed to obtain and install Windows 7 Starter and eventually got it activated by phone, then used it to install a genuine Windows 10. Worried that reactivation would be a hassle if I ever had to reinstall, I decided to try reviving the Samsung Recovery Solution (SRS). Searching around, I downloaded something called the SRS admin tool, and the attempt to install it ran straight into confusion caused by changed standards. The old SRS 4 does not support Windows 8 or later; SRS 6 or above is required. So I downloaded SRS 6 and booted its admin tool, which then insisted on creating a new partition table. The partition scheme it proposed creates seven partitions in total, apparently the GPT layout recommended for Windows 8. That seemed unlikely to work on the BIOS-based NF110, and it would wipe all existing data, taking me very far indeed from the original goal of backing up the existing OS. Meanwhile, the older SRS 5 supports only Windows 7, so reportedly Windows 8 will not boot properly with it. Wondering whether there was any way out, I dug for more information and came to suspect the MBR/GPT difference was the real issue. In that case, since I had already installed Windows 10 on an MBR disk, couldn't I just back up and restore on an MBR basis? In other words, a backup made with SRS 5 or earlier might well be restorable later.
Whether it actually works I will only know once I try, but to consolidate the EFI, BIOS, GPT, and MBR concepts I pieced together along the way, I am collecting the relevant material from Wikipedia here.
1. Firmware
a. (Legacy) BIOS
The BIOS (Basic Input/Output System) is, in the broad sense, the program on a computer that handles input and output at the lowest level, closest to the hardware. In the narrow sense, and the one usually meant, it refers to the firmware found in IBM PC compatibles. The BIOS is a kind of firmware, halfway between hardware and software, acting much like a nervous system: it lets software control the hardware and passes information changed or generated by the hardware back to software for processing.
Early on, ROM was the usual storage medium for the BIOS, but as the pace of IT development quickened and device release cycles shortened, flash memory became the dominant medium. The convenience of flash came at a price: in the late 1990s, viruses that corrupt the BIOS appeared and disabled huge numbers of computers, the CIH virus being the classic example. (That virus does not run on Windows NT-based operating systems, so it is no longer considered a serious threat.)
Previously the BIOS shipped preconfigured in ROM, and ordinarily its contents could not be changed without a device called a ROM writer. From the 1990s onward, products using flash memory instead of mask ROM or EPROM became common, so the BIOS could be updated conveniently by running a program on the PC itself, with no separate writer, to fix unforeseen problems or support new devices and hardware. The downside is that if the update is interrupted, by a power failure or other mishap, and the write does not complete properly, the computer cannot even start. Because the BIOS is the nervous-system-like program linking hardware and software, a corrupted BIOS cannot even bring up its own setup screen, and the only remedy is to replace the flash memory chip that holds it.
b. (Modern) UEFI
The Unified Extensible Firmware Interface (UEFI) is a specification defining the software interface between an operating system and platform firmware. It was developed to replace the BIOS interface used in IBM PC compatibles. It grew out of Intel's EFI (Extensible Firmware Interface) specification, and the Unified EFI Forum now manages the UEFI specification.
The original motivation for EFI dates back to the mid-1990s and the early development of the first Itanium machines by Intel and HP. The PC BIOS's limitations (16-bit processor mode, 1 MB of addressable space, dependence on PC/AT hardware) made it clearly unsuitable for the large server platforms Itanium targeted. The first effort built on those lessons was called the Intel Boot Initiative, later renamed EFI.
Intel published the EFI specification on 12 December 2000 (the first version was 1.01, but it was withdrawn almost immediately over legal issues). Intel released EFI specification 1.10 on 1 December 2002; it documented several detailed enhancements over version 1.02 along with the EFI driver model.
In 2005, Intel handed the specification over to the UEFI Forum, which took responsibility for developing and promoting it. To reflect this, EFI was renamed Unified EFI (UEFI), and many documents use both terms interchangeably.
The UEFI Forum released UEFI specification version 2.1 on 7 January 2007, which as of March 2007 was the current version. It added improved cryptography, network authentication, and a user interface architecture.
2. Partition tables
a. (Legacy) MBR
The master boot record (MBR), also called the partition sector, is the 512-byte boot sector that is the first sector (sector 0) of a partitioned storage device such as a hard disk. (The boot sector of an unpartitioned device is the volume boot record.) The MBR serves one or more of the following purposes:
- holding the disk's primary partition table;
- bootstrapping an operating system (the BIOS passes execution to the machine code contained in the MBR);
- identifying individual disk media via a 32-bit disk signature.
A storage device partitioned according to the MBR partition table scheme (the IBM PC partitioning convention) has primary partition entries in the MBR's partition table. By convention, the MBR scheme allows only four primary partition entries, though some DOS variants extended this to five (PTS-DOS) or eight (AST and NEC DOS).
A device partitioned with the GUID Partition Table scheme also carries a partition table in its MBR, but that table exists solely to advertise that GPT partitions are present, so that software which understands only the MBR scheme does not mistake the (GPT-partitioned) device for an empty one and create partitions on top of it.
MBR and bootstrapping: On IA-32 IBM PC compatibles using the MBR partition scheme, the ROM BIOS (bootstrap) firmware reads and executes the MBR. Because i386-family processors start in real mode, the MBR's code must be real-mode code. Typically, the MBR code chain-loads, handing boot control to the volume boot record of the boot partition, although some boot managers use their own methods instead.
The MBR code looks through the partition table (per the MBR scheme) for the single partition flagged active, then reads and executes that partition's volume boot record. (This is why the master boot record, like other boot sectors, is a target for boot-sector viruses.)
b. (Modern) GPT
In computer hardware, the GUID Partition Table (GPT) is a standard layout for the partition table on a physical hard disk. Although it forms part of the Extensible Firmware Interface (EFI) standard, it is also used on some BIOS systems because of the limitations of MBR partition tables: MBR restricts a single disk partition to at most 2.2 TB (2.2 × 10^12 bytes), whereas GPT allows disks and partitions up to 9.4 ZB (9.4 × 10^21 bytes).
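The 2.2 TB figure follows directly from the MBR layout: a partition's start and length are stored as 32-bit sector counts, so with the conventional 512-byte sector,

$$2^{32} \times 512\ \mathrm{bytes} = 2.199 \times 10^{12}\ \mathrm{bytes} \approx 2.2\ \mathrm{TB}.$$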
Hybrid MBRs are not a standard, and operating systems interpret them in different ways. Unless noted otherwise below, an operating system gives priority to the GPT data when a hybrid MBR configuration is used.
"Does not provide native support for this architecture and version" should be read as follows:
Not supported as a data disk; only known legacy partitions visible in the protective MBR are accessible through the operating system. For removable disks, only MBR partitions are unsupported and inaccessible to end-user applications. Data on a pure GPT disk can be reached with third-party administrative tools that use low-level disk access, and true file-system-level support, read-only or read-write, depends on software supplied by third-party vendors.
Additional requirements
BitLocker
BitLocker raises the computer's security level by running Windows from a separate, encrypted data partition.
If BitLocker is installed, the additional requirements for the system partition are as follows:
It must be a partition separate from the Windows partition.
Additional requirements for Windows RE
Windows RE can help users recover from system failures.
Windows RE can be installed on the system partition, the Windows partition, or a separate recovery partition.
If Windows RE is installed on the system partition, the additional requirements for the combined partition are as follows:
The partition must physically precede all user partitions.
There must be an extra 200 MB of hard drive space for the Windows RE files.
Example: system + Windows RE files = 300 MB.
It must not be used to store user files.
If Windows RE is installed on a separate partition, the additional requirements for the Windows RE partition are as follows:
There must be enough free space to create a shadow copy of the partition:
partitions of 500 MB or less must have at least 50 MB of free space;
partitions larger than 500 MB must have at least 320 MB of free space;
for partitions larger than 1 GB, keeping at least 1 GB free is recommended.
It must not be used to store user files.
MSR (Microsoft Reserved) partition
The MSR is used only on UEFI systems; it is associated with the other system partitions and contains information used by Microsoft applications.
The MSR partition must meet the following conditions:
It must have 128 MB of hard drive space.
It must sit between the ESP and the Windows operating system partition.
Windows partition requirements
The Windows partition must meet the following conditions:
It must have at least 15 GB of hard drive space, including the 700 MB of free space needed during Windows installation.
System recovery tools can be included on the system to help end users recover from system failures.
Configuring the computer to fail over to the recovery tools when a system failure occurs can help end users repair or reinstall Windows.
The requirements for the recovery partition depend on the recovery environment you intend to implement.
To let end users repair or reinstall Windows without deleting the recovery tools, install the tools on the system partition or on a separate partition.
To prevent users from accidentally modifying or deleting the partition:
Identify the partition as a utility partition; utility partitions are not assigned a drive letter. To do this, set the unattend setting (Microsoft-Windows-Setup\DiskConfiguration\Disk\ModifyPartitions\ModifyPartition\TypeID) to 0x27 (BIOS-based systems) or e3c9e316-0b5c-4db8-817d-f92df00215ae (UEFI-based systems).
Add a label to the partition so that users can easily identify it when viewing disks with tools such as Computer Management.
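On a BIOS/MBR system, the TypeID value above corresponds to what diskpart's set id command does; a minimal sketch of marking an existing partition as a hidden utility partition (disk and partition numbers are examples):

    rem Mark a recovery partition as a utility partition (BIOS/MBR case).
    select disk 0
    select partition 3
    set id=27 override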
Countless open-source projects manage and distribute their source code through GitHub for version control. I only just learned that Git itself was created by Linus Torvalds, and that it is already ten years old was news to me as well.
To mark the occasion, linux.com ran an interview with Linus.
Ten years ago this week, the Linux kernel community faced a daunting
challenge: They could no longer use their revision control system
BitKeeper and no other Software Configuration Management (SCMs) met
their needs for a distributed system. Linus Torvalds, the creator of
Linux, took the challenge into his own hands and disappeared over the
weekend to emerge the following week with Git. Today Git is used for
thousands of projects and has ushered in a new level of social coding among programmers.
To celebrate this milestone,
we asked Linus to share the behind-the-scenes story of Git and tell us
what he thinks of the project and its impact on software development.
You'll find his comments in the story below. We'll follow this Q&A
with a week of Git in which we profile a different project each day that
is using the revision control system. Look for the stories behind KVM,
Qt, Drupal, Puppet and Wine, among others.
Why did you create Git?
Torvalds: I really never wanted to do source control management at all and felt
that it was just about the least interesting thing in the computing
world (with the possible exception of databases ;^), and I hated all
SCM's with a passion. But then BitKeeper came along and really changed
the way I viewed source control. BK got most things right and having a
local copy of the repository and distributed merging was a big deal. The
big thing about distributed source control is that it makes one of the
main issues with SCM's go away - the politics around "who can make
changes." BK showed that you can avoid that by just giving everybody
their own source repository. But BK had its own problems, too; there
were a few technical choices that caused problems (renames were
painful), but the biggest downside was the fact that since it wasn't
open source, there was a lot of people who didn't want to use it. So
while we ended up having several core maintainers use BK - it was free
to use for open source projects - it never got ubiquitous. So it helped
kernel development, but there were still pain points.
That then came to a head when Tridge (Andrew Tridgell)
started reverse-engineering the (fairly simple) BK protocol, which was
against the usage rules for BK. I spent a few weeks (months? It felt
that way) trying to mediate between Tridge and Larry McVoy, but in the
end it clearly wasn't working. So at some point I decided that I can't
continue using BK, but that I really didn't want to go back to the bad
old pre-BK days. Sadly, at the time, while there were some other SCM's
that kind of tried to get the whole distributed thing, none of them did
it remotely well. I had performance requirements that were not even
remotely satisfied by what was available, and I also worried about
integrity of the code and the whole workflow, so I ended up just
deciding to write my own.
How did you approach it? Did you stay up all weekend to write it or was it just during regular hours?
Heh. You can actually see how it all took shape in the git source code
repository, except for the very first day or so. It took about a day to
get to be "self-hosting" so that I could start committing things into
git using git itself, so the first day or so is hidden, but everything
else is there. The work was clearly mostly during the day, but there's a
few midnight entries and a couple of 2 a.m. ones. The most interesting
part is how quickly it took shape; the very first commit in the git
tree is not a lot of code, but it already did the basics - enough to
commit itself. The trick wasn't really so much the coding but coming up
with how it organizes the data.
So I'd like to stress that
while it really came together in just about ten days or so (at which
point I did my first *kernel* commit using git), it wasn't like it was
some kind of mad dash of coding. The actual amount of that early code is
actually fairly small, it all depended on getting the basic ideas
right. And that I had been mulling over for a while before the whole
project started. I'd seen the problems others had. I'd seen what I
wanted to avoid doing.
Has it lived up to your expectations? How is it working today in your estimation? Are there any limitations?
I'm very happy with git. It works remarkably well for the kernel and is
still meeting all my expectations. What I find interesting is how it
took over so many other projects, too. Surprisingly quickly, in
the end. There is a lot of inertia in switching source control systems;
just look at how long CVS and even RCS have stayed around, but at some
point git just took over.
Why do you think it's been so widely adopted?
Torvalds: I think that many others had been frustrated by all the same issues that
made me hate SCM's, and while there have been many projects that tried
to fix one or two small corner cases that drove people wild, there
really hadn't been anything like git that really ended up taking on the
big problems head on. Even when people don't realize how important that
"distributed" part was (and a lot of people were fighting it), once they
figure out that it allows those easy and reliable backups, and allows
people to make their own private test repositories without having to
worry about the politics of having write access to some central
repository, they'll never go back.
Does Git last forever, or do you foresee another revision control system in another 10 years? Will you be the one to write it?
I'm not going to be the one writing it, no. And maybe we'll see
something new in ten years, but I guarantee that it will be pretty
"git-like." It's not like git got everything right, but it got all the
really basic issues right in a way that no other SCM had ever done before.
No false modesty ;)
Why does Git work so well for Linux?
Well, it was obviously designed for our workflow, so that is part of it.
I've already mentioned the whole "distributed" part many times, but it
bears repeating. But it was also designed to be efficient enough for a
biggish project like Linux, and it was designed to do things that people
considered "hard" before git - because those are the things *I* do
Just to pick an example: the
concept of "merging" was generally considered to be something really
quite painful and hard in most SCM's. You'd plan your merges,
because they were big deals. That's not acceptable to me, since I
commonly do tens of merges a day when in the merge window, and even
then, the biggest overhead shouldn't be the merge itself, it should be
testing the result. The "git" part of the merge is just a couple of
seconds, it should take me much longer just to write the merge message.
So git was basically designed and written for my requirements, and it shows.
People have said
that Git is only for super smart people. Even Andrew Morton said Git is
"expressly designed to make you feel less intelligent than you thought
you were." What's your response to this?
So I think it used to be true but isn't any more. There are a few reasons
people feel that way, but I think only one of them remains. The one
that remains is fairly simple: "you can do things so many ways."
You can do a lot of things
with git, and many of the rules of what you *should* do are not so much
technical limitations but are about what works well when working
together with other people. So git is a very powerful set of tools, and
that can not only be overwhelming at first, it also means that you can
often do the same (or similar) things different ways, and they all
"work." Generally, the best way to learn git is probably to first only
do very basic things and not even look at some of the things you can do
until you are familiar and confident about the basics.
There's a few historical reasons for why git was considered complicated. One of them is that it was
complicated. The people who started using git very early on in order to
work on the kernel really had to learn a very rough set of scripts to
make everything work. All the effort had been on making the core
technology work and very little on making it easy or obvious. So git
(deservedly) had a reputation for requiring you to know exactly what you
did early on. But that was mainly true for the first 6 months or a year.
The other big reason people
thought git was hard is that git is very different. There are people who
used things like CVS for a decade or two, and git is not CVS. Not even
close. The concepts are different. The commands are different. Git never
even really tried to look like CVS, quite the reverse. And if you've
used a CVS-like system for a long time, that makes git appear
complicated and needlessly different. People were put off by the odd
revision numbers. Why is a git revision not "1.3.1" with nice
incrementing numbers like it was in CVS? Why is it that odd scary
40-character HEX number?
But git wasn't "needlessly
different." The differences are required. It's just that it made some
people really think it was more complicated than it is, because they
came from a very different background. The "CVS background" thing is
going away. By now there are probably lots of programmers out there who
have never used CVS in their lives and would find the CVS way of doing
things very confusing, because they learned git first.
Do you think the rate of Linux kernel development would have been able to grow at its current rate without Git? Why or why not?
Well, "without git," sure. But it would have required that somebody else
wrote something git-equivalent: a distributed SCM that is as efficient
as git is. We definitely needed something *like* git.
What's your latest opinion of GitHub?
Github is an excellent hosting service; I have nothing against it at
all. Now, the complaints I've had is that GitHub as a development
platform - making commits, pull requests, keeping track of issues etc -
doesn't work very well at all. It's not even close, not for something
like the kernel. It's much too limited.
That's partly because of how
the kernel is developed, but part of it was that the GitHub interfaces
were actively encouraging bad behavior. Commits done on GitHub had bad
commit messages etc, because the web interfaces at GitHub were actively
encouraging bad behavior. They did fix some of that, so it probably
works better, but it will never be appropriate for something like the kernel.
What is the most interesting use you've seen for Git and/or GitHub?
Torvalds: I'm just happy that it made it so easy
to start a new project. Project hosting used to be painful, and with
git and GitHub it's just so trivial to do a random small project. It
doesn't matter what the project is; what matters is that you can do it.
Do you have side
projects up your sleeve today? Any more brilliant software projects that
will dominate software development for years to come?
Torvalds: Nothing planned. But I'll let you know if that changes.
I liked how plain and to the point this interview is.
Until recently phased array radar has been very expensive,
used only for military applications where the cost of survival weighs in
the balance. With the advent of low-cost microwave devices and
unconventional architecture phased array radar is now within the reach
of the hobbyist and consumer electronics developer. In this post we will
review the basics of phased-array radar and show examples of how to
make low-cost short-range phased array radar systems — I built the one
seen here in my garage! Sense more with more elements by making phased
array your next radar project.
Phased array radar
In a previous post,
we described the basics of radar: a typical radar system is made
up of a large parabolic antenna that rotates. The microwave beam
projected by this antenna is swept over the horizon as it rotates.
Scattered pulses from targets are displayed on a polar display known as a
Plan Position Indicator (PPI).
In a phased array radar (PDF)
system an array of antenna elements is used instead of the dish. These
elements are phase-coherent, meaning they are all phase-referenced to
the same transmitter and receiver. Each element is wired in series with a
phase shifter that can be adjusted arbitrarily by the radar’s control
system. A beam of microwave energy is focused by applying a phase
rotation to each phase shifter. This beam can be directed anywhere
within the array’s field of view. To scan the beam, rotate the phases of
the phase shifters accordingly. Like the rotating parabolic dish, a
phased array can scan the horizon but without the use of moving parts.
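To make the phase rotation concrete (a textbook relation, not something from this article): for a uniform linear array with element spacing d and wavelength λ, steering the beam to an angle θ from broadside means feeding element n with the phase

$$\phi_n = -\frac{2\pi}{\lambda}\, n\, d\, \sin\theta, \qquad n = 0, 1, \dots, N-1.$$

Sweeping θ in this formula is what replaces the mechanical rotation of the dish.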
To scan the entire horizon you often will need 4 or more arrays. This is why the SPY-1 radar uses 4 panels directed fore, aft, port, and starboard.
Phased array radars at short ranges
Long-range phased array radar systems focus their microwave beams in
the far field using relatively simple phasing techniques to steer the
beam. Most radar arrays that we might build for hobbyist or consumer
applications will be operating at short ranges using low-cost wide-band
microwave radar devices. At these short ranges radar targets are often
in the near-field where it becomes difficult to focus the antenna beam
using conventional far-field methods without accounting for wavefront curvature.
When a wave of any kind is emitted from a source it travels outwards
in a spherical pattern. At long distances this sphere appears planar
(e.g., a plane wave) as the spherical wave spreads out as it radiates away
from the source antenna.
There are several ways to account for wavefront curvature at
short-ranges. You can apply a parabolic phase function to each of your
phase shifters or you can receive (or transmit) with each element
independently and back-out the wavefront curvature in software. There
are many ways to achieve this all of which depart from the traditional
phased array architectures.
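As a sketch of the parabolic phase function mentioned above (standard near-field focusing, not a formula taken from the article): to focus the array at range R, the element at position x_n must be advanced by the extra path length it sees relative to the array center,

$$\phi_n = \frac{2\pi}{\lambda}\left(\sqrt{R^2 + x_n^2} - R\right) \approx \frac{\pi\, x_n^2}{\lambda\, R}.$$

The quadratic approximation holds when x_n ≪ R; dropping the correction entirely recovers the far-field, plane-wave assumption.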
Unfortunately using numerous digitizers at useful bandwidths for
radar continues to be prohibitively expensive for consumer products and
the hobbyist when a design requires 50, 100, 1000, or 5000 elements. If
you are willing to trade acquisition time for cost you could implement a
much less expensive near-field array using switching techniques. In
this post we’ll discuss three examples of switched-array radar systems.
In a switched array system the transmit and receive ports of one
low-cost radar sensor are switched (or multiplexed) across an array of
antenna elements using microwave switches. Data from each combination of
transmit and receive elements is digitized and stored where focusing
(or image reconstruction) is computed in the digital domain. This method
can support frame rates of 10, 20, or even 40 FPS. Specific examples of
this technique are shown below.
Prototype thru-wall radar
Anyone can build a switched array radar system. Here is an example of one built in my garage from 80/20 aluminum and some Mini-Circuits
components. The size of the array was set by the longest 6 pieces of
80/20 I could find at the local junk yard, which were 8’3”. In this
system I multiplexed the transmitter port and receiver port across 13
and 8 antennas respectively. The switching sequence allows for the phase
center of this radar to be electronically moved down its length for a
total of 44 effective radar Transmit/Receive pairs. I used a Synthetic Aperture Radar (SAR) algorithm that accounts for wavefront curvature to form the imagery.
The purpose of this radar was to prove the concept of imaging through
concrete walls at stand-off ranges. It could image (by image I mean
display a small red blob at a top-down view of what is on the other side
of the wall) a 12 oz soda can through a 4” thick concrete wall at a
stand-off range of 20′, not bad for a garage-built system.
Given the interest in the MIT coffee can radar course, I worked with colleagues at Lincoln Laboratory to develop a phased array course.
To make the low-cost student built radar kits we added a pair of
microwave switches and used a switched-array layout nearly identical to
the thru-wall radar. These phased array radar devices were assembled
using pegboard and WiFi antennas. The latest iteration of this radar
device achieved 20 FPS. Anyone can build a phased array radar with WiFi
antennas and pegboard.
Phased array radar has been very expensive and is traditionally
used in state-of-art air defense systems but today you can make your own
at short-ranges. Try phased array radar for your next project, sense or
image something fast, accurately, and without moving parts.
Gregory L. Charvat
makes his own phased array devices, is the author of Small and
Short-Range Radar Systems, co-founder of Hyperfine Research Inc.,
Butterfly Network Inc. (both of which are 4catalyzer companies),
visiting research scientist at Camera Culture Group Massachusetts
Institute of Technology Media Lab, editor of the Gregory L. Charvat
Series on Practical Approaches to Electrical Engineering, and guest
commentator on CNN, CBS, Sky News, and others. He was a technical staff
member at MIT Lincoln Laboratory where his work on through-wall radar
won best paper at the 2010 MSS Tri-Services Radar Symposium and is an
MIT Office of the Provost 2011 research highlight. He has taught short
radar courses at MIT where his Build a Small Radar course was the
top-ranked MIT professional education course in 2011 and has become
widely adopted by other universities, laboratories, and private
organizations. Starting at an early age, Greg developed numerous radar
systems, rail SAR imaging sensors, phased array radar systems; holds
several patents; and has developed many other sensors and radio and
audio equipment. He has authored numerous publications and has received
press for his work. Greg earned a Ph.D in electrical engineering in
2007, MSEE in 2003, and BSEE in 2002 from Michigan State University, and
is a senior member of the IEEE where he served on the steering
committee for the 2010, 2013, and 2016 IEEE International Symposium on
Phased Array Systems and Technology and chaired the IEEE AP-S Boston
Chapter from 2010-2011.
Armadillo Aerospace, the
private rocket-building enterprise founded by gaming godfather John
Carmack, is being put on hold. At QuakeCon, Carmack told New Space Journal
that in the face of a failed landing in January and growing
organizational problems, "things are turned down to sort of a
hibernation mode." Armadillo, the Journal reports, had hit a snag
after giving up private contract work to chase a reusable cargo craft
of the kind used by NASA. That meant that instead of operating at a
profit, Carmack paid over a million dollars a year to finance the
company and narrowed its focus to producing working suborbital rockets
with existing technology.
"What happened was
disappointing," he said. "What should have been faster — the repackaging
of everything — turned out slower." He blamed the issue partly on his
giving up a certain amount of day-to-day control and partly on a culture
of "creeping professionalism" that delayed development. Instead of
prototyping and testing quickly, Carmack said teams were spending more
time on reviews or plans that did more to reassure potential buyers than
get results. By focusing on one line of rockets, a single failure also
became much more risky than if they'd had backup designs to turn to.
Armadillo was never on the
scale of a company like SpaceX, but it's been around for over a decade
with a combination of volunteer and full-time employees. Now, Carmack
doesn't seem optimistic about ambitious projects in the near future.
"I've basically expended my crazy money on Armadillo, so I don't expect
to see any rockets in the real near future unless we do wind up raising
some investment money on it," he said.
Before the Second World War, in the earliest days of American rocketry, there was a scientist named Frank Malina who made major contributions to solid propulsion and founded NASA's JPL. He pursued his dream of launching an orbiting satellite, but because of his socialist leanings the FBI suspected him of being a communist and a Soviet spy, and he was pushed out. The man who took over that role was Wernher von Braun, the former head of Nazi rocket development. Malina was cleared of the charges in the early 1950s, but by then he was no longer a rocket scientist. A sad story of a pioneering scientist in the McCarthy era... from an article in IEEE Spectrum.
In the early decades of the 20th century, rocket science wasn’t
considered the brainy endeavor it is now. Far from it: Simply expressing
an interest in the field was enough to provoke ridicule. Becoming a
rocket scientist was enough to get you ostracized from whatever field
you were in before.
Frank Malina didn’t care. Overcoming incredible institutional resistance and rather
daunting technical and financial odds, the engineer, while still a grad
student at Caltech in the mid-1930s, started up a research program that
would lay the foundations for U.S. rocket and missile development.
During the run-up to World War II, that work took on new significance.
By the war’s end, Malina had become the top American rocket expert and
had cofounded the Jet Propulsion Laboratory, which today is one of the
world’s premier space research organizations.
And yet, you’ve probably never heard of him. Most histories of the U.S.
space program treat Malina and his group as a footnote. They say the
real work started only after the war, with the arrival of Wernher von Braun,
Hitler’s chief rocket scientist. Without the German’s genius, the story
goes, U.S. extraterrestrial explorations would never have gone so far.
That version of events, though, overlooks the key contributions made by
Malina and his team of engineers, scientists, and technicians, who not
only advanced the state of rocketry but did so on a fraction of the
funding that their German counterparts enjoyed. Between 1936 and 1946,
Malina’s team pioneered the use of solid propellants, which in the
decades following World War II became crucial in both missiles and
launch vehicles, and they also did fundamental work with liquid
propellants. Equally important was Malina’s institutional legacy, in
cofounding both JPL and the Aerojet Engineering Corp. (now Aerojet Rocketdyne), a major aerospace player to this day.
What makes Malina’s story all the more compelling is that he was a man
of great contradictions: A professed pacifist, he nevertheless designed
powerful rockets to further the war effort. A communist sympathizer, he
made a fortune through his stake in Aerojet. A consummate engineer, he
opted to abandon his research career while still in his 30s and would
eventually dedicate himself full-time to artistic pursuits. And yet,
this sometimes deeply conflicted individual did more than anyone to
legitimize the pursuit of rocket propulsion and to pave the way for
others to pursue their paths to the stars.
Like most of the early rocketeers, Malina was drawn to the subject because rockets meant space travel. Born in 1912 in the tiny town of Brenham, Texas, Malina as a boy devoured Jules Verne’s classic From the Earth to the Moon, which vividly imagined an extraterrestrial trip. Even as an adolescent, Malina had an engineer’s mind-set. In a college essay on interplanetary travel,
he enumerated the great difficulties that would need to be overcome,
including the vast distances to traverse, the hostile atmosphere upon
arrival, and the lack of any means of communication between that distant
point and Earth.
In 1934, after getting a bachelor’s in mechanical engineering from
Texas A&M University, he headed to Caltech. There, he had the good
fortune to begin working for the renowned aerodynamicist Theodore von Kármán,
who became his thesis advisor. Von Kármán led one of the world’s
foremost centers of aeronautical research, the Guggenheim Aeronautical
Laboratory, California Institute of Technology (GALCIT). As the limits
of propeller propulsion for high-speed flight became obvious, von Kármán
and others eagerly sought alternatives.
Late in 1935, after hearing a fellow student’s presentation on
rocket-powered aircraft, Malina found his old interest in spaceflight
rekindled. Coincidentally, a newspaper article about the presentation
drew the attention of Pasadena resident John “Jack” Whitesides Parsons,
a self-taught chemist who had been experimenting with powder rockets
for some time. Parsons and a mechanic friend visited Caltech, seeking
advice on building a liquid-propellant rocket. The student who’d given
the presentation directed them to Malina, and Malina, recognizing what
he would later describe as Parsons’s “uninhibited fruitful imagination,”
agreed to work with Parsons.
Malina and Parsons were an odd pair. Methodical and reserved, the
23-year-old Malina was very much the academic. Parsons, two years
younger, had enough ingenuity, boldness, and exuberance to more than
compensate for the fact that he had no degree beyond high school. He
also dabbled in magic and the occult. Somehow, this unlikely duo
advanced rocket science further than either of them could have possibly managed alone.
Malina proposed to von Kármán that he, Parsons, and Parsons’s mechanic
friend, Ed Forman, design a sounding rocket that would carry scientific
instruments into the upper atmosphere, to an altitude of about 40
kilometers. Despite the fact that Parsons and Forman had no Caltech
affiliation, von Kármán agreed to support the trio, although he could
provide only advice and use of the facilities but no actual funding.
Over the next several years, Malina and his crew would pursue their
investigations by taking part-time jobs, scrounging spare parts and
materials where they could, and “borrowing” from university labs as
needed. Malina and Parsons soon attracted additional graduate students
to join them, including A.M.O. Smith, who would go on to become chief aerodynamicist at Douglas Aircraft, and Hsue-Shen Tsien, who would later return to China to found its missile and space program.
Rocketry was still in its infancy. Although rocket clubs in Europe and
the United States gave amateurs an outlet for their interests, no
serious university programs existed, and thus not much had been done to
put theory into practice. Still, Malina’s group wasn’t quite starting
from scratch. Back in the 1890s, Russian mathematician Konstantin Tsiolkovsky’s calculations showed that extraterrestrial travel via rocket propulsion was theoretically feasible. And in 1926, the secretive U.S. engineer Robert Goddard
launched the world’s first liquid-fueled rocket. Its flight lasted just
2.5 seconds, rising to 12.5 meters. But his results inspired a new
generation of rocket enthusiasts, including Malina and Parsons and also Wernher von Braun.
Consulting with von Kármán, Malina decided to focus first on the rocket
engine. Up to then, nobody had built an engine suitable for a sounding
rocket—that is, with enough thrust to reach an altitude of 40 km or so.
Goddard’s rockets, for example, never got higher than 2.6 km. “Until one
could design a workable engine with a reasonable specific impulse,”
Malina later recalled, “there was no point in devoting effort to the
design of the rocket shell, propellant supply, stabilizer, launching
method, payload parachute, etc.” Parsons, who loved nothing more than
the thrill of launching rockets, argued against Malina’s methodical
approach. Fortunately for the sake of science, Malina won the argument.
The scientific rigor imposed by Malina didn’t stop the team from taking
extraordinary risks. In one memorable experiment, a small rocket motor
they were testing misfired, releasing a corrosive cloud of dinitrogen
tetroxide that rusted equipment throughout the building. “We were told
to move our apparatus outside the building at once,” Malina wrote. “We
also were thereafter known at Caltech as the ‘Suicide Squad.’ ”
In late October 1936, the GALCIT rocketeers were ready
to test their rocket motor. Given the likely noise and possibility of
an explosion, they chose a spot off campus: the Arroyo Seco, a dry river
basin on the western edge of Pasadena. Hauling their makeshift
equipment out to the sand, they attached the motor to a 1-meter-tall
stand, positioning it so that the exhaust flame would shoot straight up
into the air. A spring would measure the rocket’s thrust. Hoses
connected the motor to a tank of oxygen and another of methyl alcohol
fuel. The tests would provide crucial data to back up their
calculations, including fuel consumption, thrust, and temperatures and
pressures inside the motor.
A cord fuse was supposed to ignite the fuel, but on the first three
attempts it detached before it could be lit. On the fourth and final
try, the fuse lit but then detached, managing to ignite some fuel that
had spilled onto the equipment and also the oxygen hose. “The oxygen
hose for some reason ignited and swung around on the ground, 40 feet
from us,” Malina wrote the next day in a letter to his parents. “We all
tore out across the country wondering if our check valves would work.”
The check valves, designed to prevent the fuel and oxygen from backing
up, did work, and although the resulting fire badly damaged their
apparatus, the rocketeers were ecstatic. As Parsons and Forman already
knew, there was just something awesome about setting things ablaze.
The group duly replaced the fuse with a spark plug, and on 15 November
the motor burned for 5 seconds. Two weeks later, it fired for 20
seconds, this time with a deafening roar that indicated complete
combustion. On their final test, on 16 January 1937, it burned for a
full 44 seconds.
Over the next year and a half, the team continued their experiments as
time and money allowed, and Malina published two landmark papers on
their work in the Journal of the Aeronautical Sciences. Despite the rocketeers’ steady progress, however, their results drew little outside interest.
Then came the war. As hostilities in Europe and the Pacific deepened,
U.S. military leaders began casting about for any new technologies that
might assist in the war effort—including rockets. In December 1938, at
von Kármán’s behest, Malina presented a report on “jet propulsion” to a
group of government and military advisors in Washington, D.C. (He
intentionally avoided the word “rockets,” which still had a poor
reputation in scientific circles.) The report impressed the U.S. Army
Air Corps’s commanding general, Henry “Hap” Arnold, who in January 1939
gave Malina’s group US $1,000 to develop rockets for jet-assisted
takeoff (JATO). Six months later, Arnold gave them an additional
$10,000. He hoped that JATO rockets, mounted on an airplane’s wings,
might help a heavily laden aircraft take off from short island runways
in the Pacific.
The group experimented with solid fuels, starting with the oldest:
gunpowder. After a series of unintended explosions, Malina and von
Kármán felt compelled to examine the theoretical stability of burning a
solid fuel under pressure. Their conclusion, now known as the von Kármán–Malina theory of constant-thrust long-duration engines,
showed the process could be made stable if the pressure inside the
chamber remained constant. By the summer of 1941 the group’s
gunpowder-based propellant was performing well enough to warrant flight
tests on an actual aircraft. At March Field, in California, JATO rockets cut the takeoff distance of an Ercoupe, a fighter-size civilian aircraft, by half.
The engineers still had a problem. When the JATO rockets were stored
for more than a few days, they would explode upon ignition. For the
better part of a year, Parsons and Malina searched in vain for a
solution. The bombing of Pearl Harbor in December 1941 only heightened the urgency.
Then, in June 1942, the self-taught Parsons had a brilliant insight. He
was watching a construction crew mixing molten asphalt when it occurred
to him that he could cast asphalt with an oxidizer,
such as potassium perchlorate, to create a solid propellant. The
combustible asphalt would act as both fuel and binder. The concept
proved to be a fundamental technological breakthrough for all solid
propellants, and Parsons’s idea lives on in both missiles and launch
vehicles, including the Polaris, Minuteman, and Titan. That same year
Malina, Parsons, von Kármán, and two others formed the Aerojet
Engineering Corp. to manufacture JATO rockets for both the U.S. Army and Navy.
Of course, Malina and his colleagues weren’t the only
rocketeers. In 1943 came British intelligence reports that the Germans
were constructing an extraordinarily large rocket at Peenemünde, on the
Baltic Sea coast. Over the preceding decade, Wernher von Braun had been
leading a well-funded, top-secret effort that was about to show the
world the rocket’s destructive power.
Like Malina, von Braun was born in 1912 and had been drawn to rocketry as a youth.
While still an undergraduate, he joined the Verein für Raumschiffahrt
(Spaceflight Society), whose 500 amateur members followed the latest
developments in the field and also experimented with rockets. By 1932,
their work had attracted the attention of the German army, which
recruited von Braun and supported his classified doctoral thesis,
“Construction, Theoretical, and Experimental Solution to the Problem of
the Liquid Propellant Rocket.”
In stark contrast to the U.S. military, the Germans were serious about
rockets. By the end of the war, Germany would spend more on rockets than
the United States spent on the Manhattan Project—$3 billion versus $1.9
billion. With that kind of largesse, von Braun’s impressive facilities
included not just the Peenemünde research and production center but also
an additional manufacturing center near Nordhausen.
Although von Braun’s team developed a reusable JATO rocket, their
design was apparently never used. That work would have been eclipsed in
any event by the far bigger and more sophisticated V-2 rocket. The
world’s first production ballistic missile, it burned liquid oxygen and
ethyl alcohol, carrying a metric ton of explosives over distances of 320
km. Its guidance system relied on a pair of gyroscopes to steer the
fins and vanes; it was not very accurate.
Von Braun’s enterprise produced more than 6,000 V-2s, which were used
primarily against London and Antwerp, Belgium, starting in September
1944. The actual manufacturing was done by prisoners from the
concentration camp Mittelbau-Dora. As the historian Michael J. Neufeld
has documented, von Braun went so far as to handpick detainees with
technical qualifications for this work. (The prisoners were worked
literally to death. In all, about 12,000 died producing von Braun’s
rockets; for comparison, the rockets themselves would kill an estimated
9,000 people, many of them civilians.)
In 1943, the U.S. Army shared the British intelligence reports with von
Kármán and Malina and asked if they too could develop a long-range
guided missile. They could, they said, and the Army provided the newly
renamed Jet Propulsion Laboratory with $3 million for the first year of
operation. Von Kármán was named JPL’s director, and Malina was its chief
engineer. Construction of a new facility began in the Arroyo Seco, just
west of where Malina, Parsons, and Forman had conducted their crude
experiments in 1936. Today, JPL still occupies a 72-hectare campus at that site.
By the end of 1944, von Kármán was spending much of
his time in Washington, so he resigned his JPL post, and Malina, who had
been overseeing daily operations anyway, was soon named acting director.
The lab’s crash program to build a smaller, lighter version of the V-2
called for a small solid-fueled rocket and then a larger liquid-fueled
guided missile. The former, dubbed the Private, stood 2 meters tall with
a planned range of 18 km. It used asphalt as a fuel with potassium
perchlorate as an oxidizer.
The liquid-fueled missile, called the Corporal, was 11 meters high and
had a range of 120 km. It burned hydrazine with red fuming nitric acid
as the oxidizer—a combination the JPL team had developed and patented.
The innovative fuel remained liquid at room temperature, so it needed no
cooling system, and it was hypergolic, meaning it ignited spontaneously
when its constituents were brought into contact, so no ignition system
was required. The same type of fuel would later be used in the Apollo
program, to propel the command and service module and the lunar module.
Before his team built the guided version of the Corporal, Malina
decided to start with an unguided rocket. It was dubbed the WAC, which
stood for either “without attitude control” or “Women’s Army Corps,”
because it was the Corporal’s “little sister.” The first was launched at
the White Sands Missile Range in New Mexico on 16 September 1945, two
weeks after the official end of World War II. A subsequent launch on 11
October reached an altitude of 70 km, nearly to the boundary of space. A
more advanced version of the Corporal would later be deployed in Europe
as the United States’ first nuclear missile.
Two months after the WAC Corporal launch, Malina’s team returned to
White Sands to test the Private. In all, they launched 24 rockets
without a single failure. Although never itself used as a weapon, the
Private was the direct predecessor of all solid-fueled missiles that
came after it.
For Malina, the success of the Private and WAC
Corporal was bittersweet. He finally had a high-altitude sounding
rocket, but it was already clear that the same machine was also a
stepping-stone to a nuclear-armed ballistic missile. Malina was at heart
a pacifist who had worked on military technology only because he
believed the fascists needed to be defeated. In a 1978 interview, he
recalled being “caught up in the wave at the end of the war of hate for
war, and fear of the development of the atom bomb, and seeing the things
we had been developing for space exploration being used for military purposes.”
Malina tried after the war to convince Caltech’s board of directors to
support unclassified high-altitude research based on the WAC Corporal.
The board rejected the proposal. In 1946, he and another original member
of the Suicide Squad, Martin Summerfield, wrote a paper for the Army
describing how technology then available could be used to launch a
satellite into orbit. The Army showed no interest.
It seemed the military was interested in Malina’s ideas only as they
applied to weapons. Malina decided instead to leave rocketry altogether.
In 1947 he resigned from JPL and accepted a position with UNESCO in
Paris, a position in which he felt he could work toward peace rather than war.
That wasn’t the only reason for Malina’s departure, however. The FBI
had been investigating him since 1942, suspecting him of being a member
of the Communist Party and, worse, a communist spy. In 1946, bureau
agents raided his house while he was out of town. His abrupt transition
from war hero to potential enemy of the state was surely galling.
Was Malina in fact a communist? In 2009 I studied Malina’s considerable
FBI file, and I also went through his papers at the Library of
Congress. The records show clearly that Malina was likely a member of a
Los Angeles branch of the Communist Party in the late 1930s. His FBI
file, for instance, contains a copy of a 1939 application to the
Communist Party, in what appears to be Malina’s handwriting. He was also
no fan of capitalism. In a 1936 letter to his parents, he wrote,
“Events in Europe are certainly leading to another war. There seems to
be only one hope, overthrowing of the capitalist system in all countries
and an economic union of all nations.” Of course, at the height of the
Great Depression, countless academics, artists, professionals, and
others held such views. And, as JPL historian Erik M. Conway has
written, the Communist Party branch to which Malina belonged “dissolved after the shocking announcement of the Soviet Union’s nonaggression pact with Germany in 1939.”
As for espionage, there was perhaps reason to at least suspect Malina.
Several security breaches occurred during his tenure at JPL, the most
significant involving classified lab documents that turned up in the
hands of a Russian courier. According to a 1942 FBI report, at least
five unnamed informants identified Malina as a possible spy; the report
concluded that “the loyalty of the subject would be questionable if he
had to decide between our form of government and that of Russia.” J.
Edgar Hoover himself repeatedly prodded the U.S. attorneys to indict
Malina, which they finally did in December 1952, for failing to disclose
his Communist Party status to the government. His U.S. passport was revoked.
And yet, despite numerous investigations from 1942 until 1960, the
bureau never found any evidence of spying or of more than a passing
interest in communism. More likely, Malina was just one of the thousands
of wrongly accused Americans ensnared by the Red Scare of the early
1950s. The indictment against him was dismissed in 1954, and his
passport was restored four years later.
By then, Malina had resigned from UNESCO, and, now wealthy from his
stock in Aerojet, he “cut loose from everything and became an artist,”
as he later told an interviewer. He enjoyed some success as a kinetic
sculptor, often invoking themes of science and engineering in his work.
He never returned to research, although in 1960 he helped found the International Academy of Astronautics with von Kármán and others, to foster international cooperation in space exploration.
Even as Malina’s place among rocketeers faded, von
Braun’s grew ever brighter. As the war in Europe transitioned to the
Cold War, the U.S. government brought more than 1,500 German scientists,
engineers, and technicians to the United States, including von Braun
and much of his staff. Nazi party affiliations and war crimes were
conveniently overlooked because the Germans’ expertise was now
considered crucial in the race against the Soviets. In October 1945, von
Braun and part of his team arrived at Fort Bliss, in El Paso, Texas,
just a short drive from where Malina’s WAC Corporal was being tested.
Von Braun would spend five years there, before being transferred to
Huntsville, Ala., where he led the Army’s rocket program. In 1960, he
became the first director of NASA’s new Marshall Space Flight Center, in
Huntsville, overseeing the development of the heavy-lift Saturn rockets
that would carry astronauts to the moon.
The ascendancy of the German engineers within the U.S. program rankled Malina. According to Malina’s son Roger, “He was philosophically bitter
that the Nazi engineers had become U.S. space heroes but the founders
of U.S. rocketry who had dedicated the war years to working for the
Allies had been dispersed.” And in a 1967 article, Malina himself wrote:
“Popular opinion, even the opinion of some who should know better, has
been that American rocket developments lagged far behind that of Nazi
Germany. This belief is false, but myths die hard.”
One last series of experiments is worth noting in this regard. Starting
in late 1946, JPL researchers at White Sands assembled and launched
two-stage rockets consisting of a WAC Corporal atop a German V-2. The
second-stage WAC Corporal would take off just as the bigger missile
reached its maximum velocity. On 24 February 1949, one such “Bumper”
rocket broke the altitude record by climbing to 393 km, approaching the
orbit of today’s International Space Station. The following year,
another Bumper became the first rocket to be launched from the newly
constructed Cape Canaveral. In so doing, this odd American-German hybrid
ushered in the space age.
This article originally appeared in print as “America’s Forgotten Rocketeer.”
About the Author
James L. Johnson is vice president of operations at Systems Seals Inc., in Cleveland, Ohio. He’s currently at work on a book about Frank Malina and the Jet Propulsion Laboratory’s rocket program.
Even as automotive electronics become commonplace and ever more powerful, demand for 8-bit microcontroller CPUs reportedly remains steady.
High-performance 32-bit CPUs are widely used, but 8-bit CPUs, with their advantages in speed, low power, and cost, are still in demand for simple control tasks, so that market keeps expanding.
About 20~25% of total demand is said to be for 8-bit CPUs, and as technology advances make them cheaper and more capable, that share may well grow. As a result, demand for the 16-bit CPUs caught in the middle keeps shrinking...
The demand for appropriate technology persists, as ever. But if vendors settle for holding today's market, the future looks just as bleak: if low-cost, high-performance 32-bit and 64-bit CPUs extend their technological lead and consolidate control functions, 8-bit parts may eventually be phased out too....
STMicro’s 8-bit CPUs meet demands for simple, low-power electronic controls.
Powerful 32-bit microcontrollers often manage
new features and functions, but the market for simple 8-bit controllers
is holding up nicely. These devices give design teams the flexibility to
get to market quickly with inexpensive modules that use little power.
STMicroelectronics has expanded its 8-bit automotive-grade microcontroller portfolio with
compact, inexpensive devices that run at 20 MIPS. The CPUs also offer
connectivity, timing, and analog functions for applications such as seat
controllers, window lifters, HVAC controls, and gateways.
Applications like these are keeping demand for simple controllers high. Tom Hackenberg, Principal MCU Analyst for IHS Technology,
said 8-bit CPUs accounted for 24% of the automotive microcontroller
market last year. He predicts that 8-bit chips will decline only
slightly in 2018, dropping to 22%.
“Companies like Microchip, Renesas, NXP, Atmel, Freescale, Cypress, Infineon, Silicon Labs,
STMicro, and many others have continued to advance the power efficiency
and integration features of 8-bit solutions to keep this market
thriving for the foreseeable future,” Hackenberg said.
Chip marketers and analysts explain that
there are many applications where the sub-$1 price of 8-bit chips is
only one of their attractive features. With more and more electronics in
vehicles, engineers see many reasons to deploy the small devices.
“They offer low pricing, small packages,
and low power consumption,” said Christophe Loiodice, Product Line
Manager at STMicroelectronics. “More and more mechanical systems are
being replaced by electronic applications. Time-to-market is very
important, so many engineers pick 8-bit chips because they’re easy to
use. The tasks don’t need a lot of software, so large memory sizes aren’t needed.”
New regulations are also helping fuel
demand for simple chips. In the U.S., rear-view cameras are being
mandated, opening up a new high-volume market.
“Developers want small packages; often a
rear-view camera’s printed circuit board is only 2 mm square,” Loiodice
said. “These applications also need low power; automakers want
solutions that help them reduce power consumption.”
Hackenberg predicts that 32-bit CPUs
will soon account for more than half the automotive CPU sales, but he
sees many new roles for 8-bit chips.
“As more nodes share their operational
status, sensor data, or other information, most of the controllers will
only need basic modes—basically waking, transmitting, and going back to
standby,” he said. “They’ll also see use with small motors that do
nothing but infrequently readjust a seat or window position. The 8-bit
market will benefit strongly from new power/price-sensitive Internet of Things applications.”
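To make that duty cycle concrete, here is a toy Python simulation of the wake-transmit-standby pattern Hackenberg describes. All names are hypothetical, and real 8-bit firmware would be written in C or assembly and would use the part's hardware sleep modes rather than time.sleep:

```python
import random
import time

def read_sensor() -> float:
    # Hypothetical stand-in for an ADC read on the node.
    return 20.0 + random.random()

def transmit(value: float) -> None:
    # Hypothetical stand-in for a LIN/CAN frame or radio packet.
    print(f"sent status: {value:.2f}")

REPORT_INTERVAL_S = 5.0  # node stays in standby between reports

# The node's entire job: wake, sample, report, go back to standby.
for _ in range(3):
    transmit(read_sensor())        # wake: sample and report
    time.sleep(REPORT_INTERVAL_S)  # standby: a real MCU would enter deep sleep
```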
Even as these simple systems evolve,
they aren’t expected to migrate up to 16-bit CPUs. As 32-bit CPU pricing
declines and 8-bit capabilities rise, they’re squeezing out 16-bit
chips. Hackenberg predicts that 16-bit usage will shrink from 31% last
year to 23% in 2018.
Loiodice noted that STMicro hasn’t
introduced a 16-bit chip since 2009. He said that the drive to make
cabins more comfortable and increase options for drivers and passengers
is also helping maintain demand for these chips.
“If you use dedicated 8-bit controllers,
it’s much cheaper to replace a module for one function than to replace
one multi-function controller,” Loiodice said. “Another factor is that
not all modules in the dashboard are made by one company. The Tier 1s
use small, inexpensive controllers for each module.”
Using 12 ultrasonic sensors, a 4-megapixel wide-angle camera, and a laser scanner on the front of the vehicle, it reportedly achieves fully automatic parking.
Where existing parking systems were driver-assistance features rather than fully automatic, the valet product realizes truly automatic parking: except for finding an empty spot, it can handle the entire parking process on its own.
For the many novice drivers who find parking stressful, an automatic parking system will soon be just a matter of paying for the option: spot a place to park while driving, stop the car, get out, open the vehicle app on your smartphone, and tap "auto park."
Convenient for the driver, no doubt, but possibly maddening for the car waiting behind....
And anyone who learns to drive this way from the start will probably be unable to drive a car without an automatic parking system, even after years behind the wheel....
An SUV fitted with Valeo's
prototype fully automated valet parking system can be activated remotely
via a smartphone. Valeo's near-term solution requires the driver to see
the vehicle at all times and keep a finger on the smartphone's
activation button during the entire parking process.
Valeo's autonomous valet parking system enables an SUV to park itself as the driver watches from outside the vehicle. Standing a few feet from a driverless 2014 MY Range Rover Evoque, a Valeo engineer touches a smartphone screen to start the automated parking process.
“A 360-degree bird’s eye view of the
vehicle is relayed back to the driver’s smartphone. The phone indicates
what gear the vehicle is in and the vehicle’s speed, and the driver has
the ability to press pause, resume, and cancel,” Sam Azuz, Project
Technical Manager for Ultrasonic Systems R&D in Valeo’s Comfort and
Driving Assistance Systems Business Group, told Automotive Engineering during a June demonstration outside Valeo North America’s headquarters in Troy, MI.
Valeo’s self-park system uses the Tier
One supplier’s 12 ultrasonic sensors, its 360Vue system comprised of
4-megapixel digital cameras with image processing software, and a
Valeo-produced laser scanner (developed in partnership with Ibeo Automotive Systems
GmbH) mounted on the vehicle’s front lower grille. The patented laser
scanner with built-in ECU has a 140-degree opening angle. “It sees quite
wide, and it sees forward about 200 m (656 ft) to assist with the
exploration mode of detecting objects and defining its path while
driving itself,” said Azuz, an electrical engineer.
Also during the demonstration, a 2012 MY Volkswagen
Eos concept showed Valeo’s near-term, autonomous parking solution.
“With the Eos, the driver needs to keep a finger on the smartphone the
entire time the vehicle is self-parking. If you find a parking spot that
is too tight or you are just not comfortable parking, this system
handles the tasks of steering, braking, accelerating, and shifting to
park the vehicle, but not the task of finding the parking spot,” Azuz said.
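The "finger on the smartphone" requirement is essentially a dead-man's switch: the maneuver continues only while the driver actively holds the button. A minimal sketch of that interlock logic, with hypothetical names (this is not Valeo's actual software):

```python
def parking_step(step: int) -> None:
    # Hypothetical stand-in for one increment of steering/braking/shifting.
    print(f"executing maneuver step {step}")

def run_maneuver(button_samples: list) -> None:
    # Proceed only while the driver's finger stays on the activation button.
    for step, held in enumerate(button_samples):
        if not held:  # driver released the button: halt immediately
            print("button released -- vehicle halted")
            return
        parking_step(step)
    print("parking complete")

# Driver lets go after two steps, so the car stops mid-maneuver.
run_maneuver([True, True, False, True])
```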
Jean-Francois Tarabbia, Valeo group Senior Vice President, R&D and Product Marketing, said in an interview with Automotive Engineering
that vehicles with autonomous parking capabilities typically prompt a
different consumer response than fully self-driving vehicles.
“The acceptance is better because the
car is driving in low-speed, so people do not really feel any threat of
an accident. It appears safer, and it is completely safe,” said
Tarabbia. He also noted that, for many consumers, parking is not a
beloved undertaking. “The trend is, first the need is there. And,
secondly, that the technology we are able to provide for that need is
affordable,” Tarabbia said.
Valeo’s automated parking system makes
its production debut in 2016 via a European market passenger car
application in which the driver must keep a finger on the smartphone
during the maneuver, according to Tarabbia.
“It’s a way to respect the current
regulations: the driver still has control of the car. The purpose is to
park the car where you can see it through the whole process. Europe has
very small parking spots, so this is a very, very convenient feature for
this kind of a situation,” Tarabbia said.
Prime Minister Shinzo Abe of Japan and President Park Geun-hye of South
Korea are pushing to have high school history textbooks in their
countries rewritten to reflect their political views.
Abe has instructed the Education Ministry to approve only textbooks
that promote patriotism. He is primarily concerned about the World War
II era, and wants to shift the focus away from disgraceful chapters in
that history. For example, he wants the Korean “comfort women” issue
taken out of textbooks, and he wants to downplay the mass killings
committed by Japanese troops in Nanking. His critics say he is trying to
foster dangerous nationalism by sanitizing Japan’s wartime aggression.
Park is concerned about the portrayal of Japanese colonialism and the
postcolonial South Korean dictatorships in history books. She wants to
downplay Korean collaboration with the Japanese colonial authorities and
last summer pushed the South Korean Education Ministry to approve a new
textbook that says those who worked with the Japanese did so under
coercion. (A majority of professionals and elite civil servants today
come from families that worked with the Japanese colonizers.) Academics,
trade unions and teachers have accused Ms. Park of distorting history.
Abe and Ms. Park both have personal family histories that make them
sensitive to the war and collaboration. After Japan’s defeat in the war,
the Allied powers arrested Mr. Abe’s grandfather, Nobusuke Kishi, as a
suspected class A war criminal. Ms. Park’s father, Park Chung-hee, was
an Imperial Japanese Army officer during the colonial era and South
Korea’s military dictator from 1962 to 1979. In both countries, these
dangerous efforts to revise textbooks threaten to thwart the lessons of history.
A version of this editorial appears in print on January 14, 2014, in The International New York Times
Examining the Japanese History Textbook Controversies
Textbooks with individuality are multi-dimensional; they are
pluralistic; a pluralism must permeate them that allows teachers
in the classroom to select what is most suitable for their own use.
-Ienaga Saburo, Court testimony, 1969.(1)
The Importance of History Textbooks in Japan and the United States
The controversy surrounding the adoption of middle school
history textbooks in Japan raises the question, Why are
textbooks—history textbooks in particular—important enough to
fight over? Historians Laura Hein and Mark Selden tell us that "history
and civics textbooks in most societies present an 'official'
story highlighting narratives that shape contemporary
patriotism"; "people fight over textbook content because
education is so obviously about the future, reaches so deeply
into society, and is directed by the state. Because textbooks
are carried into neighborhood schools and homes, and because, directly
or indirectly, they carry the imprimatur of the state, they
have enormous authority."(2) Richard H. Minear, Japanese
historian, answers the question this way: "As a practicing
historian, I encounter at every turn the power textbooks
exercise over my students' minds. In...Japan it is the
government that influences the content of textbooks. In the United
States today the problem is not the government but textbook
publishers. As far as the effects on students go, the
difference is not great. . . . our students believe
absolutely what they read in textbooks."(3)
More than one American scholar has suggested that Japan is a
mirror for Americans. If so, then we Americans can learn from
the controversy over textbooks in Japan. At the very least, this
controversy in Japan should raise a related question for Americans:
Why aren't there more debates in this country over textbook
content? Why don't American teachers—and their students—look
more deeply into the authorization process that American
publishing companies and state and local adoption agencies
follow here at home?
Background on the Japan Textbook Controversies
The present system of screening and approving textbooks dates to
pre-war Japan: "Struggles over the national narrative
existed . . . before and even during World War II, when official
narratives such as the Imperial Rescript on Education and
other 'fine militarist stories' played a crucial role in
Japanese identity formation."(4) In the early months of the
postwar period, Japanese bureaucrats changed existing
textbook policy by blotting out passages that might offend
the American occupiers. By 1946 the Supreme Command for the Allied
Powers (SCAP), in an effort to ensure that textbooks did not
encourage emperor-worship and militarism, imposed on the
nation a system of government "certification" of schoolbooks.
That system continued after the Americans left.
In Japan, each public and private school selects one history
textbook from a list of seven or eight authorized by the
Ministry of Education, Culture, Sports, Science and Technology
(Monbukagakusho) every four years. This screening process
then lasts one full year. In the United States (where
adoption takes place on no set schedule at the state or local
level), for all the talk of alternative means of
instruction, the conventional textbook remains the core and
often the sole teaching tool in most middle and high school
classrooms. Japanese textbook companies submit manuscripts to the
Ministry of Education, whose appointed committees examine
them according to prescribed criteria. The Ministry offers
the textbook companies opportunities to revise their drafts,
and copies of the Ministry-approved manuscripts are then
available for consideration by the local districts.
In 1965 Ienaga Saburo, a prominent historian, filed the first
of his three lawsuits against the Ministry of Education,
charging that the process of textbook approval was unconstitutional
and illegal. The Ministry had rejected Ienaga's history textbook
because it contained "too many illustrations of the 'dark
side' of the war, such as an air raid, a city left in ruins
by the atomic bomb, and disabled veterans."(5) His second
suit two years later also involved the issue of
constitutionality and, in addition, focused on points related to
Ienaga's characterization of Japan's foundation myths and a
description of the 1941 Japan-USSR neutrality pact.
In 1982 the screening process in Japan became a diplomatic
issue when the media of Japan and neighboring countries gave
extensive coverage to changes required by the Ministry of Education.
The Ministry had ordered Ienaga to remove critical language in
his history textbook, insisting that he write of the Japanese
army's "advance into" China instead of its "aggression in"
China, of "uprising among the Korean people" instead of the
"March First Independence Movement." Pressure applied by
China and Korea succeeded in getting the Ministry to back
down and resulted in the Ministry's adding a new authorization
criterion: that textbooks must show understanding and
international harmony in their treatment of modern and
contemporary historical events involving neighboring Asian countries.(6)
Ienaga's lawsuits lasted
thirty years. Although in 1997—in response to Ienaga's third
lawsuit instituted in 1986—the Supreme Court of Japan
unanimously upheld the Ministry's right to continue screening
textbooks, Ienaga and his fellow critics enjoyed a partial
victory. The court requested "that the Government refrain
from intervening in educational content as much as possible."(7)
By the time of the final ruling,
however, Ienaga and the tens of thousands of Japanese who
joined him in his battle against the authorization process
had been victorious in fact if not in law. The most widely
used Japanese textbooks in the mid- and late-1990s contained
references to the Nanjing Massacre, anti-Japanese resistance
movements in Korea, forced suicide in Okinawa, comfort women, and Unit
731 (responsible for conducting medical experiments on
prisoners of war)—all issues raised in Ienaga's suits.
The Current Situation
A conservative (many would argue ultra-conservative) movement
toward reform in the Japanese history curriculum was initiated
in the early 1990s by Fujioka Nobukatsu and his Liberal View of
History Study Group. Fujioka, a professor of education at
Tokyo University, set out to "correct history" by emphasizing
a "positive view" of Japan's past and by removing from
textbooks any reference to matters associated with what he
calls "dark history," issues such as the comfort women, that
might make Japanese schoolchildren uncomfortable when they read
about the Pacific War.
By early 2000
Fujioka and his group had joined with others to form the
Japanese Society for History Textbook Reform, now headed by
Nishio Kanji. It is the Society's textbook, The New History Textbook
(one of eight junior high school history textbooks
authorized by the Ministry of Education in April 2001), that has caused
such debate in Japan over the past year. Nishio summarized
the views of the Society in an article in the August 2001 Japan Echo,
a bimonthly journal of opinion on a wide range of topics of
current interest within Japan. The article maintained that
rather than asserting the Society members' personal views of
history the textbook aims to restore common sense to the
teaching of the subject. Nishio insisted that "history stop
being treated like a court in which the figures and actions of
the past are called to judgment."(8)
Widespread protests against the textbook erupted much earlier
in Japan, China, and North and South Korea. By December
2000, reacting to a draft textbook circulated by the Society
and shown on national television, a long list of Japanese historians
and history educators expressed misgivings about the content of The New History Textbook
and its rendering of Japan's past. Their complaints centered
around the text's presentation of Japan's foundation myths
as historical fact and its characterization of wars launched by modern
Japan as wars to liberate Asia.
The intellectuals' appeal to people inside and outside Japan
appeared on the internet prior to authorization of the
textbook by the Ministry. Following authorization, their voices
were joined by an international group of scholars. This
"International Scholars' Appeal Concerning the 2002-Edition
Japanese History Textbooks" aimed to "ensure that textbooks
are consistent with values of peace, justice and truth." It
declared The New History Textbook "unfit as a
teaching tool because it negates both the truth about Japan's record
in colonialism and war and the values that will contribute to a
just and peaceful Pacific and World community." (For more
information on the scholars' claim, visit their Web site.)
Reactions in China and Korea took
various forms. China Radio International announced that the
Chinese government and people were "strongly indignant about
and dissatisfied with the new Japanese history textbook for
the year 2002 compiled by right-wing Japanese scholars."
Foreign Ministry spokesman Zhu Bangzao warned that the Chinese
people would not accept the interpretation of wartime events
put forth by the new textbook.(10) An article in the August 25, 2001
issue of Korea Now, a biweekly magazine published in
English, reported that as Seoul prepared to celebrate its
Liberation Day (from the Japanese) on August 15, angry
Koreans continued to stage anti-Japan protests ignited by the
new Japanese "textbooks that allegedly gloss over atrocities
by Japanese soldiers during World War II."(11)
Under the Japanese system, local school authorities determine
whether the new textbook is to be used in district
classrooms. On August 15—the deadline for school districts to make
their selections—Associated Press writer Mari Yamaguchi reported
in The Japan Times that the new textbook had been
shunned, that nearly all of Japan's school districts had
rejected it. She quoted a spokesman for the civic group
Children and Textbooks Japan Network 21 as saying, "We have
gained nationwide support to say 'no' to the textbook. . . .
it's the conscience of the Japanese public."(12) According to
a Kyodo News Service survey released August 16, not a single municipally run or state-run junior high school in the country
adopted The New History Textbook.(13)
Lessons for Americans
Why should American teachers and their students bother
themselves about this textbook controversy? What can they and
their students learn from it?
First, as a
mirror for Americans, Japan's textbook controversy may shed
light on what could happen here if the dominant narrative—our
"official" story of our past—were challenged by a
counternarrative, one that threatens to alter or even replace a
textbook narrative. Japan lost the war that is at the center of the
textbook controversy. American teachers and students might
ask how that fact has influenced Japan's textbook narrative.
Does the victor's interpretation of the past differ from that
of the vanquished? For example, James Loewen, author of Lies My Teacher Told Me,
points out that most American history textbooks published
before 1990 omitted all the important photographs of the Vietnam War.
Second, many Americans see
Japan as a harmonious, one-dimensional society; the fact
that teachers brought this textbook controversy—which
involved lawsuits supported by tens of thousands of Japanese
people—to the attention of their students may help to break
down that stereotype. At least two individuals are prominent
in the textbook controversy in Japan. By introducing to
students Ienaga Saburo and Fujioka Nobukatsu, American teachers add
a human dimension to Japan's textbook controversy. The more human
faces put on Japan, the better.
Third, Japan's history textbooks have for years come under the
scrutiny of Japan's past adversaries, its Asian neighbors.
Together with their students, American teachers might examine
American textbook narratives while imagining that Mexican, Japanese,
Vietnamese, or Middle Eastern scholars and students are reading
over their shoulders as they teach and learn about American
interpretations of the war with Mexico, the war in the
Pacific, in Southeast Asia, or in the Middle East. Finally,
American teachers might also consider presenting this
passionate debate in Japan as an example for Americans to follow.
(1) Ienaga Saburo, Japan's Past/Japan's Future: One Historian's Odyssey, trans. Richard H. Minear (Lanham, Md.: Rowman & Littlefield Publishers, Inc., 2001), 155.
(2) Laura Hein and Mark Selden, "The Lessons of War, Global
Power, and Social Change," in Censoring History: Citizenship
and Memory in Japan, Germany, and the United States, ed. Laura Hein
and Mark Selden (Armonk, N.Y.: M.E. Sharpe, 2000), 3-4.
(4) Nozaki Yoshiko and Inokuchi Hiromitsu, "Japanese Education,
Nationalism, and Ienaga Saburo's Textbook Lawsuits," in
Censoring History: Citizenship and Memory in Japan, Germany, and
the United States, ed. Laura Hein and Mark Selden (Armonk, N.Y.:
M.E. Sharpe, 2000), 97.
(5) Ibid., 108.
(6) Murai Atsushi, "Abolish the Textbook Authorization System," Japan Echo, (Aug. 2001): 28.
(7) Quoted in Nicholas D. Kristof, "Japan Bars Censorship of
Atrocities in Texts," The New York Times, 30 Aug. 1997.
(8) Nishio Kanji, "Restoring Common Sense to the Teaching of History," Japan Echo, (Aug. 2001): 33.
(11) "Seoul Stands Firm: President Rebukes Japan for
Textbooks, Shrine Visit," Korea Now, (21 Aug. 2001): 6-7.
(12) Mari Yamaguchi, "Japanese History Textbook Shunned," The Japan Times, 16 Aug. 2001.
(13) "Only 0.03% of junior high students to use disputed textbook," Kyodo News, 16 Aug. 2001.
James W. Loewen, Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong (New York: The New Press, 1995).
Kathleen Woods Masalski is the Projects Director for the Five College Center for East Asian Studies
and the 2000 recipient of the John E. Thayer III Award for a lifetime
of work furthering understanding between the U.S. and Japan.
A ransomware program called CryptoLocker (malware that holds the files on your PC hostage and releases them only for money) is reportedly spreading widely. "Holding a PC's files hostage" sounds a little odd, but an encryption program, installed like a virus, encrypts every file on the PC. Only the malware's authors can reverse the encryption, so even a file's original creator has no way to unlock it. To see your own files again, you have to send money to the authors.
Update 12/20/2013: A
new version of Cryptolocker—dubbed Cryptolocker 2.0—has been discovered
by ESET, although researchers believe it to be a copycat of the original
Cryptolocker after noting large differences in the program’s code and
operation. You can read the full blog comparing the two here.
Just last month, antivirus companies discovered a new ransomware known as Cryptolocker.
This ransomware is particularly nasty because infected users are in danger of losing their personal files forever.
Spread through email attachments, this ransomware has been seen targeting companies through phishing attacks.
Cryptolocker will encrypt users’ files using asymmetric encryption, which requires both a public and private key.
The public key is used to encrypt and verify data, while the private key is used for decryption; each is the inverse of the other.
(The original post included an image from Microsoft depicting the process of asymmetric encryption.)
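To illustrate the asymmetry, here is a minimal Python sketch using the third-party cryptography package: data encrypted with the public key can only be decrypted with the matching private key, which in CryptoLocker's case never leaves the criminals' server. (In practice RSA can only encrypt small payloads, so ransomware typically encrypts each file with a symmetric key and then encrypts that key with the public key; this sketch shows just the asymmetric step.)

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The attacker generates the key pair; only the public half reaches the victim.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

plaintext = b"contents of a victim's file"
ciphertext = public_key.encrypt(plaintext, oaep)  # anyone can encrypt...

# ...but only the holder of the private key can decrypt.
assert private_key.decrypt(ciphertext, oaep) == plaintext
```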
The bad news is decryption is impossible unless a user has the private key stored on the cybercriminals’ server.
Currently, infected users are instructed to pay $300 USD to receive this private key.
Infected users also have a time limit to
send the payment. If this time elapses, the private key is destroyed,
and your files may be lost forever.
Files targeted are those commonly found on most PCs today; a list of file extensions for targeted files includes:
3fr, accdb, ai, arw, bay, cdr, cer, cr2, crt, crw, dbf, dcr, der, dng,
doc, docm, docx, dwg, dxf, dxg, eps, erf, indd, jpe, jpg, kdc, mdb, mdf,
mef, mrw, nef, nrw, odb, odm, odp, ods, odt, orf, p12, p7b, p7c, pdd,
pef, pem, pfx, ppt, pptm, pptx, psd, pst, ptx, r3d, raf, raw, rtf, rw2,
rwl, srf, srw, wb2, wpd, wps, xlk, xls, xlsb, xlsm, xlsx
In some cases, it may be possible to recover
previous versions of the encrypted files using System Restore or other
recovery software used to obtain “shadow copies” of files. The folks at
BleepingComputer have some additional insight on this found here.
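On Windows, one way to check whether such shadow copies exist is the built-in vssadmin tool, run from an elevated prompt. The real command is vssadmin list shadows; the small Python wrapper below is just for illustration:

```python
import subprocess

# List existing Volume Shadow Copies (requires administrator rights).
result = subprocess.run(
    ["vssadmin", "list", "shadows"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```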
Malwarebytes detects Cryptolocker infections as Trojan.Ransom,
but it cannot recover your encrypted files due to the nature of
asymmetric encryption, which requires a private key to decrypt files
encrypted with the public key.
In order to make removal even easier, a video was also created to guide users through the process (courtesy of Pieter Arntz).
While Malwarebytes cannot recover your
encrypted files post-infection, we do have options to prevent infections
before they start.
Users of Malwarebytes Anti-Malware Pro are protected by malware execution prevention and blocking of malware sites and servers.
To learn more on how Malwarebytes stops malware at its source, check out this blog.
Free users will still be able to detect the
malware if present on a PC, but will need to upgrade to Pro in order to
access these additional protection options.
Also, the existence of malware such as Cryptolocker reinforces the need to back up your personal files.
However, a local backup may not be enough in
some instances, as Cryptolocker may even go after backups located on a
network drive connected to an infected PC.
Cloud-based backup solutions are advisable for business professionals and consumers alike. Malwarebytes offers Malwarebytes Secure Backup,
which offers an added layer of protection by scanning every file before
it is stored within the cloud in an encrypted format (don’t worry, you
can decrypt these).
Joshua Cannell is
a Malware Intelligence Analyst at Malwarebytes where he performs
research and in-depth analysis on current malware threats. He has over 5
years of experience working with US defense intelligence agencies where
he analyzed malware and developed defense strategies through reverse
engineering techniques. His articles on the Unpacked blog feature the latest news in malware as well as full-length technical analysis. Follow him on Twitter @joshcannell